Add package repository scripts, run in CI (staging PoC) #1916
Conversation
This is part of the effort to abandon Bintray[0] in favor of a self-hosted package repository solution. The current approach involves pushing the packages and repository metadata to an S3 bucket, which will eventually be served from a CDN (CloudFront). Part of #1247 [0]: https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/
Seems okay to me.
Maybe we should merge this earlier in the cycle and build the repo for the current version and update the docs?
I don't think any of the TODOs are a must. The two I am most interested in are the debsign one and the md5sum checks, which arguably can be done with s3cmd. We need s3cmd either way, so I don't know why it wasn't just used instead of curl?
I'm wondering if we should bother with signing .deb packages with debsign. Verification is not enabled by default and the user needs to manually enable it, and even then it's not built into the usual workflow. So it's probably better to stick with the current approach done in Bintray: just have a separate .asc file that the user can verify manually. RPMs are properly signed and verified automatically (if configured), so no change there. Re: md5sum checks, I'd also prefer if we just downloaded the files with s3cmd.
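A hedged sketch of the manual verification flow mentioned above, using a detached `.asc` signature published next to the `.deb` (the file and key names here are illustrative, not the actual k6 artifacts):

```shell
# Import the publisher's public key (hypothetical filename).
gpg --import k6-release.pub.asc

# Verify the package against its detached ASCII-armored signature.
# Exit status 0 means the signature is valid.
gpg --verify k6-v0.31.0-amd64.deb.asc k6-v0.31.0-amd64.deb
```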
Yeah .. I remember that signing deb packages was something ... strange ... But I think they check the md5sums, and the md5sums were signed? If moving to s3cmd covers that, fine by me.
Have you considered something like @cloudsmith-io which does all of this for you? :-) (I work there, happy to help!)
@lskillen Thanks! We considered some hosted services, including Cloudsmith, but ultimately decided it was best to roll our own and maintain control over the release process and package distribution. We don't want to repeat the migration issues we're dealing with now with Bintray, and setting up and maintaining this infrastructure ourselves is not an insurmountable amount of work.
I understand, completely. It's unfortunate to be in a situation that requires such reactivity. 🤕 We'll not be going anywhere, so if you need help with that in the future we'll be there for you. If you did feel nervous about lock-in later, we offer custom domains to put the control back in your court. All the best of luck with the self-host! 😁👍
This is a bit friendlier and looks better with the logo.
We need to use --delete-removed in order for removed packages to be deleted from the bucket as well, otherwise it would get messy replicating the removal logic on S3 as well. And doing so is safer with the syncing done in each script, which also keeps entrypoint.sh cleaner.
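The sync described above might look roughly like this (bucket name and local paths are assumptions; the flags are real s3cmd options):

```shell
# Mirror the locally rebuilt repo to S3. --delete-removed removes objects
# from the bucket that no longer exist locally, so packages dropped by the
# packaging script disappear from S3 without separate removal logic.
s3cmd sync --acl-public --delete-removed \
  deb/ s3://dl.k6.io/deb/
```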
Force-pushed from dfe95a8 to c34fc05
Looks okay to me
I would prefer if we --cf-invalidate the "index/packages" files instead of just leaving them without any caching, as they are likely to be hit all the time. Given that we usually update once every 2 months, I would argue 1-hour caching is probably more beneficial than no caching.
- Again, dropping old files is IMO low priority, and doing it based on the number of packages is IMO wrong (as we can have a situation where we need to release multiple fix releases). So IMO it is better to not do it for now.
I also wonder if it wouldn't be better if all the same-named functions had a prefix/suffix, as just reading them it sounds like they will do the same thing, but they are all specific (more or less).
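The caching suggestion above could be sketched like this (bucket and paths are assumptions; --cf-invalidate and --add-header are real s3cmd options, and --cf-invalidate requires the bucket to be behind a CloudFront distribution known to s3cmd):

```shell
# Upload repo metadata with a 1-hour TTL instead of no caching, and ask
# CloudFront to invalidate the changed files so clients see updates promptly.
s3cmd sync --acl-public --delete-removed \
  --cf-invalidate \
  --add-header="Cache-Control: max-age=3600" \
  deb/ s3://dl.k6.io/deb/
```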
# Download existing packages
# For MSI packages this is only done to be able to generate the index.html correctly.
# Should we fake it and create empty files that have the same timestamp and size as the original ones?
Nah, the cost of doing this once every 2 months or so is negligible, especially when you're deleting old packages... Though, if we ever figure it out, we might save some bandwidth by not hosting the .msi packages on dl.k6.io at all and just redirecting to the GitHub release binaries 😅
hmmm, I do wonder if we shouldn't actually drop the whole MSI hosting and truly just use GitHub?
We'd still need a way to redirect people to the latest MSI release (and maybe to the other plain zipped binaries we have), which, judging by the "CloudFront caches redirects aggressively and I wasn't able to invalidate it" comment below, probably won't be easy if we just host redirects to them on dl.k6.io...
The current solution is good enough, and has a nice side benefit of keeping a folder with the old installers at hand. We should leave it be, I think...
Yeah, this would be good, though instead of dropping it entirely I'd prefer to have S3 redirects to GH like Ned mentioned, so that we can point users to a central location for all packages.
The caching issue wouldn't be a problem for links to specific versions, as they could remain static. But the link to the latest version would be an issue, as it needs to be updated. We could work around that by publishing an MSI without a version in its filename, e.g. k6-amd64.msi. That way the latest link could also be static and redirect to https://github.com/k6io/k6/releases/latest/download/k6-amd64.msi.
Anyways, let's leave it as is for now and consider doing this later. It should be transparent to users if done correctly. 😅
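For reference, an S3 redirect of the kind discussed above might be set up like this (bucket and key are assumptions; note the redirect header only takes effect when the bucket is served through the S3 website endpoint, and CloudFront would still cache the redirect):

```shell
# Upload a zero-byte placeholder object whose only job is to redirect
# to the latest GitHub release asset.
touch k6-latest-amd64.msi
s3cmd put --acl-public \
  --add-header="x-amz-website-redirect-location: https://github.com/k6io/k6/releases/latest/download/k6-amd64.msi" \
  k6-latest-amd64.msi s3://dl.k6.io/msi/k6-latest-amd64.msi
```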
LGTM 🎉 This is awesome to see! ❤️
Force-pushed from 316efaf to efab809
Codecov Report

```diff
@@            Coverage Diff             @@
##           master    #1916      +/-   ##
==========================================
+ Coverage   71.21%   71.43%    +0.21%
==========================================
  Files         184      183        -1
  Lines       14338    14244       -94
==========================================
- Hits        10211    10175       -36
+ Misses       3501     3437       -64
- Partials      626      632        +6
```

Flags with carried forward coverage won't be shown.
Continue to review full report at Codecov.
This is part of the effort to abandon Bintray in favor of a self-hosted package repository solution. The current approach involves pushing the packages and repository metadata to an S3 bucket, which is served from a CDN (CloudFront).
You can explore the current test bucket at: https://dl.staging.k6.io/, and test it on Debian/etc. with:
... and Fedora/etc.:
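The install commands were lost in extraction; a plausible shape, with the repo paths, distribution names, and key location all being assumptions rather than the published instructions:

```shell
# Debian/Ubuntu (hypothetical repo path and suite name):
echo "deb https://dl.staging.k6.io/deb stable main" \
  | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update && sudo apt-get install k6

# Fedora/CentOS (hypothetical .repo definition):
sudo tee /etc/yum.repos.d/k6.repo <<'EOF'
[k6]
name=k6
baseurl=https://dl.staging.k6.io/rpm
enabled=1
EOF
sudo dnf install k6
```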
Pending:
- Create infrastructure with Terraform
- Create NuGet/Chocolatey repo. As discussed, we don't need to move the `.nupkg` building and publishing since the Bintray repository isn't being used for the Chocolatey package. We will however need to add this when we take over maintenance of the package.
- Add `k6-latest-amd64.msi`. I wonder if we can do this with some S3 redirects, otherwise just duplicating the file would work as well. (Redirects are aggressively cached by CloudFront and are not an option, so the latest file is duplicated.)
- Publish the `k6io/packager` image to GHCR to speed up the `publish-packages` step and avoid rebuilding it every time. (A separate CI job was added to publish the image manually and on a weekly schedule, to ensure building keeps working.)

Part of #1247