Overhaul binary release process #9258
I researched our hosting options apart from Github and IPFS.

**Hosting nightlies:**

**Hosting binaries outside of github:**
**Requirements**

For reference, the requirements posted by @chriseth on Gitter:

**Details**

This is the summary of today's discussion with @chriseth: New process for getting nightlies into
Actually, I don't think CircleCI should have any part in the release process - all of our builds and tests are run in docker images anyways, so it doesn't matter on which platform they are run. The only advantage of CircleCI is that it's fast, but that's mainly an argument for PRs, not for releases (which should be safe, reliable and robust, but don't need to be particularly fast). We need to trust github anyways, there's no feasible way around that, but we can reduce whatever we have to trust on top of it. So instead of dealing with permissions for or access to CircleCI, let alone creating AWS instances that introduce even more points of attack in the process, I'd just have github actions build releases and pre-releases, pushing them to a solc-bin-like repo (or alternatively, maybe for releases, not nightlies, merely create PRs to it, having the actual release binary branch be branch-protected) and adding them as assets to the github release page as well.

The main issue about this is that it should be failure-robust, but that's no argument against having actions (etc.) automate this. As long as, at any failure of the actions, the failed parts of the process can just be repeated manually, automation makes this more robust, not less robust (due to having a dry run on the PR to solc-bin).

In the best case the first step of a release (after preparations on develop) would be creating a PR from develop to release, which would already run all tests and verifications once. Any failure there should be fixed back on develop. Then upon hitting merge to release, the builds and tests are run again, github actions build all binaries, create a release page draft with the required artifacts and create a PR (resp. one PR per platform) to solc-bin adding the binaries.

Also, if that's a concern: there should not be much duplication between the setup in github actions and CircleCI for building and testing due to this - we should organize the build and test runs independently of the platform they run on anyways, so we should have scripts in the repo for each step, meant just to be run inside our docker image - so we can just reuse the same scripts on github actions or CircleCI (or wherever we may want to).

So much for my opinion about getting the releases both to the github release page and to a repo like solc-bin - publishing further from there to IPFS, or something equivalent to the current gh-pages on top of that, however, is independent of this part.
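To illustrate the "same scripts on any CI" idea, here is a minimal sketch; the image name and the script paths are placeholders I made up, not the repository's actual ones:

```bash
#!/usr/bin/env bash
# Hypothetical sketch only: run each build/test step inside the project's Docker
# image so the same scripts work on GitHub Actions, CircleCI or a local machine.
# The image name and script paths below are placeholders, not the actual ones.
set -euo pipefail

IMAGE="ethereum/solidity-buildpack-deps:ubuntu-latest"   # placeholder image name

docker run --rm -v "$(pwd)":/src -w /src "$IMAGE" ./scripts/ci/build.sh
docker run --rm -v "$(pwd)":/src -w /src "$IMAGE" ./scripts/ci/test.sh
```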
@ekpyron I don't think we should have server-side automation for the release at all. It is just too prone to failure. My idea would be that someone with write permission to the repo runs a script locally. This script queries the circleci api for the binaries for a certain tag and downloads them. Then we can test them locally if needed. If everything is fine, we can upload them to the release page and create a commit that adds them to solc-bin. Using automation will save us maybe 30-120 seconds per release if everything goes well, but it costs us at least 30 minutes if something goes wrong.

The main server-side automation we need is for the nightly emscripten builds. This nightly build can possibly be created by a github action inside the solc-bin repository (not the solidity repository, because an action there would not be able to easily push to the solc-bin repository) - this is written down at the beginning of the proposal.
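A rough sketch of what such a locally-run script could look like (this is an assumption about the workflow, not the actual script; it uses the CircleCI v1.1 API and a personal API token, and the project slug is only illustrative):

```bash
#!/usr/bin/env bash
# Sketch only: find the latest successful CircleCI build for a given tag,
# then download its artifacts for local testing before a manual release.
set -euo pipefail

CIRCLE_TOKEN="${CIRCLE_TOKEN:?set your personal CircleCI API token}"
PROJECT="github/ethereum/solidity"
TAG="${1:?usage: $0 <release-tag>}"
API="https://circleci.com/api/v1.1/project/${PROJECT}"

# Pick the newest successful build whose vcs_tag matches the requested tag.
BUILD_NUM=$(curl -sS "${API}?circle-token=${CIRCLE_TOKEN}&limit=100&filter=successful" \
  | jq -r --arg tag "$TAG" '[.[] | select(.vcs_tag == $tag)][0].build_num')

# Download every artifact of that build into the current directory.
curl -sS "${API}/${BUILD_NUM}/artifacts?circle-token=${CIRCLE_TOKEN}" \
  | jq -r '.[].url' \
  | while read -r url; do
      echo "Downloading ${url}"
      curl -sSL -o "$(basename "${url}")" "${url}"
    done
```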
I really don't understand the argument. The current automation is indeed prone to failure, but that's for different reasons, because it's too many systems interacting with each other - in what way is having to run a script locally less error prone than running it automatically?
It's more about S3 (so just storage buckets) than machine instances. But yeah, the preference is to avoid having any intermediate storage if we can get files from CircleCI easily. And I'm pretty sure we can.
I do like the idea of automated releases but it's really orthogonal to this task. The focus of this task is on the hosting part.

I am going to move nightly builds from Travis to CircleCI but it's mostly a matter of reusing the jobs that are already there and making them run on a daily schedule in addition to PR builds. So not much extra work even if we later decide to move them to Github actions. We can't leave them as is anyway - we either need to debug file truncation on Travis or just sidestep the problem by moving somewhere else.
@cameel Yep, building the binaries and hosting them are indeed somewhat orthogonal, so no reason for not preparing the hosting part right away - still, in the end we will have to address both. And I'm not yet convinced at all by @chriseth's position of not automating it (especially if it's automated properly, i.e. in such a way that you can still fix things locally and run the very same script locally anyhow, if it ever fails).
Let's have a call about this on Wednesday! For me, it's just about having full control and giving away as few permissions as possible. @cameel If we pull the nightly binary from circleci, you can just search for the latest successful run on develop - no need to actually re-build anything.
Good point. We don't even have to schedule a new job. We can just pull in what the existing jobs build.
I have answers for our questions. Some unfortunately aren't good.

**Jekyll redirects**

Unfortunately github does not allow proper HTTP 3xx redirects from GH pages. From Redirects on GitHub Pages:
(this is an old article about Github Enterprise but, judging by old, dead links that now redirect to the main help article about Jekyll, it used to be stated on that page too). Anyway, the only supported way to redirect is

Looks like our only real option is to point the domain to a small nginx instance and configure it to do the redirects.

**Symlinks in Jekyll**

Symlinks on GH pages do work (see for example https://github.com/s4y/gh-pages-symlink-test). But it looks like they indeed create a copy of the file. I couldn't find it explicitly in Jekyll docs (they only say that in safe mode symlinks are ignored) but this old PR clearly shows that the files get copied in production mode or in safe mode (FileUtils.copy_entry() in Ruby preserves symlinks while

**MIME types in IPFS gateways**

From Content Type set by HTTP Gateway #152:
The above is a proposal to add a manifest that would allow us to set arbitrary content types. It's not even accepted yet. We cannot set the types ourselves until it gets implemented, but we can control it to some extent by using the right file extension.
Looks like this is because this object does not have a file name, so the gateway falls back to detection based on content. Since the

```
ipfs object get Qmad6iesaR5FQ45RRtMrt2Fa1EYDuq1oGoMJJB6beMHESn | jq
{
  "Links": [
    {
      "Name": "",
      "Hash": "QmXTFox8jc3duxA4snM3zJokR6L1GJSCUE17v8UhAa5XFF",
      "Size": 262158
    },
    {
      "Name": "",
      "Hash": "QmWcXmntJyRn8o6CHBYuPQkop6u6P1P4VLHpYkdLY6CAC3",
      "Size": 262158
    },
    ...
```

We just need to make sure that we create objects with file names (rather than just unnamed blocks). One possible problem is that I'm not sure if we can get an ID of such an object knowing only the name and the hash of the file.

**IPFS gateway file size limit**

I haven't seen any official information about file size or even bandwidth limits for Cloudflare and ipfs.io gateways. I have seen an issue in

**Getting artifacts from CircleCI and permissions**

I see that when you have a link to an artifact you can freely download it without logging in. As for getting the link, there's an API endpoint for getting links to artifacts of the latest build. The only complication is that it requires an API token. For the script you run manually, you'd have to get your personal token from CircleCI once and then always specify it as a parameter (or we could make the script fetch it from a local config file). For nightlies we'd have to add the token as a secret in repository settings (see Creating and storing encrypted secrets).
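Coming back to the point about file names and MIME types above: a minimal sketch (assuming a local `ipfs` node; the file name is only illustrative) of publishing a binary wrapped in a directory so the gateway can infer the MIME type from the extension:

```bash
# Sketch, assuming a local ipfs node; the file name is illustrative.
# --wrap-with-directory (-w) wraps the file in a directory object, so the link
# keeps its name and gateways can pick the MIME type from the .js extension.
ipfs add --wrap-with-directory soljson-nightly.js
# The wrapping directory gets its own hash; the file would then be reachable as:
#   https://ipfs.io/ipfs/<directory-hash>/soljson-nightly.js
```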
Ok, but that actually means that since jekyll cannot do http redirects, and we have to fall back to our own server anyway, we don't necessarily need ipfs after all, right? We could just have a server that has built-in hosting of plain directories. This would then also handle symlinks properly and the only "rendering" process is regularly doing a git checkout without the .git subdirectory.
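As an illustration of how light that "rendering" could be, here is a sketch under assumed paths; the web root, the use of rsync and running it from cron are my assumptions, not something decided here:

```bash
#!/usr/bin/env bash
# Illustrative only: refresh a plain directory served by a static web server
# from solc-bin, dropping the .git subdirectory. Paths are placeholders;
# this could run periodically, e.g. from cron.
set -euo pipefail

WEB_ROOT="/var/www/solc-bin"          # placeholder web root
TMP_DIR="$(mktemp -d)"

git clone --depth 1 https://github.com/ethereum/solc-bin.git "$TMP_DIR/solc-bin"
rm -rf "$TMP_DIR/solc-bin/.git"
rsync -a --delete "$TMP_DIR/solc-bin/" "$WEB_ROOT/"
rm -rf "$TMP_DIR"
```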
Does the token allow doing anything else on circleci apart from fetching the artefacts?
A server that only does redirects would still be much lighter on bandwidth and storage than one that also serves the files. So a redirect server + IPFS is still a viable alternative. One server that does both would probably be easier to configure and more reliable though (fewer moving parts).
That is true! I just checked the permissions again and it is really horrible, both on the circleci and the github side.
What about just hosting everything in the office ourselves after all - the main issue with that is bandwidth concerns, right? But all the content is long-lived and static, so we could just buy some CDN cache thing in front of it?

Regarding mime-types and IPFS: The fact that gh-pages use copies for symlinks actually explains why we hit the size limit just now - fixing the short commit hashes introduced a lot of symlinks (for being backwards compatible when renaming the releases)...
I think both options fit our requirements. Hosting on S3 will probably require less maintenance while the office computer gives us more control.
Yeah, looks like that would solve the MIME type issue with IPFS.
@ekpyron Here's how I see the tasks here after all the discussions:
Maybe it's easier to discuss this in the call later, but for the record:
I still think there should rather be a github action in the solidity repo that builds, tests and pushes releases to solc-bin.

My reasoning for this is that we have to trust github anyways, but there's no need to additionally trust CircleCI. If github is compromised, it can just serve the wrong code to CircleCI, so there's nothing we can do about it - but if github is fine and CircleCI is compromised, we have a problem if we build releases there - and that can easily be avoided. Maybe this also relates to the other point:
That depends on how the binaries on the release page are built. For emscripten builds this will be fine, but I'm not sure the other builds will be reproducible just yet, so I'm not sure they will be identical. We'd at least need to check this first - we may have reproducible binaries if built with the same script in the same docker image, but we also may not.

So in general I'm still not a fan of using CircleCI and artifacts on it for anything release-related or even for the nightlies for that matter. I'd be fine with doing it, if you all want to - I don't want to block this :-) - but I'm saying it's easily avoided and I'd argue for avoiding it :-).

EDIT: But yeah - in general I'm pretty fine with any working solution to 2. (resp. 3.), although I'd tend to prefer to avoid cloud stuff and to have things primarily on IPFS - or at least something that can be easily migrated towards that.
My problem with storing tokens was that I've seen threads where people mentioned storing them in a private repo as the only solution - which is not a good place for anything secret in my opinion. But since then I've noticed that github has a feature for adding encrypted secrets in repo settings so maybe that was just some old workaround that's no longer relevant... Another issue with tokens is that they're something that could leak if not secured properly - any solution where secrets do not have to be explicitly passed around is an advantage in my book.
Right now @chriseth wants release binaries to be uploaded manually so it's easy to ensure that they're the same. Just upload the same files to both places :) This would be just a sanity check to ensure you did just that. And if we automate publishing releases, I think it would just be a matter of having the same CI job upload to both places.
Yeah, I meant encrypted tokens accessible by github actions only - storing tokens in a private repo would be weird, that's for sure :-). And (primarily @chriseth): that CI job could just create a release draft and a PR in solc-bin (the release branch of solc-bin should probably even be protected and there should be a separate, non-protected nightly branch) - and it could be failure-resistant (try to continue past errors, e.g. for one platform only or in tests, and just report the failure in the draft/PR instead of aborting). That way I'd argue this will be more reliable than doing things manually (the fact that we're considering a CI job to check whether we messed things up manually kind of confirms that...)
Here's a summary of what we discussed today. I'm not sure I got it all unfortunately. Also, if something is wrong, please correct me.

tl;dr: In this task we'll continue with S3, manual releases and getting binaries from CircleCI. We may automate releases later, as a separate task.
Our file list is 0.3 MB by the way. The one of go-ethereum is 8 MB.
Oh, sorry. I must have misheard that.
Just for the record, I get:
That's a rather crude benchmark of course... but still it doesn't seem to make much of a difference. (Not that that's too surprising.)

EDIT: Fetching directly using an in-browser IPFS implementation instead of a gateway is rather disappointing, though - I didn't get below 2 minutes (for downloading once). Pre-compressing could probably reduce this to less than one fifth of that, but that's still 10 times slower than the gateway, unfortunately...
I have created a Github action for pulling in nightlies into solc-bin.
@chriseth Looks like having a mirroring script on AWS CodeBuild triggered by pushes to solc-bin

The other choices are AWS Lambda and GH actions:
The discussion here is pretty long so I'm going to make it even longer by summarizing what's left to do here :)

**My next tasks**

**Plugging it in**

**Stuff for later**

I think we should just create separate issues for these or this one will go on for ages:

**Stuff we decided not to do for now (or not yet)**
Any updates on this? Pretty anxious about the huge devex improvements this will unlock.
@fzeoli There were a few things in the last weeks that taken together pulled me away from this task for quite a while (the internal hackathon, my vacation, then post-0.7.0 bug squashing), but that's already over and I'm getting back to it when I'm done with the bug I'm currently working on. Is devex blocked by anything specific?

The main part of this task (i.e. changing the way binaries are hosted and the release process) is done already. I've been posting updates about that part in the more specific #9226 - older binaries for old platforms are already available in solc-bin.

Anything below "Stuff for later" in the post above won't be covered by this issue (I'll create separate ones because this one is already too big). Of these things, IPFS is just a nice-to-have unless it turns out that there's actually some community interest in it, and a completely static build for Windows and support for MSVC 2019 are also in the works by @christianparpart (#9594, #9476).
The most important thing is the pre-0.6.9 mac builds. The fact that they're missing is why tools don't use the native compiler, so once that's done we can push pretty much everyone to it.
@fzeoli See #9226 (comment). I finished preparing the MacOS builds and they're currently going through review.
I'm closing this since the core issue is solved (the release process has been changed and we have new hosting) and all the smaller related problems are either solved as well or have their own issues (see #9258 (comment) above for details).
We should overhaul our binary release process.
There should be a single website that contains all binaries, including the soljson nightly binaries.
If possible, this website should be built via jekyll on github-pages. Since github-pages has a size limit, the soljson nightly binaries (or at least the older nightly binaries) should be stored via ipfs and the jekyll website should contain http redirects to an ipfs gateway.
The website should be backwards-compatible with solc-bin.ethereum.org/ (built from https://github.com/ethereum/solc-bin).
The website should also be fully exposed via ipfs and on a subdomain of solidity.eth via ens.
It should contain all the binaries we usually put on a github release page in addition to the files in solc-bin.
The nightly binaries should be pushed directly from circleci (preferred) or travis or github actions. If possible, we should avoid storing (even an encrypted) access key on circleci or travis, so maybe github actions is the most viable solution there.
Maybe we could already prepare for the macos binaries being available for both intel and arm - so maybe we could find a generic scheme like distribution-buildType-processorArchitecture.
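Purely as an illustration of that scheme (none of these exact names are decided in this issue), it could yield directory names such as:

```
linux-release-x86_64
macosx-release-x86_64
macosx-release-arm64
windows-release-x86_64
```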
We should also re-add the old nightly builds that were removed here: ethereum/solc-bin@e134bba
Finally, we should try to add binaries for older versions (especially macos binaries).
closes #9226