crate_universe bootstrapping #650
I'm also working on something, but it's using cross. My thought is that users would be able to run a script to update the binaries, but CI would rebuild them and assert that the outputs are exactly the same, ensuring no funny business was checked in. I have a good feeling this will work out.
You can pass
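The rebuild-and-compare idea above can be sketched as a small verification script. The directory layout and the function name `verify_binaries` are assumptions for illustration, not the actual rules_rust setup:

```shell
#!/usr/bin/env bash
# Sketch of the CI check described above: rebuild the resolver binaries and
# fail if any checksum differs from the checked-in copy. Directory names
# and binary names are hypothetical.

verify_binaries() {
  # $1: directory of checked-in binaries, $2: directory of freshly built ones
  local checked_in="$1" rebuilt_dir="$2" status=0
  local committed name rebuilt
  for committed in "$checked_in"/*; do
    name="$(basename "$committed")"
    rebuilt="$rebuilt_dir/$name"
    if [ ! -f "$rebuilt" ]; then
      echo "missing rebuilt binary: $name" >&2
      status=1
      continue
    fi
    if [ "$(sha256sum "$committed" | cut -d' ' -f1)" != \
         "$(sha256sum "$rebuilt" | cut -d' ' -f1)" ]; then
      echo "checksum mismatch: $name" >&2
      status=1
    fi
  done
  return "$status"
}
```

CI would run the fresh build, call `verify_binaries`, and fail the job on a nonzero exit, so any checked-in binary that can't be reproduced from source is rejected.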
I think it'd be great to be able to build this in Bazel; if you get something working with cargo-raze I'd love to see it 😄 Though I think it'll be a bit challenging with its dependencies on things like openssl, just a heads up 😅
This is what it takes :) https://github.com/grafica/dockerfiles/blob/main/ubuntu_20_04_cargo_raze/Dockerfile This ran on BuildBuddy's RBE. (edit: you also need a close-to-main-branch copy of …)
Wait, you built …?
No, I built and ran it. Here's the build (I think with a 30-day expiry): https://app.buildbuddy.io/invocation/a00fa3a8-9067-4cd2-a475-bfcc36bbffee
I haven't tried things with #635 fixed, so it's possible that I won't need gcc/g++ in the Docker image, but I am not sure.
Super excited to see this conversation happening!! A few general things to note:
I've generally been imagining that we'd have CI build the binaries and publish them to Google Cloud Storage, updating a file which lists URLs and sha256s.

I think that having a script which builds the binaries, and either verifies that they're up to date or saves them somewhere, is a great direction to go. Whether we have contributors run that locally and have CI verify they're up to date, or have CI somehow do the update automatically, we can work out once we have the script. I think it's also OK if local development involves a slightly different flow than CI.

In terms of the general approach, I've in the past gotten … macOS is always the special corner, but as we have CI running macOS, worst case just building binaries there shouldn't be the end of the world...
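The "file which lists URLs and sha256s" could be generated by something like the sketch below. The bucket URL, manifest format, and `write_manifest` name are made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch of generating the URL + sha256 manifest mentioned above. The bucket
# URL is a hypothetical placeholder; CI would upload each binary there and
# commit the manifest so a repository rule can download and verify artifacts.

BUCKET_URL="${BUCKET_URL:-https://storage.googleapis.com/example-bucket}"

write_manifest() {
  # $1: directory of built binaries, $2: manifest file to write
  local bin_dir="$1" out="$2"
  local bin name sum
  : > "$out"
  for bin in "$bin_dir"/*; do
    name="$(basename "$bin")"
    sum="$(sha256sum "$bin" | cut -d' ' -f1)"
    printf '%s/%s %s\n' "$BUCKET_URL" "$name" "$sum" >> "$out"
  done
}
```

A repository rule could then read each `<url> <sha256>` line and pass the sha256 to its download step, so a tampered or stale upload fails loudly at fetch time.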
I mean, it doesn't get used, but it's still compiled 😅. I opened google/cargo-raze#305 to hopefully just delete it, because I couldn't find a super clean way to section off the functionality with a feature. But I've since just made a branch on my fork of …
Worth a discussion, but I'd like to first get to a point where users can use the rules and have them just work. My focus is on bootstrapping, and I don't think vendoring works here: we need a compiled binary for the repository rule, so something has to have done that work before Bazel starts up. Just having the sources there doesn't really work, IMO, since I don't think it's acceptable to have the repository rule try to build the binary.

I might have something working by tomorrow, but it's going to use cargo for bootstrapping. I think this is acceptable given the experimental state we're in, and I also feel we should have no issues using cargo for bootstrapping. The next step would be to have the resolver build another resolver, which is similar to what I did for …
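The bootstrapping flow described here (build the resolver with cargo before Bazel ever starts, one binary per platform) might be sketched like this. The target list, output naming, and `bootstrap` function are assumptions; the build step is injected as a command so the sketch stays agnostic about cargo vs. cross:

```shell
#!/usr/bin/env bash
# Sketch of pre-Bazel bootstrapping: produce one resolver binary per target
# so the repository rule only ever downloads a prebuilt artifact. A real run
# would pass a wrapper around something like `cargo build --release --target`.

TARGETS=(x86_64-unknown-linux-gnu x86_64-apple-darwin)  # illustrative list

bootstrap() {
  # $1: command invoked as "<cmd> <target-triple> <output-path>" per target
  # $2: directory collecting the per-target binaries
  local build_cmd="$1" out_dir="$2" target
  mkdir -p "$out_dir"
  for target in "${TARGETS[@]}"; do
    "$build_cmd" "$target" "$out_dir/resolver-$target" || return 1
  done
}
```

Injecting the build command also makes the script testable without a Rust toolchain installed, which matters if this runs in minimal CI containers.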
If that whole dependency chain is in there for a case that wouldn't really apply in Bazel, perhaps we could get the Cargo folks to add a feature for us. Agree that eliminating that stuff would make this problem much easier.
/cc @acmcarther - How would you feel about deprecating, deleting, or making optional, support for binary deps in cargo-raze? I put together a couple of potential branches:

- Just conditionalizing out the …
- Conditionalizing out the whole binary deps feature: https://github.com/illicitonion/cargo-raze/tree/binary-deps - allows us to also conditionalize out a dep on …
So, while @acmcarther is the code owner, I'm the author of that feature and an advocate of removing it. I already have a branch with that feature totally isolated behind a feature. I only did this today and haven't had time to split it out into anything I'd expect to get merged. This is a non-blocker for the time being, since I can continue to work off my branch. Though, to directly address the question, I don't think we can totally remove the feature until we have a solution for the wasm-bindgen rules, which rely on the …
I have a working solution for bootstrapping. I still need to run some tests for the Buildkite CI builds, but I'm able to build binaries for various platforms in a way that makes the resolver available to the rule. I'll try to iron out a few more things, then open a PR soon.
I was able to open #663, but am working with @hlopko and @illicitonion to improve the solution I came up with.
I'm not sure if this is resolved, but we do have some sort of "bootstrapping" now. This method, though, is subject to change, since the rule is still extremely experimental.
Closing this, since the implementation for … landed. If there are issues with the new implementation, let's track them in new tickets.
I read #598, and I have some experience with this. What I currently do is run a shell script called "update_bazel.sh" that looks like this for `cargo-raze`:

```
#!/usr/bin/env bash
…
```

This command seems to require network access no matter what: …

The following command can be made to run in offline mode (which is not always offline) in the sandbox, with a private registry created after the step above. You'll get a bunch of `.crate` files in the directory of your choice. To get this command to run as a genrule, you'll need to define `RUSTC` and `LD_RUN_PATH` env variables, the values for which you can track down this way. `rustc` needs some neighboring `.so` files to run this command, so I did `LD_RUN_PATH=$$(dirname $(location :rustc))/../lib`.

However, this still won't work, because `cargo metadata` assumes it can write files wherever it wants. I think it could probably be made to work if you could predict its output in a `.bzl` file, but I just gave up after I saw #598.

If run on the host machine, `cargo metadata` inserts host-machine absolute paths. So I did this awful thing: …

Then I use Bazel to run `cargo-raze`. I later reversed the `sed` to run cbindgen as a `genrule`: https://github.com/grafica/crustls/blob/15152596ec726f3dc9389c5668672f544b6b783e/BUILD.bazel#L98

After all this, I did get a build that works with RBE, and running `cargo metadata` remotely almost worked.

cc: @illicitonion @gibfahn @hlopko @UebelAndre
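The host-absolute-path workaround and its `sed` reversal might look roughly like the sketch below. The `__WORKSPACE__` placeholder and function names are invented for illustration, not what the linked repo actually uses:

```shell
#!/usr/bin/env bash
# Sketch of the sed trick described above: `cargo metadata` output embeds
# host-absolute paths, so swap the host prefix for a stable placeholder
# before checking the file in, and reverse the substitution when needed.

strip_host_paths() {
  # $1: input metadata file, $2: output file, $3: host workspace prefix
  sed "s|$3|__WORKSPACE__|g" "$1" > "$2"
}

restore_host_paths() {
  # $1: input metadata file, $2: output file, $3: host workspace prefix
  sed "s|__WORKSPACE__|$3|g" "$1" > "$2"
}
```

The round trip is lossy only if the placeholder itself appears in the metadata, which is part of why this kind of path rewriting feels fragile compared to `cargo metadata` emitting relative paths in the first place.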