Work In Progress: Quick prototype of a cargo_crate repository rule #100
Conversation
This change adds 'triplet' information to the toolchain, which is then used to specify the target platform to Rust.
This rule would allow cargo raze to just output a bzl file in remote mode and then rely on the rule to generate the correct BUILD files. An example of a rule that cargo raze could generate looks like:

```python
cargo_crate(
    name = "ansi_term",   # The crate name
    version = "0.11.0",   # The crate version
    # List of resolved dependency versions.
    # If a dependency is needed by the library and absent from this list,
    # the cargo_crate rule will emit a warning and not add the dependency to
    # the build rule, making it potentially unable to compile.
    locked_deps = {"winapi": "0.3.5"},
    # Other optional attributes:
    # - flags: a list of rustc flags to add to the rust_{library,binary} rules.
    # - data: the data attribute to propagate to the rust_{library,binary} rules.
    # - sha256: SHA-256 checksum of the crate, recommended since it improves Bazel caching.
)
```

To-do:

- [ ] Stabilize API
- [ ] Document
- [ ] Add automated tests

Fixes #2
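For context, a hypothetical sketch of how the bzl file emitted by cargo raze in remote mode might be consumed from a WORKSPACE file (the file and macro names are invented for illustration, not prescribed by this PR):

```python
# WORKSPACE (sketch)
load("//third_party/cargo:crates.bzl", "fetch_remote_crates")

# crates.bzl would contain one cargo_crate() declaration per resolved crate,
# wrapped in a single macro so the WORKSPACE stays small.
fetch_remote_crates()
```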
crate_repository = repository_rule(
    _impl,
    attrs = {
Nit: Consider including the toolchain or target triple as well to be asserted against. `flags` and `locked_deps` are target-specific.
Do you mean to have `flags` & `locked_deps` be selectable by platform? I don't really see where the target triple could be injected here; that's why I set the target triple in the toolchain rather than trying to inject the `--target` flag in the BUILD file.
Note that AFAIK there is no way to get the toolchain at this stage since it is not resolved yet when we are doing repository loading.
I initially assumed that these would be target specific repository rules since the attrs are target-specific. Presumably multiple resolved trees of crates could avoid bringing in the same crate twice using the native.existing_rule mechanism you added in another PR. It isn't clear to me what happens with the BUILD files though...
It's possible that the target switching could instead be baked into the attrs somehow, though. That would provide a more intuitive interface at the cost of some additional complexity.
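As a sketch of that dedup, assuming the generated .bzl wraps the rule in a small helper (all names here are hypothetical):

```python
def _crate(name, **kwargs):
    # Skip crates that another resolved tree has already declared.
    if native.existing_rule(name) == None:
        cargo_crate(name = name, **kwargs)

def fetch_remote_crates():
    _crate(name = "ansi_term", version = "0.11.0", locked_deps = {"winapi": "0.3.5"})
    _crate(name = "winapi", version = "0.3.5", locked_deps = {})
```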
if result.stderr:
    print(result.stderr)

crate_repository = repository_rule(
I'm not totally clear on this -- Does this need to retain some mechanism to override a dep (e.g. instead of using some generated build target dep, accept a target directly)?
No, that's something I need to add to this PR.
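For illustration, one hypothetical shape such an override could take, where a resolved dependency is replaced by an explicit Bazel target (this is not part of the PR as it stands; label and attribute semantics are invented):

```python
cargo_crate(
    name = "ansi_term",
    version = "0.11.0",
    locked_deps = {
        "winapi": "0.3.5",               # resolved from the registry as usual
        "libc": "@vendored_libc//:libc", # hypothetical: accept a label directly
    },
)
```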
if __name__ == "__main__":
    json = json.load(StringIO.StringIO(subprocess.check_output([
        "cargo",
        "metadata",
This invocation strikes me as "fishy". You're largely using locally generated crate metadata, and expecting it to match what cargo-raze used to generate the exact set of dependencies.
In isolation, you cannot trust the metadata emitted by `cargo metadata`; you need to run this for the entire workspace and then select the metadata for a single crate.
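For reference, a minimal sketch of the workspace-level invocation being suggested (the path is a placeholder; the JSON layout follows `cargo metadata`'s documented output):

```python
import json
import subprocess

# Run `cargo metadata` once at the workspace root so the output reflects the
# whole workspace's resolution, then pick out the node for a single crate.
output = subprocess.check_output(
    ["cargo", "metadata", "--format-version", "1"],
    cwd="/path/to/workspace",  # placeholder path
)
metadata = json.loads(output)

# "resolve" describes the unified dependency graph (locked versions, features,
# target-specific deps), which per-crate manifests cannot provide in isolation.
nodes = {node["id"]: node for node in metadata["resolve"]["nodes"]}
```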
Sorry, I fail to follow here. Each crate declares its direct deps and we rely on an external source to provide the locked versions, so why would this invocation be wrong? I mean it merely exposes the information from the Cargo.toml file as JSON; it does not do any resolution.
The locked crate version isn't the only piece of non-local state that needs to be brought in from external sources. You also need to bring in the set of features wanted by all upstream entities.
Consider a scenario where you have the following dependency chain:
A -(wants)-> B -> C
They use the default feature set. Now, consider a new crate D such that
A -> D -> C
... where D adds a feature directive to its dependency on C, with ["with_serde"] as a wanted C feature.
This is what I called a "non-local" effect -- this new feature flag must also be provided to the C repository rule declaration... with the addition of the new locked serde deps.
As a final wrench to throw in: this can happen for some targets and not others, if C is a `[target.*.dependencies]` target-specific dependency of D.
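To make the non-local effect concrete, here is a tiny illustrative sketch (not code from this PR; crate and feature names follow the example above) of the unification Cargo performs across all of C's dependents:

```python
# Feature requests on C come from every reverse dependency, so C's effective
# feature set cannot be derived from C's own Cargo.toml in isolation.
requests_on_c = {
    "B": [],              # B uses only C's default features
    "D": ["with_serde"],  # D additionally asks for C's "with_serde" feature
}

# Cargo unifies all requested features into one set for the whole build graph.
unified = sorted({f for feats in requests_on_c.values() for f in feats})
print(unified)  # ['with_serde'], on top of whatever "default" expands to
```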
Do you want to have a higher bandwidth discussion about this (GVC/Something else)? I suspect you're stepping into a landmine here by assuming that metadata will be present that won't be, or that you can run these cargo commands independently on a per crate basis.
Last two I think.
It's taking me a while to wrap my head around this and the interaction that it is supposed to have with cargo-raze. This is complicated stuff (because Cargo is complicated)!
I think it's largely workable, but we need to be very confident about what data is available here versus what needs to be provided by the caller (because it cannot be known in isolation).
data = "\n data = r\"\"\"%s\"\"\"," % ctxt["data"]
for d in json["dependencies"]:
    if d["name"] not in ctxt["resolved_deps"]:
        sys.stderr.write("WARNING: Cannot resolve dependency on crate '%s'\n" % d["name"])
This isn't necessarily erroneous.
If cargo-raze determines that the given combination of `features` and `target` indicates that a dependency should be omitted, then it will not be present in the set of resolved deps.
EDIT s/a feature should/a dependency should/
Right, maybe we should add an "n/a" value that explicitly says we should skip that dep. That way, people who write the target themselves would still get the warning if they omit a dep?
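A hypothetical sketch of what that convention could look like in the rule invocation (the "n/a" marker is only a proposal here, not implemented in this PR):

```python
cargo_crate(
    name = "ansi_term",
    version = "0.11.0",
    locked_deps = {
        "winapi": "0.3.5",  # resolved as usual
        "serde": "n/a",     # proposed: explicitly skip this dep instead of warning
    },
)
```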
"other": "%s%s" % (out_dir_tar, data), | ||
"deps": resolve_deps([d for d in json["dependencies"] if not d["kind"]], ctxt), | ||
"build_deps": resolve_deps([d for d in json["dependencies"] if d["kind"] == "build"], ctxt), | ||
"features": json["features"].keys(), |
Experimentally, I found that `crate["features"]` yielded from `cargo metadata` as described below is never going to have the required features, because they cannot be identified in isolation. I think they need to be supplied by the caller (of the repository_rule) on a per-target basis, with the wider crate context in mind.
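For illustration, one hypothetical way the caller could supply features per target, with the wider crate context already taken into account (the attribute shape, crate name, and feature names are all invented):

```python
cargo_crate(
    name = "some_crate",
    version = "1.0.0",
    # Hypothetical attribute: features computed by the caller (e.g. cargo-raze)
    # from the whole workspace, keyed by target triple.
    features = {
        "x86_64-unknown-linux-gnu": ["default", "extra_feature"],
        "x86_64-pc-windows-msvc": ["default"],
    },
)
```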
This behavior was copied from cargo raze. We can definitely provide a way to override it from the attributes but it would diverge from the way cargo raze works.
There is a difference between what `cargo raze` does and what you're doing here. Cargo raze runs the metadata analysis once for the whole tree, with all of the workspace context present. If I understand what you're doing correctly, you're running a similar process once per crate brought in.
@acmcarther: getting on VC might be a bit difficult as I am in Munich, but feel free to ping me on hangout (my ldap is dmarting).
Discussion in rust-lang/cargo#5122 is topical regarding some of the nuances discussed here.
So, some more thoughts after yesterday's VC:
My view is that there should be minimal generated stuff to check in. At the moment with cargo-raze, there is a bunch of BUILD files whose content is, IIUC, entirely determined by the crate's Cargo.toml file. The only part that depends on reverse dependencies is which features to build (which is exactly what feature_flag covers), and for those we could generate multiple targets from the cargo_crate rule. I also find it nicer on the UX side to have most of the machinery baked into Bazel, so the file cargo raze generates is still editable by hand afterward.
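A rough sketch of what that could mean for the BUILD file generated by cargo_crate, with one target per feature set (the target names, features, and load path are illustrative, not what this PR currently emits):

```python
load("@io_bazel_rules_rust//rust:rust.bzl", "rust_library")

# Default feature set.
rust_library(
    name = "ansi_term",
    srcs = glob(["src/**/*.rs"]),
    crate_features = [],
)

# Same crate built with an additional feature requested by some dependent.
rust_library(
    name = "ansi_term_with_serde",
    srcs = glob(["src/**/*.rs"]),
    crate_features = ["with_serde"],
)
```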
Re: feature_flags: That said, the ability to transmit information from target to dependencies (opposite the normal target to dependers direction) is more generally provided by configuration, of which feature flags are a limited subset. AFAIK, Skylark configuration is on its way, though I have no idea offhand what the plan is for that - @juliexxia might know better. Feature flags being more exposed to Skylark might happen along a similar timeline, as configuration is even more powerful than feature flags.
I am going to close this one until feature flags are there; I don't think we gain much vs. the current cargo raze until then.
@damienmg any updates here? |