Add build profile. #6577
Conversation
I have some concerns about this, so I wanted to open this for discussion.

**dev/release switching**

With this feature, if a dev build is made, and then a release build is made, all of the build dependencies are built again, even though the artifacts are identical.
**Extra artifacts**

I'm very worried about this causing shared dependencies to now be built multiple times. Some options:
I've been doing some analysis on crates.io to try to understand the impact. 3187 of 21098 crates have at least one overlapping dependency (gist). crossgen has a whopping 161 crates in common in the worst case. I did some tests "without" the build profile and "with" the build profile, at either 12 or 2 concurrency. All times are in seconds. This is just a rough idea: I ran each attempt multiple times, but the results are particular to my hardware, running on macOS.
As you can see, sometimes it is a little faster, but usually it is slower (sometimes much slower). Here are a few more pieces of data that seemed interesting (of 21098 crates):
**Default settings**

The current defaults may not be the best. Turning off debug improves speed and reduces disk space, but then you lose good backtraces. It's also questionable whether it matters that debug-assertions or overflow-checks are off. Setting opt-level=1 caused a noticeable increase in compile time on the few projects I tried, so I left it at 0.
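For concreteness, here is a rough sketch of the defaults under discussion, written out as a Cargo profile section. The `[profile.build]` table name reflects this PR's unstable proposal, not stable syntax, and the values simply mirror the discussion above:

```toml
# Sketch of the proposed build-profile defaults (unstable; table name assumed from this PR).
[profile.build]
opt-level = 0            # opt-level = 1 noticeably slowed compiles in testing
debug = false            # faster builds and less disk space, but worse backtraces
debug-assertions = false
overflow-checks = false
```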
Thanks so much for doing the analysis here!
To make sure I understand this, the PR as proposed changes the default build settings for build scripts/procedural macros in both debug/release modes. This means that entire dependency trees rooted in procedural macros and build scripts are now compiled differently, and any sharing that previously happened no longer occurs, accounting for longer build times.
I'm curious if you know whether there are some particularly bad "root offenders"? How do crates like `crossgen` have 161 shared crates (or even `imag` with 79)? That may be good for evaluating how to move forward on this.
FWIW, absolute compile times aren't always the most interesting metric in my opinion. Incremental builds almost always occur because there are previous artifacts, and/or build times were already bad enough to motivate tools like `sccache`, which compile all these rlibs super quickly. In that sense I'm personally OK eating a regression here to solve what is, at this point, a huge litany of bugs this feature could fix.
```toml
debug-assertions = false
codegen-units = 16
panic = 'unwind'
incremental = false
```
Shouldn't this be `true`? (Was this copy/pasted from somewhere else that needs an update?)
Right now it is `false`. The default build profile is defined here, based on the default here. This is similar to `release` mode.

I don't have a strong opinion about any of the defaults. I think the theory on this one is that build scripts are rarely modified, but I can see how it would be annoying when you are actively working on one.

Maybe you are thinking of #6564, which hasn't merged yet? That would change the default.
Oh nah, I just wanted to confirm. I think that we should have incremental turned on by default, as the overhead only applies to `path` dependencies anyway, in this case typically just the build script itself.

I could see this going either way, though. Build scripts are typically quite small and fast to compile, which means that incremental isn't a hit for them, nor does it really matter too much. I'd personally err on the side of enabling incremental, though.
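If the default stays `false`, opting back in might look like the following sketch, assuming this PR's unstable `[profile.build]` table accepts the standard `incremental` key:

```toml
# Hypothetical opt-in for incremental build-script compilation
# (assumes the unstable `[profile.build]` table from this PR).
[profile.build]
incremental = true  # rebuild build scripts incrementally while actively editing them
```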
```toml
debug = false
rpath = false
lto = false
debug-assertions = false
```
I'm a little wary about this being `false`; it seems like this may want to be `true` by default to help weed out mistakes more quickly.
Yea, I wouldn't mind making it `true`. Maybe the same for `overflow-checks`? I don't have a sense of how much slower that typically makes things, but I suspect it would not be perceivable by most scripts/macros. I think `debug` is the bigger question of how it should be defaulted.
Yeah, I think this and overflow-checks should probably default to on. Build scripts are typically rarely a bottleneck, and if they are, both of these options can be disabled pretty easily (either from crates.io by changing APIs, or locally by changing profiles).

For `debug` I wonder if we could perhaps try setting a default of 1? That means we only generate line tables for backtraces, but no local variable info, as no one's really using gdb on these.
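As a sketch, the defaults suggested in this comment would look something like this (again using this PR's unstable `[profile.build]` table; `debug = 1` is the limited level that emits line tables but no local-variable info):

```toml
# Sketch of the suggested defaults: checks on, minimal debuginfo.
[profile.build]
debug = 1                # line tables for backtraces, no local-variable info
debug-assertions = true  # build scripts are rarely a bottleneck; catch mistakes early
overflow-checks = true
```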
It sounds like you understand it well. I'll look into offenders soon.
Here is some analysis of root offenders: https://gist.github.com/ehuss/0c9fb074d4b8720316b8ede243006f78. I tried to weight them by how often they are used and how many shared dependencies they tend to have. Maybe not the best weighting strategy. The top offender is
Here is a detailed look at cargo-crev: https://gist.github.com/ehuss/a15704fc8c9d9a345a0d71739e3db32e. It's interesting because there isn't one bad offender, but a bunch of them (bindgen, clear_on_drop, failure, phf_codegen, cc, rand).
Ok, thanks for that analysis! I agree it's pretty hard to draw a trend from that. My main conclusion is largely just that the ecosystem of build dependencies is basically the same as that of normal dependencies: they themselves are built on a number of crates in the ecosystem, and there are some big ones and some small ones.

When thinking about the build as a whole, as mentioned before, this change is basically irrelevant for incremental builds. It's also largely irrelevant for builds using caching solutions like `sccache`.

One aspect of those builds I've often noticed is that for larger projects all hardware parallelism is eaten up during the first half-or-so of the build, but the second half is often more serial as dependencies become chained and all the quick crates are out of the way. The relatively small percentage increase in build times you measured above may be explained by the "time to a serial build" moving back, with the unused parallelism in that portion of the build being used to finish up build dependencies. Of course, those same dependencies can also push back the build, because the serial chain of crates could depend on everything being finished.

Overall I still personally feel pretty good about this change. Local projects can always reconfigure back to today's configuration if cold builds matter a lot, and otherwise this should provide a general improvement for working with build scripts and procedural macros.
Spot on. Do you have any thoughts about how to organize the artifact directory? To address something like #1774, it would need to change so that dev/release share the same build artifacts.

My preference would be to remove the debug/release directory separation. I suspect there might be opposition to that, though it could maybe be done in a backwards-compatible fashion with links. From a functional standpoint of using the

If that is untenable, a dedicated

Or it could just stay as-is, which allows for sharing but causes rebuilds when switching dev/release. Or maybe some other option, like build artifacts always being in the
I definitely think we should solve the rebuilding problem, but I think we could do that either by placing output in a new directory or by hashing more into the filename. I'm actually somewhat surprised that their filenames are conflicting today; do you know what's not being hashed, to cause the filenames to be different and avoid colliding into the same filename?

We definitely can't easily remove the debug/release folders, as they're so widely ingrained today. What I think we could do, however, is move towards a world where those folders only contain final output artifacts rather than intermediate ones. Sort of like how we have
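A sketch of the layout being floated here, as I read it (illustrative only; nothing like this was implemented in this PR, and the directory names are assumptions):

```
target/
├── debug/    # final output artifacts only, hard-linked from the shared area
├── release/  # final output artifacts only, hard-linked from the shared area
└── deps/     # shared intermediate artifacts (the "unified deps directory" below)
```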
I'm a little confused. I was saying that they don't conflict, so there should be no reason they need to be in separate directories.
Yea, that's what I meant by "backwards compatible fashion with links": it would keep the debug/release directories and just link final artifacts there for any tools that expect them. I'll take a look at implementing that soon and see if there are any major drawbacks. I expect a lot of little changes throughout the code, but overall it should be straightforward. I'd like to do that in a separate PR if that's OK?
Oh sorry, I was misunderstanding the rebuild point. It's not that we're thrashing a cache, but that the same artifacts are cached in two locations. That doesn't happen today because the settings are basically always different, but after this change the build profile for dev/release is the same, so the artifacts are actually the same.

In the long term I think we're going to move to a global build cache for Cargo, so I think it's fine to go ahead and experiment with this ahead of time. I'm thinking something along the lines of "everything stays exactly the same as it is today", but all files are just hard links to a build cache elsewhere. The build cache is just a dump of everything Cargo ever does, completely unorganized.
I implemented a unified deps directory, but ran into some problems dealing with backwards compatibility. I've been trying a few different approaches, but they all have drawbacks.
Any ideas?
If we break very old Windows I think that's fine. I don't actually know of any systems that don't support hard links on the same filesystem, but have we hit some in the wild that we wanted to handle? I think breaking rustbuild is fine (especially if we see the breakage coming!).

Overall I think we definitely need to preserve backcompat to ensure that the current patterns for finding a test binary keep working somewhat (although we have broken this before...). Otherwise it should be fine to ignore older Windows, and I think it's fine to assume hard links for perf (although I may be forgetting something critical there).

If we only hard link/copy the final binaries, that could perhaps mitigate the impact on systems without hard links, and overall reduce the amount of traffic on the filesystem?
It is fairly recent. Creating symlinks historically required admin permissions until the Windows 10 Creators Update (released mid-2017). The reason you can run on older systems is because (a) I don't think we ever try to link directories on Windows; I can only think of macOS with dSYM.

I believe some network filesystems do not support it.

Sometime soonish, unless you have any other feedback, I'll try out the hybrid approach and see how it goes.
Oh sorry, right, yeah symlinks won't work, but I think that directory junctions are supported much further back on Windows, right? (I forget if that's what

Hm, network filesystems are a bummer... I think the hybrid approach would be best there long-term, though!
☔ The latest upstream changes (presumably #6687) made this pull request unmergeable. Please resolve the merge conflicts.
Include proc-macros in `build-override`.

This adds proc-macros (and their dependencies) to the `build-override` profile setting. The motivation is that these are all "build time" dependencies, and as such should probably behave the same. See the discussion on the [tracking issue](rust-lang/rust#48683 (comment)).

My intent is that this paves the way for stabilizing without necessarily waiting for #6577. The only change here is the line in `with_for_host`; the rest is just renaming for clarity. This also includes some of the testsuite changes from #6577 to make it easier to check for compiler flags.
☔ The latest upstream changes (presumably #6811) made this pull request unmergeable. Please resolve the merge conflicts.
Stabilize profile-overrides.

This stabilizes the profile-overrides feature. This was proposed in [RFC 2282](rust-lang/rfcs#2282) and implemented in #5384. Tracking issue is rust-lang/rust#48683. This is intended to land in 1.41, which will reach the stable channel on Jan 30th.

This includes a new documentation chapter on profiles. See the ["Overrides" section](https://github.com/rust-lang/cargo/blob/9c993a92ce33f219aaaed963bef51fc0f6a7677a/src/doc/src/reference/profiles.md#overrides) in the `profiles.md` file for details on what is being stabilized. Note: the `config-profile` and `named-profiles` features are still unstable.

Closes #6214

**Concerns**

- There is some risk that `build-override` may be confusing with the [proposed future dedicated `build` profile](#6577). There is some more discussion about the differences at rust-lang/rust#48683 (comment). I don't expect it to be a significant drawback. If we proceed later with a dedicated `build` profile, I think we can handle explaining the differences in the documentation. (The linked PR is designed to work with profile-overrides.)
- I don't anticipate any unexpected interactions with `config-profiles` or `named-profiles`.
- Some of the syntax like `"*"` or `build-override` may not be immediately obvious in meaning without reading the documentation. Nobody suggested any alternatives, though.
- Configuring overrides for multiple packages is currently a pain, as you have to repeat the settings separately for each package. I propose that we can extend the syntax in the future to allow a comma-separated list of package names to alleviate this concern if it is deemed worthwhile.
- The user may not know which packages to override, particularly for some dependencies that are split across multiple crates. I think, since this is an advanced feature, the user will likely be comfortable with using things like `cargo tree` to understand what needs to be overridden. There is [some discussion](rust-lang/rust#48683 (comment)) in the tracking issue about automatically including a package's dependencies, but this is somewhat complex.
- There is some possibly confusing interaction with the test/bench profile. Because dependencies are always built with the dev/release profiles, overriding test/bench usually does not have an effect (unless specifying a workspace member that is being tested/benched). Overriding test/bench was previously prohibited, but was relaxed when named profiles were added.
- We may want to allow overriding the `panic`, `lto`, and `rpath` settings in the future. I can imagine a case where someone has a large workspace and wants to override those settings for just one package in the workspace. They are currently not allowed because it doesn't make sense to change those settings for rlibs, and `panic` usually needs to be in sync for the whole profile.
- There are some strange interactions with `dylib` crates detailed in rust-lang/rust#64319. A fix was attempted, but later reverted. Since `dylib` crates are rare (this mostly applied to `libstd`), and a workaround was implemented for `build-std` (it no longer builds a dylib), I'm not too worried about this.
- The interaction with `share-generics` can be quite confusing (see rust-lang/rust#63484). I added a section in the docs that tries to address this concern. It's also possible future versions of `rustc` may handle this better.
- The new documentation duplicates some of the information in the rustc book. I think it's fine, as there are subtle differences, and it avoids needing to flip back and forth between the two books to learn what the settings do.
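For reference, a minimal example of the override syntax being stabilized here. The `package` and `build-override` tables are the stabilized forms described in the profiles chapter linked above; the specific package names and values are illustrative:

```toml
[profile.dev.package."*"]     # all dependencies (workspace members are not matched)
opt-level = 2

[profile.dev.package.image]   # a single dependency by name ("image" is illustrative)
opt-level = 3

[profile.dev.build-override]  # build scripts, proc macros, and their dependencies
debug = false
```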
I'm going to close this PR for now. I'm still interested in pursuing this, but it will take more work than I currently have time for.
This adds a `build` profile as discussed at rust-lang/rust#48683. See `unstable.md` for a brief description.