Prefetch some queries used by the metadata encoder #67888
Conversation
I did some more tuning and brought the time down further.

@michaelwoerister Do you know why the incremental test failed here, given that this PR doesn't change dependencies?
I don't. The change doesn't look like it should break that test.

Looks like we don't check queries which did not execute, and this caused some …
I am feeling uncertain about this PR. I would appreciate getting some wider feedback (not sure from whom, maybe @michaelwoerister)... while it does give us some good wins, it feels somewhat fragile (i.e., it depends quite closely on how metadata encoding works).

I would rather see us explore making metadata encoding itself more parallel -- IIRC, the basic idea with encoding is a bunch of arrays representing trait impls, MIR, etc. -- maybe we can instead make constructing those arrays parallel?
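To illustrate the alternative being suggested: the independent metadata sections could each be built on their own thread and then concatenated in a fixed order. This is a minimal sketch using only `std::thread::scope`; the section-builder names and their byte contents are purely illustrative stand-ins, not rustc's real encoder API.

```rust
use std::thread;

// Illustrative stand-ins for the per-section serializers (trait impls,
// MIR bodies, exported symbols, ...). In rustc each of these would be
// an expensive walk over query results.
fn encode_trait_impls() -> Vec<u8> { vec![1, 2, 3] }
fn encode_mir() -> Vec<u8> { vec![4, 5] }
fn encode_exported_symbols() -> Vec<u8> { vec![6] }

fn main() {
    // Build the independent sections in parallel...
    let (impls, mir, symbols) = thread::scope(|s| {
        let impls = s.spawn(encode_trait_impls);
        let mir = s.spawn(encode_mir);
        let symbols = s.spawn(encode_exported_symbols);
        (impls.join().unwrap(), mir.join().unwrap(), symbols.join().unwrap())
    });

    // ...then concatenate them in a deterministic order on one thread,
    // so the output bytes are identical to the sequential encoding.
    let mut metadata = Vec::new();
    metadata.extend(impls);
    metadata.extend(mir);
    metadata.extend(symbols);
    assert_eq!(metadata, vec![1, 2, 3, 4, 5, 6]);
}
```

The design point here is that parallelism lives inside the encoder itself, rather than relying on queries having been prefetched earlier, which is what makes the prefetching approach feel fragile by comparison.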
```rust
i = self.position();
let exported_symbols = self.tcx.exported_symbols(LOCAL_CRATE);
let exported_symbols = self.encode_exported_symbols(&exported_symbols);
let exported_symbols_bytes = self.position() - i;
```
Could this commit be fleshed out with some description of why this is done (ideally in the commit message)?

Right now it looks like it's presumably to make sure the exported_symbols query can fall back on the parallel MIR optimization in the last commit -- but I'm not sure.
```rust
        tcx.promoted_mir(def_id);
    })
},
|| tcx.exported_symbols(LOCAL_CRATE),
```
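The idea behind the snippet above is to force queries on background threads so their results are already cached when the encoder asks for them. Here is a self-contained sketch of that prefetching pattern using `std::sync::OnceLock` as a stand-in for the query system's memoization; `exported_symbols` and its contents are illustrative names, not rustc's actual API.

```rust
use std::sync::OnceLock;
use std::thread;

// A query result that is computed once and cached, mimicking how the
// query system memoizes results. (Illustrative, not rustc's real API.)
static EXPORTED_SYMBOLS: OnceLock<Vec<String>> = OnceLock::new();

fn exported_symbols() -> &'static Vec<String> {
    EXPORTED_SYMBOLS.get_or_init(|| {
        // Stand-in for an expensive query execution.
        vec!["main".to_string(), "lib_entry".to_string()]
    })
}

fn main() {
    // Prefetch: force the query on a background thread. Its result is
    // discarded here; the point is to warm the cache.
    let prefetch = thread::spawn(|| {
        exported_symbols();
    });

    // ... the encoder does other (unrelated) work in the meantime ...

    prefetch.join().unwrap();

    // By the time the encoder needs the result, it is a cheap cache hit.
    assert_eq!(exported_symbols().len(), 2);
}
```

This also explains why ordering matters in the surrounding discussion: the more work the encoder does between kicking off the prefetch and consuming the result, the more of the query's cost is hidden.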
Ah, so was this why the previous commit moved exported symbols later?
Yes. It's moved later to give more time for prefetching to happen.

I'd like to remove the existing metadata and all related code and instead use the incremental query cache for both metadata and incremental compilation, so I don't really want to put any effort into refactoring the existing code.
Should we then not land this either? The 0.4 second win is nice but not huge, and it would presumably be smaller in incremental mode (more data to load?). Replacing metadata with the incremental query cache is a pretty far-reaching goal, though -- maybe it's worth polishing metadata into better shape in the meantime? But I can see us not wanting to spend time on it; that's obviously out of scope for this PR.

I guess I'm not opposed to landing this PR -- but I would like to see the first review comment addressed (expanding on the commit message).
Here are some thoughts:

So I'm on the fence on whether I think this is worth the trouble or not. Since the changes are safe and can be easily reverted, I'd say it's OK to merge, but maybe with more comments, i.e.:
I am also not opposed to merging with more comments.

Triaged

I added some comments and made the code use …
The changes look reasonable, but I cannot review the prefetching of the MIR bodies, as I'm not familiar enough with the code that'll be using that prefetching later on (nor with the relevant queries). I'm also a little worried by the amount of code needed for prefetching, particularly as it seems likely not to get updated over time (given the complex conditionals especially) to fit exactly what we need. With that in mind, let's try r? @matthewjasper, perhaps? I'm not sure if you're the best person for the optimized/promoted MIR queries, which seem to dominate that convoluted code.
☔ The latest upstream changes (presumably #70118) made this pull request unmergeable. Please resolve the merge conflicts.

@bors r+

📌 Commit 027c8d9 has been approved by …
…sper Prefetch some queries used by the metadata encoder

This brings the time for `metadata encoding and writing` for `syntex_syntax` from 1.338s to 0.997s with 6 threads in non-incremental debug mode.

r? @Mark-Simulacrum
Rollup of 8 pull requests

Successful merges:
- rust-lang#67888 (Prefetch some queries used by the metadata encoder)
- rust-lang#69934 (Update the mir inline costs)
- rust-lang#69965 (Refactorings to get rid of rustc_codegen_utils)
- rust-lang#70054 (Build dist-android with --enable-profiler)
- rust-lang#70089 (rustc_infer: remove InferCtxt::closure_sig as the FnSig is always shallowly known.)
- rust-lang#70092 (hir: replace "items" terminology with "nodes" where appropriate.)
- rust-lang#70138 (do not 'return' in 'throw_' macros)
- rust-lang#70151 (Update stdarch submodule)

Failed merges:
- rust-lang#70074 (Expand: nix all fatal errors)

r? @ghost