internal: Load VFS config changes in parallel #17771
Conversation
@davidbarsky you most likely want to try this with your stuff before we think about merging this
Force-pushed from 1e3686a to 8286847
Thanks! I assumed it wasn't possible to improve the VFS loading side of things without a wholesale replacement, but testing this against Buck2 on an Eden-backed file system on macOS (where rust-analyzer would be loading 909 crates) roughly halved the initial "Roots Scanned" duration from 46 seconds to roughly 20 seconds. There were cases where the "Roots Scanned" duration was as fast as 6 seconds (!) with these changes, but I wasn't able to consistently reproduce those figures. I'll try this on my remote Linux VM now!
It's not a perfect solution, as we will still re-switch multiple times in certain scenarios (that is, we still don't wait for the initial load-up to finish), but it should reduce the number of times we restart the switching, since this speeds up the initial VFS load-up.
Though it seems something's wrong with the logic, given the failing test.
Oh, yeah, don't get me wrong: I'll happily take an incremental win over a wholesale replacement; I didn't realize it was possible to have an incremental win here! (I tried a similar approach to this PR and abandoned it when I didn't see a performance difference.) Reducing the cost of switching workspaces sounds like a win in my book :)
I imagine the "waiting for VFS loading to finish before switching workspaces" part should also not be too difficult, as it's basically just delaying switching until a flag is set. It's mainly annoying to do with the OpQueue setup, I think.
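The flag-gated delay described above could be sketched roughly like this. All names here (`vfs_done`, `pending_switch`, `request_switch`) are hypothetical illustrations, not rust-analyzer's actual OpQueue plumbing:

```rust
// Minimal sketch: a switch request is recorded rather than executed
// immediately, and only runs once the VFS-done flag is set.
struct State {
    vfs_done: bool,
    pending_switch: bool,
    switched: u32,
}

impl State {
    fn request_switch(&mut self) {
        // Record the request; it may be deferred.
        self.pending_switch = true;
        self.try_switch();
    }

    fn on_vfs_loaded(&mut self) {
        self.vfs_done = true;
        self.try_switch();
    }

    fn try_switch(&mut self) {
        // Only switch once the initial VFS load has finished.
        if self.vfs_done && self.pending_switch {
            self.pending_switch = false;
            self.switched += 1;
        }
    }
}

fn main() {
    let mut s = State { vfs_done: false, pending_switch: false, switched: 0 };
    s.request_switch(); // deferred: VFS still loading
    assert_eq!(s.switched, 0);
    s.on_vfs_loaded(); // flag set, the deferred switch runs now
    assert_eq!(s.switched, 1);
    println!("ok");
}
```

The point of the deferral is that repeated switch requests arriving during the initial load collapse into a single switch once loading completes.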
```diff
-!(self.last_reported_status.is_none()
-    || self.fetch_workspaces_queue.op_in_progress()
-    || self.fetch_build_data_queue.op_in_progress()
-    || self.fetch_proc_macros_queue.op_in_progress()
-    || self.discover_workspace_queue.op_in_progress()
-    || self.vfs_progress_config_version < self.vfs_config_version
-    || self.vfs_progress_n_done < self.vfs_progress_n_total)
+self.vfs_done
+    && self.last_reported_status.is_some()
+    && !self.fetch_workspaces_queue.op_in_progress()
+    && !self.fetch_build_data_queue.op_in_progress()
+    && !self.fetch_proc_macros_queue.op_in_progress()
+    && !self.discover_workspace_queue.op_in_progress()
+    && self.vfs_progress_config_version >= self.vfs_config_version
```
I flipped this because I personally have difficulty reading the previous negation.
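The flip is just De Morgan's law: negating a disjunction of "busy" conditions yields a conjunction of "idle" conditions. A toy check with stand-in booleans (not the actual queue fields) verifies the equivalence exhaustively:

```rust
fn main() {
    // For every combination of three booleans, confirm that
    // !(a || b || c) is the same as !a && !b && !c -- the rewrite
    // applied to the readiness check above.
    for i in 0..8u8 {
        let (a, b, c) = (i & 1 != 0, i & 2 != 0, i & 4 != 0);
        assert_eq!(!(a || b || c), !a && !b && !c);
    }
    println!("ok");
}
```

The same identity extends to any number of disjuncts, which is why each `||`-ed "in progress" term becomes an `&&`-ed negated term.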
@bors r+
☀️ Test successful - checks-actions
Simple attempt to make some progress for #17373.
No clue if those atomic orderings are right, though I don't think they are really too relevant either.
A more complete fix would probably need to replace our ProjectFolders handling a bit.
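On the atomic-ordering question: for a monotonically increasing progress counter that is only polled to decide whether loading looks finished, `Relaxed` is typically sufficient, since only the counter's value matters, not its ordering relative to other writes. A minimal sketch with hypothetical counters (not the PR's actual code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Four loader threads each bump the shared done-counter 25 times;
    // the main thread joins them and reads the final count. Relaxed
    // ordering is enough because we never rely on the counter to
    // synchronize access to other data.
    let n_total = 100;
    let n_done = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let n_done = Arc::clone(&n_done);
            thread::spawn(move || {
                for _ in 0..25 {
                    n_done.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(n_done.load(Ordering::Relaxed), n_total);
    println!("ok");
}
```

If a reader instead needed to observe data written *before* a counter bump, the writer would need `Release` on the increment and the reader `Acquire` on the load; `Relaxed` only guarantees atomicity of the counter itself.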