
Async loading for outdir and proc-macro #7412

Merged 5 commits into rust-lang:master from async-outdir on Jan 28, 2021

Conversation

@edwin0cheng (Member) commented Jan 23, 2021

cc #7328

[Screen recording: Peek 2021-01-24 02-04]

[Edit]
Finding a way to know when the workspace and build data are loaded...

[Edit 2]
Not a perfect solution, but it seems to work now.

@edwin0cheng force-pushed the async-outdir branch 8 times, most recently from 11dc6c7 to fe65078 on January 24, 2021 05:59
@edwin0cheng (Member Author)

Updated: now using OpQueue to make sure only one task is running.

@edwin0cheng marked this pull request as draft on January 24, 2021 09:57
@edwin0cheng force-pushed the async-outdir branch 7 times, most recently from 84b3929 to b114f47 on January 24, 2021 14:06
@edwin0cheng marked this pull request as ready for review on January 24, 2021 14:24
@@ -233,6 +233,70 @@ impl Server {
});
self
}

pub(crate) fn wait_until_workspace_and_build_data_are_loaded(self) -> Server {
@edwin0cheng (Member Author)
The logic is quite complicated. The problem here is that our CI is not good at multithreading :( When priming caches, it eats a lot of CPU, and incoming requests become too slow to run.

The second problem is that the ordering of the "roots_scan" events depends on how the threads are scheduled, so it is indeterminate. We try our best here to guess when the actual "roots scan" ends (and we will have two "roots scan" events here).
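
A minimal sketch of the guessing logic described above, assuming the test harness sees a stream of progress events as (title, is-end) pairs; every name here is illustrative, not the PR's actual test-support code:

// Hypothetical sketch: wait for the n-th "roots scanned" end event, since
// the scan effectively runs twice (once for the workspace, once more after
// build data arrives) and the relative thread timing is indeterminate.
fn wait_for_scan_ends(events: impl Iterator<Item = (String, bool)>, n: usize) {
    let mut ends_seen = 0;
    for (title, is_end) in events {
        if title == "roots scanned" && is_end {
            ends_seen += 1;
            if ends_seen == n {
                return;
            }
        }
    }
    panic!("event stream ended before {} \"roots scanned\" end events", n);
}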

@matklad (Member)

For cache priming, I think we might want to just disable it for tests?

For the second issue, I think this logic belongs in the main loop. Specifically (and not only in tests), rust-analyzer should be able to tell the clients: "the project is fully loaded and I can serve every request".

We generally use the status field to track this event:

https://github.com/rust-analyzer/rust-analyzer/blob/ad3cb2125dee4379c11f447216f510d6544c10c5/crates/rust-analyzer/src/main_loop.rs#L381-L395

And we communicate this to the client via the status notification:

https://github.com/rust-analyzer/rust-analyzer/blob/ad3cb2125dee4379c11f447216f510d6544c10c5/crates/rust-analyzer/src/reload.rs#L96-L99

I am not sure why we even use progress rather than status in support; I am trying to fix this right now.

For this PR, I think we want to change the status logic to only become Ready once external resources are loaded.

@matklad (Member)
I am trying to fix this right now.

Hm, this is for some reason more complicated than I expected. For some reason, multithreaded tests are really slow even with --release...

@matklad (Member) commented Jan 25, 2021

Finally took a look here!

I think this is right in general terms, but there are a couple of concurrency-related things which make my old scars tingle :-)

First, I am unsure about

                if !build_data_loader.is_empty() {
                    sender
                        .send(Task::FetchBuildData(BuildDataProgress::Prepare(build_data_loader)))
                        .unwrap();
                }

This happens in the background thread which fetches the workspace. This, I think, is prone to a race condition: we might fetch build data before we have actually switched to the new workspace in the main loop. It's better to schedule build-data fetching on the main loop (from &mut GlobalState), after we have switched the workspace itself.
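
A minimal sketch of that suggested ordering, with stub types standing in for rust-analyzer's real ones; the method names here are assumptions for illustration, not the actual API:

struct ProjectWorkspace;
struct BuildDataCollector;

struct GlobalState {
    workspaces: Vec<ProjectWorkspace>,
    build_data_collector: Option<BuildDataCollector>,
}

impl GlobalState {
    // Runs on the main loop with `&mut self`, so no background thread can
    // observe a half-switched state.
    fn on_workspaces_fetched(&mut self, workspaces: Vec<ProjectWorkspace>) {
        // 1. Switch to the new workspaces first.
        self.workspaces = workspaces;
        // 2. Only then schedule the build-data fetch; it is now guaranteed
        //    to run against the new workspaces.
        if let Some(collector) = self.build_data_collector.take() {
            self.spawn_fetch_build_data(collector);
        }
    }

    fn spawn_fetch_build_data(&mut self, _collector: BuildDataCollector) {
        // In the real code this would send a task to a background thread.
    }
}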

The second thing I think can cause pain is this mutation

        if let Some(build_data_loader) = &self.build_data_loader {
            for workspace in workspaces.iter_mut() {
                workspace.update_build_data(build_data_loader);
            }
        }

Fundamentally, what we have here are two sources of information: the workspace and the build info.

The current code merges them by destructively updating the state. This is a lossy operation which cannot be trivially undone.
A more obviously correct approach is to preserve both pieces of data as they are, and merge them on the fly when switching the workspace.

I think we can do the following:

struct GlobalState {
    /// Invariant: `workspaces.len() == workspace_build_data.len()`
    pub(crate) workspaces: Arc<Vec<ProjectWorkspace>>,
    pub(crate) workspace_build_data: Vec<Option<BuildData>>,
    pub(crate) fetch_workspaces_queue: OpQueue,
}

Loading the workspace will set the first field; loading the build data will set the second one. When either field changes, we call switch_workspaces, which sets the crate graph.
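
A sketch of that non-destructive merge, under the invariant above; BuildData, CrateGraph, and the function signature are stand-ins rather than the real API:

struct ProjectWorkspace;
struct BuildData;
struct CrateGraph;

fn switch_workspaces(
    workspaces: &[ProjectWorkspace],
    workspace_build_data: &[Option<BuildData>],
) -> CrateGraph {
    assert_eq!(workspaces.len(), workspace_build_data.len());
    // Keep both sources of information intact and combine them only while
    // building the crate graph; nothing is destructively updated, so the
    // merge can be redone whenever either side changes.
    for (_ws, _build_data) in workspaces.iter().zip(workspace_build_data) {
        // ... add crate-graph entries from `_ws`, enriched by `_build_data`
        // when it is `Some` ...
    }
    CrateGraph
}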

@edwin0cheng force-pushed the async-outdir branch 2 times, most recently from 51faee2 to 79c34e0 on January 28, 2021 11:21
@edwin0cheng (Member Author)

Fixed, and sorry for the larger change set in this PR.

It is due to the fact that I had to decouple the build-data-related config from CargoWorkspace to make it work.

@edwin0cheng force-pushed the async-outdir branch 6 times, most recently from a257b1f to c01da06 on January 28, 2021 12:17
Comment on lines 86 to 87
pub(crate) build_data_loader: Option<BuildDataCollector>,
pub(crate) fetch_build_data_queue: OpQueue,
@matklad (Member)

The loader field feels like it wants to be a part of OpQueue:

//! Bookkeeping to make sure only one long-running operation is executed.

#[derive(Default)]
pub(crate) struct OpQueue<D> {
    op_scheduled: Option<D>,
    op_in_progress: bool,
}

impl<D> OpQueue<D> {
    pub(crate) fn request_op(&mut self, data: D) {
        self.op_scheduled = Some(data);
    }
    pub(crate) fn should_start_op(&mut self) -> Option<D> {
        if self.op_in_progress {
            return None;
        }
        self.op_in_progress = self.op_scheduled.is_some();
        self.op_scheduled.take()
    }
    pub(crate) fn op_completed(&mut self) {
        assert!(self.op_in_progress);
        self.op_in_progress = false;
    }
}
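
A usage sketch of that generic OpQueue, assuming it lives in the same module (its fields are private); BuildDataCollector is a stand-in payload type. With the collector stored as the queued payload, the separate build_data_loader field goes away:

#[derive(Default)]
struct BuildDataCollector;

fn main() {
    let mut queue: OpQueue<BuildDataCollector> = OpQueue::default();

    // Something requests a (re)fetch of build data.
    queue.request_op(BuildDataCollector);

    // The main loop polls; the payload is handed out only when idle.
    if let Some(collector) = queue.should_start_op() {
        // A request arriving while the op runs is queued, not started:
        queue.request_op(BuildDataCollector);
        assert!(queue.should_start_op().is_none());

        let _ = collector; // ... run the fetch in the background ...
        queue.op_completed();
    }

    // After completion, the queued request is allowed to start.
    assert!(queue.should_start_op().is_some());
}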

@edwin0cheng (Member Author) commented Jan 28, 2021

For this PR, I think we want to change the status logic to only become Ready once external resources are loaded.

One of the problems here is that the Ready transition is set by vfs::loader::Message::Progress, not by switch_workspaces:

https://github.com/rust-analyzer/rust-analyzer/blob/ce2f0bdac726a34627d5cf14362e7bf14f868d09/crates/rust-analyzer/src/main_loop.rs#L279-L301

So do you want me to do something like this?

vfs::loader::Message::Progress { n_total, n_done } => {
    if n_total == 0 {
        self.transition(Status::Invalid);
    } else {
        let state = if n_done == 0 {
            self.transition(Status::Loading);
            Progress::Begin
        } else if n_done < n_total {
            Progress::Report
        } else {
            assert_eq!(n_done, n_total);
            if self.config.load_out_dirs_from_check() {
                if self.workspace_build_data.is_some() {
                    self.transition(Status::Ready);
                }
            } else {
                self.transition(Status::Ready);
            }
            Progress::End
        };
        self.report_progress(
            "roots scanned",
            state,
            Some(format!("{}/{}", n_done, n_total)),
            Some(Progress::fraction(n_done, n_total)),
        )
    }
}

@edwin0cheng (Member Author)

For cache priming, I think we might want to just disable it for tests?

Oh, but if we don't do cache priming, will the first request take longer to complete?

@matklad (Member) commented Jan 28, 2021

Oh, but if we don't do cache priming, will the first request take longer to complete?

For tests, I would expect the opposite, as we'll be doing only the necessary work.

One of the problem here is, the Ready transition is set by vfs::loader::Message::Progress but not by switch_workspaces:

Yeah, the specific transition rules (and the actual states we have) are tricky. I think we might want to split Ready into Ready and ReadyPartial, to properly capture the "we have the metadata, but not the build stuff yet" state. I wonder, though: would two states be enough?
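
A sketch of that split, assuming a Status enum along the lines of the one rust-analyzer had at the time (Loading, Ready, and Invalid appear in the snippet above); ReadyPartial is the hypothetical new state:

#[derive(Clone, Copy, PartialEq, Eq)]
enum Status {
    Loading,
    // The workspace metadata is in, but OUT_DIRs and proc macros from the
    // build are still being fetched: most requests can already be served.
    ReadyPartial,
    // Everything, including build data, is loaded.
    Ready,
    Invalid,
}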

@edwin0cheng force-pushed the async-outdir branch 2 times, most recently from 267e8b6 to d22d20e on January 28, 2021 15:38
@matklad mentioned this pull request on Jan 28, 2021
@matklad (Member) left a comment

Left a couple of nits, but other than that, r=me.

crates/rust-analyzer/src/global_state.rs (outdated, resolved)
crates/rust-analyzer/src/main_loop.rs (outdated, resolved)
crates/rust-analyzer/src/reload.rs (outdated, resolved)
crates/rust-analyzer/src/global_state.rs (outdated, resolved)
@edwin0cheng (Member Author)

And I removed the special case wait_until_workspace_and_build_data_are_loaded and used wait_until_workspace_is_loaded directly.

@edwin0cheng (Member Author)

bors r=@matklad

@bors bot (Contributor) commented Jan 28, 2021

@bors bot merged commit 703e6bf into rust-lang:master on Jan 28, 2021
@edwin0cheng deleted the async-outdir branch on January 28, 2021 17:56