Add an mpmc concurrent queue #384
Comments
I am personally in favor of this. I basically think that we should have added an mpmc queue from the start, so I'm on board with this. While I think one libs-api member is enough to move this forward, I think this is a big enough proposal that I'd feel a lot more comfortable if we had at least one other libs-api member approve this before moving it forward. And if anyone on libs-api is a hard no on this (which I think is reasonable), then that would be useful to know now as well!
Yeah, the point here was to "test the waters" and see whether anyone in t-libs-api is in favor (seems like yes, awesome :) and whether anyone is strictly against. I won't have time to do API design / implementation work here, but my hope is that if the initial paperwork is done, it is easier for someone to pick that up and start some experiments on nightly.
We discussed this during today's libs-api meeting. We're ok with the general idea of exposing mpmc and the added bug surface it means, but we need a more detailed ACP, because there are several open questions.
So yeah, this has a chance, but please come back with a proper ACP showing how the API surface will have to be remodeled.
Great, thanks!
Proposal
Problem statement
The standard library currently provides no concurrent queue that permits multiple consumers. Given that we now have scoped threads, a multi-consumer concurrent queue is the last missing piece to be able to implement basic parallelism via "fill a queue with work to be done, then have N workers do the work".
The standard library already contains an implementation of an mpmc queue, ever since crossbeam's queue was ported over as the underlying implementation for our standard mpsc queue. However, this extra power is currently not exposed to users. Since we're already spending the maintenance effort on such a queue, I think we should let our users benefit as well. :)
Motivating examples or use cases
For instance, the formatting step in bootstrap currently uses a pretty complicated "poor man's async" scheme to run multiple instances of rustfmt concurrently when formatting many files. However, it limits this to `2*available_parallelism` workers anyway, so with an MPMC queue, a much simpler implementation with one thread per worker would be possible. In our pretty similar code for `./miri fmt`, we didn't bother with the manual async, so formatting is just unnecessarily sequential.

The ui_test crate imports crossbeam-channel for a similar situation (walking the file system and then processing things in parallel); that dependency could be entirely avoided if there was an MPMC queue in std.
Solution sketch
The intent of this ACP is to gauge whether there is interest in having an MPMC queue in the standard library at all. Figuring out the exact API could happen at a later stage.
Alternatives
We could do nothing, and ask people to depend on crossbeam when they need an mpmc queue.
Links and related work
Go's native channels are MPMC.
(They also allow receiving on multiple channels at once, but that is very complicated to implement and not part of this proposal. It seems orthogonal to the single- vs multiple-consumer question: our MPSC queues don't allow a receiver to receive on multiple queues at once, and neither will our MPMC queues.)
What happens now?
This issue contains an API change proposal (or ACP) and is part of the libs-api team feature lifecycle. Once this issue is filed, the libs-api team will review open proposals as capacity becomes available. Current response times do not have a clear estimate, but may be up to several months.
Possible responses
The libs team may respond in various ways. First, the team will consider the problem (this doesn't require any concrete solution or alternatives to have been proposed):
Second, if there's a concrete solution: