ACP: Export MPMC APIs #451
Labels: ACP-accepted (API Change Proposal is accepted, seconded with no objections), api-change-proposal (a proposal to add or alter unstable APIs in the standard libraries), T-libs-api
Comments
obeis added the api-change-proposal and T-libs-api labels on Sep 30, 2024
Hah, I forgot about that part.
rust-lang/rust#126839 has been merged, so I don't think this is needed any more?
Completed by rust-lang/rust#126839
@rustbot labels +ACP-accepted
dtolnay added the ACP-accepted (API Change Proposal is accepted, seconded with no objections) label on Oct 1, 2024
Proposal
Problem statement
The standard library currently provides no concurrent queue that permits multiple consumers. Given that we now have scoped threads, a multi-consumer concurrent queue is the last missing piece needed to implement basic parallelism via "fill a queue with work to be done, then have N workers do the work".
The standard library already contains an implementation of an MPMC queue, ever since crossbeam's queue was ported over as the underlying implementation of our standard mpsc queue. However, this extra power is currently not exposed to users. If we're spending the maintenance effort on such a queue anyway, I think we should let our users benefit as well. :)
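As a rough sketch of that pattern (not the proposed API verbatim; the module path and feature gate name below are assumptions, mirroring `mpsc` with a cloneable `Receiver`), distributing work across scoped worker threads could look like this:

```rust
// A minimal sketch, assuming the queue is exposed as std::sync::mpmc
// (hypothetical path; the feature gate name is likewise assumed).
#![feature(mpmc_channel)]

use std::sync::mpmc;
use std::thread;

fn main() {
    let (tx, rx) = mpmc::channel();

    // Fill the queue with work to be done.
    for item in 0..100 {
        tx.send(item).unwrap();
    }
    // Drop the sender so workers stop once the queue is drained.
    drop(tx);

    // Have N workers do the work, each pulling items off the shared queue.
    thread::scope(|s| {
        for _ in 0..4 {
            let rx = rx.clone(); // Receiver is Clone under this proposal
            s.spawn(move || {
                while let Ok(item) = rx.recv() {
                    println!("processing {item}");
                }
            });
        }
    });
}
```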
Motivating examples or use cases
For instance, the formatting step in bootstrap currently uses a fairly complicated "poor man's async" scheme to run multiple instances of rustfmt concurrently when formatting many files. However, it limits itself to 2*available_parallelism workers anyway, so with an MPMC queue a much simpler implementation with one thread per worker would be possible. In our quite similar code for ./miri fmt we didn't bother with the manual async, so formatting there is just unnecessarily sequential.
The ui_test crate imports crossbeam-channel just for a similar situation (walking the file system and then processing things in parallel); that dependency could be avoided entirely if there were an MPMC queue in std.
Solution sketch
Shared usage:
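A minimal sketch of the shared usage, assuming the module is exported as `std::sync::mpmc` with the same constructors as `mpsc` (`channel` and `sync_channel`) and the cloneable `Receiver` described below:

```rust
// Sketch only; the module path and feature gate name are assumptions.
#![feature(mpmc_channel)]

use std::sync::mpmc;
use std::thread;

fn main() {
    // Same constructors as mpsc: unbounded channel() and bounded sync_channel(cap).
    let (tx, rx) = mpmc::channel::<String>();

    // Both the Sender and the Receiver can be cloned and shared across threads.
    let tx2 = tx.clone();
    let rx2 = rx.clone();

    thread::scope(|s| {
        s.spawn(move || tx.send("from producer 1".to_string()).unwrap());
        s.spawn(move || tx2.send("from producer 2".to_string()).unwrap());
        s.spawn(move || {
            // Receivers support iteration just like mpsc (Iter, TryIter, IntoIter).
            for msg in rx2.iter() {
                println!("consumer thread got: {msg}");
            }
        });
    });

    // The original receiver is still usable as a second consumer;
    // each message is delivered to exactly one receiver.
    for msg in rx.try_iter() {
        println!("main thread got: {msg}");
    }
}
```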
Also, we will provide iterator functionality similar to mpsc (IntoIter, Iter, TryIter). The new Receiver type will implement the Clone, Send, and Sync traits.
What do we do with the mpsc module?
I think we can deprecate the mpsc module after stabilizing mpmc.
Alternatives
We could do nothing, and ask people to depend on crossbeam when they need an mpmc queue.
Links and related work
Go's native channels are MPMC.
(They also allow receiving on multiple channels at once, but that is very complicated to implement and not part of this proposal. It seems orthogonal to the single- vs multiple-consumer question: our MPSC queues don't allow a receiver to receive on multiple queues at once, and neither will our MPMC queues.)
What happens now?
This issue contains an API change proposal (or ACP) and is part of the libs-api team feature lifecycle. Once this issue is filed, the libs-api team will review open proposals as capacity becomes available. Current response times do not have a clear estimate, but may be up to several months.
Possible responses
The libs team may respond in various ways. First, the team will consider the problem (this does not require any concrete solution or alternatives to have been proposed):
Second, if there's a concrete solution: