
tokio-sync: Acquire n permits from a semaphore #1550

Closed
jean-airoldie opened this issue Sep 13, 2019 · 12 comments · Fixed by #3067
Labels
A-tokio Area: The main tokio crate C-feature-accepted Category: A feature request that has been accepted pending implementation. M-sync Module: tokio/sync

Comments

@jean-airoldie
Contributor

Currently, the 0.2.0-alpha.4 Semaphore & Permit API of tokio-sync only allows one to acquire permits one at a time (as far as I can tell). Wouldn't the Semaphore API be more useful if one could acquire an arbitrary number of permits at a time?

For instance, Permit::poll_acquire_n would take an extra argument used to decide how many permits to acquire. An extra check would also be added to make sure the requested number of permits does not exceed the semaphore's total capacity.

My specific use case would be an async channel that blocks based on the combined length of the messages it contains.
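As a rough illustration of the multi-permit semantics being requested, here is a minimal counting semaphore in std-only Rust (Mutex plus Condvar). This is a conceptual sketch, not tokio's Permit API: it blocks the calling thread rather than returning `Poll::Pending`, and the `NSemaphore` name and methods are hypothetical.

```rust
use std::sync::{Condvar, Mutex};

/// Minimal blocking counting semaphore with multi-permit acquisition.
/// Conceptual sketch only; not tokio's implementation.
struct NSemaphore {
    permits: Mutex<usize>,
    condvar: Condvar,
}

impl NSemaphore {
    fn new(permits: usize) -> Self {
        NSemaphore {
            permits: Mutex::new(permits),
            condvar: Condvar::new(),
        }
    }

    /// Block until `n` permits are available, then take them all at once.
    fn acquire_n(&self, n: usize) {
        let mut available = self.permits.lock().unwrap();
        // Wait while fewer than `n` permits are free. Note the case this
        // issue raises: the count can be non-zero while a waiter that wants
        // more than the remaining count is still blocked.
        while *available < n {
            available = self.condvar.wait(available).unwrap();
        }
        *available -= n;
    }

    /// Return `n` permits and wake all waiters so they can re-check.
    fn release_n(&self, n: usize) {
        *self.permits.lock().unwrap() += n;
        self.condvar.notify_all();
    }
}
```

Waking all waiters on every release is deliberately naive; it keeps the sketch short but does extra wakeups compared to a real implementation.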

@carllerche
Member

👍 It probably would be nice to be able to acquire an arbitrary number of permits. I am unsure how easy or hard it would be to add, though! If anyone wants to take a stab at it, I can try to take a look at the strategy.

@jean-airoldie
Contributor Author

I took a quick look and I think it shouldn't be that hard. I'll take a stab at it once I need it, hopefully this week.

@jean-airoldie
Contributor Author

Alright, I took a better look, and it would be incompatible with the current semaphore design, since that design assumes it only needs to wake up a waiter when the permit count is zero. This is because, when the permit count is zero, the SemState becomes a pointer to a waker. With multi-permit acquisition, however, a thread could try to acquire more permits than are currently available while the number of available permits is non-zero.

@jean-airoldie
Contributor Author

jean-airoldie commented Oct 4, 2019

What if the SemState used an index into a vec of WaiterNode instead of a pointer? The SemState could be a combination of the permit count, the index of the next waker, and the flags, all contained in a u64. Something like 32 bits for the permit count, 29 bits for the waker index (could be smaller), and 3 bits for the flags would work.
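A sketch of that packing, assuming the field widths named above (32-bit permit count, 29-bit waiter index, 3 flag bits). The `pack`/`unpack` helpers and their names are illustrative, not tokio's actual SemState layout:

```rust
// Field widths follow the proposal: 32 + 29 + 3 = 64 bits.
const FLAG_BITS: u32 = 3;
const INDEX_BITS: u32 = 29;
const INDEX_MASK: u64 = (1 << INDEX_BITS) - 1;
const FLAG_MASK: u64 = (1 << FLAG_BITS) - 1;

/// Pack permit count, waiter index, and flags into one u64.
fn pack(permits: u32, waiter_index: u32, flags: u8) -> u64 {
    debug_assert!(u64::from(waiter_index) <= INDEX_MASK);
    debug_assert!(u64::from(flags) <= FLAG_MASK);
    (u64::from(permits) << (INDEX_BITS + FLAG_BITS))
        | (u64::from(waiter_index) << FLAG_BITS)
        | u64::from(flags)
}

/// Recover (permits, waiter_index, flags) from a packed state.
fn unpack(state: u64) -> (u32, u32, u8) {
    let permits = (state >> (INDEX_BITS + FLAG_BITS)) as u32;
    let waiter_index = ((state >> FLAG_BITS) & INDEX_MASK) as u32;
    let flags = (state & FLAG_MASK) as u8;
    (permits, waiter_index, flags)
}
```

Keeping the whole state in one u64 is what makes it updatable with a single atomic compare-and-swap.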

@jean-airoldie
Contributor Author

This would limit the maximum number of queued wakers, but I don't think that's much of a problem.

@jean-airoldie
Contributor Author

futures_intrusive has an implementation of such a semaphore.

@carllerche
Member

I’ll leave the issue open as I think the API would be useful and I would appreciate a contribution.

@FSMaxB
Contributor

FSMaxB commented Oct 7, 2019

I implemented something like this for limiting the number of bytes that could be allocated for a certain purpose. My implementation definitely wouldn't be a general-purpose solution, but I want to add my use case here.

I implemented this using a Mutex<Vec<Task>>, adding tasks at the end when there weren't enough free permits. On release, tasks are woken starting from the front, skipping tasks that want too many permits, until the limit is reached. This isn't applicable to every application, though, because of the lock, and because depending on the workload some tasks might wait forever and never be notified.
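The wake strategy described above can be sketched as a pure function over the waiter queue. `waiters_to_wake` and its signature are hypothetical, reduced to the decision logic only:

```rust
/// Walk the queue from the front, waking every waiter whose request fits
/// in the remaining permits and skipping (leaving queued) those that want
/// too many. Returns the queue indices of the waiters to wake.
fn waiters_to_wake(requests: &[usize], mut available: usize) -> Vec<usize> {
    let mut woken = Vec::new();
    for (i, &wanted) in requests.iter().enumerate() {
        if wanted <= available {
            available -= wanted;
            woken.push(i);
        }
        // A skipped waiter stays in the queue; as noted above, under some
        // workloads a large request may starve and never be woken.
    }
    woken
}
```

The starvation risk is visible directly: a waiter asking for more than ever becomes available at once is skipped on every pass.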

@kornelski
Contributor

kornelski commented Feb 12, 2020

The low-level Semaphore implementation has already implemented this feature. It would be great to expose it.

In my case, I want a soft limit on the amount of memory used in my server, so having permits equal to reserved memory and acquire_n(approx_memory_needed) would solve my problem.

@tuxzz

tuxzz commented Jul 2, 2020

Maybe we should also implement release_n and forget_n?
For example, we might want to acquire all the resources needed before execution to prevent deadlock, but when a subset of the task finishes, we want to release its permits as soon as possible so that other tasks can reuse them.
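A minimal sketch of that bookkeeping, using the hypothetical `release_n` / `forget_n` names from the suggestion (not tokio's API; in tokio, `Semaphore::add_permits` is the closest existing primitive for returning permits):

```rust
/// Permits held by a task that acquired them all up front.
struct Acquired {
    held: usize,
}

/// The semaphore's free-permit count, reduced to plain bookkeeping.
struct Counter {
    available: usize,
}

impl Counter {
    /// Take `n` permits at once, or fail if too few are free.
    fn acquire_n(&mut self, n: usize) -> Option<Acquired> {
        if self.available >= n {
            self.available -= n;
            Some(Acquired { held: n })
        } else {
            None
        }
    }

    /// Return `n` of the held permits early so other tasks can reuse them.
    fn release_n(&mut self, grant: &mut Acquired, n: usize) {
        assert!(grant.held >= n);
        grant.held -= n;
        self.available += n;
    }
}

impl Acquired {
    /// Drop `n` permits without returning them: the semaphore's
    /// effective capacity shrinks permanently.
    fn forget_n(&mut self, n: usize) {
        assert!(self.held >= n);
        self.held -= n;
    }
}
```

The difference between the two is just whether the permits flow back into `available` (release) or disappear (forget).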

@Darksonn Darksonn added A-tokio Area: The main tokio crate C-feature-accepted Category: A feature request that has been accepted pending implementation. M-sync Module: tokio/sync labels Jul 25, 2020
@carllerche carllerche added the E-easy Call for participation: Experience needed to fix: Easy / not much label Oct 16, 2020
@carllerche
Member

At this point, the issue should be fairly easy. Semaphore's internal implementation supports batch operations. It is just a question of exposing and documenting an API that uses the internal batch API.

Maybe something like: Semaphore::acquire_many(n).

@carllerche carllerche added the E-help-wanted Call for participation: Help is requested to fix this issue. label Oct 16, 2020
@dcarrier
Contributor

I will give this a try.

@Darksonn Darksonn removed E-easy Call for participation: Experience needed to fix: Easy / not much E-help-wanted Call for participation: Help is requested to fix this issue. labels Dec 10, 2020