
feat(queue): add global concurrency #2496

Merged 23 commits into master from global-concurrency on Jul 15, 2024
Conversation

@roggervalf (Collaborator) commented Mar 30, 2024

ref #2465

@marbemac marbemac mentioned this pull request May 31, 2024
@roggervalf roggervalf requested a review from manast June 29, 2024 19:22
@dhardtke commented Jul 8, 2024

Our team would greatly benefit from this feature, as we have multiple backends that each spawn a worker, but only one scheduled job should execute at a time.

Looking forward to this! <3

@manast (Contributor) left a comment

This LGTM; thanks to the way we handle paused queues, we get this almost for free. I think it will be quite important, though, to explain in the documentation that choosing a concurrency level in your workers does not override the global one: it is just the maximum number of jobs a given worker can process in parallel, and the total never exceeds the global limit.
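The interaction between the two limits can be sketched as a small pure helper. This is a toy model of the semantics described above; `effectiveParallelism` is illustrative only and not part of the BullMQ API:

```typescript
// Toy model: how many jobs can run in parallel given a queue-wide
// (global) limit and per-worker (local) limits.
function effectiveParallelism(globalLimit: number, workerLimits: number[]): number {
  // Each worker never exceeds its own local limit...
  const localTotal = workerLimits.reduce((sum, c) => sum + c, 0);
  // ...and the queue as a whole never exceeds the global limit.
  return Math.min(globalLimit, localTotal);
}

// Two workers with local concurrency 4 each, under a global limit of 6:
// at most 6 jobs run at once, even though the workers could take 8.
console.log(effectiveParallelism(6, [4, 4])); // → 6
```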

@roggervalf roggervalf merged commit 47ba055 into master Jul 15, 2024
10 of 11 checks passed
@roggervalf roggervalf deleted the global-concurrency branch July 15, 2024 14:08
github-actions bot pushed a commit that referenced this pull request Jul 15, 2024
# [5.9.0](v5.8.7...v5.9.0) (2024-07-15)

### Features

* **queue:** support global concurrency ([#2496](#2496)) ref [#2465](#2465) ([47ba055](47ba055))
@SnowMarble
I really appreciate you implementing the global concurrency feature. 🙇‍♂️

@faller commented Jul 20, 2024

What does this paragraph in the documentation mean? "It is not possible to achieve a global concurrency of at most 1 job at a time if you use more than one worker." What if we call setGlobalConcurrency with 1 when using multiple workers?

@roggervalf (Collaborator, Author) commented Jul 20, 2024

Hi @faller, we missed updating that statement now that we have global concurrency. And yes, you can set the global concurrency to 1 when using multiple workers.

@faller commented Jul 21, 2024


Thank you for your excellent work

@faller commented Jul 25, 2024

Hi @roggervalf, if I set a queue with a global concurrency of 10 and there are 2 workers, should I also set worker-level concurrency, or keep it at the default?

@roggervalf (Collaborator, Author)
Hi @faller, as per our docs (https://docs.bullmq.io/guide/queues/global-concurrency), global concurrency is the upper bound on how many jobs the workers can process in parallel across the whole queue. Worker concurrency can be set independently at any time, and its default value is 1. If you want to take advantage of your global concurrency, you should set a local concurrency with a value of no more than 10.
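One way to follow this advice is to divide the global limit across the workers so their local limits sum to the global cap. `splitConcurrency` below is a hypothetical helper for illustration, not part of BullMQ:

```typescript
// Hypothetical helper: split a global concurrency limit evenly across
// N workers so that the local limits sum exactly to the global cap.
function splitConcurrency(globalLimit: number, workers: number): number[] {
  const base = Math.floor(globalLimit / workers);
  const remainder = globalLimit % workers;
  // The first `remainder` workers get one extra slot.
  return Array.from({ length: workers }, (_, i) => base + (i < remainder ? 1 : 0));
}

console.log(splitConcurrency(10, 2)); // → [5, 5]
console.log(splitConcurrency(10, 3)); // → [4, 3, 3]
```

With a global concurrency of 10 and 2 workers, each worker would then be created with `concurrency: 5`, so together they can saturate the global limit without exceeding it.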

@faller commented Jul 26, 2024


Thank you for your answer
