Lock both worker and queue #274
Comments
In version 6 this will be fixed. You could try v6.0.0.beta1 if you want; it should now enforce the uniqueness for the lock as documented, and the locks should be much more reliable moving forward. Unfortunately the v5 branch is a mess and the tests are unreliable, which means it is really difficult to change anything with certainty. In version 6 the uncertainties have been removed and I trust the code more. I made some attempts at fixing the locks but failed, so I decided to focus on the master branch instead.
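For anyone who wants to give the prerelease a spin, pinning it in the Gemfile would look roughly like this (the version string is taken from the comment above; adjust to whatever prerelease is current):

```ruby
# Gemfile
gem 'sidekiq'
gem 'sidekiq-unique-jobs', '6.0.0.beta1' # prerelease pin, version from the comment above
```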
Hi there, thank you for the response. I've tried out the v6 beta as well, but it wasn't working at all. I didn't dig too deep into it since I don't have much time, but simply looking at the logs, the worker seemed to not have processed the task, and the lock never got released unless I manually flushed it in Redis. Not sure if it's me doing something wrong here, but I guess I'll look forward to the v6 release!
Sorry about that @darwin67, I realised a couple of issues with master after I wrote to you. Will ping you when I'm done fixing them.
No problem at all! Thank you for the great work!
@darwin67 the version
I think I have the same use case. I use
Thanks a bunch for trying it out @kammerer. Just to clarify, you only expect one job to be enqueued at a time, right? And only one job running at the same time? My attempt at integration-testing this clearly isn't the same as a multiprocess real-world scenario: https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/spec/integration/sidekiq_unique_jobs/lock/until_and_while_executing_spec.rb#L59 I'll have to do some experiments in the https://github.com/mhenrixon/sidekiq-unique-jobs/blob/master/rails_example folder and see if I can figure out what is going on.
@kammerer I see the error of my ways. I will add some more expectations to the integration tests; they completely fail to account for how long the job takes. I'll let you know what I find.
@kammerer that was a difficult one to track down, but... it should hopefully be working as expected now. It is difficult to really test this in a sane manner other than opening up two different consoles and testing manually. You should not see the behaviour you described since
@darwin67 you would have to raise an error and retry the job. If you provide
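A minimal sketch of the raise-and-retry idea mentioned above; the class name and the readiness check are made-up placeholders, not something from the gem:

```ruby
require 'sidekiq'

class HeavyWorker
  include Sidekiq::Worker
  sidekiq_options retry: 5

  def perform(record_id)
    # Hypothetical guard: if the work cannot be done yet, raising lets
    # Sidekiq's retry mechanism re-enqueue the job for a later attempt.
    raise "record #{record_id} not ready, retrying" unless ready?(record_id)

    # ... the actual heavy work would go here ...
  end

  private

  # Placeholder readiness check; the real logic depends on the application.
  def ready?(_record_id)
    true
  end
end
```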
@mhenrixon thanks for your work. Are these changes working in the latest v6.0.6 release? Thanks!
Hey, sorry to take so long to get back to you. I've moved to a different company and am currently no longer working on Sidekiq (at least for a while), so I don't have a way to test it. :( I will happily close this if someone can confirm the expected behavior!
Hi, I have a question, but it might also be a feature request.
I was wondering if there is a way to make sure a job is unique both on the queue and the worker.
until_and_while_executing seems to have the closest behavior, but it still grabs a job from the queue when there are workers available, just not executing it until the running one finishes. I've also tried using until_and_while_executing with unique_across_workers, but it didn't seem to enforce the uniqueness the way the documentation describes.
The scenario is that I have a slow / heavy job that takes some time to execute, and it can get queued multiple times (like an autosave in a text editor) during a short period of time. So instead of executing all the jobs that got queued, which is a complete waste of time and resources, I would like that job to only ever have two copies: one in the worker, which is currently doing the work, and one in the queue, which will be processed after the worker is done with the current one. That way, when the job that was in the queue starts executing, it picks up all the changes.
Is there a way to make sure there's only one worker executing a job and only one additional job gets queued at the same time?
What I'm describing looks like this:
Client: it should stop enqueueing if there's a lock on the queue.
Worker: it should not grab a similar job from the queue if there's one currently processing.
Queue: it should only hold up to one copy of a job at any given time.
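For reference, this is roughly how the worker is set up at the moment (v5-style options as I understand them; the class name and arguments are just placeholders):

```ruby
require 'sidekiq'
# sidekiq-unique-jobs is assumed to be loaded via Bundler

class AutosaveWorker
  include Sidekiq::Worker
  # v5-style options: the lock type goes under the unique: key
  sidekiq_options unique: :until_and_while_executing,
                  unique_across_workers: true

  def perform(document_id)
    # Placeholder for the slow / heavy work, e.g. persisting the latest
    # autosave state for the given document.
  end
end
```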
It would be great if you could tell me how to achieve this with the current code (v5.0.10).
Thanks.