Job will not enqueue even with no existing match #342
It is very difficult to say exactly what is going wrong; a lot of different factors could contribute to this. I take it you are on a Rails project? From reading your issue I understand that you have a problem and that you perceive it as the gem's fault, but you don't give much information that would help me be of service.
Yes, Rails, in production: redis (4.0.2), sidekiq (5.2.2). Most instances of the job ran wonderfully and prevented duplicates as desired, but some just would not enqueue. I would run `CommissionLedgerTracker::QueuedUpdate.perform_async(account_id)` for about 2000 separate account_id values, and only about 1700 would enqueue. Perhaps this is related to the unique keys not being removed, as in other people's reported issues.
I'm also seeing behavior just like this: it seems some uniquejobs keys aren't being cleared after jobs complete successfully, so those jobs can no longer be enqueued. Could there be any interplay with Sidekiq's `push_bulk` functionality? @aaronmtate, are you using `push_bulk` by chance?
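For readers following along: the lock key is derived deterministically from the job, which is why a stale key left behind by a completed job blocks every future enqueue of the same job. The sketch below is a conceptual illustration only, not the gem's actual digest algorithm; the function name and payload shape are assumptions:

```ruby
require "digest"
require "json"

# Conceptual sketch: sidekiq-unique-jobs stores a Redis key derived
# from the worker class, queue, and arguments. Identical jobs map to
# the same key, so if that key is never cleaned up, re-enqueueing the
# same job is silently rejected. NOT the gem's exact algorithm.
def unique_digest(worker_class, args, queue: "default")
  payload = JSON.dump("class" => worker_class, "queue" => queue, "args" => args)
  "uniquejobs:#{Digest::MD5.hexdigest(payload)}"
end
```

Because the key depends only on class, queue, and arguments, deleting the stale keys (as described later in this thread) immediately makes the affected jobs enqueueable again.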
@jwg2s the
@mhenrixon gotcha. That was just one thing we're doing; otherwise we're experiencing the exact same issue @aaronmtate describes. We recently upgraded to 6.0.6 (2 days ago) and noticed that in many cases the jobs for these users had not run for exactly the amount of time since the upgrade. I don't want to hijack this thread if our circumstances are different, @aaronmtate; have you recently upgraded as well? After clearing out all uniquejobs keys from our Redis instance, everything is working fine again. Maybe this points to an issue in the tool that migrates the uniquejobs keys from v5 to v6?
Are you also using redis-namespace, @jwg2s? The gem doesn't officially support redis-namespace anymore; I removed it to be able to make the locking more reliable. Version 5 allowed too many duplicates to be processed, and version 6 is way more strict. I needed to remove the test extensions, the redis mock dependency, ActiveJob support, and redis-namespace support to achieve reliability. Now the error is on the other side of the scale: the gem sometimes prevents jobs that should be allowed to run. I feel like this is a much better starting point for a gem that is supposed to prevent duplicates :)

I'm totally open to PRs to enable redis-namespace to work with the gem. I won't support ActiveJob, though; it just does not make sense.

It should work to pick up jobs from v5 if they are not namespaced. If they are namespaced, I honestly couldn't tell, because I didn't consider that scenario for the migration/upgrade.
Ah, that might explain it, then:
I'll see if I can stick with version 5 to at least get everything queued, even at the risk of an occasional duplicate job.

EDIT: Nope. Prior versions (both v4.0.18 and v5.0.10) don't provide any uniqueness at all (they just keep requeueing jobs, regardless of the lock type), while version 6.0.6 queues only 6205 of 6737 distinct jobs. This is most unfortunate.
Yep, that is likely the culprit; we're using redis-namespace as well. What would go into supporting that? If you could point me in the right direction at a high level, I could attempt a PR.
@jwg2s @aaronmtate I think it might be in master already; I received a PR for this a while ago. Not sure if it solves all the problems, and to be honest I'm not even sure it is possible to solve all of them, but we can try. I'd love to support redis-namespace if people want it, but this is actually the first time anyone has missed it. It's been years since I dropped it, if I am not mistaken.
I was mostly removing things to be able to get the uniqueness reliable at the time.
@mhenrixon could you point us to that PR? I couldn't find it, or anything in the master code that indicates this has been addressed, but I could be missing it. This is definitely a critical problem for us, and since redis-namespace is pretty popular, compatibility would provide a pretty big benefit to the community.
@mhenrixon I'm working to solve the same problem @jwg2s described. Would configuring the initializer to clean dead locks, as described in https://github.com/mhenrixon/sidekiq-unique-jobs#cleanup-dead-locks, fix the problem?
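For reference, the README section linked above registers a Sidekiq death handler that deletes a job's lock digest when the job dies, so a dead job can't hold its lock forever. A sketch along those lines, assuming a v6-era API (the payload key and the `Digests` method name may differ between gem versions, so verify against your installed version):

```ruby
# config/initializers/sidekiq.rb -- sketch only, based on the
# "Cleanup Dead Locks" section of the sidekiq-unique-jobs README.
Sidekiq.configure_server do |config|
  config.death_handlers << ->(job, _exception) do
    # 'unique_digest' in v6; newer versions use 'lock_digest'.
    digest = job['unique_digest']
    SidekiqUniqueJobs::Digests.delete_by_digest(digest) if digest
  end
end
```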
Indeed, @scottrblock, especially if you are on a newer Sidekiq version. If your Sidekiq isn't the latest and greatest, make sure you use
@mhenrixon what about adding a note in the README about not using the gem
I thought it was in there, @fabioperrella... I used to have a section in the README about redis-namespace. Was I dreaming that? So weird...
This should be fixed by
As far as I can see, there is no mention of redis-namespace. It would also be nice to display a warning on server startup if both gems are loaded?
http://www.mikeperham.com/2015/09/24/storing-data-with-redis/

I'm happy to review and merge any PRs on the redis-namespace topic; I just don't have the capacity to dig into it at the moment.
I tried enqueueing several jobs. The uniqueness limitation worked for the jobs that would enqueue at all; others simply would not be added to the queue no matter what. The enqueuing method would return nil, even though no other job with the same parameter existed in the queue at the time.
Worker class
We do use ActiveJob for most other jobs. My assumption regarding the mentioned incompatibility was that this should at least work if I specify the class as a `Sidekiq::Worker`. I would try with one number for the account id (say 12345) and it simply wouldn't enqueue, though 12346 would enqueue just fine even if the related account record didn't exist. This was not limited to one specific id; it failed for a significant number of them.
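The worker class itself wasn't captured in the report above. A minimal sketch of what such a worker might look like under sidekiq-unique-jobs v6, where the `lock:` option replaced the older `unique:` option; the specific lock type and the method body are assumptions:

```ruby
# Sketch of a plain Sidekiq worker (not an ActiveJob) with a v6-style
# uniqueness lock. Lock type and perform body are assumed, not taken
# from the original report.
module CommissionLedgerTracker
  class QueuedUpdate
    include Sidekiq::Worker
    # v6 syntax; v5 and earlier used `unique: :until_executed` instead.
    sidekiq_options lock: :until_executed

    def perform(account_id)
      # Update the commission ledger for this account
      # (assumed behavior based on the class name).
    end
  end
end
```

With `lock: :until_executed`, the lock key is held from enqueue until the job finishes, which matches the failure mode in this thread: if the key is never released after completion, subsequent `perform_async` calls for the same account_id return nil.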
sidekiq-unique-jobs version was 6.0.6