When you queue the same job twice with the same arguments, once using `perform_async` and once using `perform_at`, the two jobs are not counted as duplicates if the time provided to `perform_at` is in the future.
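Example (a minimal sketch of the calls that reproduce this for me; the worker is defined below):

```ruby
# Schedule the job an hour out, then enqueue it immediately with the same argument.
# Both calls return a jid, i.e. neither one is rejected as a duplicate.
MyUniqueWorker.perform_at(Time.now + 3600, 1)
MyUniqueWorker.perform_async(1)
```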
There are now two jobs that have the same arguments and should have been de-duped.
My unique worker is equivalent to:
```ruby
class MyUniqueWorker
  include Sidekiq::Worker
  sidekiq_options queue: :default,
                  retry: true,
                  unique: true,
                  unique_job_expiration: 2.hours,
                  retry_count: 10

  def perform(id)
    # do some work...
  end
end
```
Workaround
The workaround I am using now is to override `perform_in` and `perform_async` so that the `'at'` key is always present.
```ruby
# Override perform_async to schedule for "now",
# due to the unique jobs bug.
def self.perform_async(*args)
  perform_in(Time.now, *args)
end

# Override perform_in so the 'at' key is never removed.
def self.perform_in(interval, *args)
  int = interval.to_f
  now = Time.now
  # Values below ~1e9 are relative intervals in seconds; larger values are epoch timestamps.
  ts = (int < 1_000_000_000 ? (now + interval).to_f : int)

  # Removed the optimization in lib/sidekiq/worker.rb:
  # item.delete('at'.freeze) if ts <= now.to_f

  item = { 'class' => self, 'args' => args, 'at' => ts }
  client_push(item)
end
```
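With the override in place every job carries an `'at'` key, so both enqueue paths presumably go through the same scheduled-job uniqueness check and the duplicate is rejected:

```ruby
MyUniqueWorker.perform_at(Time.now + 3600, 1) # => jid
MyUniqueWorker.perform_async(1)               # => nil (rejected as a duplicate)
```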
Long Term Solution
Any ideas for a long-term solution to this?
Looking through the code, it looks like something around `*_unique_for?` might be causing this issue.
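To illustrate what I suspect (a hypothetical sketch, not the gem's actual code): if the uniqueness digest is computed over the raw payload, then Sidekiq stripping the `'at'` key from immediate jobs while scheduled jobs keep it would make the two payloads hash differently:

```ruby
require 'json'
require 'digest'

# Hypothetical digest over the full payload, including the 'at' key.
def unique_digest(item)
  Digest::MD5.hexdigest(item.to_json)
end

scheduled = { 'class' => 'MyUniqueWorker', 'args' => [1], 'at' => Time.now.to_f + 3600 }
immediate = { 'class' => 'MyUniqueWorker', 'args' => [1] } # 'at' stripped by Sidekiq's optimization

unique_digest(scheduled) == unique_digest(immediate) # => false, so no de-dupe
```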
Any ideas for a fix would be appreciated.
Since the tests below pass fine using real Redis, I am closing this issue. The code you provided "works on my machine"™.
```ruby
class MyUniqueWorker
  include Sidekiq::Worker
  sidekiq_options queue: :customqueue,
                  retry: true,
                  unique: true,
                  unique_job_expiration: 7200,
                  retry_count: 10

  def perform(_)
  end
end

describe 'when a job is already scheduled' do
  before { MyUniqueWorker.perform_in(3600, 1) }

  it 'rejects new jobs with the same argument' do
    expect(MyUniqueWorker.perform_async(1)).to eq(nil)
  end
end
```
If something is still not working for you, I suggest you provide a failing test.