Reviving: Failed jobs waiting to be retried are not considered when fetching uniqueness #708

Closed
axos88 opened this issue Apr 30, 2022 · 2 comments

axos88 commented Apr 30, 2022

Reviving #394. This is still an issue on 7.0.4. Feel free to close this one and reopen that one so that the conversation is in one place.

Any jobs that have failed and are waiting in the retry set do not take part in uniqueness validation.

Steps to reproduce:

class FailingJob
  include Sidekiq::Worker

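  # lock/on_conflict are options from the sidekiq-unique-jobs gem: keep the job
  # unique while it is queued, replacing the queued copy on conflict.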
  sidekiq_options lock: :until_executing, on_conflict: :replace

  def perform(id)
    raise NotImplementedError
  end
end

Execute the following, for example in a Rails console, while the Sidekiq worker is running:

  FailingJob.perform_async(1)
  sleep 1
  FailingJob.perform_async(1)

Expected: The job should not be duplicated.
Actual: It is.
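
To confirm what ended up where, something like this can be run from the same console (a sketch using Sidekiq's public API; it assumes FailingJob runs on the default queue and its first attempt has already failed):

  require 'sidekiq/api'

  # Count FailingJob entries waiting to be retried and still sitting on the queue.
  # With working uniqueness there should be a single entry in total; here the
  # second push shows up as a separate job.
  retrying = Sidekiq::RetrySet.new.count { |job| job.klass == 'FailingJob' }
  enqueued = Sidekiq::Queue.new.count { |job| job.klass == 'FailingJob' }
  puts "retrying: #{retrying}, enqueued: #{enqueued}"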

Possible solution:

If a job fails, it should re-acquire its locks. If there is a conflict, simulate what would have happened if the other job had arrived first.
This would work for my use case (I only execute jobs serially, with a single worker), but it may be prone to race conditions: with a slow job, the conflict would not even be detected if the "conflicting" job has already finished by the time the original job fails.
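
As a stop-gap illustration of that idea, the behaviour can be approximated with plain Sidekiq middleware and Redis, without going through sidekiq-unique-jobs' own lock machinery. This is only a sketch: TrackFailedLocks, RejectDuplicateOfFailed and the failed_lock_digests Redis set are made-up names, and the digest is simply class name plus serialized arguments rather than the gem's real lock digest.

require 'sidekiq'
require 'json'

# Server middleware: when a job raises (and is headed for the retry set),
# remember a digest for it; clear the digest once the job eventually succeeds.
class TrackFailedLocks
  def call(_worker, job, _queue)
    yield
    Sidekiq.redis { |conn| conn.srem('failed_lock_digests', digest(job)) }
  rescue Sidekiq::Shutdown
    raise
  rescue Exception
    # Rescue Exception (and re-raise) so errors outside StandardError, such as
    # the NotImplementedError above, are also recorded; Sidekiq's own retry
    # handling still runs because the error is re-raised.
    Sidekiq.redis { |conn| conn.sadd('failed_lock_digests', digest(job)) }
    raise
  end

  private

  def digest(job)
    "#{job['class']}:#{job['args'].to_json}"
  end
end

# Client middleware: refuse to push a fresh job whose digest is still marked
# as failed. Returning false from client middleware stops the push.
class RejectDuplicateOfFailed
  def call(_worker_class, job, _queue, _redis_pool)
    return yield if job['retry_count'] # a retry of an existing job; let it through

    digest = "#{job['class']}:#{job['args'].to_json}"
    failed = Sidekiq.redis { |conn| conn.smembers('failed_lock_digests') }
    return false if failed.include?(digest)

    yield
  end
end

TrackFailedLocks would be added to the server middleware chain and RejectDuplicateOfFailed to the client middleware chain (on both client and server, since the server also pushes jobs). It still has the race described above: the digest is only recorded at the moment the job fails, so a duplicate pushed before the first failure slips through. A real fix inside sidekiq-unique-jobs would reuse its own lock digests instead of this ad-hoc class-plus-args key.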

axos88 commented Oct 24, 2024

Why was this marked as completed? It still seems to be an issue.

axos88 commented Oct 24, 2024

More info in: #394
