
Reviving: Failed jobs waiting to be retried are not considered when fetching uniqueness #708

@axos88

Description


Reviving #394. This is still an issue on 7.0.4. Feel free to close this one and reopen that one so that the conversation stays in one place.

Any jobs that are in the failed queue will not take part in uniqueness validation.

Steps to reproduce:

class FailingJob
  include Sidekiq::Worker

  sidekiq_options lock: :until_executing, on_conflict: :replace

  def perform(id)
    raise NotImplementedError
  end
end

Execute the following in, for example, a Rails console, while the Sidekiq worker is running:

  FailingJob.perform_async(1)
  sleep 1
  FailingJob.perform_async(1)

Expected: The second job should not be enqueued (no duplicate).
Actual: It is.

Possible solution:

If a job fails, it should re-acquire its locks. If there is a conflict, simulate what would have happened if the other job had arrived first.
Hmm... this would work for my use case (I only execute jobs serially, with a single worker), but it may be prone to race conditions: for example, with a slow job, a conflict wouldn't even be detected if the "conflicting" job has already finished by the time the original job fails.
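To make the proposal concrete, here is a minimal plain-Ruby sketch (no Sidekiq, no sidekiq-unique-jobs API) of the suggested failure path. `LockRegistry` and `handle_failure` are invented names for illustration only; the real gem's lock storage lives in Redis and works differently. The idea: when a job fails, it tries to take its lock back before going to the retry set, and on conflict the `:replace` strategy drops the failed job, as if the other job had arrived first.

```ruby
# Hypothetical model of lock ownership keyed by uniqueness digest.
class LockRegistry
  def initialize
    @locks = {} # digest => job id currently holding the lock
  end

  # Returns true if the lock was acquired (or already held by this job),
  # false on conflict with another job.
  def acquire(digest, job_id)
    return false if @locks.key?(digest) && @locks[digest] != job_id
    @locks[digest] = job_id
    true
  end

  def release(digest, job_id)
    @locks.delete(digest) if @locks[digest] == job_id
  end
end

# Simulated failure path: the failed job attempts to re-acquire its lock
# before being scheduled for retry. On conflict, apply the on_conflict
# strategy as if the conflicting job had arrived first.
def handle_failure(registry, digest, failed_job_id, on_conflict: :replace)
  if registry.acquire(digest, failed_job_id)
    :retried   # lock re-acquired; the job may safely sit in the retry set
  elsif on_conflict == :replace
    :dropped   # another job holds the lock; the failed job is replaced
  else
    :conflict  # other strategies would be handled here
  end
end
```

With `lock: :until_executing` the lock is released when execution starts, so while job A is running (and failing), a second enqueue B can take the lock; A's failure handler then detects the conflict and drops A instead of duplicating it in the retry set.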
