Describe the bug
When locks are set up to expire using the TTL setting (both globally and at the job level), this should keep redis memory usage stable. However, while a lock's associated string keys are appropriately expired, the digest in the `uniquejobs:digests` zset is never cleaned up.
(This may be a duplicate of #637; I've opened a fresh issue because that thread seems to focus on jobs that exit early with an error, whereas I've observed this issue in the absence of any errors.)
[ It's entirely possible I've misunderstood some aspect of how this gem can or should work, any corrections to my assumptions or configuration are very welcome =) ]
Expected behavior
For locks/digests that have expired, I expect all data/traces to be cleaned up, leaving old locks with a memory footprint of zero.
Current behavior
Strings under `uniquejobs:*` and hashes of `uniquejobs:*:LOCKED` and `uniquejobs:*:UPGRADED` are all expired according to our configured TTL (24 hours), but the zset `uniquejobs:digests` continues to grow and includes digests many days old.
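To make the mismatch concrete, here is a minimal in-memory model of what I believe is happening (this is NOT the gem's implementation, just a sketch): the per-lock string keys disappear when their TTL fires, but membership in the digests zset is tracked separately, and nothing removes the digest when the keys expire.

```ruby
# In-memory model of the observed leak (hypothetical, not real redis):
# the per-lock string key expires via TTL, but the digest lives in a
# separate sorted-set-like structure that nothing trims.

lock_keys = { "uniquejobs:abc123" => "locked" }      # removed when lock_ttl fires
digests   = { "uniquejobs:abc123" => Time.now.to_f } # member of uniquejobs:digests

# Simulate the TTL firing: redis drops the string key...
lock_keys.delete("uniquejobs:abc123")

# ...but the digest entry survives, so the zset grows without bound.
puts lock_keys.size # => 0
puts digests.size   # => 1
```

Substitute real redis `EXPIRE` semantics for the `delete` above and this is exactly the growth pattern we see in production.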
Worker class
```ruby
class MyWorker
  include Sidekiq::Worker

  sidekiq_options(
    lock: :until_expired,
    on_conflict: :log,
    lock_ttl: 24 * 60 * 60,
    unique_args: ->(args) { [args[0], args[1], args[5]] }
  )

  def perform(args); end
end
```
Sidekiq version: 6.2.2
Sidekiq Unique Jobs version: 7.1.8
Additional context
For our use case, most of our locks will be short lived. Our job arguments usually include an ID which is at risk of being handled multiple times briefly, but will never be used again after a few days. We get no value from holding digests longer, and it's an active drain on performance and risk to uptime if our redis memory usage increases forever.
I do not know if this is the sort of thing the reaper should be cleaning up, but as far as I can tell we do have the reaper enabled (default config):
```ruby
irb(main):001:0> SidekiqUniqueJobs.config.reaper
=> :ruby
irb(main):002:0> SidekiqUniqueJobs.config.reaper_count
=> 1000
irb(main):003:0> SidekiqUniqueJobs.config.reaper_interval
=> 600
irb(main):004:0> SidekiqUniqueJobs.config.reaper_timeout
=> 10
```
Also thanks so much for a great gem and all the work on version 7 👏🏻 👏🏻 👏🏻