README.md in sidekiq-unique-jobs-6.0.0.rc5 vs README.md in sidekiq-unique-jobs-6.0.0.rc6
- old
+ new
@@ -1,9 +1,43 @@
# SidekiqUniqueJobs [![Join the chat at https://gitter.im/mhenrixon/sidekiq-unique-jobs](https://badges.gitter.im/mhenrixon/sidekiq-unique-jobs.svg)](https://gitter.im/mhenrixon/sidekiq-unique-jobs?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Build Status](https://travis-ci.org/mhenrixon/sidekiq-unique-jobs.png?branch=master)](https://travis-ci.org/mhenrixon/sidekiq-unique-jobs) [![Code Climate](https://codeclimate.com/github/mhenrixon/sidekiq-unique-jobs.png)](https://codeclimate.com/github/mhenrixon/sidekiq-unique-jobs) [![Test Coverage](https://codeclimate.com/github/mhenrixon/sidekiq-unique-jobs/badges/coverage.svg)](https://codeclimate.com/github/mhenrixon/sidekiq-unique-jobs/coverage)
-The missing unique jobs for sidekiq
+## Table of contents
+* [Introduction](#introduction)
+* [Documentation](#documentation)
+* [Requirements](#requirements)
+* [Installation](#installation)
+* [General Information](#general-information)
+* [Options](#options)
+ * [Lock Expiration](#lock-expiration)
+ * [Lock Timeout](#lock-timeout)
+ * [Unique Across Queues](#unique-across-queues)
+ * [Unique Across Workers](#unique-across-workers)
+* [Locks](#locks)
+ * [Until Executing](#until-executing)
+ * [Until Executed](#until-executed)
+ * [Until Timeout](#until-timeout)
+ * [Unique Until And While Executing](#unique-until-and-while-executing)
+ * [While Executing](#while-executing)
+* [Usage](#usage)
+ * [Finer Control over Uniqueness](#finer-control-over-uniqueness)
+ * [After Unlock Callback](#after-unlock-callback)
+ * [Logging](#logging)
+* [Debugging](#debugging)
+ * [Console](#console)
+ * [List Unique Keys](#list-unique-keys)
+ * [Remove Unique Keys](#remove-unique-keys)
+ * [Command Line](#command-line)
+* [Communication](#communication)
+* [Testing](#testing)
+* [Contributing](#contributing)
+* [Contributors](#contributors)
+
+## Introduction
+
+The goal of this gem is to ensure your Sidekiq jobs are unique. We do this by creating unique keys in Redis based on how you configure uniqueness.
+
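In practice, uniqueness is configured per worker through `sidekiq_options`. A minimal sketch (the worker name here is made up; pick the lock type that fits your needs, see [Locks](#locks)):

```ruby
class UniqueWorker
  include Sidekiq::Worker
  # Only one job with this digest may exist until it has finished executing
  sidekiq_options lock: :until_executed

  def perform(some_id)
    # work goes here
  end
end
```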
## Documentation
This is the documentation for the master branch. You can find the documentation for each release by navigating to its tag: https://github.com/mhenrixon/sidekiq-unique-jobs/tree/v5.0.10.
Below are links to the latest major versions (4 & 5):
@@ -35,13 +69,13 @@
See [Interaction w/ Sidekiq](https://github.com/mhenrixon/sidekiq-unique-jobs/wiki/How-this-gem-interacts-with-Sidekiq) on how the gem interacts with Sidekiq.
See [Locking & Unlocking](https://github.com/mhenrixon/sidekiq-unique-jobs/wiki/Locking-&-Unlocking) for an overview of the differences on when the various lock types are locked and unlocked.
-### Options
+## Options
-#### Lock Expiration
+### Lock Expiration
This is probably not the configuration option you want...
Since the client and the server are disconnected and don't run inside the same process, setting a lock expiration is probably not what you want. Any keys used by this gem WILL be removed when the expiration is reached. For jobs scheduled in the future, the key expires at the scheduled time plus whatever expiration you have set.
@@ -50,21 +84,21 @@
```ruby
sidekiq_options lock_expiration: nil # default - don't expire keys
sidekiq_options lock_expiration: 20.days # expire this lock in 20 days
```
-#### Lock Timeout
+### Lock Timeout
This is the timeout (how long to wait) when creating the lock. By default we don't use a timeout, so we won't wait for the lock to be created. If you want, you can set a timeout as shown below.
```ruby
sidekiq_options lock_timeout: 0 # default - don't wait at all
sidekiq_options lock_timeout: 5 # wait 5 seconds
sidekiq_options lock_timeout: nil # lock indefinitely, this process won't continue until it gets a lock. VERY DANGEROUS!!
```
-#### Unique Across Queues
+### Unique Across Queues
This configuration option is slightly misleading. It doesn't disregard the queue on other jobs, only on the job itself. This means that a worker that schedules jobs onto multiple queues can have uniqueness enforced across every queue it is pushed to.
```ruby
class Worker
@@ -76,11 +110,11 @@
end
```
Now if you override the queue with `Worker.set(queue: 'another').perform_async(1)`, it will still be considered unique when compared to `Worker.perform_async(1)` (which was actually pushed to the queue `default`).
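A sketch of that behavior (return values are assumed for illustration, following the convention used further down):

```ruby
Worker.perform_async(1)                        # => a job id, the lock is created
Worker.set(queue: 'another').perform_async(1)  # => nil, rejected as a duplicate despite the different queue
```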
-#### Unique Across Workers
+### Unique Across Workers
This configuration option is slightly misleading. It doesn't disregard the worker class on other jobs, only on the job itself. This means that the worker class won't be used when generating the unique digest. The only time this option really makes sense is when you want uniqueness across two different worker classes.
```ruby
class WorkerOne
@@ -105,14 +139,12 @@
WorkerTwo.perform_async(1)
# => nil because WorkerOne just stole the lock
```
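For reference, a hedged sketch of how two such worker classes could be configured (an assumption for illustration, not the elided code from the example above):

```ruby
class WorkerOne
  include Sidekiq::Worker
  # The worker class is left out of the unique digest, so WorkerOne and
  # WorkerTwo compete for the same lock when given the same arguments.
  sidekiq_options unique_across_workers: true, lock: :until_executed

  def perform(args); end
end

class WorkerTwo
  include Sidekiq::Worker
  sidekiq_options unique_across_workers: true, lock: :until_executed

  def perform(args); end
end
```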
-### Locks
+## Locks
-####
-
### Until Executing
The lock is created when the client pushes the job to the queue and released before the server starts processing the job.
**NOTE** this is probably not so good for jobs that shouldn't be running simultaneously (aka slow jobs).
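A minimal sketch of a worker using this lock (the class name is illustrative):

```ruby
class QuickNotificationWorker
  include Sidekiq::Worker
  # Duplicate pushes are prevented only while the job is waiting in the queue;
  # once the server picks it up the lock is already released.
  sidekiq_options lock: :until_executing

  def perform(user_id)
    # notify the user
  end
end
```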
@@ -311,18 +343,9 @@
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request
-## Contributors
-In no particular order:
+## Contributors
-- https://github.com/salrepe
-- https://github.com/rickenharp
-- https://github.com/sax
-- https://github.com/eduardosasso
-- https://github.com/KensoDev
-- https://github.com/adstage-david
-- https://github.com/jprincipe
-- https://github.com/crberube
-- https://github.com/simonoff
+You can find a list of contributors over on https://github.com/mhenrixon/sidekiq-unique-jobs/graphs/contributors