README.md in redis_failover-0.7.0 vs README.md in redis_failover-0.8.0
- old
+ new
@@ -8,11 +8,11 @@
switch-over is not desirable in high traffic sites where Redis is a critical part of the overall
architecture. The existing standard Redis client for Ruby also only supports configuration for a single
Redis server. When using master/slave replication, it is desirable to have all writes go to the
master, and all reads go to one of the N configured slaves.
-This gem attempts to address these failover scenarios. A redis failover Node Manager daemon runs as a background
+This gem (built using [ZK][]) attempts to address these failover scenarios. A redis failover Node Manager daemon runs as a background
process and monitors all of your configured master/slave nodes. When the daemon starts up, it
automatically discovers the current master/slaves. Background watchers are set up for each of
the redis nodes. As soon as a node is detected as offline, it will be moved to an "unavailable" state.
If the node that went offline was the master, then one of the slaves will be promoted to be the new master.
All existing slaves will be automatically reconfigured to point to the new master for replication.
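
For reference, the Node Manager runs as a separate daemon process. A minimal sketch of starting it might look like the following (the exact flags are an assumption here for illustration; check redis_node_manager --help for the version you have installed):

    redis_node_manager -n localhost:6379,localhost:6380 -z localhost:2181,localhost:2182,localhost:2183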
@@ -33,10 +33,12 @@
that it can rebuild its set of Redis connections. The client also acts as a load balancer in that it will automatically
dispatch Redis read operations to one of N slaves, and Redis write operations to the master.
If it fails to communicate with a node, it will fetch the current list of available servers again, and then
optionally retry the operation.
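
As a rough sketch of that dispatch behavior (the key and value here are purely illustrative; Redis commands are issued directly on the client):

    client = RedisFailover::Client.new(:zkservers => 'localhost:2181,localhost:2182,localhost:2183')
    client.set('foo', 'bar') # a write operation, dispatched to the current master
    client.get('foo')        # a read operation, dispatched to one of the slaves
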
+[ZK]: https://github.com/slyphon/zk
+
## Installation
redis_failover has an external dependency on ZooKeeper. You must have a running ZooKeeper cluster already available in order to use redis_failover. ZooKeeper provides redis_failover with its high availability and data consistency between RedisFailover::Client instances and the Node Manager daemon. Please see the requirements section below for more information on installing and setting up ZooKeeper if you don't have it running already.
Add this line to your application's Gemfile:
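
    gem 'redis_failover'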
@@ -118,19 +120,32 @@
    :namespace     - namespace for redis nodes (optional)
    :logger        - logger override (optional)
    :retry_failure - indicate if failures should be retried (default true)
    :max_retries   - max retries for a failure (default 3)
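
As a combined sketch of the options above (all values shown are illustrative):

    require 'logger'

    client = RedisFailover::Client.new(
      :zkservers     => 'localhost:2181,localhost:2182,localhost:2183',
      :namespace     => 'redis',
      :logger        => Logger.new(STDOUT),
      :retry_failure => true,
      :max_retries   => 3)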
+## Manual Failover
+
+Manual failover can be initiated via RedisFailover::Client#manual_failover. This schedules a manual failover with the
+currently active Node Manager. Once the Node Manager receives the request, it will either fail over to the specific
+server passed to #manual_failover, or pick a random slave to become the new master. Here's an example:
+
+    client = RedisFailover::Client.new(:zkservers => 'localhost:2181,localhost:2182,localhost:2183')
+    client.manual_failover(:host => 'localhost', :port => 2222)
+
## Requirements
- redis_failover is actively tested against MRI 1.9.2/1.9.3 and JRuby 1.6.7 (1.9 mode only). Other rubies may work, although I don't actively test against them. Ruby 1.8 is not supported.
- redis_failover requires a ZooKeeper service cluster to ensure reliability and data consistency. ZooKeeper is very simple and easy to get up and running. Please refer to this [Quick ZooKeeper Guide](https://github.com/ryanlecompte/redis_failover/wiki/Quick-ZooKeeper-Guide) to get up and running quickly if you don't already have ZooKeeper as a part of your environment.
## Considerations
- Note that, by default, the Node Manager will mark slaves that are still syncing with their master as "available". This is based on the "slave-serve-stale-data" configuration value in redis.conf, which defaults to "yes" and means that slaves still syncing with their master will serve read requests. If you don't want this behavior, set "slave-serve-stale-data" to "no" in your redis.conf file, as shown below.
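
For example, to exclude still-syncing slaves from serving reads, the relevant redis.conf line would be:

    # don't serve reads from a slave until its sync with the master completes
    slave-serve-stale-data no
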
+## Limitations
+
- Note that it's still possible for RedisFailover::Client instances to see a stale list of servers for a very small window. Due to how ZooKeeper handles distributed communication this will rarely happen, but be aware that in the worst case a client could write to a "stale" master for a small period of time until it receives the next watch event via ZooKeeper.
+
+- Note that multiple Node Managers are currently used for redundancy purposes only. The Node Managers do not communicate with each other to perform any type of election or voting to determine if they all agree on promoting a new master. Right now, Node Managers that are not "active" simply wait until they can grab the lock to become the single decision-maker for which Redis servers are available or not. This means a scenario could arise where a Node Manager thinks the Redis master is available while the actual RedisFailover::Client instances think they can't reach the Redis master (due to a network partition, the Node Manager flapping because of machine failure, etc). We are exploring ways to improve this situation.
## Resources
- To learn more about Redis master/slave replication, see the [Redis documentation](http://redis.io/topics/replication).
- To learn more about ZooKeeper, see the official [ZooKeeper](http://zookeeper.apache.org/) site.