README in memcached-1.2 vs README in memcached-1.2.1
- old
+ new
@@ -52,14 +52,14 @@
$cache.set 'test', value
$cache.set 'test2', value
$cache.get ['test', 'test2', 'missing']
#=> {"test" => "hello", "test2" => "hello"}
-You can set a counter and increment it:
+You can set a counter and increment it. Note that you must initialize it with an integer, stored as a raw (unmarshalled) ASCII string:
start = 1
- $cache.set 'counter', start, 0, false
+ $cache.set 'counter', start.to_s, 0, false
$cache.increment 'counter' #=> 2
$cache.increment 'counter' #=> 3
$cache.get('counter', false).to_i #=> 3
You can get some server stats:
@@ -71,14 +71,21 @@
$cache.set 'test', nil
$cache.get 'test' #=> nil
$cache.delete 'test'
$cache.get 'test' #=> raises Memcached::NotFound
-== Legacy applications
+== Pipelining
-There is a compatibility wrapper for legacy applications called Memcached::Rails.
+Pipelining updates is extremely effective in <b>memcached</b>, yielding more than 25 times the write throughput of the default settings. Use the following options to enable it:
+ :no_block => true,
+ :buffer_requests => true,
+ :noreply => true,
+ :binary_protocol => false
+
+Currently #append, #prepend, #set, and #delete are pipelined. Note that when you perform a read, all pending writes are flushed to the servers.
+
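+For illustration, a minimal sketch of enabling these options, assuming a single local server on the default port:
+
+  $cache = Memcached.new('localhost:11211',
+    :no_block => true,
+    :buffer_requests => true,
+    :noreply => true,
+    :binary_protocol => false)
+
+  # Writes are buffered on the client instead of being sent one at a time
+  1000.times { |i| $cache.set "key#{i}", 'value' }
+
+  # The first read flushes all pending writes to the servers
+  $cache.get 'key0' #=> "value"
+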
== Threading
<b>memcached</b> is threadsafe, but each thread requires its own Memcached instance. Create a global Memcached, and then call Memcached#clone each time you spawn a thread.
thread = Thread.new do
@@ -88,9 +95,13 @@
cache.get 'example'
end
# Join the thread so that exceptions don't get lost
thread.join
+
+== Legacy applications
+
+There is a compatibility wrapper for legacy applications called Memcached::Rails.
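+
+For illustration, a minimal sketch, assuming Memcached::Rails follows the memcache-client interface, where #get returns nil on a miss instead of raising Memcached::NotFound:
+
+  cache = Memcached::Rails.new('localhost:11211')
+  cache.set 'test', 'hello'
+  cache.get 'test'    #=> "hello"
+  cache.get 'missing' #=> nil
+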
== Benchmarks
<b>memcached</b>, correctly configured, is at least twice as fast as <b>memcache-client</b> and <b>dalli</b>. See BENCHMARKS[link:files/BENCHMARKS.html] for details.