README.md in tensor_stream-0.8.5 vs README.md in tensor_stream-0.8.6

- old
+ new

@@ -9,14 +9,13 @@

The goal of this gem is to have a high performance machine learning and compute solution for ruby with support for a wide range of hardware and software configuration.

## Features

- Replicates most of the commonly used low-level tensorflow ops (tf.add, tf.constant, tf.placeholder, tf.matmul, tf.sin etc...)
-- Supports auto-differentiation via tf.gradients (mostly)
+- Supports auto-differentiation
- Provision to use your own opcode evaluator (opencl, sciruby and tensorflow backends planned)
- Goal is to be as close to TensorFlow in behavior but with some freedom to add ruby specific enhancements (with lots of test cases)
-- eager execution (experimental)
- (08-08-2018) Load pbtext files from tensorflow (Graph.parse_from_string)

## Compatibility

TensorStream comes with a pure ruby and OpenCL implementation out of the box. The pure ruby implementation

@@ -29,11 +28,11 @@

```
and then (without bundler)

```ruby
-require 'tensor_stream-opencl'
+require 'tensor_stream/opencl'
```

OpenCL is basically a requirement for deep learning and image processing tasks as the ruby implementation is too slow even with jit speedups using latest ruby implementations.

OpenCL kernels used by tensorstream can be found at tensor_stream/lib/evaluator/opencl/kernels. These are non specific and should work with any device that supports OpenCL including intel GPUs and CPUs, as well as GPUs from Nvidia and AMD.

@@ -89,12 +88,15 @@

pred = X * W + b

# Mean squared error
cost = ((pred - Y) ** 2).reduce(:+) / ( 2 * n_samples)

-# optimizer = TensorStream::Train::MomentumOptimizer.new(0.01, 0.5, use_nesterov: true).minimize(cost)
-# optimizer = TensorStream::Train::AdamOptimizer.new.minimize(cost)
+# optimizer = TensorStream::Train::MomentumOptimizer.new(learning_rate, momentum, use_nesterov: true).minimize(cost)
+# optimizer = TensorStream::Train::AdamOptimizer.new(learning_rate).minimize(cost)
+# optimizer = TensorStream::Train::AdadeltaOptimizer.new(1.0).minimize(cost)
+# optimizer = TensorStream::Train::AdagradOptimizer.new(0.01).minimize(cost)
+# optimizer = TensorStream::Train::RMSPropOptimizer.new(0.01, centered: true).minimize(cost)
optimizer = TensorStream::Train::GradientDescentOptimizer.new(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

@@ -210,11 +212,11 @@

```

To use the opencl evaluator instead of the ruby evaluator simply require it (if using rails this should be loaded automatically).

```ruby
-require 'tensor_stream-opencl'
+require 'tensor_stream/opencl'
```

Adding the OpenCL evaluator should expose additional devices available to tensor_stream

```ruby

@@ -225,10 +227,10 @@

By default TensorStream will determine using the given evaluators the best possible placement for each tensor operation

```ruby
-require 'tensor_stream/evaluator/opencl/opencl_evaluator'
+require 'tensor_stream/opencl'

# set session to use the opencl evaluator
sess = ts.session
sess.run(....) # do stuff
\ No newline at end of file
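
For illustration, below is a minimal, hypothetical sketch of the linear regression training loop implied by the 0.8.6 optimizer snippet above. The training data, hyperparameter values, and iteration count are placeholders invented for this example (they are not taken from the gem's README), and the snippet assumes the tensor_stream gem is installed.

```ruby
require 'tensor_stream'

tf = TensorStream

learning_rate = 0.01
momentum = 0.5
n_samples = 5

# Inputs and trainable parameters of a simple linear model pred = W * X + b
X = tf.placeholder("float")
Y = tf.placeholder("float")
W = tf.variable(rand, name: "weight")
b = tf.variable(rand, name: "bias")

pred = X * W + b

# Mean squared error, as in the README excerpt above
cost = ((pred - Y) ** 2).reduce(:+) / (2 * n_samples)

# Any of the optimizers listed in the diff can be swapped in here, e.g.
# TensorStream::Train::MomentumOptimizer.new(learning_rate, momentum, use_nesterov: true)
optimizer = TensorStream::Train::GradientDescentOptimizer.new(learning_rate).minimize(cost)

init = tf.global_variables_initializer()
sess = tf.session
sess.run(init)

# Fit y = 2x + 1 on a tiny, made-up dataset
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = xs.map { |x| 2.0 * x + 1.0 }

100.times do
  xs.zip(ys).each { |x, y| sess.run(optimizer, feed_dict: { X => x, Y => y }) }
end

puts "W=#{sess.run(W)} b=#{sess.run(b)}"
```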
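
In the same spirit, here is a small sketch of how the consolidated `require 'tensor_stream/opencl'` entry point (which replaces both `require 'tensor_stream-opencl'` and the older `tensor_stream/evaluator/opencl/opencl_evaluator` path shown in the diff) is intended to be used. The matrix values are arbitrary, and the snippet assumes the tensor_stream-opencl gem and a working OpenCL runtime are installed.

```ruby
require 'tensor_stream'
require 'tensor_stream/opencl' # single entry point for the OpenCL evaluator as of 0.8.6

ts = TensorStream

# With the OpenCL evaluator loaded, the session can place operations on any
# available OpenCL device without further configuration.
sess = ts.session

a = ts.constant([[1.0, 2.0], [3.0, 4.0]])
b = ts.constant([[5.0, 6.0], [7.0, 8.0]])

puts sess.run(ts.matmul(a, b)).inspect
```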