README.md in tensor_stream-0.8.0 vs README.md in tensor_stream-0.8.1

- old (0.8.0)
+ new (0.8.1)

@@ -15,12 +15,25 @@
 - Provision to use your own opcode evaluator (opencl, sciruby and tensorflow backends planned)
 - Goal is to be as close to TensorFlow in behavior but with some freedom to add ruby specific enhancements (with lots of test cases)
 - eager execution (experimental)
 - (08-08-2018) Load pbtext files from tensorflow (Graph.parse_from_string)

-Since this is a pure ruby implementation for now, performance is not there yet. However it should be a good enough environment to learn about tensorflow and experiment with some models.
+## Compatibility
+TensorStream comes with a pure ruby and OpenCL implementation out of the box. The pure ruby implementation
+is known to work with most ruby implementations including TruffleRuby, JRuby as well as jit enabled versions of mri (ruby-2.6.0).
+
+OpenCL is supported only on mri implementations of ruby. This can be enabled by including the OpenCL evaluator (Make sure you have OpenCL drivers installed correctly on your system):
+
+```ruby
+require 'tensor_stream/evaluator/opencl/opencl_evaluator'
+```
+
+OpenCL is basically a requirement for deep learning and image processing tasks as the ruby implementation is too slow even with jit speedups using latest ruby implementations.
+
+OpenCL kernels used by tensorstream can be found at tensor_stream/lib/evaluator/opencl/kernels. These are non specific and should work with any device that supports OpenCL including intel GPUs and CPUs, as well as GPUs from Nvidia and AMD.
+
 ## Installation

 Installation is easy, no need to mess with docker, python, clang or other shennanigans, works with both mri and jruby out of the box.

 Add this line to your application's Gemfile:
@@ -70,10 +83,12 @@
 pred = X * W + b

 # Mean squared error
 cost = ((pred - Y) ** 2).reduce(:+) / ( 2 * n_samples)

+# optimizer = TensorStream::Train::MomentumOptimizer.new(0.01, 0.5, use_nesterov: true).minimize(cost)
+# optimizer = TensorStream::Train::AdamOptimizer.new.minimize(cost)
 optimizer = TensorStream::Train::GradientDescentOptimizer.new(learning_rate).minimize(cost)

 # Initialize the variables (i.e. assign their default value)
 init = tf.global_variables_initializer()
@@ -336,11 +351,11 @@
 ```
 $ ruby -v
 ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-linux]
 $ ruby samples/linear_regression.rb
-495 seconds 1000 epochs
+495 seconds 10000 epochs
 ```

 ruby 2.6.0-preview2

 ```
@@ -381,10 +396,8 @@
 ## Contributing

 Bug reports and pull requests are welcome on GitHub at https://github.com/[USERNAME]/tensor_stream. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.

-
 ## License

-The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
-
+The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
\ No newline at end of file
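
As a quick illustration of the new Compatibility section in the first hunk, here is a minimal sketch of enabling the OpenCL evaluator. Only the `require` path and the MRI-only caveat come from the diff; the surrounding calls (`tf.constant`, `tf.session`, `sess.run`) follow the gem's basic usage and should be read as assumptions rather than as part of either README version.

```ruby
# Hedged sketch: enabling the OpenCL evaluator described in the 0.8.1 README.
# Assumes the tensor_stream gem (~> 0.8.1) is installed and working OpenCL
# drivers are present, as the Compatibility section requires.
require 'tensor_stream'
# Registers the OpenCL evaluator (MRI only, per the README).
require 'tensor_stream/evaluator/opencl/opencl_evaluator'

tf = TensorStream

a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
sum = a + b

sess = tf.session            # evaluator selection is assumed to pick up OpenCL when available
puts sess.run(sum).inspect   # expected: [5.0, 7.0, 9.0]
```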
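
The second hunk advertises Momentum and Adam optimizers as commented-out alternatives to gradient descent in the linear regression sample. Below is a hedged, self-contained sketch exercising those optimizer classes on a toy one-variable problem; the class names and constructor arguments are copied from the hunk, while the variable and session plumbing is an assumption modeled on the rest of the sample and is not part of this diff.

```ruby
# Toy sketch (assumption-heavy): minimize (w - 5)^2 with the optimizers named
# in the diff. Swap the active `optimizer` line to try Momentum or Adam.
require 'tensor_stream'

tf = TensorStream

w = tf.variable(0.0, name: 'w')
cost = (w - 5.0) ** 2

optimizer = TensorStream::Train::GradientDescentOptimizer.new(0.1).minimize(cost)
# optimizer = TensorStream::Train::MomentumOptimizer.new(0.01, 0.5, use_nesterov: true).minimize(cost)
# optimizer = TensorStream::Train::AdamOptimizer.new.minimize(cost)

init = tf.global_variables_initializer()

tf.session do |sess|
  sess.run(init)
  200.times { sess.run(optimizer) }
  puts sess.run(w)   # with the gradient descent line active, expect a value close to 5.0
end
```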