README.md in tensor_stream-0.9.8 vs README.md in tensor_stream-0.9.9

- old
+ new

@@ -1,20 +1,19 @@
 [![Gem Version](https://badge.fury.io/rb/tensor_stream.svg)](https://badge.fury.io/rb/tensor_stream)[![CircleCI](https://circleci.com/gh/jedld/tensor_stream.svg?style=svg)](https://circleci.com/gh/jedld/tensor_stream) [![Join the chat at https://gitter.im/tensor_stream/Lobby](https://badges.gitter.im/tensor_stream/Lobby.svg)](https://gitter.im/tensor_stream/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

 # TensorStream

-A reimplementation of TensorFlow for ruby. This is a ground up implementation with no dependency on TensorFlow. Effort has been made to make the programming style as near to TensorFlow as possible, comes with a pure ruby evaluator by default with support for an opencl evaluator for large models and datasets.
+An open-source machine learning framework for Ruby. Designed to run on a wide variety of Ruby implementations (JRuby, TruffleRuby, MRI), with an option for high-performance computation via OpenCL.

-The goal of this gem is to have a high performance machine learning and compute solution for ruby with support for a wide range of hardware and software configuration.
+This framework is heavily influenced by TensorFlow and aims to feel familiar to TensorFlow users. It is a ground-up implementation with no dependency on TensorFlow. Effort has been made to keep the programming style as close to TensorFlow as possible; it comes with a pure Ruby evaluator by default, with support for an OpenCL evaluator for large models and datasets.

-## Features
+## Goals & Features

+- Easy to use
 - Improve model readability
 - Replicates most of the commonly used low-level tensorflow ops (tf.add, tf.constant, tf.placeholder, tf.matmul, tf.sin etc...)
-- Supports auto-differentiation
-- Provision to use your own opcode evaluator (opencl, sciruby and tensorflow backends planned)
-- Goal is to be as close to TensorFlow in behavior but with some freedom to add ruby specific enhancements (with lots of test cases)
-- (08-08-2018) Load pbtext files from tensorflow (Graph.parse_from_string)
+- Supports auto-differentiation using formal derivation
+- Extensible - use your own opcode evaluator (OpenCL and pure Ruby currently supported)

 ## Compatibility

 TensorStream comes with a pure ruby and OpenCL implementation out of the box. The pure ruby implementation is known to work with most ruby implementations including TruffleRuby, JRuby as well as jit enabled versions of mri (ruby-2.6.0).

@@ -53,11 +52,12 @@
 $ gem install tensor_stream

 ## Usage

-Usage is similar to how you would use TensorFlow except with ruby syntax
+Usage is similar to how you would use TensorFlow except with ruby syntax.
+There are also enhancements to the syntax to make it as concise as possible.
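The "auto-differentiation using formal derivation" bullet above refers to computing gradients symbolically rather than numerically. This is not TensorStream code, just a minimal plain-Ruby sketch of the idea for a polynomial represented as a `{ exponent => coefficient }` hash:

```ruby
# Formal (symbolic) differentiation sketch: apply the power rule
# d/dx (c * x^n) = c*n * x^(n-1) to each term of the polynomial.
def derive(poly)
  poly.each_with_object({}) do |(exp, coeff), result|
    # Constant terms (x^0) vanish under differentiation.
    result[exp - 1] = coeff * exp unless exp.zero?
  end
end

f  = { 2 => 3, 1 => 2, 0 => 5 }  # 3x^2 + 2x + 5
df = derive(f)                   # => { 1 => 6, 0 => 2 }, i.e. 6x + 2
```

TensorStream's real implementation works on its op graph (see lib/tensor_stream/math_gradients), but the principle is the same: the derivative is derived from the expression's structure, not approximated by finite differences.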
 Linear regression sample:

 ```ruby
 require 'tensor_stream'

@@ -73,22 +73,28 @@
 train_Y = [1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
            2.827,3.465,1.65,2.904,2.42,2.94,1.3]

 n_samples = train_X.size

-X = tf.placeholder("float")
-Y = tf.placeholder("float")
+# X = tf.placeholder("float")
+X = Float.placeholder
+# Y = tf.placeholder("float")
+Y = Float.placeholder
+
 # Set model weights
-W = tf.variable(rand, name: "weight")
-b = tf.variable(rand, name: "bias")
+# W = tf.variable(rand, name: "weight")
+W = rand.t.var name: "weight"
+# b = tf.variable(rand, name: "bias")
+b = rand.t.var name: "bias"
+
 # Construct a linear model
 pred = X * W + b

 # Mean squared error
-cost = ((pred - Y) ** 2).reduce(:+) / ( 2 * n_samples)
+cost = ((pred - Y) ** 2).reduce / ( 2 * n_samples)

 # optimizer = TensorStream::Train::MomentumOptimizer.new(learning_rate, momentum, use_nesterov: true).minimize(cost)
 # optimizer = TensorStream::Train::AdamOptimizer.new(learning_rate).minimize(cost)
 # optimizer = TensorStream::Train::AdadeltaOptimizer.new(1.0).minimize(cost)
 # optimizer = TensorStream::Train::AdagradOptimizer.new(0.01).minimize(cost)

@@ -132,11 +138,11 @@
 Not all ops are available. Available ops are defined in lib/tensor_stream/ops.rb, corresponding gradients are found at lib/tensor_stream/math_gradients.
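As a sanity check on what the `cost` graph in the sample evaluates to, here is the same mean-squared-error expression computed in plain Ruby on a small slice of the data. The weight `w` and bias `b` values are arbitrary illustrative numbers, not trained parameters:

```ruby
# First three points of the sample's training data.
train_x = [3.3, 4.4, 5.5]
train_y = [1.7, 2.76, 2.09]
n_samples = train_x.size

# Arbitrary (untrained) model parameters, for illustration only.
w = 0.25
b = 0.8

# Mirrors pred = X * W + b and cost = ((pred - Y) ** 2).reduce / (2 * n_samples)
pred = train_x.map { |x| x * w + b }
cost = pred.zip(train_y).sum { |p, y| (p - y)**2 } / (2.0 * n_samples)
```

In the TensorStream version the same arithmetic is recorded as a graph and only evaluated inside a session, which is what lets the optimizers differentiate `cost` with respect to `W` and `b`.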
 There are also certain differences with regards to naming conventions, and named parameters:

-# Variables
+# Variables and Constants

 To make referencing python examples easier it is recommended to use "tf" as the TensorStream namespace

 At the beginning

 ```ruby
@@ -155,12 +161,19 @@
 Ruby

 ```ruby
 w = ts.variable(0, name: 'weights')
+c = ts.constant(1.0)
+
+# concise way when initializing using a constant
+w = 0.t.var name: 'weights'
+c = 1.0.t
 ```

+Calling .t on Integer, Array and Float types converts them into a tensor

 # Shapes

 Python

 ```python
 x = tf.placeholder(tf.float32, shape=(1024, 1024))
@@ -171,9 +184,13 @@
 Ruby

 ```ruby
 x = ts.placeholder(:float32, shape: [1024, 1024])
 x = ts.placeholder(:float32, shape: [nil, 1024])
+
+# Another, more terse way
+x = Float.placeholder shape: [1024, 1024]
+y = Float.placeholder shape: [nil, 1024]
 ```

 For debugging, each operation or tensor supports the to_math method

 ```ruby
\ No newline at end of file
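The new `.t` conversion added in this release relies on Ruby's open classes. The sketch below is hypothetical and much simpler than TensorStream's real `Tensor` class; it only illustrates the monkey-patching mechanism such a conversion API depends on:

```ruby
# Hypothetical stand-in for a tensor; TensorStream's actual class is richer.
SketchTensor = Struct.new(:value, :dtype)

# Reopen the core numeric classes to add a ".t" conversion,
# mirroring the style of 1.0.t and 0.t.var from the README.
class Float
  def t
    SketchTensor.new(self, :float32)
  end
end

class Integer
  def t
    SketchTensor.new(self, :int32)
  end
end

c = 1.0.t   # a :float32 tensor wrapping 1.0
i = 2.t     # an :int32 tensor wrapping 2
```

The upside of this style is very terse model code; the trade-off is that core classes gain new methods globally, which is why gems that do this usually keep the added surface small (here just `t`).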