README.md in torch-rb-0.1.1 vs README.md in torch-rb-0.1.2
- old
+ new
@@ -1,10 +1,10 @@
# Torch-rb
:fire: Deep learning for Ruby, powered by [LibTorch](https://pytorch.org)
-This gem is currently experimental. There may be breaking changes between each release.
+This gem is currently experimental. There may be breaking changes between releases. Please report any issues you experience.
[![Build Status](https://travis-ci.org/ankane/torch-rb.svg?branch=master)](https://travis-ci.org/ankane/torch-rb)
## Installation
@@ -221,18 +221,70 @@
num_features
end
end
```
-And run
+Create an instance of it and run a forward pass
```ruby
net = Net.new
input = Torch.randn(1, 1, 32, 32)
out = net.call(input)
```
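+
+The forward pass returns a 1×10 tensor (one row of scores from the final layer), matching the `Torch.randn(1, 10)` gradient used below. A quick check, assuming `shape` mirrors PyTorch's tensor size method:
+
+```ruby
+p out.shape # => [1, 10]
+```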
+Get trainable parameters
+
+```ruby
+net.parameters
+```
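+
+`parameters` returns a plain Ruby array of tensors, so ordinary Array methods work on it (a quick sketch):
+
+```ruby
+p net.parameters.size # number of parameter tensors
+```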
+
+Zero the gradient buffers and backprop with random gradients
+
+```ruby
+net.zero_grad
+out.backward(Torch.randn(1, 10))
+```
+
+Define a loss function
+
+```ruby
+output = net.call(input)
+target = Torch.randn(10) # a dummy target, for example
+target = target.view(1, -1) # make it the same shape as output
+criterion = Torch::NN::MSELoss.new
+loss = criterion.call(output, target)
+```
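+
+The loss comes back as a scalar tensor; assuming `item` mirrors PyTorch's method of the same name (as torch-rb methods generally do), you can extract the Ruby number:
+
+```ruby
+p loss.item
+```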
+
+Backprop
+
+```ruby
+net.zero_grad
+p net.conv1.bias.grad # before backward
+loss.backward
+p net.conv1.bias.grad # after backward
+```
+
+Update the weights manually with simple stochastic gradient descent (`weight = weight - learning_rate * gradient`)
+
+```ruby
+learning_rate = 0.01
+net.parameters.each do |f|
+ f.data.sub!(f.grad.data * learning_rate)
+end
+```
+
+Use an optimizer
+
+```ruby
+optimizer = Torch::Optim::SGD.new(net.parameters, lr: 0.01)
+optimizer.zero_grad # zero the gradient buffers
+output = net.call(input)
+loss = criterion.call(output, target)
+loss.backward
+optimizer.step # does the update
+```
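+
+Putting the pieces together, a typical training step repeats this sequence in a loop (a minimal sketch; the iteration count and data here are placeholders):
+
+```ruby
+10.times do
+  optimizer.zero_grad                   # reset gradients accumulated by the last step
+  output = net.call(input)              # forward pass
+  loss = criterion.call(output, target)
+  loss.backward                         # backward pass
+  optimizer.step                        # apply the SGD update
+end
+```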
+
### Tensor Creation
Here’s a list of functions to create tensors (descriptions from the [C++ docs](https://pytorch.org/cppdocs/notes/tensor_creation.html)):
- `arange` returns a tensor with a sequence of integers
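
For example (an illustrative call, assuming torch-rb mirrors the Python/C++ `start`, `end`, `step` arguments):

```ruby
Torch.arange(0, 10, 2) # tensor containing [0, 2, 4, 6, 8]
```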
@@ -349,10 +401,10 @@
To get started with development:
```sh
git clone https://github.com/ankane/torch-rb.git
-cd torch
+cd torch-rb
bundle install
bundle exec rake compile
bundle exec rake test
```