README.md in torch-rb-0.8.1 vs README.md in torch-rb-0.8.2
- old
+ new
@@ -26,31 +26,35 @@
It can take a few minutes to compile the extension.
## Getting Started
-Deep learning is significantly faster with a GPU. If you don’t have an NVIDIA GPU, we recommend using a cloud service. [Paperspace](https://www.paperspace.com/) has a great free plan.
+A good place to start is [Deep Learning with Torch.rb: A 60 Minute Blitz](tutorials/blitz/README.md).
-We’ve put together a [Docker image](https://github.com/ankane/ml-stack) to make it easy to get started. On Paperspace, create a notebook with a custom container. Under advanced options, set the container name to:
+## Tutorials
-```text
-ankane/ml-stack:torch-gpu
-```
+- [Transfer learning](tutorials/transfer_learning/README.md)
+- [Sequence models](tutorials/nlp/sequence_models.md)
+- [Word embeddings](tutorials/nlp/word_embeddings.md)
-And leave the other fields in that section blank. Once the notebook is running, you can run the [MNIST example](https://github.com/ankane/ml-stack/blob/master/torch-gpu/MNIST.ipynb).
+## Examples
+- [Image classification with MNIST](examples/mnist) ([日本語版](https://qiita.com/kojix2/items/c19c36dc1bf73ea93409))
+- [Collaborative filtering with MovieLens](examples/movielens)
+- [Generative adversarial networks](examples/gan)
+
## API
This library follows the [PyTorch API](https://pytorch.org/docs/stable/torch.html). There are a few changes to make it more Ruby-like:
- Methods that perform in-place modifications end with `!` instead of `_` (`add!` instead of `add_`)
- Methods that return booleans use `?` instead of `is_` (`tensor?` instead of `is_tensor`)
- Numo is used instead of NumPy (`x.numo` instead of `x.numpy()`)
You can follow PyTorch tutorials and convert the code to Ruby in many cases. Feel free to open an issue if you run into problems.
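The renaming rules above can be sketched in plain Ruby. This is purely illustrative (the gem defines its Ruby-style methods directly; `rubyify` is a hypothetical helper, not part of the library):

```ruby
# Sketch of the naming conventions: a trailing "_" becomes "!", and a
# leading "is_" becomes a trailing "?". Illustrative only.
def rubyify(name)
  name.sub(/\Ais_(.+)\z/) { "#{$1}?" }.sub(/_\z/, "!")
end

rubyify("add_")      # => "add!"
rubyify("is_tensor") # => "tensor?"
rubyify("relu")      # => "relu" (unchanged)
```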
-## Tutorial
+## Overview
Some examples below are from [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html).
### Tensors
@@ -212,36 +216,26 @@
Define a neural network
```ruby
class MyNet < Torch::NN::Module
def initialize
- super
+ super()
@conv1 = Torch::NN::Conv2d.new(1, 6, 3)
@conv2 = Torch::NN::Conv2d.new(6, 16, 3)
@fc1 = Torch::NN::Linear.new(16 * 6 * 6, 120)
@fc2 = Torch::NN::Linear.new(120, 84)
@fc3 = Torch::NN::Linear.new(84, 10)
end
def forward(x)
x = Torch::NN::F.max_pool2d(Torch::NN::F.relu(@conv1.call(x)), [2, 2])
x = Torch::NN::F.max_pool2d(Torch::NN::F.relu(@conv2.call(x)), 2)
- x = x.view(-1, num_flat_features(x))
+ x = Torch.flatten(x, 1)
x = Torch::NN::F.relu(@fc1.call(x))
x = Torch::NN::F.relu(@fc2.call(x))
- x = @fc3.call(x)
- x
+ @fc3.call(x)
end
-
- def num_flat_features(x)
- size = x.size[1..-1]
- num_features = 1
- size.each do |s|
- num_features *= s
- end
- num_features
- end
end
```
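The `16 * 6 * 6` passed to the first `Linear` layer comes from the conv/pool shape arithmetic, and `Torch.flatten(x, 1)` collapses those trailing dimensions into one. A plain-Ruby sketch of that arithmetic, assuming 32×32 single-channel inputs as in the PyTorch blitz tutorial:

```ruby
# Spatial size after a 3x3 conv (no padding, stride 1) and a 2x2 max pool.
conv = ->(size, kernel) { size - kernel + 1 }
pool = ->(size) { size / 2 } # integer division floors, like max_pool2d

size = 32                    # input is 32x32
size = pool.(conv.(size, 3)) # after conv1 + pool: 15
size = pool.(conv.(size, 3)) # after conv2 + pool: 6

# Torch.flatten(x, 1) turns shape [batch, 16, 6, 6] into [batch, 16 * 6 * 6].
features = 16 * size * size  # => 576, matching Linear.new(16 * 6 * 6, 120)
```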
Create an instance of it
@@ -400,23 +394,13 @@
```ruby
Torch.zeros(3) # tensor([0, 0, 0])
```
-## Examples
-
-Here are a few full examples:
-
-- [Image classification with MNIST](examples/mnist) ([日本語版](https://qiita.com/kojix2/items/c19c36dc1bf73ea93409))
-- [Collaborative filtering with MovieLens](examples/movielens)
-- [Sequence models and word embeddings](examples/nlp)
-- [Generative adversarial networks](examples/gan)
-- [Transfer learning](examples/transfer-learning)
-
## LibTorch Installation
-[Download LibTorch](https://pytorch.org/). For Linux, use the `cxx11 ABI` version. Then run:
+[Download LibTorch](https://pytorch.org/) (for Linux, use the `cxx11 ABI` version). Then run:
```sh
bundle config build.torch-rb --with-torch-dir=/path/to/libtorch
```
@@ -442,14 +426,12 @@
Then install the gem (no need for `bundle config`).
## Performance
-### Linux
+Deep learning is significantly faster on a GPU. On Linux, install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn), then reinstall the gem.
-Deep learning is significantly faster on a GPU. Install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn) and reinstall the gem.
-
Check if CUDA is available
```ruby
Torch::CUDA.available?
```
@@ -458,17 +440,16 @@
```ruby
net.cuda
```
-## rbenv
+If you don’t have a GPU that supports CUDA, we recommend using a cloud service. [Paperspace](https://www.paperspace.com/) has a great free plan. We’ve put together a [Docker image](https://github.com/ankane/ml-stack) to make it easy to get started. On Paperspace, create a notebook with a custom container. Under advanced options, set the container name to:
-This library uses [Rice](https://github.com/jasonroelofs/rice) to interface with LibTorch. Rice and earlier versions of rbenv don’t play nicely together. If you encounter an error during installation, upgrade ruby-build and reinstall your Ruby version.
-
-```sh
-brew upgrade ruby-build
-rbenv install [version]
+```text
+ankane/ml-stack:torch-gpu
```
+
+And leave the other fields in that section blank. Once the notebook is running, you can run the [MNIST example](https://github.com/ankane/ml-stack/blob/master/torch-gpu/MNIST.ipynb).
## History
View the [changelog](https://github.com/ankane/torch.rb/blob/master/CHANGELOG.md)