README.md in torch-rb-0.13.0 vs README.md in torch-rb-0.13.1
- old
+ new
@@ -408,22 +408,16 @@
Here’s the list of compatible versions.
Torch.rb | LibTorch
--- | ---
-0.13.0+ | 2.0.0+
-0.12.0-0.12.2 | 1.13.0-1.13.1
-0.11.0-0.11.2 | 1.12.0-1.12.1
-0.10.0-0.10.2 | 1.11.0
-0.9.0-0.9.2 | 1.10.0-1.10.2
-0.8.0-0.8.3 | 1.9.0-1.9.1
-0.6.0-0.7.0 | 1.8.0-1.8.1
-0.5.0-0.5.3 | 1.7.0-1.7.1
-0.3.0-0.4.2 | 1.6.0
-0.2.0-0.2.7 | 1.5.0-1.5.1
-0.1.8 | 1.4.0
-0.1.0-0.1.7 | 1.3.1
+0.13.x | 2.0.x
+0.12.x | 1.13.x
+0.11.x | 1.12.x
+0.10.x | 1.11.x
+0.9.x | 1.10.x
+0.8.x | 1.9.x
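For example, in a Bundler-managed project you can pin the gem to a line that matches your installed LibTorch (a minimal sketch; the `~> 0.13.0` constraint assumes LibTorch 2.0.x is installed):
```ruby
# Gemfile (sketch): keep Torch.rb on the 0.13.x line, which pairs with LibTorch 2.0.x
gem "torch-rb", "~> 0.13.0"
```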
### Homebrew
You can also use Homebrew.
@@ -431,12 +425,16 @@
brew install pytorch
```
## Performance
-Deep learning is significantly faster on a GPU. With Linux, install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn) and reinstall the gem.
+Deep learning is significantly faster on a GPU.
+### Linux
+
+With Linux, install [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn) and reinstall the gem.
+
Check if CUDA is available
```ruby
Torch::CUDA.available?
```
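If CUDA is available, a model can be moved to the GPU the same way the Mac section below does for MPS (a brief sketch; `net` stands in for any network defined earlier in the README, and the `"cuda"` device string mirrors the `"mps"` one used below):
```ruby
# Sketch: send an existing Torch::NN model to the CUDA device
device = Torch.device("cuda")
net.to(device)
```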
@@ -452,9 +450,24 @@
```text
ankane/ml-stack:torch-gpu
```
And leave the other fields in that section blank. Once the notebook is running, you can run the [MNIST example](https://github.com/ankane/ml-stack/blob/master/torch-gpu/MNIST.ipynb).
+
+### Mac
+
+With Apple silicon, check if Metal Performance Shaders (MPS) is available
+
+```ruby
+Torch::Backends::MPS.available?
+```
+
+Move a neural network to a GPU
+
+```ruby
+device = Torch.device("mps")
+net.to(device)
+```
## History
View the [changelog](https://github.com/ankane/torch.rb/blob/master/CHANGELOG.md)