README.md in libmf-0.1.2 vs README.md in libmf-0.1.3
- old
+ new
@@ -1,11 +1,9 @@
# LIBMF
[LIBMF](https://github.com/cjlin1/libmf) - large-scale sparse matrix factorization - for Ruby
-:fire: Uses the C API for blazing performance
-
[![Build Status](https://travis-ci.org/ankane/libmf.svg?branch=master)](https://travis-ci.org/ankane/libmf)
## Installation
Add this line to your application’s Gemfile:
@@ -63,52 +61,66 @@
```ruby
model.fit(data, eval_set: eval_set)
```
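A sketch of how a hold-out evaluation set might be prepared. This assumes data is an array of `[row_index, column_index, value]` triples, which is a common sparse-matrix convention; the exact input format accepted by the gem is not shown in this diff.

```ruby
# hypothetical data layout: each entry is [row, col, value]
data = [
  [0, 0, 5.0],
  [0, 2, 3.5],
  [1, 1, 4.0],
  [1, 2, 2.5],
  [2, 0, 1.0]
]

# hold out the last entry for evaluation, keep the rest for training
train_set = data[0...-1]
eval_set  = data[-1..-1]

# model.fit(train_set, eval_set: eval_set)
```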
-## Parameters
+## Cross-Validation
-Pass parameters
+Perform cross-validation
```ruby
-model = Libmf::Model.new(k: 20, nr_iters: 50)
+model.cv(data)
```
-Supports the same parameters as LIBMF
+Specify the number of folds
-```text
-variable meaning default
-================================================================
-fun loss function 0
-k number of latent factors 8
-nr_threads number of threads used 12
-nr_bins number of bins 25
-nr_iters number of iterations 20
-lambda_p1 coefficient of L1-norm regularization on P 0
-lambda_p2 coefficient of L2-norm regularization on P 0.1
-lambda_q1 coefficient of L1-norm regularization on Q 0
-lambda_q2 coefficient of L2-norm regularization on Q 0.1
-eta learning rate 0.1
-alpha importance of negative entries 0.1
-c desired value of negative entries 0.0001
-do_nmf perform non-negative MF (NMF) false
-quiet no outputs to stdout false
-copy_data copy data in training procedure true
+```ruby
+model.cv(data, folds: 5)
```
-## Cross-Validation
+## Parameters
-Perform cross-validation
+Pass parameters - default values below
```ruby
-model.cv(data)
+Libmf::Model.new(
+ loss: 0, # loss function
+ factors: 8, # number of latent factors
+ threads: 12, # number of threads used
+ bins: 25, # number of bins
+ iterations: 20, # number of iterations
+ lambda_p1: 0, # coefficient of L1-norm regularization on P
+ lambda_p2: 0.1, # coefficient of L2-norm regularization on P
+ lambda_q1: 0, # coefficient of L1-norm regularization on Q
+ lambda_q2: 0.1, # coefficient of L2-norm regularization on Q
+ learning_rate: 0.1, # learning rate
+ alpha: 0.1, # importance of negative entries
+ c: 0.0001, # desired value of negative entries
+ nmf: false, # perform non-negative MF (NMF)
+ quiet: false, # no outputs to stdout
+ copy_data: true # copy data in training procedure
+)
```
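Only the parameters you want to change need to be passed; the rest keep the defaults listed above. A minimal sketch of that keyword-argument merging, using a plain hash to stand in for the constructor:

```ruby
# defaults as listed above (subset shown for illustration)
defaults = { loss: 0, factors: 8, iterations: 20, learning_rate: 0.1 }

# user overrides are merged over the defaults, keyword-argument style
options = defaults.merge(factors: 20, iterations: 50)

# Libmf::Model.new(**options)
```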
-Specify the number of folds
+### Loss Functions
-```ruby
-model.cv(data, folds: 5)
-```
+For real-valued matrix factorization
+
+- 0 - squared error (L2-norm)
+- 1 - absolute error (L1-norm)
+- 2 - generalized KL-divergence
+
+For binary matrix factorization
+
+- 5 - logarithmic error
+- 6 - squared hinge loss
+- 7 - hinge loss
+
+For one-class matrix factorization
+
+- 10 - row-oriented pair-wise logarithmic loss
+- 11 - column-oriented pair-wise logarithmic loss
+- 12 - squared error (L2-norm)
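Since the loss codes above are plain integers, one way to keep them readable is a small lookup hash. The `LOSS` constant here is a hypothetical helper, not part of the gem; the codes themselves come from the lists above.

```ruby
# loss codes from the lists above, keyed by readable names (hypothetical helper)
LOSS = {
  squared_error: 0, absolute_error: 1, kl_divergence: 2,  # real-valued
  log_error: 5, squared_hinge: 6, hinge: 7,               # binary
  row_log: 10, col_log: 11, one_class_squared: 12         # one-class
}

# Libmf::Model.new(loss: LOSS[:log_error])  # binary matrix factorization
```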
## Resources
- [LIBMF: A Library for Parallel Matrix Factorization in Shared-memory Systems](https://www.csie.ntu.edu.tw/~cjlin/papers/libmf/libmf_open_source.pdf)