README.md in eps-0.1.1 vs README.md in eps-0.2.0
- old
+ new
@@ -1,15 +1,17 @@
# Eps
-Linear regression for Ruby
+Machine learning for Ruby
-- Build models quickly and easily
+- Build predictive models quickly and easily
- Serve models built in Ruby, Python, R, and more
-- Automatically handles categorical variables
-- No external dependencies
+- Supports regression (linear regression) and classification (naive Bayes)
+- Automatically handles categorical features
- Works great with the SciRuby ecosystem (Daru & IRuby)
+Check out [this post](https://ankane.org/rails-meet-data-science) for more info on machine learning with Rails
+
[![Build Status](https://travis-ci.org/ankane/eps.svg?branch=master)](https://travis-ci.org/ankane/eps)
## Installation
Add this line to your application’s Gemfile:
@@ -29,11 +31,11 @@
{bedrooms: 1, bathrooms: 1, price: 100000},
{bedrooms: 2, bathrooms: 1, price: 125000},
{bedrooms: 2, bathrooms: 2, price: 135000},
{bedrooms: 3, bathrooms: 2, price: 162000}
]
-model = Eps::Regressor.new(data, target: :price)
+model = Eps::Model.new(data, target: :price)
puts model.summary
```
Make a prediction
@@ -41,26 +43,28 @@
model.predict(bedrooms: 2, bathrooms: 1)
```
> Pass an array of hashes to make multiple predictions at once
+The target can be numeric (regression) or categorical (classification).
+
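+For example, to predict several houses at once (a minimal sketch reusing the fields from the example above):
+
+```ruby
+# returns one prediction per hash, in the same order
+model.predict([
+  {bedrooms: 2, bathrooms: 1},
+  {bedrooms: 4, bathrooms: 2}
+])
+```
+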
## Building Models
### Training and Test Sets
When building models, it’s a good idea to hold out some data so you can see how well the model will perform on unseen data. To do this, we split our data into two sets: training and test. We build the model with the training set and later evaluate it on the test set.
```ruby
-rng = Random.new(1) # seed random number generator
-train_set, test_set = houses.partition { rng.rand < 0.7 }
+split_date = Date.parse("2018-06-01")
+train_set, test_set = houses.partition { |h| h.sold_at < split_date }
```
-If your data has a time associated with it, we recommend splitting on this.
+If your data doesn’t have a time associated with it, you can split it randomly.
```ruby
-split_date = Date.parse("2018-06-01")
-train_set, test_set = houses.partition { |h| h.sold_at < split_date }
+rng = Random.new(1) # seed random number generator
+train_set, test_set = houses.partition { rng.rand < 0.7 }
```
### Outliers and Missing Data
Next, decide what to do with outliers and missing data. There are a number of methods for handling them, but the easiest is to remove them.
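A minimal sketch, reusing the nil check and price cutoff from the Finalize step later in this guide (the 10000 cutoff is just an example threshold):

```ruby
# drop rows with missing bedrooms or implausibly low prices
train_set.reject! { |h| h.bedrooms.nil? || h.price < 10000 }
```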
@@ -77,14 +81,14 @@
{state: "CA"}
```
> Categorical features generate coefficients for each distinct value except for one
-You should do this for any ids in your data.
+Convert any ids to strings so they’re treated as categorical features.
```ruby
-{city_id: "123"}
+{city_id: city_id.to_s}
```
For times, create features like day of week and hour of day with:
```ruby
@@ -120,15 +124,15 @@
### Training
Now, let’s train the model.
```ruby
-model = Eps::Regressor.new(train_features, train_target)
+model = Eps::Model.new(train_features, train_target)
puts model.summary
```
-The summary includes the coefficients and their significance. The lower the p-value, the more significant the feature is. p-values below 0.05 are typically considered significant. It also shows the adjusted r-squared, which is a measure of how well the model fits the data. The higher the number, the better the fit. Here’s a good explanation of why it’s [better than r-squared](https://www.quora.com/What-is-the-difference-between-R-squared-and-Adjusted-R-squared).
+For regression, the summary includes the coefficients and their significance. The lower the p-value, the more significant the feature is. p-values below 0.05 are typically considered significant. It also shows the adjusted r-squared, which is a measure of how well the model fits the data. The higher the number, the better the fit. Here’s a good explanation of why it’s [better than r-squared](https://www.quora.com/What-is-the-difference-between-R-squared-and-Adjusted-R-squared).
### Evaluation
When you’re happy with the model, see how well it performs on the test set. This gives us an idea of how well it’ll perform on unseen data.
@@ -136,103 +140,249 @@
test_features = test_set.map { |h| features(h) }
test_target = test_set.map { |h| target(h) }
model.evaluate(test_features, test_target)
```
-This returns:
+For regression, this returns:
- RMSE - Root mean square error
- MAE - Mean absolute error
- ME - Mean error
We want to minimize the RMSE and MAE and keep the ME around 0.
+For classification, this returns:
+
+- Accuracy
+
+We want to maximize the accuracy.
+
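+As a small illustration (the key names follow the full example later in this README):
+
+```ruby
+metrics = model.evaluate(test_features, test_target)
+puts metrics[:rmse] # regression
+# for classification, use metrics[:accuracy]
+```
+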
### Finalize
-Now that we have an idea of how the model will perform, we want to retrain the model with all of our data.
+Now that we have an idea of how the model will perform, we want to retrain the model with all of our data. Treat outliers and missing data the same as you did with the training set.
```ruby
+# outliers and missing data
+houses.reject! { |h| h.bedrooms.nil? || h.price < 10000 }
+
+# training
all_features = houses.map { |h| features(h) }
all_target = houses.map { |h| target(h) }
-model = Eps::Regressor.new(all_features, all_target)
+model = Eps::Model.new(all_features, all_target)
```
We now have a model that’s ready to serve.
## Serving Models
-Once the model is trained, all we need are the coefficients to make predictions. You can dump them as a Ruby object or JSON. For Ruby, use:
+Once the model is trained, we need to store it. Eps uses PMML - [Predictive Model Markup Language](https://en.wikipedia.org/wiki/Predictive_Model_Markup_Language) - a standard for storing models. A great option is to write the model to a file with:
```ruby
-model.dump
+File.write("model.pmml", model.to_pmml)
```
-Then hardcode the result into your app.
+> You may need to add `nokogiri` to your Gemfile
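+
+A minimal sketch of that Gemfile line:
+
+```ruby
+gem 'nokogiri'
+```
+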
+To load a model, use:
+
```ruby
-data = {:coefficients=>{:_intercept=>63500.0, :bedrooms=>26000.0, :bathrooms=>10000.0}}
-model = Eps::Regressor.load(data)
+pmml = File.read("model.pmml")
+model = Eps::Model.load_pmml(pmml)
```
Now we can use it to make predictions.
```ruby
model.predict(bedrooms: 2, bathrooms: 1)
```
-Another option that works well is writing the model to file in your app.
+To continuously train models, we recommend [storing them in your database](#database-storage).
-```ruby
-json = model.to_json
-File.open("lib/models/housing_price.json", "w") { |f| f.write(json) }
+## Full Example
+
+We recommend putting all the model code in a single file. This makes it easy to rebuild the model as needed.
+
+In Rails, we recommend creating an `app/ml_models` directory. Be sure to restart Spring after creating the directory so files are autoloaded.
+
+```sh
+bin/spring stop
```
-To load it, use:
+Here’s what a complete model in `app/ml_models/price_model.rb` may look like:
```ruby
-json = File.read("lib/models/housing_price.json")
-model = Eps::Regressor.load_json(json)
-```
+class PriceModel < Eps::Base
+ def build
+ houses = House.all.to_a
-To continuously train models, we recommend [storing them in your database](#database-storage).
+ # divide into training and test set
+ split_date = Date.parse("2018-06-01")
+ train_set, test_set = houses.partition { |h| h.sold_at < split_date }
-### Beyond Ruby
+ # handle outliers and missing values
+ train_set = preprocess(train_set)
-Eps makes it easy to serve models from other languages. You can build models in R, Python, and others and serve them in Ruby without having to worry about how to deploy or run another language. Eps can load models in:
+ # train
+ train_features = train_set.map { |v| features(v) }
+ train_target = train_set.map { |v| target(v) }
+ model = Eps::Model.new(train_features, train_target)
+ puts model.summary
-JSON
+ # evaluate
+ test_features = test_set.map { |v| features(v) }
+ test_target = test_set.map { |v| target(v) }
+ metrics = model.evaluate(test_features, test_target)
+ puts "Test RMSE: #{metrics[:rmse]}"
+ # for classification, use:
+ # puts "Test accuracy: #{metrics[:accuracy]}"
+ # finalize
+ houses = preprocess(houses)
+ all_features = houses.map { |h| features(h) }
+ all_target = houses.map { |h| target(h) }
+ @model = Eps::Model.new(all_features, all_target)
+
+ # save
+ File.write(model_file, @model.to_pmml)
+ end
+
+ def predict(house)
+ model.predict(features(house))
+ end
+
+ private
+
+ def preprocess(train_set)
+ train_set.reject { |h| h.bedrooms.nil? || h.price < 10000 }
+ end
+
+ def features(house)
+ {
+ bedrooms: house.bedrooms,
+ city_id: house.city_id.to_s,
+ month: house.sold_at.strftime("%b")
+ }
+ end
+
+ def target(house)
+ house.price
+ end
+
+ def model
+ @model ||= Eps::Model.load_pmml(File.read(model_file))
+ end
+
+ def model_file
+ File.join(__dir__, "price_model.pmml")
+ end
+end
+```
+
+Build the model with:
+
```ruby
-data = File.read("model.json")
-model = Eps::Regressor.load_json(data)
+PriceModel.build
```
-[PMML](https://en.wikipedia.org/wiki/Predictive_Model_Markup_Language) - Predictive Model Markup Language
+This saves the model to `price_model.pmml`. Be sure to check this into source control.
+Predict with:
+
```ruby
-data = File.read("model.pmml")
-model = Eps::Regressor.load_pmml(data)
+PriceModel.predict(house)
```
-> Loading PMML requires Nokogiri to be installed
+## Monitoring
-[PFA](http://dmg.org/pfa/) - Portable Format for Analytics
+We recommend monitoring how well your models perform over time. To do this, save your predictions to the database. Then, compare them with:
```ruby
-data = File.read("model.pfa")
-model = Eps::Regressor.load_pfa(data)
+actual = houses.map(&:price)
+estimated = houses.map(&:estimated_price)
+Eps.metrics(actual, estimated)
```
-Here are examples for how to dump models in each:
+This returns the same evaluation metrics as model building. For RMSE and MAE, alert if they rise above a certain threshold. For ME, alert if it moves too far away from 0. For accuracy, alert if it drops below a certain threshold.
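+
+A minimal sketch of such a check (the threshold values are placeholders to tune for your data; the `:mae` and `:me` keys are assumed to follow the same pattern as `:rmse` above):
+
+```ruby
+metrics = Eps.metrics(actual, estimated)
+
+raise "RMSE too high: #{metrics[:rmse]}" if metrics[:rmse] > 30_000
+raise "MAE too high: #{metrics[:mae]}" if metrics[:mae] > 20_000
+raise "ME drifting: #{metrics[:me]}" if metrics[:me].abs > 5_000
+```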
-- [R JSON](guides/Modeling.md#r-json)
-- [R PMML](guides/Modeling.md#r-pmml)
-- [R PFA](guides/Modeling.md#r-pfa)
-- [Python JSON](guides/Modeling.md#python-json)
-- [Python PMML](guides/Modeling.md#python-pmml)
-- [Python PFA](guides/Modeling.md#python-pfa)
+## Other Languages
+Eps makes it easy to serve models from other languages. You can build models in R, Python, and others and serve them in Ruby without having to worry about how to deploy or run another language.
+
+Eps can serve linear regression and naive Bayes models. Check out [Scoruby](https://github.com/asafschers/scoruby) to serve other models.
+
+### R
+
+To create a model in R, install the [pmml](https://cran.r-project.org/package=pmml) package:
+
+```r
+install.packages("pmml")
+```
+
+For regression, run:
+
+```r
+library(pmml)
+
+model <- lm(dist ~ speed, cars)
+
+# save model
+data <- toString(pmml(model))
+write(data, file="model.pmml")
+```
+
+For classification, run:
+
+```r
+library(pmml)
+library(e1071)
+
+model <- naiveBayes(Species ~ ., iris)
+
+# save model
+data <- toString(pmml(model, predictedField="Species"))
+write(data, file="model.pmml")
+```
+
+### Python
+
+To create a model in Python, install the [sklearn2pmml](https://github.com/jpmml/sklearn2pmml) package:
+
+```sh
+pip install sklearn2pmml
+```
+
+For regression, run:
+
+```python
+from sklearn2pmml import sklearn2pmml, make_pmml_pipeline
+from sklearn.linear_model import LinearRegression
+
+x = [1, 2, 3, 5, 6]
+y = [5 * xi + 3 for xi in x]
+
+model = LinearRegression()
+model.fit([[xi] for xi in x], y)
+
+# save model
+sklearn2pmml(make_pmml_pipeline(model), "model.pmml")
+```
+
+For classification, run:
+
+```python
+from sklearn2pmml import sklearn2pmml, make_pmml_pipeline
+from sklearn.naive_bayes import GaussianNB
+
+x = [1, 2, 3, 5, 6]
+y = ["ham", "ham", "ham", "spam", "spam"]
+
+model = GaussianNB()
+model.fit([[xi] for xi in x], y)
+
+sklearn2pmml(make_pmml_pipeline(model), "model.pmml")
+```
+
### Verifying
It’s important for features to be implemented consistently when serving models created in other languages. We highly recommend verifying this programmatically. Create a CSV file with ids and predictions from the original model.
house_id | prediction
@@ -242,30 +392,29 @@
3 | 250000
Once the model is implemented in Ruby, confirm the predictions match.
```ruby
-model = Eps::Regressor.load_json("model.json")
+model = Eps::Model.load_pmml("model.pmml")
# preload houses to prevent n+1
houses = House.all.index_by(&:id)
-CSV.foreach("predictions.csv", headers: true) do |row|
- house = houses[row["house_id"].to_i]
- expected = row["prediction"].to_f
+CSV.foreach("predictions.csv", headers: true, converters: :numeric) do |row|
+ house = houses[row["house_id"]]
+ expected = row["prediction"]
actual = model.predict(bedrooms: house.bedrooms, bathrooms: house.bathrooms)
- unless (actual - expected).abs < 0.001
- raise "Bad prediction for house #{house.id} (exp: #{expected}, act: #{actual})"
- end
+ success = actual.is_a?(String) ? actual == expected : (actual - expected).abs < 0.001
+ raise "Bad prediction for house #{house.id} (exp: #{expected}, act: #{actual})" unless success
putc "✓"
end
```
-### Database Storage
+## Database Storage
The database is another place you can store models. This works well if you retrain models automatically.
> We recommend adding monitoring and guardrails as well if you retrain automatically
@@ -276,33 +425,21 @@
```
Store the model with:
```ruby
-store = Model.where(key: "housing_price").first_or_initialize
-store.update(data: model.to_json)
+store = Model.where(key: "price").first_or_initialize
+store.update(data: model.to_pmml)
```
Load the model with:
```ruby
-data = Model.find_by!(key: "housing_price").data
-model = Eps::Regressor.load_json(data)
+data = Model.find_by!(key: "price").data
+model = Eps::Model.load_pmml(data)
```
-## Monitoring
-
-We recommend monitoring how well your models perform over time. To do this, save your predictions to the database. Then, compare them with:
-
-```ruby
-actual = houses.map(&:price)
-estimated = houses.map(&:estimated_price)
-Eps.metrics(actual, estimated)
-```
-
-This returns the same evaluation metrics as model building. For RMSE and MAE, alert if they rise above a certain threshold. For ME, alert if it moves too far away from 0.
-
## Training Performance
Speed up training on large datasets with GSL.
First, [install GSL](https://www.gnu.org/software/gsl/). With Homebrew, you can use:
@@ -317,35 +454,37 @@
gem 'gsl', group: :development
```
It only needs to be available in environments used to build the model.
+> This only speeds up regression, not classification
+
## Data
A number of data formats are supported. You can pass the target variable separately.
```ruby
x = [{x: 1}, {x: 2}, {x: 3}]
y = [1, 2, 3]
-Eps::Regressor.new(x, y)
+Eps::Model.new(x, y)
```
Or pass arrays of arrays
```ruby
x = [[1, 2], [2, 0], [3, 1]]
y = [1, 2, 3]
-Eps::Regressor.new(x, y)
+Eps::Model.new(x, y)
```
## Daru
Eps works well with Daru data frames.
```ruby
df = Daru::DataFrame.from_csv("houses.csv")
-Eps::Regressor.new(df, target: "price")
+Eps::Model.new(df, target: "price")
```
To split into training and test sets, use:
```ruby
@@ -363,24 +502,36 @@
CSV.table("data.csv").map { |row| row.to_h }
```
## Jupyter & IRuby
-You can use [IRuby](https://github.com/SciRuby/iruby) to run Eps in [Jupyter](https://jupyter.org/) notebooks. Here’s how to get [IRuby working with Rails](https://github.com/ankane/shorts/blob/master/Jupyter-Rails.md).
+You can use [IRuby](https://github.com/SciRuby/iruby) to run Eps in [Jupyter](https://jupyter.org/) notebooks. Here’s how to get [IRuby working with Rails](https://ankane.org/jupyter-rails).
## Reference
-Get coefficients
-
-```ruby
-model.coefficients
-```
-
Get an extended summary with standard error, t-values, and r-squared
```ruby
model.summary(extended: true)
```
+
+## Upgrading
+
+### 0.2.0
+
+Eps 0.2.0 brings a number of improvements, including support for classification.
+
+We recommend:
+
+1. Changing `Eps::Regressor` to `Eps::Model`
+2. Converting models from JSON to PMML
+
+ ```ruby
+ model = Eps::Model.load_json("model.json")
+ File.write("model.pmml", model.to_pmml)
+ ```
+
+3. Renaming `app/stats_models` to `app/ml_models`
## History
View the [changelog](https://github.com/ankane/eps/blob/master/CHANGELOG.md)