README.mdown in split-0.6.6 vs README.mdown in split-0.7.0
- old
+ new
@@ -132,10 +132,24 @@
Thanks for signing up, dude! <% finished("signup_page_redesign") %>
```
You can find more examples, tutorials and guides on the [wiki](https://github.com/andrew/split/wiki).
+## Statistical Validity
+
+Split uses a z-test (valid for sample sizes above 30) of the difference between your control and alternative conversion rates to calculate statistical significance.
+
+This means that Split will tell you whether an alternative is better or worse than your control, but it will not tell you which alternative is best in an experiment with multiple alternatives. To find that out, run a new experiment with one of the prior alternatives as the control.
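+
+To make the numbers concrete, here is a minimal sketch of the pooled two-proportion z-score that the test described above is based on. The method name and figures are purely illustrative; this is not Split's internal API.
+
+``` ruby
+# Illustrative sketch only, not Split's internal code.
+def z_score(control_conversions, control_participants,
+            alternative_conversions, alternative_participants)
+  p1 = control_conversions.to_f / control_participants
+  p2 = alternative_conversions.to_f / alternative_participants
+
+  # Pooled conversion rate under the null hypothesis of "no difference"
+  pooled = (control_conversions + alternative_conversions).to_f /
+           (control_participants + alternative_participants)
+
+  standard_error = Math.sqrt(pooled * (1 - pooled) *
+    (1.0 / control_participants + 1.0 / alternative_participants))
+
+  (p2 - p1) / standard_error
+end
+
+# |z| of roughly 1.65, 1.96 and 2.58 correspond to 90%, 95% and 99% confidence
+z_score(100, 1000, 130, 1000) # => ~2.1, significant at the 95% level
+```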
+
+Also, as per this [blog post](http://www.evanmiller.org/how-not-to-run-an-ab-test.html) on the pitfalls of A/B testing, it is highly recommended that you determine the requisite sample size for each branch before running the experiment. Otherwise, you'll have an increased rate of false positives (experiments that show a significant effect where there really is none).
+
+[Here](http://www.evanmiller.org/ab-testing/sample-size.html) is a sample size calculator for your convenience.
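+
+If you prefer to estimate it in code, here is a rough per-branch estimate using the common (z_alpha + z_beta) approximation at 95% confidence and 80% power. This is a simplified sketch, not the exact formula the linked calculator uses.
+
+``` ruby
+# Rough approximation only; use the linked calculator for exact numbers.
+def sample_size_per_branch(baseline_rate, detectable_rate, z_alpha = 1.96, z_beta = 0.84)
+  variance = baseline_rate * (1 - baseline_rate) + detectable_rate * (1 - detectable_rate)
+  ((z_alpha + z_beta)**2 * variance / (baseline_rate - detectable_rate)**2).ceil
+end
+
+# To detect a lift from a 10% to a 12% conversion rate:
+sample_size_per_branch(0.10, 0.12) # => 3834 participants in each branch, approximately
+```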
+
+Finally, two things should be noted about the dashboard:
+* Split will only tell you if your experiment is 90%, 95%, or 99% significant. For lower levels of significance, Split will simply show "insufficient significance."
+* If a branch has fewer than 30 participants or 5 conversions, Split will not calculate significance, as you have not yet gathered enough data.
+
## Extras
### Weighted alternatives
Perhaps you only want to show an alternative to 10% of your visitors because it is very experimental or not yet fully load tested.
@@ -268,10 +282,26 @@
logger.info "experiment=%s alternative=%s user=%s complete=true" %
[ trial.experiment.name, trial.alternative, current_user.id ]
end
```
+#### Views
+
+If you are running `ab_test` from a view, you must define your event
+hook callback as a
+[helper_method](http://apidock.com/rails/AbstractController/Helpers/ClassMethods/helper_method)
+in the controller:
+
+``` ruby
+helper_method :log_trial_choose
+
+def log_trial_choose(trial)
+ logger.info "experiment=%s alternative=%s user=%s" %
+ [ trial.experiment.name, trial.alternative, current_user.id ]
+end
+```
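+
+For completeness, here is a sketch of what the corresponding view-side call could look like; the experiment name and alternatives below are made up for illustration:
+
+``` erb
+<% ab_test("signup_button_text", "Sign up now", "Join today") do |button_text| %>
+  <%= button_tag button_text %>
+<% end %>
+```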
+
### Experiment Hooks
You can assign a proc that will be called when an experiment is reset or deleted. You can use these hooks to call methods within your application to keep data related to experiments in sync with Split.
For example:
@@ -603,9 +633,15 @@
- [Split::Mongoid](https://github.com/MongoHQ/split-mongoid) - store experiment data in mongoid (still uses redis)
## Screencast
Ryan Bates has produced an excellent 10-minute screencast about Split on the RailsCasts site: [A/B Testing with Split](http://railscasts.com/episodes/331-a-b-testing-with-split)
+
+## Blogposts
+
+* [A/B Testing with Split in Ruby on Rails](http://grinnick.com/posts/a-b-testing-with-split-in-ruby-on-rails)
+* [Recipe: A/B testing with KISSMetrics and the split gem](http://robots.thoughtbot.com/post/9595887299/recipe-a-b-testing-with-kissmetrics-and-the-split-gem)
+* [Rails A/B testing with Split on Heroku](http://blog.nathanhumbert.com/2012/02/rails-ab-testing-with-split-on-heroku.html)
## Contributors
Special thanks to the following people for submitting patches: