BigBench is an HTTP load-testing tool. It allows you to test the performance of any web server under very high loads. It:
* Is a fancy Ruby solution for local and remote load benchmarking
* Creates about 16% more load than Apache's JMeter
* Has its own, very easy-to-use DSL
* Makes remote testing a breeze with bots
* Offers an awesome post-processor environment to analyze your benchmarks, including tracking iteration, polynomial regressions, and normal distributions
* Offers the ability to hook in and execute any code after the benchmarks are finished
* Comes with included post processors that create statistics and graphs
* Is very easy to extend!
```
gem install bigbench
```
* Ruby 1.9+
* Redis - only if you're testing with multiple hosts
What do the test recipes look like? As simple as possible. For example, like this in `example.rb`:

```ruby
configure => {
  :duration   => 2.minutes,
  :output     => "example.ljson",
  :users      => 5,
  :basic_auth => ['username', 'password']
}

benchmark "default website pages" => "http://localhost:3000" do
  get "/"
  get "/blog"
  get "/imprint"
  get "/admin", :basic_auth => ['username', 'password']
end

benchmark "login and logout" => "http://localhost:3000" do
  post "/login",  :params => { :name => "test@user.com", :password => "secret" }
  post "/logout", :params => { :name => "test@user.com" }
end

post_process :statistics
```
You can either test with a single machine right from your local host, or with multiple machines using bots. Either way, the test recipe stays the same.
BigBench allows you to run your tests against any host from your local machine. The command for this looks like this:

```
bigbench run local example.rb
```
BigBench uses a bot design pattern, which means you can run your tests from multiple hosts. All you need for this is a Redis instance that is reachable from all testing hosts. Every host simply starts a bot that checks for a new test recipe every minute, like this:

```
bigbench start bot redis_url:port redis_password
```
Then, to run the tests from all hosts, simply use the same recipe as you would for a local run and call it like this:

```
bigbench run bots example.rb redis_url:port redis_password
```
This uploads the test recipe to all bots and makes them run it. Every bot reports its results back to Redis, and the local machine then combines them and writes them to the output file. So you test with the same recipes and get the same results, whether you're testing from your local host or with multiple bots.
What does the recorded output look like? It's in the `*.ljson` format, which is nothing but a text file with a complete JSON object on every line. It looks like this:
```
{"elapsed":0.002233,"start":1333981203.542233,"stop":1333981203.54279,"duration":0,"benchmark":"index page","url":"http://localhost:3000/","path":"/","method":"get","status":"200"}
{"elapsed":0.00331,"start":1333981203.5434968,"stop":1333981203.5438669,"duration":0,"benchmark":"index page","url":"http://localhost:3000/","path":"/","method":"get","status":"200"}
{"elapsed":0.004248,"start":1333981203.544449,"stop":1333981203.544805,"duration":0,"benchmark":"index page","url":"http://localhost:3000/","path":"/","method":"get","status":"200"}
{"elapsed":0.00521,"start":1333981203.545397,"stop":1333981203.5457668,"duration":0,"benchmark":"index page","url":"http://localhost:3000/","path":"/","method":"get","status":"200"}
{"elapsed":0.00615,"start":1333981203.546355,"stop":1333981203.546707,"duration":0,"benchmark":"index page","url":"http://localhost:3000/","path":"/","method":"get","status":"200"}
{"elapsed":0.007127,"start":1333981203.547328,"stop":1333981203.5476842,"duration":0,"benchmark":"index page","url":"http://localhost:3000/","path":"/","method":"get","status":"200"}
...
```
The advantage of this file format is that it can be parsed and computed very efficiently, because the JSON parser doesn't have to parse one huge JSON array with loads of objects - just one object per line.
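Because every line is a complete JSON object, any JSON library can stream the file with constant memory use. A minimal sketch in plain Ruby (the file name and sample lines here are illustrative, not produced by BigBench):

```ruby
require 'json'

# Write two sample lines in the same shape as the output above (illustrative data)
File.write("sample.ljson", <<~LJSON)
  {"elapsed":0.002,"benchmark":"index page","path":"/","method":"get","status":"200"}
  {"elapsed":0.003,"benchmark":"index page","path":"/","method":"get","status":"404"}
LJSON

# Stream the file line by line; each line parses independently,
# so memory use stays constant no matter how large the file grows.
total, errors = 0, 0
File.foreach("sample.ljson") do |line|
  tracking = JSON.parse(line, symbolize_names: true)
  total  += 1
  errors += 1 unless tracking[:status] == "200"
end

puts "#{total} requests, #{errors} errors"  # => "2 requests, 1 errors"
```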
After the benchmark has finished, you can create hooks and write plugins that do something with the collected data. To set up a hook, simply use the `post_process` method to add a block or run a predefined plugin:
```ruby
# Run BigBench::PostProcessor::Statistics
post_process :statistics

# Run a block that could do anything
post_process do
  total_trackings, total_errors = 0, 0

  each_tracking do |tracking|
    total_trackings += 1
    total_errors    += 1 unless tracking[:status] == 200
  end

  Twitter.post "Just run BigBench with #{total_trackings} trackings and #{total_errors} errors."
end
```
It’s very easy to write a post processor. The basic structure is like this:
```ruby
module BigBench
  module PostProcessor
    module SamplePostProcessor

      def self.run!(options)
        # Do whatever you want here
      end

    end
  end
end
```
You can hook it in with:
```ruby
post_process :sample_post_processor
# or
post_process BigBench::PostProcessor::SamplePostProcessor
```
By default, post processors offer a wealth of functionality that helps to evaluate the benchmarks. The available methods are:
Iterate over each of the tracking elements. The trackings are read line-by-line. This is the fastest approach and should be used for huge datasets because the trackings are not loaded completely into memory.
```ruby
total_trackings, total_errors = 0, 0

each_tracking do |tracking|
  total_trackings += 1
  total_errors    += 1 unless tracking[:status] == 200
end
```
An array with all tracking hashes in it. Creating it may take some time on first use; afterwards the array is cached automatically.
```ruby
trackings.size # => 650456

trackings.each do |tracking|
  puts tracking[:duration]
end
```
Computes the default statistics for any attribute:
```ruby
# Unclustered statistics
statistics.durations.max                # => 78.2
statistics.durations.min                # => 12.3
statistics.durations.mean               # => 45.2
statistics.durations.average            # => 45.2
statistics.durations.standard_deviation # => 11.3
statistics.durations.sd                 # => 11.3
statistics.durations.squared_deviation  # => 60.7
statistics.durations.variance           # => 60.7

# Time-clustered statistics - 1.second
statistics.requests.max                    # => 42.1
statistics.requests.min                    # => 12.3
statistics.methods(:get).max               # => 42.1
statistics.methods(:get).average           # => 33.1
statistics.benchmark("index page").average # => 32.9
statistics.paths("/").average              # => 12.5
```
Clusters the resulting trackings by a timebase. The default timebase is `1.second`, which means it groups all trackings into full seconds and calculates the number of requests and the average duration per second.
```ruby
# Duration is 120 seconds for this example

# 1.second
cluster.timesteps          # => [1, 2, ..., 120]
cluster.durations          # => [43, 96, ..., 41]
cluster.requests           # => [503, 541, ..., 511]
cluster.methods(:get)      # => [200, 204, ..., 209]
cluster.methods(:post)     # => [201, 102, ..., 401]
cluster.statuses(200)      # => [501, 502, ..., 102]
cluster.statuses(404)      # => [3, 1, ..., 0]
cluster.paths("/")         # => [401, 482, ..., 271]
cluster.paths("/logout")   # => [56, 51, ..., 38]
cluster.benchmark("index") # => [342, 531, ..., 234]
cluster.benchmark("user")  # => [22, 41, ..., 556]

# 1.minute
cluster(1.minute).timesteps         # => [0, 1]
cluster(1.minute).durations         # => [42, 44]
cluster(1.minute).requests          # => [24032, 21893]
cluster(1.minute).methods(:get)     # => [200, 204]
cluster(1.minute).statuses(200)     # => [501, 502]
cluster(1.minute).paths("/")        # => [401, 482]
cluster(1.minute).benchmark("user") # => [22, 41]

# 30.seconds
cluster(30.seconds).timesteps         # => [0, 1, 2, 3]
cluster(30.seconds).durations         # => [42, 44, 41, 40]
cluster(30.seconds).requests          # => [11023, 10234, 12345, 13789]
cluster(30.seconds).methods(:get)     # => [200, 204, 34, 124]
cluster(30.seconds).statuses(200)     # => [501, 502, 243, 57]
cluster(30.seconds).paths("/")        # => [401, 482, 124, 234]
cluster(30.seconds).benchmark("user") # => [22, 41, 12, 51]
```
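The grouping idea behind the clustering is easy to picture in plain Ruby. The sketch below is purely illustrative (simplified tracking hashes, not BigBench's implementation): trackings are bucketed by the integer part of their start time, then requests are counted and durations averaged per bucket.

```ruby
# Illustrative trackings: start time in seconds, elapsed time in ms
trackings = [
  { start: 0.1, elapsed: 40 },
  { start: 0.7, elapsed: 44 },
  { start: 1.2, elapsed: 50 },
]

timebase = 1 # seconds - the default 1.second timebase
buckets  = trackings.group_by { |t| (t[:start] / timebase).floor }

# Requests per time slice, and the average elapsed time per time slice
requests  = buckets.transform_values(&:size)
durations = buckets.transform_values { |ts| ts.sum { |t| t[:elapsed] } / ts.size }

p requests   # => {0=>2, 1=>1}
p durations  # => {0=>42, 1=>50}
```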
Lists the unique attribute values that appeared in all trackings or the selected tracking scope.
```ruby
appearing.statuses # => [200, 404]
appearing.methods  # => ["get", "post"]
appearing.paths    # => ["/", "/basic/auth"]
```
The polynomial regression creates a function that fits the test data as closely as possible. With this function you have the ability to differentiate, and thereby plot, the changes of the tested system over time. The degree of the regression can be freely chosen.
```ruby
# Linear regression by default
polynomial_regression.durations.x     # => [1, 2, ..., 120]
polynomial_regression.durations.y     # => [45, 23, ..., 36]
polynomial_regression.requests.y      # => [43, 45, ..., 62]
polynomial_regression.methods(:get).y # => [23, 62, ..., 23]
polynomial_regression.statuses(200).y # => [51, 22, ..., 15]
polynomial_regression.paths("/").y    # => [78, 12, ..., 63]
polynomial_regression.benchmarks("index page").y # => [12, 45, ..., 23]
polynomial_regression.durations.degree  # => 1
polynomial_regression.durations.formula # => "43.00886000234 + 0.0167548964060689x^1"

# 1st derivation
polynomial_regression.durations.derivation(1)     # => [0.01, 0.01, ..., 0.01]
polynomial_regression.requests.derivation(1)      # => [405, 405, ..., 406]
polynomial_regression.methods(:get).derivation(1) # => [23, 62, ..., 23]
polynomial_regression.statuses(200).derivation(1) # => [51, 22, ..., 15]
polynomial_regression.paths("/").derivation(1)    # => [78, 12, ..., 63]
polynomial_regression.benchmarks("index page").derivation(1) # => [12, 45, ..., 23]
polynomial_regression.durations.formula(1)        # => "0.0167548964060689"

# Quadratic regression
polynomial_regression(:degree => 2).requests.x      # => [1, 2, ..., 120]
polynomial_regression(:degree => 2).durations.y     # => [43, 41, ..., 44]
polynomial_regression(:degree => 2).requests.y      # => [43, 41, ..., 44]
polynomial_regression(:degree => 2).methods(:get).y # => [23, 62, ..., 23]
polynomial_regression(:degree => 2).statuses(200).y # => [51, 22, ..., 15]
polynomial_regression(:degree => 2).paths("/").y    # => [78, 12, ..., 63]
polynomial_regression(:degree => 2).benchmarks("index page").y # => [12, 45, ..., 23]
polynomial_regression(:degree => 2).requests.formula # => "33.00886000234 + 0.0167548964060689x^1 + 0.0167548964060689x^2"

# Different timebase clustering
polynomial_regression(:degree => 2, :timebase => 1.minute).requests.x      # => [0, 1]
polynomial_regression(:degree => 2, :timebase => 1.minute).requests.y      # => [24032, 21893]
polynomial_regression(:degree => 2, :timebase => 1.minute).durations.y     # => [43, 41]
polynomial_regression(:degree => 2, :timebase => 1.minute).methods(:get).y # => [23, 62]
polynomial_regression(:degree => 2, :timebase => 1.minute).statuses(200).y # => [51, 22]
polynomial_regression(:degree => 2, :timebase => 1.minute).paths("/").y    # => [78, 12]
polynomial_regression(:degree => 2, :timebase => 1.minute).benchmarks("index page").y # => [12, 45]
```
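To make the derivative idea concrete: for a degree-1 (linear) fit, the first derivative is simply the slope of the fitted line, i.e. how fast the measured value changes per time step. A minimal least-squares sketch in plain Ruby (illustrative only, with made-up data points - not BigBench's implementation):

```ruby
# Fit y = intercept + slope * x by ordinary least squares
x = [1, 2, 3, 4]
y = [2, 4, 6, 8]  # perfectly linear sample data

n      = x.size.to_f
mean_x = x.sum / n
mean_y = y.sum / n

num   = x.zip(y).sum { |xi, yi| (xi - mean_x) * (yi - mean_y) }
den   = x.sum { |xi| (xi - mean_x)**2 }
slope = num / den
intercept = mean_y - slope * mean_x

puts "#{intercept} + #{slope}x^1"  # => "0.0 + 2.0x^1"
# The first derivative of a linear fit is a constant - the slope itself:
puts slope                         # => 2.0
```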
The normal distribution method creates a Gaussian bell curve that visualizes the distribution of a given attribute. If you want to know whether all your requests take about the same time or vary a lot, this is the method to use. The x-values are automatically scaled to four times the variance around the mean, so the whole bell should always be covered.
```ruby
# Normal distribution without time clustering
normal_distribution.durations.x       # => [1, 2, ..., 120]
normal_distribution.durations.y       # => [45, 23, ..., 36]
normal_distribution.durations.formula # => "(1 / (10.242257627240862 * sqrt(2*PI))) * exp(-1 * ((x - 2.04671984377919)^2) / (2*10.242257627240862))"

# Normal distribution with default time slicing of 1.second
normal_distribution.requests.y      # => [43, 45, ..., 62]
normal_distribution.methods(:get).y # => [23, 62, ..., 23]
normal_distribution.statuses(200).y # => [51, 22, ..., 15]
normal_distribution.paths("/").y    # => [78, 12, ..., 63]
normal_distribution.benchmarks("index page").y # => [12, 45, ..., 23]

# Normal distribution with custom time slicing
normal_distribution(1.minute).requests.y      # => [43, 45, ..., 62]
normal_distribution(1.minute).methods(:get).y # => [23, 62, ..., 23]
normal_distribution(1.minute).statuses(200).y # => [51, 22, ..., 15]
normal_distribution(1.minute).paths("/").y    # => [78, 12, ..., 63]
normal_distribution(1.minute).benchmarks("index page").y # => [12, 45, ..., 23]
```
The `scope_to_benchmark` method lets you scope any result to a single benchmark. The values computed in this block come entirely from that benchmark.
```ruby
# Results for the index page benchmark
scope_to_benchmark "index page" do
  cluster.durations                   # => [43, 96, ..., 41]
  cluster.requests                    # => [503, 541, ..., 511]
  cluster.methods(:get)               # => [200, 204, ..., 209]
  cluster.methods(:post)              # => [201, 102, ..., 401]
  polynomial_regression.durations.x   # => [1, 2, ..., 120]
  polynomial_regression.durations.y   # => [45, 23, ..., 36]
  normal_distribution.requests.y      # => [43, 45, ..., 62]
  normal_distribution.methods(:get).y # => [23, 62, ..., 23]
  normal_distribution.statuses(200).y # => [51, 22, ..., 15]
end

# Results for the login and logout benchmark
scope_to_benchmark "login and logout" do
  cluster.durations                   # => [43, 96, ..., 41]
  cluster.requests                    # => [300, 141, ..., 511]
  cluster.methods(:get)               # => [100, 204, ..., 209]
  cluster.methods(:post)              # => [101, 102, ..., 401]
  polynomial_regression.durations.x   # => [1, 2, ..., 120]
  polynomial_regression.durations.y   # => [45, 23, ..., 36]
  normal_distribution.requests.y      # => [43, 45, ..., 62]
  normal_distribution.methods(:get).y # => [23, 62, ..., 23]
  normal_distribution.statuses(200).y # => [51, 22, ..., 15]
end
```
Iterates over all benchmarks and automatically scopes the results at each iteration to the current benchmark. This is useful if you want to access the detailed differences of each benchmark.
```ruby
# Iterate over all benchmarks and calculate the results
each_benchmark do |benchmark|
  benchmark.name                      # => "index page", then "login and logout"
  cluster.durations                   # => [43, 96, ..., 41]
  cluster.requests                    # => [300, 141, ..., 511]
  cluster.methods(:get)               # => [100, 204, ..., 209]
  cluster.methods(:post)              # => [101, 102, ..., 401]
  polynomial_regression.durations.x   # => [1, 2, ..., 120]
  polynomial_regression.durations.y   # => [45, 23, ..., 36]
  normal_distribution.requests.y      # => [43, 45, ..., 62]
  normal_distribution.methods(:get).y # => [23, 62, ..., 23]
  normal_distribution.statuses(200).y # => [51, 22, ..., 15]
end
```
You can also re-run the currently defined post processors, or run a separate post processor that was never defined in the first place, without collecting the test data again, like this:
```
# Re-run the post processors defined in example.rb
bigbench run postprocessors example.rb

# Run a separate post processor independently - the already defined post processors are ignored
bigbench run postprocessor example.rb statistics
```
Contribute, create great post processors and send me a pull request!
The statistics post processor computes a simple overview of the benchmark and prints it to the terminal like this:
```
BigBench Statistics
+---------------------------+------------------+---------+
| Name                      | Value            | Percent |
+---------------------------+------------------+---------+
| Total Requests:           | 52,469           | 100%    |
| Total Errors:             | 0                | 0.0%    |
|                           |                  |         |
| Average Requests/Second:  | 437 Requests/sec |         |
| Average Request Duration: | 1 ms             |         |
|                           |                  |         |
| Max Request Duration:     | 181 ms           |         |
| Min Request Duration:     | 1 ms             |         |
|                           |                  |         |
| Status Codes:             |                  |         |
|   200                     | 52469            | 100.0%  |
|                           |                  |         |
| HTTP Methods              |                  |         |
|   get                     | 34980            | 66.7%   |
|   post                    | 17489            | 33.3%   |
|                           |                  |         |
| URL Paths:                |                  |         |
|   /                       | 34979            | 66.7%   |
|   /basic/auth             | 17490            | 33.3%   |
+---------------------------+------------------+---------+
19 rows in set
```
BigBench is awfully good at creating high loads on web servers. A quick benchmark comparison to Apache's JMeter shows that BigBench is able to create 16% more load than JMeter.
| Parameter | Value |
|---|---|
| Test Duration | 2 Minutes |
| Concurrency (Threads) | 20 |
| Rack Server | Thin |
| Rack Host | localhost |
| Rack Request | GET: 200, Body: Test |
| Ruby Version | ruby 1.9.3p125 [x86_64-darwin11.3.0] |
| JMeter Version | 2.6 r1237317 |
| BigBench Version | 0.2 |
| Value | JMeter | BigBench |
|---|---|---|
| Total Requests | 48,014 | 55,484 |
| Requests/sec | 377 | 462 |
| Percentage | 100% | 116% |
* Added command line tool to run the post processors again
* Added command line tool to run any post processor on already collected data
* Pimped the post processor environment. Available functions now include:
  * `trackings` array with all tracking hashes available at once
  * Clustering by any timebase, e.g. `1.second`, `20.seconds`, or `2.minutes`, which automatically calculates these values per time slice:
    * average duration
    * requests
    * methods(:get, :post, :put, ...)
    * statuses(200, 404, 403, ...)
    * paths("/", "/logout", "/login", ...)
    * benchmarks("index page", "user behavior", "bot crawling", ...)
  * Polynomial regression of any degree for all attributes, including the derivatives and a formula printer
  * Statistics with the following values for all attributes: max, min, mean, standard_deviation, squared_deviation or variance
  * Gaussian normal distribution for all attributes, including a formula printer
  * `appearing` method to quickly list all appearing statuses, methods, and paths in the test results
* Added post-processor hooks that run after the benchmark, with a simple plugin structure
* Added a first basic post processor that computes the benchmark statistics and prints them in the terminal
* Added the ability to execute a block of code after running the benchmark. The code can do anything useful, like sending emails, posting Twitter notifications, or starting up new servers
* `Net::HTTP` was too slow and only reached 35% of Apache JMeter's load. Changed the requesting structure to EventMachine using `em-http-request` - BigBench now creates 16% more load than JMeter
* Changed config option from `threads` to `users` for better comprehensibility
* Added basic auth support
* Added params hashes for request content
* Initial version using `Net::HTTP`
* Local and bot testing
* LJSON output
* Global configuration