This is Rack middleware that provides logic for rate-limiting incoming HTTP requests to Rack applications. You can use Rack::Throttle with any Ruby web framework based on Rack, including Ruby on Rails 3.0 and Sinatra.
- Throttles a Rack application by enforcing a minimum time interval between subsequent HTTP requests from a particular client, as well as by defining a maximum number of allowed HTTP requests per given time period (per minute, per hour, or per day).
- Compatible with any Rack application and any Rack-based framework.
- Stores rate-limiting counters in any key/value store implementation that responds to #[]/#[]= (like Ruby's hashes) or to #get/#set (like memcached or Redis); see the sketch after this list.
- Compatible with the gdbm binding included in Ruby's standard library.
- Compatible with the memcached, memcache-client, memcache and redis gems.
- Compatible with Heroku's memcached add-on (currently available as a free beta service).
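As referenced in the list above, here is a minimal sketch showing that a plain in-process Ruby Hash already satisfies the #[]/#[]= contract and can serve as the counter store (adequate for a single process, though not shared across workers):

require 'rack/throttle'

# Any Hash-like object can hold the counters; this one lives in process memory.
use Rack::Throttle::Interval, :cache => Hash.new, :key_prefix => :throttle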
# Adding throttling to a Rails 3.x application
# config/application.rb
require 'rack/throttle'

class Application < Rails::Application
  config.middleware.use Rack::Throttle::Interval
end
#!/usr/bin/env ruby -rubygems
# Adding throttling to a Sinatra application
require 'sinatra'
require 'rack/throttle'

use Rack::Throttle::Interval

get('/hello') { "Hello, world!\n" }
#!/usr/bin/env rackup
# Adding throttling to a plain Rackup (config.ru) application
require 'rack/throttle'

use Rack::Throttle::Interval

run lambda { |env| [200, {'Content-Type' => 'text/plain'}, ["Hello, world!\n"]] }
# Enforcing a minimum 3-second interval between requests:
use Rack::Throttle::Interval, :min => 3.0
# Allowing a maximum of 60 requests per minute:
use Rack::Throttle::Minute, :max => 60
# Allowing a maximum of 100 requests per hour:
use Rack::Throttle::Hourly, :max => 100
# Allowing a maximum of 1,000 requests per day:
use Rack::Throttle::Daily, :max => 1000

# Combining various throttling constraints into one overall policy:
use Rack::Throttle::Daily,    :max => 1000 # requests
use Rack::Throttle::Hourly,   :max => 100  # requests
use Rack::Throttle::Minute,   :max => 60   # requests
use Rack::Throttle::Interval, :min => 3.0  # seconds
# Storing the rate-limiting counters in a GDBM database
require 'gdbm'
use Rack::Throttle::Interval, :cache => GDBM.new('tmp/throttle.db')

# Storing the rate-limiting counters on a Memcached server
require 'memcached'
use Rack::Throttle::Interval, :cache => Memcached.new, :key_prefix => :throttle

# Storing the rate-limiting counters on a Redis server
require 'redis'
use Rack::Throttle::Interval, :cache => Redis.new, :key_prefix => :throttle
Rack::Throttle supports three built-in throttling strategies:

- Rack::Throttle::Interval: Throttles the application by enforcing a minimum interval (by default, 1 second) between subsequent HTTP requests.
- Rack::Throttle::Hourly: Throttles the application by defining a maximum number of allowed HTTP requests per hour (by default, 3,600 requests per 60 minutes, which works out to an average of 1 request per second).
- Rack::Throttle::Daily: Throttles the application by defining a maximum number of allowed HTTP requests per day (by default, 86,400 requests per 24 hours, which works out to an average of 1 request per second).
You can fully customize the implementation details of any of these strategies by simply subclassing one of the aforementioned default implementations. And, of course, should your application-specific requirements be significantly more complex than what we've provided for, you can also define entirely new kinds of throttling strategies by subclassing the Rack::Throttle::Limiter base class directly.
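As a minimal sketch of what such a subclass might look like (the class name and policy below are hypothetical; it assumes, as the built-in strategies do, that a strategy expresses its policy through the #allowed? method):

require 'rack/throttle'

# Hypothetical strategy: enforce the interval only for non-idempotent
# requests, letting GET and HEAD requests pass through unthrottled.
class WriteOnlyInterval < Rack::Throttle::Interval
  def allowed?(request)
    return true if request.get? || request.head?
    super
  end
end

use WriteOnlyInterval, :min => 3.0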
The rate-limiting counters stored and maintained by Rack::Throttle are keyed to unique HTTP clients.

By default, HTTP clients are uniquely identified by their IP address as returned by Rack::Request#ip. If you wish to instead use a more granular, application-specific identifier such as a session key or a user account name, you need only subclass a throttling strategy implementation and override the #client_identifier method.
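As a sketch of what that might look like (the session key below is hypothetical and assumes your application keeps a user identifier in the Rack session), you could key the counters to the logged-in user and fall back to the IP address for anonymous clients:

require 'rack/throttle'

class UserKeyedDaily < Rack::Throttle::Daily
  # Key counters to the (hypothetical) :user_id stored in the Rack session,
  # falling back to the client's IP address when no user is logged in.
  def client_identifier(request)
    session = request.env['rack.session'] || {}
    (session[:user_id] || request.ip).to_s
  end
end

use UserKeyedDaily, :max => 1000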
When a client exceeds their rate limit, Rack::Throttle by default returns a "403 Forbidden" response with an associated "Rate Limit Exceeded" message in the response body.
An HTTP 403 response means that the server understood the request, but is refusing to respond to it and an accompanying message will explain why. This indicates an error on the client's part in exceeding the rate limits outlined in the acceptable use policy for the site, service, or API.
However, there exists a widespread practice of instead returning a "503 Service Unavailable" response when a client exceeds the set rate limits. This is technically dubious because a 503 indicates an error on the server's part, which is certainly not the case with rate limiting: the fault lies with the client, not the server.
An HTTP 503 response would be correct in situations where the server was genuinely overloaded and couldn't handle more requests, but for rate limiting an HTTP 403 response is more appropriate. Nonetheless, if you think otherwise, Rack::Throttle does allow you to override the returned HTTP status code by passing in a :code => 503 option when constructing a Rack::Throttle::Limiter instance.
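For example, if you do prefer the 503 convention, the following sketch uses only the :code option described above:

use Rack::Throttle::Interval, :min => 3.0, :code => 503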
http://datagraph.rubyforge.org/rack-throttle/
- {Rack::Throttle}
- {Rack::Throttle::Interval}
- {Rack::Throttle::Daily}
- {Rack::Throttle::Hourly}
- Rack (>= 1.0.0)
The recommended installation method is via RubyGems. To install the latest official release of the gem, do:
% [sudo] gem install rack-throttle
To get a local working copy of the development repository, do:
% git clone git://github.com/datagraph/rack-throttle.git
Alternatively, you can download the latest development version as a tarball as follows:
% wget http://github.com/datagraph/rack-throttle/tarball/master
Rack::Throttle is free and unencumbered public domain software. For more information, see http://unlicense.org/ or the accompanying UNLICENSE file.