RateLimiterQueue
RateLimiterQueue limits the number of actions during a period of time and queues the rest to execute in the next period.
It uses a FIFO (first in, first out) queue, on a single server or in a distributed environment. It doesn't limit concurrency and tries to consume as many tokens as possible while there are waiting requests.
Queue order is strictly respected with the Memory and Cluster limiters only. Other limiters respect it too, but do not guarantee it: some requests may go out of order because of the distributed queue.
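The FIFO mechanism can be pictured with a minimal, dependency-free sketch. This is an illustration of the idea only, not the library's implementation; `TinyFifoQueue`, `removeToken` and `nextPeriod` are made-up names:

```javascript
// Sketch of the FIFO idea behind RateLimiterQueue (illustration only):
// up to `points` tokens are handed out per period; further callers
// wait in arrival order and are served when the period resets.
class TinyFifoQueue {
  constructor(points) {
    this.points = points;   // tokens available each period
    this.tokens = points;   // tokens left in the current period
    this.waiting = [];      // FIFO queue of pending resolvers
  }

  removeToken() {
    return new Promise((resolve) => {
      if (this.tokens > 0) {
        this.tokens -= 1;
        resolve(this.tokens); // served immediately
      } else {
        this.waiting.push(resolve); // first in, first out
      }
    });
  }

  // Called when a new period starts: refill and drain waiters in order.
  nextPeriod() {
    this.tokens = this.points;
    while (this.tokens > 0 && this.waiting.length > 0) {
      this.tokens -= 1;
      this.waiting.shift()(this.tokens);
    }
  }
}

const q = new TinyFifoQueue(2);
['a', 'b', 'c', 'd'].forEach((id) => {
  q.removeToken().then(() => console.log('served', id));
});
q.nextPeriod(); // 'c' and 'd' are drained in the order they arrived
```

The real library refills on a timer derived from the underlying limiter's `duration` rather than an explicit `nextPeriod()` call, but the serving order is the same.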
Both `removeTokens(tokens, key)` and `getTokensRemaining(key)` may be called without the `key` param to use a single RateLimiterQueue instance for all users and actions. The `key` can be a user ID, IP address, or any other string or number.
```javascript
const http = require('http');
const express = require('express');
const { RateLimiterMemory, RateLimiterQueue } = require('rate-limiter-flexible');
const RateLimiterQueueError = require('rate-limiter-flexible/lib/component/RateLimiterQueueError');

const limiterFlexible = new RateLimiterMemory({
  points: 2,
  duration: 1, // Per one second
});

const limiterQueue = new RateLimiterQueue(limiterFlexible, {
  maxQueueSize: 100,
});

const app = express();

app.get('/', async (req, res) => {
  try {
    const remainingTokens = await limiterQueue.removeTokens(1);
    res.end(remainingTokens.toString());
  } catch (err) {
    if (err instanceof RateLimiterQueueError) {
      res.status(429).send('Too Many Requests');
    } else {
      res.status(400).end();
    }
  }
});

const server = http.createServer(app);
server.listen(3002, () => {
  console.log('RateLimiterQueue service started');
});
```
In the above example, a RateLimiterMemory instance is created with 2 `points` available per a one-second `duration` (points and tokens are the same thing in this case). The `maxQueueSize` option is set to 100; the default value is 4294967295 (2^32 - 1).
- If the queue is full and another request tries to remove token(s), `removeTokens` is immediately rejected with `RateLimiterQueueError`.
- `removeTokens` can also be rejected with `RateLimiterQueueError` if the application tries to remove more tokens than allowed per interval.
- If you use one of the store limiters like Redis, MongoDB or any other, it may be rejected with an error from the store.
`RateLimiterQueueError` can be imported from the components of `rate-limiter-flexible`:

```javascript
const RateLimiterQueueError = require('rate-limiter-flexible/lib/component/RateLimiterQueueError');
```
```javascript
const Redis = require('ioredis');
const { RateLimiterRedis, RateLimiterQueue } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const rlRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 2, // Number of tokens
  duration: 5, // Per 5 second interval
});

const rlQueue = new RateLimiterQueue(rlRedis);

// Inside a request handler:
rlQueue.getTokensRemaining()
  .then((tokensRemaining) => { res.end(tokensRemaining.toString()); })
  .catch((errFromStore) => { res.status(500).end(); });
```
Migration from limiter
RateLimiterQueue provides the same features as the rate limiter from the `limiter` package.
Advantages in comparison:
- Works in multi-server scenarios with any store limiter from `rate-limiter-flexible`, like Redis, MongoDB or any other.
- Respects queue order with the Memory and Cluster limiters.
- Works on top of native promises.
Example of migration:
```javascript
var RateLimiter = require('limiter').RateLimiter;
var limiter = new RateLimiter(150, 'hour');

limiter.removeTokens(1, function(err, remainingRequests) {
  callMyRequestSendingFunction(...);
});
```
Should be changed to:
```javascript
const { RateLimiterMemory, RateLimiterQueue } = require('rate-limiter-flexible');

const limiterFlexible = new RateLimiterMemory({
  points: 150,
  duration: 60 * 60, // hour
});

const limiter = new RateLimiterQueue(limiterFlexible);

app.get('/', async (req, res) => {
  const remainingTokens = await limiter.removeTokens(1);
  callMyRequestSendingFunction(...);
});
```