Pipelining for lettuce (or: flush after n commands) #92
Labels: type: feature (a new feature)
Performance test results:

That's very nice 👍
mp911de changed the title from "Pipelining for lettuce" to "Pipelining for lettuce (or: flush after n commands)" on Jul 7, 2015
Todo:
mp911de added a commit that referenced this issue on Jul 9, 2015:
Allow explicit control over flushing when dispatching commands ("pipelining") on the async API
mp911de added a commit that referenced this issue on Aug 1, 2015:
Use the commandBuffer when autoFlushCommands is disabled instead of writing commands to a channel, and write the whole buffer when flushing. This change slightly improves the throughput of lettuce. Motivation: netty maintains a promise for every written command and handles buffering on its own. Writing commands one by one but delaying the flush has less flavor of batching than buffering commands and writing them as a batch.
Added throughput improvements.
mp911de added a commit that referenced this issue on Aug 2, 2015, with the same commit message as the Aug 1 commit.
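The buffering approach described in the commit message can be sketched in a few lines. This is not lettuce's actual implementation (the class and method names below, other than `autoFlushCommands`, are hypothetical stand-ins): with auto-flush disabled, commands accumulate in a command buffer, and an explicit flush hands the whole buffer to the channel as one batched write instead of one write-and-flush per command.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the commit's buffering idea; BatchingWriter and
// channelWrites are hypothetical stand-ins, not lettuce/netty classes.
class BatchingWriter {
    private final List<String> commandBuffer = new ArrayList<>();
    // Each inner list represents one write reaching the (simulated) channel.
    private final List<List<String>> channelWrites = new ArrayList<>();
    private boolean autoFlushCommands = true;

    void setAutoFlushCommands(boolean autoFlush) {
        this.autoFlushCommands = autoFlush;
    }

    void write(String command) {
        if (autoFlushCommands) {
            channelWrites.add(List.of(command)); // write-and-flush per command
        } else {
            commandBuffer.add(command);          // buffer until an explicit flush
        }
    }

    void flushCommands() {
        if (!commandBuffer.isEmpty()) {
            channelWrites.add(new ArrayList<>(commandBuffer)); // one batched write
            commandBuffer.clear();
        }
    }

    int batches() {
        return channelWrites.size();
    }
}
```

The design point the commit makes: since netty already maintains a promise per written command, buffering on the client side and writing once per flush batches more cleanly than issuing many writes and merely delaying the flush.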
Original issue description:

Redis commands are rather small (usually below 50 bytes). Every write is written with `writeAndFlush`. A single connection operated by one thread can currently achieve between 100K and 150K ops/sec. The `flush` part within netty is currently the costly and limiting part. Tests with batching (writing multiple commands in async/observable mode without a flush after every write) and an explicit `flush` every 20 to 50 commands pushed throughput to 470K to 800K ops/sec.

The goal of this ticket is to implement a batching mode that allows buffering of commands and an explicit flush (or even an auto-flush every `n` commands).
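The "auto-flush every n commands" mode proposed above can be sketched as a small counter on top of a command buffer. This is a hypothetical illustration, not lettuce code: the class name and the `Consumer`-based channel stand-in are assumptions; only the flush-every-n behavior comes from the ticket.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: buffer commands and hand the buffer to the channel
// as a single batched write once n commands have accumulated.
class AutoFlushEveryN {
    private final int n;
    private final Consumer<List<String>> channel; // receives one batch per flush
    private final List<String> buffer = new ArrayList<>();

    AutoFlushEveryN(int n, Consumer<List<String>> channel) {
        this.n = n;
        this.channel = channel;
    }

    void write(String command) {
        buffer.add(command);
        if (buffer.size() >= n) {
            flush(); // auto-flush after every n commands
        }
    }

    void flush() {
        if (!buffer.isEmpty()) {
            channel.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

With n in the 20 to 50 range the issue reports, each flush amortizes netty's per-flush cost over a whole batch, which is where the reported throughput gain comes from.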