
Pipelining for lettuce (or: flush after n commands) #92

Closed
mp911de opened this issue Jun 28, 2015 · 5 comments
Labels
type: feature A new feature
Milestone

Comments

@mp911de
Collaborator

mp911de commented Jun 28, 2015

Redis commands are rather small (usually below 50 bytes). Every command is currently written via writeAndFlush. A single connection operated by one thread can currently achieve between 100K and 150K ops/sec.

The flush step within netty is currently the costly, limiting part. Tests with batching (writing multiple commands in async/observable mode without flushing after every write) and an explicit flush every 20 to 50 commands pushed throughput to between 470K and 800K ops/sec.

The goal of this ticket is to implement a batching mode that allows buffering commands and flushing explicitly (or even auto-flushing every n commands).
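A minimal, self-contained sketch of the auto-flush-every-n idea described above. The class and method names (`BatchingWriter`, `write`, `flush`) are illustrative assumptions, not the actual lettuce API; a real transport would replace the flush body with netty channel writes:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchingWriter {

    private final int flushAfter;                      // auto-flush threshold (e.g. 20-50)
    private final List<String> buffer = new ArrayList<>();
    private int flushCount;                            // how often the "channel" was flushed

    public BatchingWriter(int flushAfter) {
        this.flushAfter = flushAfter;
    }

    /** Buffer a command; auto-flush once flushAfter commands are queued. */
    public void write(String command) {
        buffer.add(command);
        if (buffer.size() >= flushAfter) {
            flush();
        }
    }

    /** Write all buffered commands in one batch instead of one flush per command. */
    public void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        // In a real transport: channel.write(batch) followed by a single channel.flush().
        buffer.clear();
        flushCount++;
    }

    public int getFlushCount() {
        return flushCount;
    }

    public static void main(String[] args) {
        BatchingWriter writer = new BatchingWriter(20);
        for (int i = 0; i < 100; i++) {
            writer.write("SET key:" + i + " value");
        }
        writer.flush(); // drain any remainder
        System.out.println("flushes: " + writer.getFlushCount()); // prints "flushes: 5"
    }
}
```

With a threshold of 20, 100 commands cost 5 flush system interactions instead of 100, which is the effect the benchmark numbers below illustrate.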

@mp911de mp911de added the type: feature A new feature label Jun 28, 2015
@mp911de mp911de added this to the Lettuce 4.0 milestone Jun 28, 2015
@mp911de
Collaborator Author

mp911de commented Jun 28, 2015

Performance test results:

Warming:
Duration: 415 ms (0.42 sec), operations: 50000, 120481.93 ops/sec
Duration: 143 ms (0.14 sec), operations: 50000, 349650.35 ops/sec

Measure:
Duration: 101 ms (0.10 sec), operations: 50000, 495049.50 ops/sec
Duration: 117 ms (0.12 sec), operations: 50000, 427350.43 ops/sec
Duration: 125 ms (0.13 sec), operations: 50000, 400000.00 ops/sec
Duration: 134 ms (0.13 sec), operations: 50000, 373134.33 ops/sec
Duration: 106 ms (0.11 sec), operations: 50000, 471698.11 ops/sec
Duration: 103 ms (0.10 sec), operations: 50000, 485436.89 ops/sec
Duration: 98 ms (0.10 sec), operations: 50000, 510204.08 ops/sec
Duration: 86 ms (0.09 sec), operations: 50000, 581395.35 ops/sec
Duration: 99 ms (0.10 sec), operations: 50000, 505050.51 ops/sec
Duration: 62 ms (0.06 sec), operations: 50000, 806451.61 ops/sec
Mean: 505577.08 ops/sec

@mp911de mp911de changed the title from "Allow batching beyond MULTI" to "Pipelining for lettuce" Jun 29, 2015
@itamarhaber
Member

That's very nice 👍

@mp911de mp911de modified the milestones: Lettuce 3.3, Lettuce 4.0 Jul 4, 2015
@mp911de mp911de changed the title from "Pipelining for lettuce" to "Pipelining for lettuce (or: flush after n commands)" Jul 7, 2015
mp911de added a commit that referenced this issue Jul 8, 2015
@mp911de
Collaborator Author

mp911de commented Jul 8, 2015

Todo:

  • Merge in 4.0
  • Docs

mp911de added a commit that referenced this issue Jul 9, 2015
Allow explicit control over flushing when dispatching commands ("pipelining") on the async API
@mp911de
Collaborator Author

mp911de commented Jul 15, 2015

@mp911de mp911de closed this as completed Jul 15, 2015
mp911de added a commit that referenced this issue Aug 1, 2015
Use the commandBuffer when autoFlushCommands is disabled instead of writing commands to the channel, and write the whole buffer when flushing. This change slightly improves the throughput of lettuce.

Motivation: netty maintains a promise for every written command and handles buffering on its own. Writing commands one by one but delaying the flush is less of a true batch than buffering the commands and writing them out as a single batch.
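A minimal, self-contained sketch of the buffering strategy this commit describes. `ChannelWriterSketch` and its fields are illustrative, not the actual lettuce internals; the method names `setAutoFlushCommands` and `flushCommands` mirror the API surface discussed in this issue:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ChannelWriterSketch {

    private boolean autoFlushCommands = true;
    private final Deque<String> commandBuffer = new ArrayDeque<>();
    private final List<String> channelWrites = new ArrayList<>(); // stands in for netty channel writes

    public void setAutoFlushCommands(boolean autoFlush) {
        this.autoFlushCommands = autoFlush;
    }

    /** With auto-flush on, every command hits the channel; otherwise it is only buffered. */
    public void dispatch(String command) {
        if (autoFlushCommands) {
            channelWrites.add(command);   // channel.writeAndFlush(command): one promise per command
        } else {
            commandBuffer.add(command);   // no per-command write, no per-command netty promise
        }
    }

    /** One bulk write of the whole buffer instead of n individual channel writes. */
    public void flushCommands() {
        if (!commandBuffer.isEmpty()) {
            channelWrites.add(String.join(" ", commandBuffer));
            commandBuffer.clear();
        }
    }

    public int writeCount() {
        return channelWrites.size();
    }

    public static void main(String[] args) {
        ChannelWriterSketch writer = new ChannelWriterSketch();
        writer.setAutoFlushCommands(false);
        writer.dispatch("SET a 1");
        writer.dispatch("SET b 2");
        writer.dispatch("SET c 3");
        writer.flushCommands();
        System.out.println("channel writes: " + writer.writeCount()); // prints "channel writes: 1"
    }
}
```

The design point: buffered commands never touch the channel until flushCommands(), so netty creates one write (and one promise) per batch rather than one per command.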
@mp911de
Collaborator Author

mp911de commented Aug 1, 2015

Added throughput improvements

mp911de added a commit that referenced this issue Aug 2, 2015