
Allow configuration of discardReadBytes through ClientOptions.bufferUsageRatio #916

Closed

wants to merge 4 commits

Conversation

gavincook
Contributor

@gavincook gavincook commented Nov 8, 2018

Add a bufferUsageRatio option to ClientOptions, used to optimize memory usage. The bufferUsageRatio option is an integer that must be greater than 0.

When buffer usage reaches bufferUsageRatio / (bufferUsageRatio + 1) of the buffer capacity, the read bytes are discarded to free buffer memory for further writes.

The default bufferUsageRatio is 3, meaning that when buffer usage exceeds 75%, read bytes are discarded after decoding one whole reply, or after reaching the end of the buffer with a partial reply.
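As a quick sketch of the ratio-to-threshold mapping described above (a stand-alone illustration, not code from this PR):

```java
public class BufferUsageRatioDemo {

    // A bufferUsageRatio of n implies a discard threshold of n / (n + 1)
    // of the buffer capacity.
    static double threshold(int ratio) {
        return (double) ratio / (ratio + 1);
    }

    public static void main(String[] args) {
        System.out.println(threshold(1)); // 0.5  -> discard above 50% usage
        System.out.println(threshold(3)); // 0.75 -> discard above 75% usage (the default)
    }
}
```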

@codecov

codecov bot commented Nov 8, 2018

Codecov Report

Merging #916 into master will decrease coverage by 0.06%.
The diff coverage is 66.66%.


@@             Coverage Diff              @@
##             master     #916      +/-   ##
============================================
- Coverage      81.1%   81.04%   -0.07%     
- Complexity     5714     5715       +1     
============================================
  Files           410      410              
  Lines         18497    18510      +13     
  Branches       2138     2141       +3     
============================================
- Hits          15002    15001       -1     
- Misses         2489     2500      +11     
- Partials       1006     1009       +3
Impacted Files Coverage Δ Complexity Δ
src/main/java/io/lettuce/core/ClientOptions.java 94.25% <50%> (-4.49%) 16 <1> (+1)
.../java/io/lettuce/core/protocol/CommandHandler.java 78.6% <85.71%> (+0.83%) 108 <2> (+2) ⬆️
...io/lettuce/core/masterslave/SentinelConnector.java 65% <0%> (-17.5%) 6% <0%> (-2%)
...main/java/io/lettuce/core/masterslave/Timeout.java 62.5% <0%> (-12.5%) 3% <0%> (-1%)
...uce/core/masterslave/SentinelTopologyProvider.java 80.55% <0%> (-2.78%) 9% <0%> (-1%)
src/main/java/io/lettuce/core/RedisPublisher.java 78.81% <0%> (-0.75%) 5% <0%> (ø)
...java/io/lettuce/core/protocol/DefaultEndpoint.java 69.17% <0%> (-0.52%) 89% <0%> (ø)
.../io/lettuce/core/dynamic/ReactiveTypeAdapters.java 83.78% <0%> (ø) 1% <0%> (ø) ⬇️
...tuce/core/masterslave/SentinelTopologyRefresh.java 84.86% <0%> (ø) 35% <0%> (+2%) ⬆️
... and 1 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5f27184...66b4efb. Read the comment docs.

@gavincook gavincook changed the title from "#906 - use buffer usage ratio to optimize memory usage" to "use buffer usage ratio to optimize memory usage" Nov 8, 2018
@mp911de
Collaborator

mp911de commented Nov 13, 2018

Benchmark results:

Before

RedisClientBenchmark.asyncSetBatch                                               avgt   10    44280.801 ±   3005.278  ns/op
cluster.RedisClusterClientBenchmark.asyncSetBatch                                avgt   10    27198.661 ±   2578.932  ns/op

After

RedisClientBenchmark.asyncSetBatch                                               avgt   10    40904.853 ±   6763.893  ns/op
cluster.RedisClusterClientBenchmark.asyncSetBatch                                avgt   10    24660.902 ±   3884.637  ns/op

Collaborator

@mp911de mp911de left a comment

Thanks a lot for the change. Looking at the benchmarks, we can see a positive impact. What do you think of making buffer release more customizable by returning a function object that encapsulates the actual way of discarding bytes?

Returning the ratio exposes an implementation detail of how discard is configured. Exposing a function allows users to override it and react much more specifically to buffer discard.

@@ -837,6 +836,15 @@ private static long nanoTime() {
return System.nanoTime();
}

private void discardReadBytesIfNecessary(ByteBuf buffer) {
int bufferUsageRatio = clientOptions.getBufferUsageRatio();
if (buffer.writerIndex() * (bufferUsageRatio + 1) >= buffer.capacity() * bufferUsageRatio) {
Collaborator

This calculation is prone to integer overflow.

I wonder whether it would make sense to make discard more customizable. What do you think about providing a BiConsumer<ByteBuf, DecodeProgress> that handles discard? The consumer would accept the buffer and an enum value (indicating whether discard happens while decoding a command, after decoding a command, or after decoding a batch), and the implementation can then decide on the timing and manner of release.

The cleanup consumer would be returned directly by ClientOptions.
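To make the overflow concrete, a small stand-alone sketch (plain int values in place of the actual ByteBuf accessors; names are illustrative): with a large writerIndex, the int multiplication wraps to a negative value and the `>=` comparison misfires, while widening to long avoids the wraparound.

```java
public class OverflowDemo {

    // Overflow-prone form, as in the PR: writerIndex * (ratio + 1) in int arithmetic.
    static int unsafeProduct(int writerIndex, int ratio) {
        return writerIndex * (ratio + 1);
    }

    // Widening to long before multiplying avoids the wraparound.
    static long safeProduct(int writerIndex, int ratio) {
        return (long) writerIndex * (ratio + 1);
    }

    public static void main(String[] args) {
        // 800_000_000 * 4 = 3_200_000_000, which exceeds Integer.MAX_VALUE.
        System.out.println(unsafeProduct(800_000_000, 3)); // negative: wrapped around
        System.out.println(safeProduct(800_000_000, 3));   // 3200000000
    }
}
```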

Contributor Author

@gavincook gavincook Nov 13, 2018

@mp911de, thanks for your feedback.
Yes, as you say, the current behavior may cause integer overflow. A division operation would be better here. I calculated it this way by analogy with the JVM's -XX:GCTimeRatio option.

In my opinion, an integer keeps this option simple to configure. If we want to optimize for CPU, we can set a large integer, and a suitable integer for most scenarios can be chosen based on CPU and the size of direct memory. Whichever side we optimize for, the precondition should always be to avoid out-of-direct-memory errors. A value of -1 could be introduced to discard bytes only when there is no more free memory, or only after batch command decoding, if we want to keep the current behavior of using as much memory as possible while decoding a command.

If we use the BiConsumer<ByteBuf, DecodeProgress> approach, I worry that it makes this behavior complex to configure. What do you think? Also, do we really need to expose this so that users can do customized discards?

Collaborator

@mp911de mp911de Nov 19, 2018

Okay, then let's shoot for the simple approach, which is keeping the configuration as proposed by your PR. Care to fix the overflow so we can prepare this PR for merging?

Contributor Author

Yes, I updated the PR. Please take a look again.

* @return the buffer usage ratio.
* @since 5.2
*/
public int getBufferUsageRatio() {
Collaborator

I'd propose to make cleaning more customizable. This method could return a discard consumer (BiConsumer<ByteBuf, DecodeProgress>) instead of exposing details how discard of buffer bytes is configured.

See below for the full explanation.

@gavincook
Contributor Author

I updated the PR. Please take a look again.

@gavincook gavincook closed this Nov 20, 2018
@gavincook gavincook deleted the fix-906 branch November 20, 2018 08:50
@gavincook gavincook restored the fix-906 branch November 20, 2018 08:50
@gavincook gavincook reopened this Nov 20, 2018
@gavincook
Contributor Author

Actually, the checks failed on the shouldResolveWithoutSubdomain test. The changes in this PR should be OK.

@mp911de
Collaborator

mp911de commented Nov 21, 2018

Thanks a lot. I'll have a look in the next days.


sut.channelRead(context, msg);

assertThat(buffer.readerIndex() == 0);
Collaborator

This is not a valid assertion.

*/
private void discardReadBytesIfNecessary(ByteBuf buffer) {
int bufferUsageRatio = clientOptions.getBufferUsageRatio();
if ((float)buffer.writerIndex() / buffer.capacity() >= (float)bufferUsageRatio / (bufferUsageRatio + 1)) {
Collaborator

We should rather check for readerIndex(). The written bytes don't express how much of the buffer was already consumed.
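A sketch of the suggested check, using plain int indices in place of a Netty ByteBuf (method and parameter names here are illustrative, not from the merged code): compare the consumed portion (readerIndex) against the capacity, with long arithmetic to sidestep the overflow discussed earlier.

```java
public class DiscardCheckDemo {

    // Discard once readerIndex / capacity >= ratio / (ratio + 1),
    // rearranged to avoid division and widened to long to avoid overflow.
    static boolean shouldDiscard(int readerIndex, int capacity, int ratio) {
        return (long) readerIndex * (ratio + 1) >= (long) capacity * ratio;
    }

    public static void main(String[] args) {
        System.out.println(shouldDiscard(75, 100, 3)); // true: 75% of the buffer consumed
        System.out.println(shouldDiscard(50, 100, 3)); // false: below the 75% threshold
    }
}
```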

@mp911de
Collaborator

mp911de commented Nov 23, 2018

I found two issues during the merge of this PR. I'll fix them during the merge.

mp911de pushed a commit that referenced this pull request Nov 23, 2018
…sageRatio #906

We now allow configuration of when ByteBuf.discardReadBytes() gets called. Memory freeing can be useful during decoding of large responses instead of waiting until decoding is finished.

Original pull request: #916
mp911de added a commit that referenced this pull request Nov 23, 2018
Improve documentation. Add author tags. Precompute expected-to-discard buffer usage ratio instead of computing it on each discard. Add override to ClusterClientOptions Builder. Add and fix tests. Reformat code.

Original pull request: #916
@mp911de
Collaborator

mp911de commented Nov 23, 2018

That's merged and polished now.

@mp911de mp911de closed this Nov 23, 2018
@mp911de mp911de changed the title from "use buffer usage ratio to optimize memory usage" to "Allow configuration of discardReadBytes through ClientOptions.bufferUsageRatio" Nov 23, 2018
@mp911de mp911de added the type: feature A new feature label Nov 23, 2018