Allow configuration of discardReadBytes through ClientOptions.bufferUsageRatio #916
Conversation
Codecov Report
@@ Coverage Diff @@
## master #916 +/- ##
============================================
- Coverage 81.1% 81.04% -0.07%
- Complexity 5714 5715 +1
============================================
Files 410 410
Lines 18497 18510 +13
Branches 2138 2141 +3
============================================
- Hits 15002 15001 -1
- Misses 2489 2500 +11
- Partials 1006 1009 +3
Continue to review full report at Codecov.
Benchmark results: before / after.
Thanks a lot for the change. Looking at the benchmarks, we can see a positive impact. What do you think of making buffer release more customizable by returning a function object that encapsulates the actual way of discarding bytes?
Returning the ratio exposes an implementation detail of how discard is configured. Exposing a function would allow users to override it and react much more specifically to buffer discard.
@@ -837,6 +836,15 @@ private static long nanoTime() {
        return System.nanoTime();
    }

    private void discardReadBytesIfNecessary(ByteBuf buffer) {
        int bufferUsageRatio = clientOptions.getBufferUsageRatio();
        if (buffer.writerIndex() * (bufferUsageRatio + 1) >= buffer.capacity() * bufferUsageRatio) {
This calculation is prone to integer overflow.
I wonder whether it would make sense to make discard more customizable. What do you think about providing a BiConsumer<ByteBuf, DecodeProgress> that handles discard? The consumer would accept the buffer and an enum value (whether discard happens while decoding a command, after decoding a command, or after decoding a batch), and the implementation can then decide on the timing and way of release.
The cleaner consumer would be returned directly by ClientOptions.
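A rough sketch of what that suggestion could look like, purely for illustration: DecodeProgress, DiscardPolicies, and ratioBased(...) are hypothetical names and not part of lettuce-core.

import java.util.function.BiConsumer;

import io.netty.buffer.ByteBuf;

// Hypothetical enum describing where in the decode loop the discard hook is invoked.
enum DecodeProgress {
    DECODING_COMMAND,  // a reply is still being decoded
    COMMAND_DECODED,   // one complete reply has been decoded
    BATCH_DECODED      // a whole batch of replies has been decoded
}

final class DiscardPolicies {

    // A ratio-based default policy mirroring this PR: discard consumed bytes once
    // the buffer is "full enough", but never in the middle of decoding a reply.
    static BiConsumer<ByteBuf, DecodeProgress> ratioBased(int usageRatio) {
        return (buffer, progress) -> {
            if (progress == DecodeProgress.DECODING_COMMAND) {
                return;
            }
            if ((long) buffer.readerIndex() * (usageRatio + 1) >= (long) buffer.capacity() * usageRatio) {
                buffer.discardReadBytes();
            }
        };
    }
}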
@mp911de, thanks for your feedback.
Yes, as you say, the current calculation may cause integer overflow; a division would be better here. I chose this way of calculating by referring to the JVM's -XX:GCTimeRatio usage.
In my opinion, an integer keeps this option simple to configure. If we want to optimize for CPU, we can set a large integer, and a suitable value for most scenarios can be chosen based on the CPU and the size of direct memory. Whichever side we optimize for, the precondition should always be to avoid an out-of-direct-memory error. A value of -1 could be introduced to discard bytes only when there is no more free memory or after decoding a batch of commands, if we want to keep the current behavior of using as much memory as possible while decoding a command.
If we go with a BiConsumer<ByteBuf, DecodeProgress>, I worry that it makes this behavior complex to configure. What do you think? Also, I am not sure we really need to expose this so that users can do customized discarding.
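For reference, a minimal sketch of the overflow issue and two safer variants of the check; the helper class and method names are made up for illustration only.

import io.netty.buffer.ByteBuf;

final class DiscardCheckSketch {

    // Overflow-prone variant from the original diff: writerIndex() * (ratio + 1)
    // can exceed Integer.MAX_VALUE for large buffers or ratios and wrap negative.
    static boolean shouldDiscardMultiplying(ByteBuf buffer, int bufferUsageRatio) {
        return buffer.writerIndex() * (bufferUsageRatio + 1) >= buffer.capacity() * bufferUsageRatio;
    }

    // Widening to long before multiplying avoids the overflow.
    static boolean shouldDiscardWidened(ByteBuf buffer, int bufferUsageRatio) {
        return (long) buffer.writerIndex() * (bufferUsageRatio + 1)
                >= (long) buffer.capacity() * bufferUsageRatio;
    }

    // Comparing the usage ratio via floating-point division, as the updated PR does.
    static boolean shouldDiscardDividing(ByteBuf buffer, int bufferUsageRatio) {
        return (float) buffer.writerIndex() / buffer.capacity()
                >= (float) bufferUsageRatio / (bufferUsageRatio + 1);
    }
}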
Okay, then let's shoot for the simple approach, which is keeping the configuration as proposed by your PR. Care to fix the overflow so we can prepare this PR for merging?
Yes, I updated the PR. Please take a look again.
 * @return the buffer usage ratio.
 * @since 5.2
 */
public int getBufferUsageRatio() {
I'd propose to make cleaning more customizable. This method could return a discard consumer (BiConsumer<ByteBuf, DecodeProgress>) instead of exposing details of how the discarding of buffer bytes is configured.
See below for the full explanation.
I updated the PR. Please take a look again.
Actually, the checks failed on …
Thanks a lot. I'll have a look in the next few days.
sut.channelRead(context, msg);

assertThat(buffer.readerIndex() == 0);
This is not a valid assertion.
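For context, a minimal sketch of how this could be asserted properly, assuming AssertJ and JUnit 5 as in the surrounding test suite (the class and method names here are made up):

import static org.assertj.core.api.Assertions.assertThat;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import org.junit.jupiter.api.Test;

class ReaderIndexAssertionExample {

    @Test
    void readerIndexAssertion() {
        ByteBuf buffer = Unpooled.buffer();

        // assertThat(buffer.readerIndex() == 0) only wraps a boolean and verifies
        // nothing unless .isTrue() is chained; asserting on the value directly
        // performs a real check and gives a useful failure message.
        assertThat(buffer.readerIndex()).isEqualTo(0);

        buffer.release();
    }
}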
 */
private void discardReadBytesIfNecessary(ByteBuf buffer) {
    int bufferUsageRatio = clientOptions.getBufferUsageRatio();
    if ((float) buffer.writerIndex() / buffer.capacity() >= (float) bufferUsageRatio / (bufferUsageRatio + 1)) {
We should rather check readerIndex(). The written bytes don't express how much of the buffer was already consumed.
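A minimal sketch of that suggestion, assuming the same bufferUsageRatio semantics; this is illustrative only and not the code that was eventually merged.

import io.netty.buffer.ByteBuf;

final class ReaderIndexBasedDiscard {

    // Discard once the already-consumed portion (readerIndex) crosses
    // ratio / (ratio + 1) of the capacity. readerIndex() tracks consumed bytes,
    // whereas writerIndex() only tracks how many bytes were received.
    static void discardReadBytesIfNecessary(ByteBuf buffer, int bufferUsageRatio) {
        float usage = (float) buffer.readerIndex() / buffer.capacity();
        float threshold = (float) bufferUsageRatio / (bufferUsageRatio + 1);
        if (usage >= threshold) {
            buffer.discardReadBytes();
        }
    }
}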
I found two issues during the merge of this PR. I'll fix these during the merge.
That's merged and polished now.
Add a bufferUsageRatio option to ClientOptions, used to optimize memory usage. bufferUsageRatio is an integer that must be greater than 0. When buffer usage reaches bufferUsageRatio / (bufferUsageRatio + 1) of the buffer capacity, the handler tries to discard the read bytes to free buffer memory for further writing. The default sets bufferUsageRatio to 3, meaning that when buffer usage exceeds 75%, the handler tries to discard read bytes after decoding one whole reply or after reaching the end of the buffer with a partial reply. A short usage sketch follows below.
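A short usage sketch, assuming ClientOptions.Builder exposes a bufferUsageRatio(int) setter matching the getBufferUsageRatio() accessor shown above (the builder method name is an assumption here):

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

public class BufferUsageRatioExample {

    public static void main(String[] args) {

        // With a ratio of 3, read bytes are discarded once usage exceeds 3/4 (75%)
        // of the buffer capacity; a larger ratio trades memory for fewer discards.
        ClientOptions options = ClientOptions.builder()
                .bufferUsageRatio(3)
                .build();

        RedisClient client = RedisClient.create("redis://localhost:6379");
        client.setOptions(options);

        // ... use the client ...

        client.shutdown();
    }
}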