Introduce DecodeBufferPolicy to reduce memory usage #1314
Comments
Thanks for the elaborate report. Looking at the code, each worker creates its own connection. Another aspect to note is that each connection maintains an aggregation buffer for partial responses, so having 2000 connections instantiates 2000 aggregation buffers. Starting the reproducer as-is immediately causes a too many open files failure. Running the reproducer with a single connection and client instance renders a constant 16 MB buffer for me.
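For illustration, a minimal sketch of sharing one connection across workers instead of opening one per worker (standard Lettuce API; the Redis URI, key, and worker count are placeholder values):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class SharedConnectionExample {

    public static void main(String[] args) {
        // One client and one connection for the whole application.
        // StatefulRedisConnection is thread-safe and can be shared,
        // so only a single aggregation buffer is allocated.
        RedisClient client = RedisClient.create("redis://localhost"); // placeholder URI
        StatefulRedisConnection<String, String> connection = client.connect();

        Runnable worker = () -> {
            RedisAsyncCommands<String, String> commands = connection.async();
            commands.get("some-key"); // placeholder command
        };

        for (int i = 0; i < 2000; i++) { // 2000 workers sharing one connection
            new Thread(worker).start();
        }
        // connection.close() and client.shutdown() on application shutdown (omitted here)
    }
}
```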
@mp911de Thanks for taking a look at the ticket as well as at the sample program!
The sample program is intended to run out of direct memory; that is its goal. Its purpose is to show that on Lettuce 5.2.x and higher the same code under the same conditions runs out of direct memory roughly 4 times faster than on pre-5.2 versions. Our production code (where we first ran into the issue) is, of course, different: it takes about 3-4 days for an instance on Lettuce 5.2.x with a 4G heap (and an equal max direct memory limit) to run out of memory, despite the high throughput of Redis operations, and it only creates a single connection.
Please let me know if you'd like me to provide more info or further elaborate on the sample app. One more point: what would you say about the workaround code?
I have the feeling that, no matter what we do, we will always find someone who wants to use a different buffer cleanup strategy. I was considering a callback interface that gets called with the buffer after each partial and full read, and deprecating bufferUsageRatio.
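For illustration, a rough sketch of what such a callback interface could look like; the names below are illustrative assumptions based on the description above, not necessarily the final Lettuce API:

```java
import io.netty.buffer.ByteBuf;

// Hypothetical shape of the callback interface discussed above: it receives the
// aggregation buffer after each partial and each full read and decides how to
// reclaim memory (names are illustrative).
interface DecodeBufferPolicy {

    void afterPartialDecode(ByteBuf buffer);

    void afterCommandDecoded(ByteBuf buffer);
}

// A policy that mimics the pre-5.2 behavior by discarding read bytes eagerly.
class EagerDiscardPolicy implements DecodeBufferPolicy {

    @Override
    public void afterPartialDecode(ByteBuf buffer) {
        buffer.discardReadBytes(); // reclaim space even after a partial read
    }

    @Override
    public void afterCommandDecoded(ByteBuf buffer) {
        buffer.discardReadBytes(); // reclaim space after each fully decoded command
    }
}
```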
(cherry picked from commit b078fed)
Thanks for your pull request. That's merged for Lettuce 6 for now, as we typically do not introduce new API in a bugfix release. Can you check with Lettuce 6 whether the memory behavior can be customized as intended?
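For reference, a minimal sketch of customizing the aggregation buffer behavior on Lettuce 6; the decodeBufferPolicy(...) builder method and the DecodeBufferPolicies.always() factory reflect my understanding of the Lettuce 6 API and should be verified against the actual release:

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.protocol.DecodeBufferPolicies;

public class DecodeBufferPolicyConfig {

    public static void main(String[] args) {
        // Discard read bytes after every decode, approximating the pre-5.2 behavior.
        ClientOptions options = ClientOptions.builder()
                .decodeBufferPolicy(DecodeBufferPolicies.always())
                .build();

        RedisClient client = RedisClient.create("redis://localhost"); // placeholder URI
        client.setOptions(options);
    }
}
```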
Bug Report
Firstly, here is the link to the private repository with a sample app that helps reproduce the leak or, more precisely, clearly shows that Lettuce's behavior from the direct memory usage perspective changed between 5.1.8 and 5.2.x: https://github.com/Shaphan/lettuce-direct-memory-leak-reproducer
Access to the repository is already granted to @mp911de
I can also share the repository with somebody else, if necessary, but would like to avoid making the repository public for the time being.
After upgrading to io.lettuce:lettuce-core:5.2.2.RELEASE we started seeing increased memory consumption. We tracked this down to direct memory and preliminarily identified commit 4f026e6 as the one introducing the issue in the 5.2.x branch (it was addressing issue #906, PR #916).

Current Behavior
Faster direct memory growth in 5.2.2.RELEASE as compared to 5.1.8.RELEASE. Higher probability to run into OutOfDirectMemoryErrors / OutOfMemoryErrors under the same conditions.

Expected behavior/code
Direct memory usage at the same level as in versions prior to 5.2.x.
Environment
Possible Solution
In our code we worked around this with the help of reflection, setting bufferUsageRatio in io.lettuce.core.ClientOptions.Builder. This effectively makes CommandHandler behave in the pre-5.2 way. An example:
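A minimal sketch of such a reflection-based workaround, for illustration only: the field name bufferUsageRatio and the class io.lettuce.core.ClientOptions.Builder come from the report above, but the concrete value written here, and the assumption that a very low ratio forces eager discarding of read bytes, are mine rather than the reporter's original code:

```java
import java.lang.reflect.Field;

import io.lettuce.core.ClientOptions;

public class BufferUsageRatioWorkaround {

    public static ClientOptions clientOptionsWithPre52Behavior() throws ReflectiveOperationException {
        ClientOptions.Builder builder = ClientOptions.builder();

        // Bypass the builder's validation and set the private bufferUsageRatio field directly.
        // The assumption is that a very low ratio makes CommandHandler discard read bytes on
        // (almost) every decode, approximating the pre-5.2 behavior.
        Field bufferUsageRatio = ClientOptions.Builder.class.getDeclaredField("bufferUsageRatio");
        bufferUsageRatio.setAccessible(true);
        bufferUsageRatio.set(builder, 0); // value outside the normally allowed range

        return builder.build();
    }
}
```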
Additional context
Please find screenshots showing direct memory usage from the aforementioned sample app runs on different versions of Lettuce (all the other parameters were the same):