Memory leak in 7.0.1 when logging from a thread coming out of a "ForkJoinPool.common()" #722
Comments
Hmm... doesn't look good :-( A few questions to help us investigate:
As you understand, the worst-case scenario is logging from a new thread every time, combined with a large heap size causing very few GC cycles. In that case you are likely to see heap usage grow until the first GC event. Is this your case? PS: I have not been able to reproduce the scenario (yet) - hence all my questions...
As far as I can tell from the screenshots, there are about 41,800 items in the `threadValues` collection.
@donbeave Any news about this issue?
@brenuart Sorry for the late response. We didn't run experiments during December and the beginning of January, as we blocked the deployment of new features from master. But before that, we tried a few things and found that it's not related to the version of `logstash-logback-encoder`.
@donbeave Do you confirm the issue is outside of logstash-logback-encoder?
Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
@brenuart We might have been able to reproduce this issue here: https://github.com/tommyulfsparre/repro-logstash-encoder. It is only reproducible when running Java 17 or 18 (not 15 or 11), and only when using the ForkJoinPool common pool, where `threadValues` grows unbounded.
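For reference, a condensed sketch of that kind of repro (this is not the code from the linked repository, and the class name is made up) simply logs from tasks submitted to the common ForkJoinPool:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical repro sketch: each task submitted to the common pool logs once.
// With logstash-logback-encoder configured and Java 17+, the encoder ends up
// allocating fresh per-thread resources for (almost) every task, so heap usage
// grows with the number of submitted tasks, as described in this issue.
public class ForkJoinPoolLeakRepro {

    private static final Logger LOG = LoggerFactory.getLogger(ForkJoinPoolLeakRepro.class);

    public static void main(String[] args) {
        ForkJoinPool pool = ForkJoinPool.commonPool();
        for (int i = 0; i < 1_000_000; i++) {
            final int n = i;
            pool.execute(() -> LOG.info("message {}", n));
        }
        // Wait for the submitted tasks to finish before exiting.
        pool.awaitQuiescence(5, TimeUnit.MINUTES);
    }
}
```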
To add to @tommyulfsparre's comment above: it is not reproducible with Java 16 either. And as you can see, the repro uses version 7.1.1.
Hi @tommyulfsparre, @klaraward,

Thanks for the little test case - I could reproduce the issue! I could trace it down to a change introduced in Java 17 in how the ForkJoinPool behaves, and more specifically the common ForkJoinPool. In short, since Java 17 the common ForkJoinPool is configured with innocuous workers. This kind of worker clears the thread locals before executing the submitted task, probably to make the thread appear as if it was a brand new thread instead of a thread reused from the pool. See this change for more information. Note that the problem disappears if you use a ForkJoinPool you create yourself instead of the common pool.

This of course breaks assumptions made by our `ThreadLocalHolder`. I'll have a look at how we can fix this and avoid the memory leak. However, ThreadLocal can't be used anymore to keep track of resources allocated to the thread. Unfortunately we rely on this mechanism to reuse resources like buffers and JsonGenerator instead of creating new instances for every ILoggingEvent... I'm not yet convinced I'll be able to restore this functionality when the thread comes out of a common ForkJoinPool... Anyway, stay tuned...
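The behaviour described above can be observed with plain JDK classes, independently of the encoder. A minimal sketch (assuming nothing beyond `java.base`; the class name is invented) counts how often a ThreadLocal's initial value has to be recreated for tasks running on the common pool:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CommonPoolThreadLocalDemo {

    private static final AtomicInteger INITIALIZATIONS = new AtomicInteger();

    // Stand-in for a per-thread resource such as a buffer or a JsonGenerator.
    private static final ThreadLocal<Object> RESOURCE = ThreadLocal.withInitial(() -> {
        INITIALIZATIONS.incrementAndGet();
        return new Object();
    });

    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            ForkJoinPool.commonPool().execute(RESOURCE::get);
        }
        ForkJoinPool.commonPool().awaitQuiescence(10, TimeUnit.SECONDS);
        // On Java 17+ the common pool's innocuous workers wipe their thread locals
        // between tasks, so this count is typically far larger than the number of
        // worker threads; on Java 16 and earlier it stays close to the pool size.
        System.out.println("initialValue() was called " + INITIALIZATIONS.get() + " times");
    }
}
```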
I've been thinking a bit about how to make ThreadLocal in Java 17 behave the same way as in older Java versions, and I ended up with a subclass of ThreadLocal that uses a fallback map keyed on thread id and keeps track of thread deaths to clean up the fallback map (both to prevent assigning old values to a new thread, and to avoid leaking data). If you think this seems useful, feel free to use it as-is or modify it: spotify/sparkey-java#55
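A rough sketch in the spirit of that approach (this is not the actual code from spotify/sparkey-java#55; it keys the fallback map on the Thread object rather than on the thread id, and the class name is made up) could look like this:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: a ThreadLocal that survives the "innocuous worker" wipe by
// remembering each thread's value in a fallback map, and that purges entries of
// dead threads so the map itself does not leak.
public class FallbackThreadLocal<T> extends ThreadLocal<T> {

    private final Supplier<? extends T> supplier;
    private final Map<Thread, T> fallback = new ConcurrentHashMap<>();

    public FallbackThreadLocal(Supplier<? extends T> supplier) {
        this.supplier = supplier;
    }

    @Override
    protected T initialValue() {
        // Called whenever get() finds no entry for the current thread, which on
        // Java 17+ also happens after the common pool cleared the thread locals.
        purgeDeadThreads();
        return fallback.computeIfAbsent(Thread.currentThread(), t -> supplier.get());
    }

    @Override
    public void remove() {
        fallback.remove(Thread.currentThread());
        super.remove();
    }

    private void purgeDeadThreads() {
        for (Iterator<Thread> it = fallback.keySet().iterator(); it.hasNext(); ) {
            if (!it.next().isAlive()) {
                it.remove();
            }
        }
    }
}
```

Keying on the Thread object sidesteps thread-id reuse, at the cost of holding a reference to a dead thread and its value until the next purge.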
From the conversation on the OpenJDK issue, it seems that this behaviour is expected and the only change coming is better documentation. The team at Spotify will try to see if we can come up with a solution for this problem, but if you already have something @brenuart, please let us know.
Hi @caesar-ralf,
In newer versions of Java, specifically 16 and over, the `ThreadLocal` can be cleared before the Thread is killed. This breaks the assumption of `ThreadLocalHolder`, which expected the `initializeThread` method to be executed only once per `Thread`. This commit changes the `threadValues` to become a map that holds which thread the value is associated with, so we can avoid recreating new values when one already exists. Fixes logfellow#722
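A simplified sketch of the idea in that commit message (this is not the actual `ThreadLocalHolder` from logstash-logback-encoder; the class and method names here are invented): keep the per-thread values in a map keyed by the owning thread, so a thread whose ThreadLocal entry was wiped gets its existing value back instead of a new one being created and accumulated.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical holder: the number of values is bounded by the number of live
// threads instead of growing with every re-initialization.
public class ThreadBoundValues<T> {

    private final Supplier<? extends T> factory;
    private final ThreadLocal<T> threadLocal = new ThreadLocal<>();

    // Map from owning thread to its value, consulted when the ThreadLocal was wiped.
    private final Map<Thread, T> threadValues = new ConcurrentHashMap<>();

    public ThreadBoundValues(Supplier<? extends T> factory) {
        this.factory = factory;
    }

    public T acquire() {
        T value = threadLocal.get();
        if (value == null) {
            // The ThreadLocal may have been cleared by an innocuous worker:
            // look the value up by thread before creating a new one.
            value = threadValues.computeIfAbsent(Thread.currentThread(), t -> factory.get());
            threadLocal.set(value);
        }
        return value;
    }

    // Releases the holder's references to all per-thread values, e.g. on shutdown.
    public void clear() {
        threadValues.clear();
    }
}
```

A real implementation would also need to drop entries for threads that have died (as in the sketch further above), otherwise the map itself can grow over the lifetime of the holder.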
Describe the bug
Some of our applications have been updated from version 6.6 to the latest version, 7.0.1. Later we found out that one of the applications has a memory leak. We took a heap dump and analyzed it with Eclipse Memory Analyzer (https://www.eclipse.org/mat/), and found that this memory leak was potentially introduced in the latest version of logstash-logback-encoder.

This is our logstash logback configuration:
And here are some memory leak reports:
Please let me know if I need to provide any additional information.