Possible memory leak #1110
By the looks of the screenshots, one thing that caught my eye is this line: Does it look the same if you set that to, say, 5 seconds? Does it flatten out earlier then? One possible issue could be that futures are not cleared until the timeout expires, even if they completed successfully. Another possibility is that the ConcurrentMap keeps its already-allocated size even when entries are removed. We will have to look deeper into all of this.
I've seen the same behaviour. From what I could gather, it is the second option (the ConcurrentMap keeps its already-allocated size and does not shrink).
Hi, sorry for the late response, I've been away. I'll give your recommendation a try, though it seems Irweck thinks this might have something to do with ConcurrentMap. I'll give it a shot either way and report back.
It still happens after reducing the RequestTimeoutTime to 5 seconds; memory usage keeps going up the same as before.
@rogeralsing have you had the time to check if the memory increase is indeed from ConcurrentMap? |
Hi,
We are using clustering with consul and there seems to be a memory leak with gossip. I started new instances and left them idle for a short while (10 mins) and saw the following in parca.
It keeps accumulating entries in ConcurrentMap and never releases them. I've left it running for up to an hour with the same result: memory just keeps increasing. We do not send anything over gossip, so this is purely internal to clustering, and we do not subscribe to anything on the cluster gossip.
It does seem to eventually flatten out, but only after creating 50K+ objects. I'm not sure what it's allocating here, I have not looked into it, but that concurrent map keeps growing for some time and eats up a decent chunk of memory. I did have a quick look in a debugger, but all I could see were 32 entries, each containing 0 items and an RW mutex, so I could not figure out where these in-use allocations are going.
We are initializing the cluster like so, basically defaults:
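The original configuration snippet did not survive in this copy of the issue. For context, a default-ish protoactor-go cluster setup with the Consul provider typically looks roughly like the sketch below; the cluster name, address, and exact import paths/signatures are assumptions and vary across protoactor-go versions, so treat this as hypothetical rather than the reporter's actual code.

```go
package main

import (
	"github.com/asynkron/protoactor-go/actor"
	"github.com/asynkron/protoactor-go/cluster"
	"github.com/asynkron/protoactor-go/cluster/clusterproviders/consul"
	"github.com/asynkron/protoactor-go/cluster/identitylookup/disthash"
	"github.com/asynkron/protoactor-go/remote"
)

func main() {
	system := actor.NewActorSystem()

	// Consul cluster provider with default options (assumed; the
	// reporter's snippet is not preserved in this scrape).
	provider, err := consul.New()
	if err != nil {
		panic(err)
	}

	lookup := disthash.New()
	remoteConfig := remote.Configure("127.0.0.1", 0) // hypothetical bind address

	clusterConfig := cluster.Configure("my-cluster", provider, lookup, remoteConfig)
	c := cluster.New(system, clusterConfig)
	c.StartMember()
	defer c.Shutdown(true)

	select {} // keep the member running
}
```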