Potential memory leak in v1.8.7 debug #4211
Comments
We experienced the same issue when upgrading from 1.5.2 to 1.8.8. One pod would consistently use up to 3GB of memory and then crash. Upping 'Flush' to 8 in the service config helped, but pods are still using 3x more memory than they did in 1.5.2.
(Config attached in the original comment: four [INPUT] sections, one [FILTER], and one [OUTPUT]; the contents were not preserved.)
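For reference, a minimal sketch of the kind of classic-mode config being discussed; the actual config from this comment was not preserved, so the plugin names, paths, tags, and values below are assumptions rather than the reporter's settings:

```
[SERVICE]
    # The commenter reports that raising Flush to 8 reduced memory growth
    Flush             8
    Log_Level         info

[INPUT]
    # Hypothetical tail input; the original comment had four [INPUT] sections
    Name              tail
    Path              /var/log/containers/*.log
    Parser            docker
    Tag               kube.*
    # Cap how much data this input may buffer in memory before pausing
    Mem_Buf_Limit     1MB

[FILTER]
    # Hypothetical filter; the original comment had one [FILTER] section
    Name              kubernetes
    Match             kube.*

[OUTPUT]
    # Hypothetical output; the original comment had one [OUTPUT] section
    Name              stdout
    Match             *
```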
Same here on
Any update on this? It's happening in
@lmuhlha Have you found a workaround for this?
If you have 2.6GB of data held in memory and then aim to convert it to JSON, you will exceed 3GB for sure; your mem_buf_limit values are too high.
@edsiper our mem_buf_limits are 500mb and the OP's are 1mb. If this was just a configuration thing, it would be happening in both versions. When we rolled back to 1.5.2, memory use dropped right back to about 4mb per pod vs the 20mb-3gb that the 1.8.8 pods used. In 1.8.8, one pod out of three would consistently run up to 3gb within hours while the others would slowly rise and hover around 20mb.
@ggallagher0 can you try reproducing the problem with the systemd input disabled? That would help isolate which plugin is triggering the problem.
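In other words, drop or comment out the systemd [INPUT] block and re-run to see whether memory still grows. A hedged sketch of that isolation step; the options shown are assumptions about the reporter's config, not their actual file:

```
# Comment out the suspect input to see if the leak disappears without it
# [INPUT]
#     Name            systemd
#     Tag             host.*
#     Read_From_Tail  On

# Keep the remaining inputs/filters/outputs unchanged, e.g.:
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Tag               kube.*
    Mem_Buf_Limit     1MB
```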
I have this same issue and I only use the
Any update?
#4192 may be a related issue.
Same issue with
I wonder which case is the easiest one to reproduce locally. @lmuhlha's seems to be good output-wise because it's using the http plugin, but it's a bit convoluted configuration-wise; @ggallagher0's is good because it uses simpler inputs and the output plugin is forward, which means it can be set up locally without requiring any API keys. Have you tried removing those outputs and adding a simple tcp endpoint to see if the leak is still there, @NeckBeardPrince? I'm trying to come up with some ideas on what these cases have in common and what simplifications could be made to prove them. The one thing 2 out of 3 have in common is the Kubernetes filter plugin, and all of them use parsers.
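To make that concrete, one way to take the real output out of the equation is to point Fluent Bit at a plain local forward listener (for example another Fluent Bit or Fluentd instance on the same host), so no vendor credentials are needed. A sketch under those assumptions; the host, port, and match tag are placeholders:

```
[OUTPUT]
    # Replace the real destination (http, datadog, etc.) with a local forward
    # receiver while hunting the leak; 127.0.0.1:24224 is assumed to be a
    # locally running fluent-bit/fluentd listening via in_forward.
    Name     forward
    Match    *
    Host     127.0.0.1
    Port     24224
```

If memory still climbs with a trivial output like this, the leak is more likely in the inputs, parsers, or the kubernetes filter than in the output plugin.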
Just an update from my end: I've been trying to get the k8s filter to work with my setup, but on
Re: "I wonder which case is the easiest one to reproduce locally, @lmuhlha's seems to be good output-wise because it's using the http plugin but it's a bit convoluted configuration-wise,"
So I just tried this again with a simplified config and decreased the
Nope, I actually have the issue in
Just tried
After updating to 1.8.12, I don't see the memory leak.
Maybe these two are the same issue: #5147
Even with 1.8.12, I am facing the problem when turning K8S-Logging.Exclude On in the kubernetes filter plugin. Memory remains constant when I turn this option Off.
Same issue here. Tested with versions 1.8.11 and 1.8.12 and with K8S-Logging.Exclude Off, but the memory keeps leaking.
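For context, K8S-Logging.Exclude is an option on the kubernetes filter that honors per-pod exclude annotations; the two reports above compare memory behavior with it On versus Off. A minimal sketch of the toggle (the Match tag and the other option values are assumptions):

```
[FILTER]
    Name                kubernetes
    Match               kube.*
    Merge_Log           On
    K8S-Logging.Parser  On
    # Flipping this between On and Off is what the comments above compare
    K8S-Logging.Exclude On
```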
This issue is stale because it has been open 90 days with no activity. Remove the stale label or comment, or this will be closed in 5 days. Maintainers can add the
This issue was closed because it has been stalled for 5 days with no activity. |
Bug Report
Describe the bug
Memory usage of fluent/fluent-bit:1.8.7-debug@sha256:024748e4aa934d5b53a713341608b7ba801d41a170f9870fdf67f4032a20146f grows continuously until the container is OOM-killed.
To Reproduce
Deploy fluent/fluent-bit:1.8.7-debug@sha256:024748e4aa934d5b53a713341608b7ba801d41a170f9870fdf67f4032a20146f and wait 10-15 minutes. The container will OOM.
Expected behavior
Deploying fluent/fluent-bit:1.8.7-debug@sha256:024748e4aa934d5b53a713341608b7ba801d41a170f9870fdf67f4032a20146f with a specified memory limit should work: memory usage should not constantly increase or OOM the container.
Screenshots
Your Environment
Additional context