emitter_for_rewrite_tag uses lots of memory after fluent-bit restarts #4506
Comments
Could you check #4049?
Thank you for your reply! I noticed the memory keeps growing as well. It seems to be the same issue. I will try v1.8.9.
I understand that the 1st thread is for the records emitted by the rewrite_tag instance, and that they are added to the data chunks of the internal input plugin. Please correct me if I'm wrong. But what is the 2nd thread doing? I'm confused.
Hi, I upgraded fluent-bit to version 1.8.10 and the memory problem is gone!
The fluent-bit filters and outputs are as follows.
I want to collect error logs into /tmp/test1, and collect logs into both /tmp/test2 and /tmp/test3.
Can you raise a new issue just to keep it clear? I'll close the original issue as fixed in 1.8.10.
@wangyuan0916 Please open a new issue for v1.8.10+.
The 1st thread adds records to a buffer, which is the internal in_emitter plugin's buffer. The point is that the 1st thread doesn't have a limit, so a flood of records causes a memory leak.
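For context, these are the two rewrite_tag options that are meant to bound that internal emitter; a minimal sketch (the tag pattern and the 10M value are illustrative, taken from the config below, and per this thread the limit was apparently not enforced on this path before 1.8.10):

```
[FILTER]
    Name                   rewrite_tag
    Match                  kube.*
    Rule                   $keep ^(test1)$ test1 false
    # cap the memory the internal in_emitter instance may use for buffered records
    Emitter_Mem_Buf_Limit  10M
    # 'memory' keeps emitter chunks in RAM; 'filesystem' allows them to be paged to disk
    Emitter_Storage.type   memory
```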
Bug Report
Describe the bug
In my environment, fluent-bit is deployed as a DaemonSet in a Kubernetes cluster to collect the logs of all containers. I'm using the modify filter plugin to add a 'keep' key based on the log level in the log content, e.g. {keep: 1}, {keep: 2}, {keep: 3}, and then using rewrite_tag to forward these logs to 3 different destinations. It works well until the fluent-bit pod restarts. After a restart, fluent-bit's memory usage is very high, and I found that the internally created input plugin called 'emitter_for_rewrite_tag' uses most of it. How can I limit this part of the memory? The data chunk size is 309M before the data is serialized into JSON format, and in practice it costs much more than that ('memory_working_set_bytes=1262563328i') and remains at least 800M afterwards. I have limited the emitter buffer size, but it has no effect. Usually less than 100M would be reasonable.
"emitter_for_rewrite_tag.7": {
"status": {
"overlimit": true,
"mem_size": "309.3M",
"mem_limit": "9.5M"
},
"chunks": {
"total": 1,
"up": 1,
"down": 0,
"busy": 1,
"busy_size": "309.3M"
}
}
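For anyone reproducing this, chunk/storage status like the block above is served by fluent-bit's built-in HTTP monitoring server when it is enabled in the [SERVICE] section; a minimal sketch (listen address and port are assumptions, 2020 being the conventional default):

```
[SERVICE]
    # enable the built-in monitoring HTTP server
    HTTP_Server      On
    HTTP_Listen      0.0.0.0
    HTTP_Port        2020
    # include storage-layer metrics (chunks up/down, busy_size, limits) in the API output
    storage.metrics  On
```

The per-input chunk status shown above can then be queried from the /api/v1/storage endpoint on that port.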
To Reproduce
The fluent-bit configuration is:
[INPUT]
Name tail
Tag kube.*
Path /var/log/containers/*.log
DB /var/log/flb_kube.db
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Refresh_Interval 10
parser cri
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default.svc.cluster.local:443
Buffer_Size 0
Merge_Log On
K8S-Logging.Parser On
[FILTER]
Name modify
match kube.var.log.containers*log-1*
Add keep test1
Condition Key_value_matches message ERROR
[FILTER]
Name modify
match kube.var.log.containers*log-1*
Add keep test2
Condition Key_value_matches message DEBUG
[FILTER]
Name modify
match kube.var.log.containers*log-1*
Add keep test3
Condition Key_value_matches message WARNING
[FILTER]
Name rewrite_tag
Match kube.var.log.containers*log-1*
Rule $keep ^(test1)$ test1 false
Rule $keep ^(test2)$ test2 false
Rule $keep ^(test3)$ test3 false
Emitter_Mem_Buf_Limit 10M
Emitter_Storage.type memory
[OUTPUT]
Name file
Match test1
File /tmp/test1
[OUTPUT]
Name file
Match test2
File /tmp/test2
[OUTPUT]
Name file
Match test3
File /tmp/test3
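A mitigation sometimes applied to this kind of configuration is to back the emitter with filesystem storage so that buffered chunks can be paged to disk instead of held entirely in memory; a hedged sketch, assuming a writable storage path on the node (only the changed sections are shown):

```
[SERVICE]
    # location for filesystem buffering (the path is an assumption)
    storage.path  /var/log/flb-storage/

[FILTER]
    Name                   rewrite_tag
    Match                  kube.var.log.containers*log-1*
    Rule                   $keep ^(test1)$ test1 false
    Emitter_Mem_Buf_Limit  10M
    # page emitter chunks to disk instead of keeping them all in memory
    Emitter_Storage.type   filesystem
```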
There are 100 containers in this cluster. Each container emits 3 logs per second (one each of DEBUG, WARNING, and ERROR), and each record is 50 bytes.
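For scale: ignoring metadata overhead, that is roughly 100 containers × 3 records/s × 50 bytes ≈ 15 KB/s of raw log data (about 0.9 MB per minute), so a single 309.3M busy chunk in the emitter corresponds to several hours' worth of records being held rather than flushed.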