Filebeat too many open files using Docker autodiscover #14389
Comments
Hey @CpuID, thanks for reporting this issue! I think that the problem may be in the use of […]. In any case, […]
Yeah, I can confirm that resources used by […]
@jsoriano awesome find, thx ;) I'll try removing it and confirm I still get sufficient metadata, and if not, try the global processor approach. I assume it's still probably worth ensuring those resources are closed when containers stop, if this is enabled by a user?
Yep, I have started a PR to fix this: #16349. It should already fix the problem in the case you reported, but there are some other places to review where processors can be used.
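As an aside, here is a minimal sketch of the "global processor approach" mentioned above, assuming the processor involved is something like `add_docker_metadata` (the truncated comments do not name it, so this is an assumption). Declared at the top level of filebeat.yml, a single processor instance is shared by all events instead of one instance per autodiscover template.

```yaml
# Sketch of the global-processor workaround.
# Assumption: the leaking processor is add_docker_metadata; the comments above
# do not confirm which processor is actually involved.
processors:
  - add_docker_metadata: ~

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true   # illustrative provider config, no per-template processors
```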
(Reported at https://discuss.elastic.co/t/filebeat-too-many-open-files-using-docker-autodiscover/205596 - no feedback)
Seeing this on a machine that runs short-lived containers every few minutes; it seems to run out of file descriptors because Filebeat hangs onto connections to the Docker daemon socket. File descriptor leak?
It takes a day or two to get to that point, and the open file handle count increases steadily throughout.
It gets to the point where it can't even rotate out its own logfile at /var/log/filebeat/filebeat...
Tested on 6.8.3 + 7.4.1 so far. Config snippet on 6.8.3:
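The original config snippet did not survive in this report. As a stand-in, here is a minimal sketch of the kind of Docker autodiscover configuration that can trigger this class of leak, assuming a processor declared inside the autodiscover template; the condition, image name, and processor are hypothetical, not the reporter's actual settings.

```yaml
# Hypothetical reconstruction -- not the reporter's original snippet.
# A processor declared inside the autodiscover template means a new processor
# instance is created for every container that matches the condition.
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: myapp          # placeholder image name
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              processors:
                - add_docker_metadata: ~             # assumed processor, see comments above

output.elasticsearch:
  hosts: ["localhost:9200"]                          # placeholder output
```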
Steps to reproduce on 6.x + 7.x: https://github.com/CpuID/filebeat-docker-socket-leak
I've gone through the various `close_*` and `clean_*` related config options, but I don't think any of those will solve this one... cc @exekias
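For reference, here is a sketch of the kind of `close_*` / `clean_*` options referred to above. These are real log-input settings, but they govern open handles on harvested log files and registry state rather than Filebeat's connections to the Docker daemon socket, which is consistent with the expectation that they would not help here.

```yaml
# Harvester/registry options shown for reference only -- they control open
# handles on log files, not connections to the Docker daemon socket.
filebeat.inputs:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*.log
    close_inactive: 5m        # close a file handle after 5m without new lines
    close_removed: true       # close the handle once the file is removed
    clean_removed: true       # drop registry state for removed files
```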