[Packetbeat] Restrict max buffer size before send to logstash #516
Comments
Thanks for reporting, we'll look into this.
It's a duplicate of elastic/libbeat#337, mostly affecting Packetbeat and Topbeat. Pulling the discussion over into this repo; see the original ticket's description of the problem below. For additional details see the discuss forum thread.
The advantage of a bigger queue size and buffer size in Packetbeat is that it helps deal with short bursts. The disadvantage is that if ES/LS becomes unavailable, a lot of memory is wasted. With 'bulk_max_size' being configurable, one can reduce the bulk size in order to reduce memory usage. For Elasticsearch the default value is 50, and for Logstash the default value is 10000 (derived from logstash-forwarder).
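For reference, lowering the bulk size would look roughly like this in packetbeat.yml; a minimal sketch assuming the beats 1.x Logstash output layout, where the host and the value 1024 are placeholders, not recommendations:

```yaml
output:
  logstash:
    hosts: ["localhost:5044"]
    # Smaller batches mean less memory is held per worker while Logstash
    # is slow or unreachable; 1024 is an arbitrary example value.
    bulk_max_size: 1024
```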
@gerardorochin just to confirm, I had a similar issue on another system, and setting the …
The high value of 10k caused memory issues when Logstash was not available or slow to process data. This is because the 10k is multiplied by the worker queue size (1000). See for example elastic#516.
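Back-of-the-envelope, that multiplication is enough to explain the growth reported below; the ~1 KB/event figure is an assumption, not a measurement:

```
10,000 events/batch × 1,000 queue slots = up to 10,000,000 buffered events
10,000,000 events × ~1 KB/event ≈ 10 GB of memory
```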
Is it better to work this out together with #575? Could we keep a binary write-ahead log of events on disk and have the publisher flush from the log? That way we could recover from the binary log after an upstream failure, and the memory footprint could be controlled.
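A minimal sketch of that idea in Go: events are appended to a length-prefixed binary log on disk and replayed to the publisher after an outage. This illustrates the proposal only, not libbeat's actual implementation; the framing format and function names are invented here:

```go
// Sketch of a binary write-ahead log for events: append on publish,
// replay after a restart or upstream failure. Illustration only.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// Append writes one event as a 4-byte big-endian length followed by the payload.
func Append(f *os.File, event []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(event)))
	if _, err := f.Write(hdr[:]); err != nil {
		return err
	}
	_, err := f.Write(event)
	return err
}

// Replay reads events back from the start of the log and hands each one
// to handle, e.g. to re-publish them after recovery.
func Replay(f *os.File, handle func([]byte) error) error {
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return err
	}
	var hdr [4]byte
	for {
		if _, err := io.ReadFull(f, hdr[:]); err == io.EOF {
			return nil // clean end of log
		} else if err != nil {
			return err
		}
		event := make([]byte, binary.BigEndian.Uint32(hdr[:]))
		if _, err := io.ReadFull(f, event); err != nil {
			return err
		}
		if err := handle(event); err != nil {
			return err
		}
	}
}

func main() {
	// Error handling elided in this sketch.
	f, _ := os.CreateTemp("", "events.wal")
	defer os.Remove(f.Name())
	defer f.Close()

	Append(f, []byte(`{"type":"mysql","took_ms":12}`))
	Append(f, []byte(`{"type":"mysql","took_ms":7}`))

	// On recovery, re-publish everything still in the log.
	Replay(f, func(e []byte) error {
		fmt.Printf("re-publish: %s\n", e)
		return nil
	})
}
```

The memory footprint stays bounded because only one event at a time lives in RAM during replay; everything else waits on disk.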
When running Packetbeat on a MySQL server while Logstash was down, memory usage grew to 10.834 GB: Packetbeat buffers the events it is waiting to send, so memory was exhausted and the mysql process was killed.
Data collected by Topbeat for the packetbeat process:
At this point MySQL was restarted, after memory was exhausted:
Using:
CentOS 6.6, kernel 2.6.32-504.8.1.el6.x86_64
Packetbeat version 1.0.0 (amd64)
Logstash 1.5.6
Topology:
Packetbeat -> Logstash -> Elasticsearch
Configuration
I think Packetbeat should have a setting to cap the maximum memory consumed by buffering.
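For illustration only, such a cap might look like the following; 'max_buffered_events' is a hypothetical name invented here, not an existing Packetbeat setting, and its placement under the 1.x shipper section is also assumed for the sake of the example:

```yaml
shipper:
  # Hypothetical setting: stop buffering (drop or spool) beyond this many events.
  max_buffered_events: 100000
```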