
[Packetbeat] Restrict max buffer size before send to logstash #516

Closed

gerardorochin opened this issue Dec 11, 2015 · 5 comments
@gerardorochin

When running Packetbeat on a MySQL server while Logstash was down, Packetbeat's memory grew to 10.834 GB, because it buffers events while waiting to send them. The server eventually ran out of memory and the mysql process was killed.

Data collected by Topbeat for the packetbeat process:

Time proc.name proc.mem.size
11-12-2015 10:19:04 packetbeat 8.011GB
11-12-2015 10:19:14 packetbeat 8.041GB
11-12-2015 10:19:24 packetbeat 8.041GB
11-12-2015 10:19:34 packetbeat 8.041GB
11-12-2015 10:19:44 packetbeat 8.041GB
11-12-2015 10:19:54 packetbeat 8.159GB
11-12-2015 10:20:04 packetbeat 8.159GB
11-12-2015 10:20:14 packetbeat 8.159GB
11-12-2015 10:20:24 packetbeat 8.159GB
11-12-2015 10:20:34 packetbeat 8.216GB
11-12-2015 10:20:44 packetbeat 8.216GB
11-12-2015 10:20:54 packetbeat 8.216GB
11-12-2015 10:21:04 packetbeat 8.218GB
11-12-2015 10:21:14 packetbeat 8.229GB
11-12-2015 10:21:24 packetbeat 8.33GB
11-12-2015 10:21:34 packetbeat 8.33GB
11-12-2015 10:21:44 packetbeat 8.33GB
11-12-2015 10:21:54 packetbeat 8.33GB
11-12-2015 10:22:04 packetbeat 8.332GB
11-12-2015 10:22:14 packetbeat 8.451GB
11-12-2015 10:22:24 packetbeat 8.451GB
11-12-2015 10:22:34 packetbeat 8.451GB
11-12-2015 10:22:44 packetbeat 8.451GB
11-12-2015 10:22:54 packetbeat 8.755GB
11-12-2015 10:23:04 packetbeat 8.755GB
11-12-2015 10:23:14 packetbeat 8.755GB
11-12-2015 10:23:24 packetbeat 8.755GB
11-12-2015 10:23:34 packetbeat 8.755GB
11-12-2015 10:23:44 packetbeat 9.016GB
11-12-2015 10:23:54 packetbeat 9.016GB
11-12-2015 10:24:04 packetbeat 9.016GB
11-12-2015 10:24:14 packetbeat 9.016GB
11-12-2015 10:24:24 packetbeat 9.016GB
11-12-2015 10:24:34 packetbeat 9.266GB
11-12-2015 10:24:44 packetbeat 9.266GB
11-12-2015 10:24:54 packetbeat 9.266GB
11-12-2015 10:25:04 packetbeat 9.266GB
11-12-2015 10:25:14 packetbeat 9.266GB
11-12-2015 10:25:24 packetbeat 9.266GB
11-12-2015 10:25:34 packetbeat 9.372GB
11-12-2015 10:25:44 packetbeat 9.381GB
11-12-2015 10:25:54 packetbeat 9.381GB
11-12-2015 10:26:04 packetbeat 9.381GB
11-12-2015 10:26:14 packetbeat 9.381GB
11-12-2015 10:26:24 packetbeat 9.381GB
11-12-2015 10:26:34 packetbeat 9.381GB
11-12-2015 10:26:44 packetbeat 9.587GB
11-12-2015 10:26:54 packetbeat 9.587GB
11-12-2015 10:27:04 packetbeat 9.588GB
11-12-2015 10:27:14 packetbeat 9.588GB
11-12-2015 10:27:24 packetbeat 9.588GB
11-12-2015 10:27:34 packetbeat 9.588GB
11-12-2015 10:27:44 packetbeat 9.993GB
11-12-2015 10:27:54 packetbeat 9.993GB
11-12-2015 10:28:04 packetbeat 9.993GB
11-12-2015 10:28:14 packetbeat 9.993GB
11-12-2015 10:28:24 packetbeat 9.993GB
11-12-2015 10:28:34 packetbeat 9.993GB
11-12-2015 10:28:44 packetbeat 9.993GB
11-12-2015 10:28:56 packetbeat 10.313GB
11-12-2015 10:29:04 packetbeat 10.34GB
11-12-2015 10:29:15 packetbeat 10.34GB
11-12-2015 10:29:25 packetbeat 10.34GB
11-12-2015 10:29:34 packetbeat 10.34GB
11-12-2015 10:29:44 packetbeat 10.34GB
11-12-2015 10:29:54 packetbeat 10.34GB
11-12-2015 10:30:04 packetbeat 10.34GB
11-12-2015 10:30:14 packetbeat 10.34GB
11-12-2015 10:30:24 packetbeat 10.379GB
11-12-2015 10:30:34 packetbeat 10.786GB
11-12-2015 10:30:44 packetbeat 10.834GB
11-12-2015 10:30:54 packetbeat 10.834GB
11-12-2015 10:31:04 packetbeat 105.926MB

At this point MySQL was restarted, after memory was exhausted.

Using:
CentOS 6.6, kernel 2.6.32-504.8.1.el6.x86_64
packetbeat version 1.0.0 (amd64)
logstash 1.5.6

Topology:
Packetbeat -> Logstash -> Elasticsearch

Configuration

interfaces:
  device: any

protocols:
  dns:
    include_authorities: true
    include_additionals: true
  mysql:
    ports: [3306]

 logstash:
   hosts: ["localhost:5044"]

I think Packetbeat should have a setting to cap the maximum memory used for buffering.

gerardorochin changed the title from "Restrict max buffer size before send to logstash" to "[Packetbeat] Restrict max buffer size before send to logstash" on Dec 11, 2015

tsg commented Dec 16, 2015

Thanks for reporting, we'll look into this.


urso commented Dec 16, 2015

It's a duplicate of elastic/libbeat#337, mostly affecting Packetbeat and Topbeat.

Pulling the discussion over into this repo; see the original ticket's description of the problem encountered:
The default output queue size is 1000 elements. This is OK-ish for Packetbeat to deal with spikes, but bad for Topbeat. If the output plugins are stalled because Elasticsearch/Logstash is unavailable, memory usage will grow until the internal queues fill up. For Packetbeat and Topbeat the number of events queued in memory is given by N*(3+B), where N = queue size (all internal queues have the same size) and B = bulk_max_size. For Topbeat this can easily grow beyond 100 MB.

For additional details see the Discuss thread.
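
As a rough worked example of the formula above (assuming an average encoded event size of about 1 KB, a figure not given in the thread), the Packetbeat 1.0 defaults of N = 1000 and B = 10000 for the Logstash output give:

  N*(3+B) = 1000 * (3 + 10000) = 10,003,000 buffered events
  10,003,000 events * ~1 KB/event ≈ 10 GB

which is consistent with the memory growth shown in the table above.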


urso commented Dec 16, 2015

The advantage of a bigger queue size and buffer size in Packetbeat is that it helps deal with short traffic bursts. The disadvantage is that if ES/LS becomes unavailable, a lot of memory is wasted.

With 'bulk_max_size' being configurable, one can reduce the bulk size B in order to reduce memory usage. For the elasticsearch output the default value is 50, and for the logstash output the default value is 10000 (derived from logstash-forwarder).

  • requirement: the queue must be able to deal with short bursts, but drop events in case of LS/ES becoming unavailable.
  • proposal: some active queue management (AQM) e.g. using CoDel + configurable queue size in addition to bulk_max_size.
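
For reference, a minimal sketch of the defaults mentioned above, written out explicitly in Beats 1.x YAML (the option name comes from this thread; the nesting under output: is assumed, since the reporter's paste shows the logstash section without its parent, and the two outputs are listed side by side only to compare the defaults):

  output:
    elasticsearch:
      bulk_max_size: 50      # default batch size for the elasticsearch output
    logstash:
      bulk_max_size: 10000   # default batch size for the logstash output in 1.0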


tsg commented Dec 16, 2015

@gerardorochin just to confirm: I had a similar issue on another system, and setting bulk_max_size for the logstash output to 500 helped quite a bit (although memory usage was still elevated).
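
A minimal sketch of that workaround in packetbeat.yml, assuming the standard output.logstash section (the reporter's paste shows logstash: without its output: parent):

  output:
    logstash:
      hosts: ["localhost:5044"]
      bulk_max_size: 500   # shrink B in N*(3+B) to lower the memory bound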

tsg pushed a commit to tsg/beats that referenced this issue Dec 17, 2015
The high value of 10k caused memory issues when Logstash was not
available or slow to process data. This is because the 10k gets
multiplied with the worker queue size (1000). See for example elastic#516.
tsg pushed a commit to tsg/beats that referenced this issue Dec 17, 2015
The high value of 10k caused memory issues when Logstash was not
available or slow to process data. This is because the 10k gets
multiplied with the worker queue size (1000). See for example elastic#516.
@mrkschan

Is it better to work this out together with #575?

Can we keep a binary write-ahead log of events on disk and have the publisher flush that log? That way we could recover from the binary log in case of upstream failure, and the memory footprint could be controlled.
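
Purely as an illustration of that idea (these option names are hypothetical; no such settings existed in Packetbeat 1.x), an on-disk spool might be configured along these lines:

  # hypothetical sketch only -- not a real Packetbeat 1.x option
  spool:
    path: /var/lib/packetbeat/spool   # directory for the write-ahead log segments
    max_size: 512MB                   # cap disk usage instead of growing RAM
    flush_interval: 5s                # how often the publisher drains the log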

urso closed this as completed Jan 7, 2016
leweafan pushed a commit to leweafan/beats that referenced this issue Apr 28, 2023
The high value of 10k caused memory issues when Logstash was not
available or slow to process data. This is because the 10k gets
multiplied with the worker queue size (1000). See for example elastic#516.