Infinite ingestion retry when batches are too large and using GuaranteedSend #14350
Comments
Related issue: #3688
Prior experience with this in Logstash: logstash-plugins/logstash-output-elasticsearch#497
Linked to #6749
This issue probably still exists, but seems rare, is fixable with proper configuration, and was never allocated time in a release cycle -- unassigning so it can be re-triaged.
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
Ping @mukeshelastic @nimarezainia as you were both interested in this issue. It will be fixed in 8.1, thanks to @rdner.
Thanks @jlind23
Hi @simitt, could you please help us validate this ticket on the points below:
Thanks
@rdner, given that you implemented the fix, can you please provide guidance for the testers?
@dikshachauhan-qasource I described the testing process in my PR #29368. Let me know if it's missing something.
Elasticsearch returns status code `413` when a bulk request exceeds the size limit. A user can either increase `http.max_content_length` in ES or decrease `bulk_max_size` in the Beat to overcome such failures. However, when this error happens and the Beat is using a `GuaranteedSend` publisher method, the current implementation can lead to an infinite retry that keeps sending the same oversized request to ES. This might result in not being able to ingest any more events.

It might be worth exploring special handling for the batch when the request size exceeds the limit, e.g. splitting it in half.
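For reference, the two configuration workarounds above correspond to the following settings. This is only a sketch: the host and the numeric values are placeholders, not recommendations, and defaults may differ between versions.

```yaml
# Beat side (e.g. filebeat.yml): send smaller bulk requests.
output.elasticsearch:
  hosts: ["localhost:9200"]
  bulk_max_size: 512            # placeholder value; lower means smaller bulk requests

# Elasticsearch side (elasticsearch.yml): accept larger request bodies.
http.max_content_length: 200mb  # placeholder value; the default is 100mb
```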
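And a minimal sketch of the split-in-half idea, assuming a placeholder `sendBulk` function that reports HTTP 413 as `errTooLarge`. None of these names come from libbeat, and the actual fix in #29368 may be implemented differently; this only illustrates how splitting bounds the retries instead of resending the same oversized batch forever.

```go
// Sketch: on a 413 response, split the batch in half and retry each half,
// bottoming out at a single event instead of retrying the same request forever.
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// Event stands in for a publisher event; it is an assumption for illustration only.
type Event struct {
	Payload string
}

var errTooLarge = errors.New("bulk request entity too large")

// sendBulk is a placeholder for the real bulk sender. It is assumed to return
// errTooLarge when Elasticsearch would answer with HTTP 413.
func sendBulk(events []Event) error {
	total := 0
	for _, e := range events {
		total += len(e.Payload)
	}
	if total > 1024 { // pretend the server-side size limit is 1 KiB
		return fmt.Errorf("%w: status %d", errTooLarge, http.StatusRequestEntityTooLarge)
	}
	return nil
}

// publishWithSplit retries an oversized batch by splitting it in half rather
// than resending the identical request indefinitely.
func publishWithSplit(events []Event) error {
	if len(events) == 0 {
		return nil
	}
	err := sendBulk(events)
	if !errors.Is(err, errTooLarge) {
		return err // success, or an unrelated error handled elsewhere
	}
	if len(events) == 1 {
		// A single event already exceeds the limit; dropping (or dead-lettering)
		// it is what breaks the infinite retry loop.
		return fmt.Errorf("dropping event that exceeds the size limit: %w", err)
	}
	mid := len(events) / 2
	if err := publishWithSplit(events[:mid]); err != nil {
		return err
	}
	return publishWithSplit(events[mid:])
}

func main() {
	batch := make([]Event, 64)
	for i := range batch {
		batch[i] = Event{Payload: fmt.Sprintf("event-%03d with some padding payload", i)}
	}
	fmt.Println("publish result:", publishWithSplit(batch))
}
```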