Potential memory issues when slow HEC responses #255

Open
rquinio1A opened this issue May 30, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@rquinio1A
Member

In another project using splunk-library-javalogging, it has been reported that if the HEC endpoint is slow to respond because it's in bad shape, this can lead to an OutOfMemoryError on the client, because log event objects are not garbage collected on the client.

- In sync mode, slow logging may slow down some application threads, but there is no maximum number of "in-flight" log event batches.
- In async mode, using `quarkus.log.handler.splunk.async` and `quarkus.log.handler.splunk.async.overflow=discard` could mitigate the issue, since Quarkus would start dropping logs once the queue limit has been reached. However, dequeuing is probably not bounded by anything today, so the same problem could occur (see the config sketch below).
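
For reference, a minimal configuration sketch of that mitigation. The two overflow-related properties are the ones named above; the queue-length property and its value are assumptions for illustration:

```properties
# Wrap the Splunk handler in an async handler (property named in this issue)
quarkus.log.handler.splunk.async=true
# Drop new events instead of blocking once the async queue is full
quarkus.log.handler.splunk.async.overflow=discard
# Assumed companion property bounding the async queue size (illustrative value)
quarkus.log.handler.splunk.async.queue-length=512
```

Even with this in place, everything that gets dequeued can still pile up in un-acknowledged batches, which is the gap described below.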

If the number of "in-flight" batches is over a certain limit, we could drop log events / stop dequeuing, rather than adding them to a new batch (a rough sketch follows the references below).
This would require counters of in-flight vs. completed log events, cf #57
cf splunk/splunk-library-javalogging#265
cf splunk/splunk-library-javalogging#98
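
A minimal sketch of such a bound, assuming a hypothetical `sendBatch` callback that returns a `CompletableFuture`; nothing here maps to the current splunk-library-javalogging API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

/**
 * Hypothetical guard capping the number of batches awaiting an HEC response.
 * Once the cap is reached, new events are dropped instead of being buffered,
 * so a slow HEC endpoint cannot make the client accumulate batches until OOM.
 */
class InFlightBatchLimiter {
    private final Semaphore inFlight;                      // permits = max in-flight batches
    private final AtomicLong dropped = new AtomicLong();   // counters as suggested in #57
    private final AtomicLong completed = new AtomicLong();

    InFlightBatchLimiter(int maxInFlightBatches) {
        this.inFlight = new Semaphore(maxInFlightBatches);
    }

    /** Sends a batch unless too many are already in flight, in which case it is dropped. */
    void trySend(List<String> batch, Function<List<String>, CompletableFuture<Void>> sendBatch) {
        if (!inFlight.tryAcquire()) {
            dropped.addAndGet(batch.size());               // drop rather than buffer a new batch
            return;
        }
        sendBatch.apply(batch).whenComplete((ok, err) -> {
            inFlight.release();                            // HEC answered or failed: free the slot
            completed.addAndGet(batch.size());
        });
    }

    long droppedEvents()   { return dropped.get(); }
    long completedEvents() { return completed.get(); }
}
```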
