Reduce GC pressure in CompositeJsonEncoder and *LogstashTcpSocketAppender #461
Your idea sounds intriguing. Can you submit the POC as a draft PR so I can take a deeper look?
Btw, do we still have to support Logback 1.0.x?
I mean that version is almost 7 years old now and supporting it makes implementing new features somewhat more complicated.
1.0, no.
…f returning a byte array: Introduce a new (internal) StreamingEncoder interface to be implemented by Encoders that support writing directly into the output stream instead of returning their results as a byte array. Update both the AbstractLogstashTcpSocketAppender and the CompositeJsonEncoder to support this new interface. This should hopefully reduce the number of short-lived byte arrays created for each log event. See logfellow#461 for more information.
Closing this issue. Continue discussion on PR #472.
Reduce memory allocations by writing directly into the output stream (#461)
CompositeJsonEncoder implements the Encoder interface and therefore must return the encoded event as a byte array. The implementation makes use of an intermediate ByteArrayOutputStream to collect the various parts produced by the formatter and the prefix/suffix encoders. When done, the result is returned as a byte array.
A new ByteArrayOutputStream is initialised for every log event. It starts with an initial size of about 1 KB (plus prefix/suffix length) by default and grows if the formatter produces a larger output. If we are lucky and the initial size is large enough, this process allocates 2 byte arrays and performs 2 memory copies. If the buffer needs to grow, a new (larger) one is allocated and the content of the previous one is copied into it. We then end up with 3 allocations and 3 copy operations.
This process is repeated for every event and imposes an extra overhead on the garbage collector.
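For illustration, the per-event pattern described above boils down to something like this (a simplified sketch, not the actual CompositeJsonEncoder code; the class, method, and parameter names are made up):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Simplified illustration of the per-event allocation pattern described above.
// This is NOT the actual CompositeJsonEncoder code; names are made up.
class PerEventEncodingSketch {

    static byte[] encode(String eventJson, byte[] prefix, byte[] suffix) throws IOException {
        // Allocation #1: a fresh buffer for every event (~1 KB + prefix/suffix length).
        ByteArrayOutputStream buffer =
                new ByteArrayOutputStream(1024 + prefix.length + suffix.length);

        buffer.write(prefix);
        // If the formatter output exceeds the initial capacity, the buffer allocates a
        // larger backing array and copies the old content into it (extra allocation + copy).
        buffer.write(eventJson.getBytes(StandardCharsets.UTF_8));
        buffer.write(suffix);

        // Final allocation: toByteArray() copies the buffered content one more time.
        return buffer.toByteArray();
    }
}
```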
Most of the time, the caller will write the output of the Encoder into an output stream. In this case, using an intermediate byte array isn't the most efficient design (well, I know, this is how Logback's Encoder interface is designed :-( ).
But maybe we could do better... I was thinking about introducing a new StreamingEncoder interface similar to the sketch below.
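A minimal sketch of what such an interface could look like (the generic parameter and exact signature here are my assumptions, not a final API):

```java
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch of the proposed interface; the exact name, generic
// parameter, and method signature are assumptions, not a final API.
public interface StreamingEncoder<Event> {

    // Write the encoded event directly into the given stream instead of
    // returning the result as a byte[].
    void encode(Event event, OutputStream outputStream) throws IOException;

}
```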
CompositeJsonEncoder can be easily modified to implement this new interface alongside the existing byte[] encode(event) method. Then we can adapt AbstractLogstashTcpSocketAppender around lines L598-L602 to tell the encoder to write directly into the output stream if it implements the new StreamingEncoder interface. This would be highly efficient while preserving support for "legacy" encoders.
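As an illustration of that adaptation, the appender's write path could dispatch along these lines (a rough sketch; the surrounding class, parameter names, and the use of ILoggingEvent are assumptions about the appender's internals, not its actual code):

```java
import java.io.IOException;
import java.io.OutputStream;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.encoder.Encoder;

// Illustrative sketch of how the appender could dispatch on the new interface.
// StreamingEncoder refers to the interface sketched above (assumed same package).
// The class, method, and parameter names here are assumptions, not the actual appender code.
class StreamingDispatchSketch {

    @SuppressWarnings("unchecked")
    static void writeEvent(Encoder<ILoggingEvent> encoder,
                           ILoggingEvent event,
                           OutputStream outputStream) throws IOException {
        if (encoder instanceof StreamingEncoder) {
            // New path: the encoder writes directly into the socket's output stream,
            // so no intermediate byte[] is allocated.
            ((StreamingEncoder<ILoggingEvent>) encoder).encode(event, outputStream);
        } else {
            // Legacy path: encode into a byte[] first, then write it out.
            outputStream.write(encoder.encode(event));
        }
    }
}
```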
I made a first POC with this idea and everything looks OK.
What do you think?
Do you see other areas/classes that could be optimised using a similar technique?