Make DelegatingAsyncDisruptorAppender more resilient to exceptions thrown by child appenders #456
Comments
Sounds good to me. One thing to consider: if an exception occurs while appending/flushing one event, there is a high probability that the same exception will occur again while appending/flushing the next event on the next call.
Indeed... The easiest option is probably to log an error status the first time flush throws an exception and stay silent until it succeeds again. We could do more, like throttling the error status, logging how long flushing has been failing, etc. However, I'm not sure it is worth the effort: if flush fails, flushing subsequent events is likely to fail as well. Looking at … On the other hand, I noticed …
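The "log once, stay silent until it succeeds" idea could look like the minimal sketch below. The `Flushable` interface and `FlushErrorThrottle` class are hypothetical stand-ins for illustration, not the real logstash-logback-encoder code:

```java
// Sketch: report a flush error only on the first failure, then stay
// silent until a flush succeeds again (hypothetical stand-in types).
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class FlushErrorThrottle {
    private boolean flushFailed = false;        // true while flushes keep failing
    final List<String> statusMessages = new ArrayList<>();

    interface Flushable { void flush() throws IOException; }

    void tryFlush(Flushable target) {
        try {
            target.flush();
            if (flushFailed) {                  // note recovery once
                statusMessages.add("flush recovered");
                flushFailed = false;
            }
        } catch (IOException e) {
            if (!flushFailed) {                 // log only the first failure
                statusMessages.add("flush failed: " + e.getMessage());
                flushFailed = true;
            }
        }
    }

    public static void main(String[] args) {
        FlushErrorThrottle t = new FlushErrorThrottle();
        Flushable broken = () -> { throw new IOException("disk full"); };
        Flushable ok = () -> {};
        t.tryFlush(broken);   // first failure: logged
        t.tryFlush(broken);   // repeated failure: silent
        t.tryFlush(ok);       // recovery: logged
        System.out.println(t.statusMessages);
    }
}
```

Running `main` prints `[flush failed: disk full, flush recovered]`: the repeated failure produces no second status message.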
Hmm. I'm thinking the … Therefore, I think the …
Agreed. Let's keep it simple for now.
Both … This behaviour is implemented in Logstash base classes from which most appenders are likely to inherit. Because of that, there is indeed no need for the …

I still believe we should implement the same logic as for `doAppend()`, i.e. flush an appender only when it is started. This would be consistent with how `doAppend()` behaves, and I don't see it as "making (unreasonable) assumptions about how the appender behaves". The fact that some appenders like the `FileAppender` are not able to recover after stopping themselves because of an exception is a separate issue, and should not affect "our" decision to check `isStarted()` before flushing.
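The `isStarted()` guard described above could be sketched as follows. The `Appender` interface here is a hypothetical minimal stand-in, not the real Logback interface:

```java
// Sketch: flush only appenders that are currently started,
// mirroring how doAppend() skips stopped appenders.
import java.util.ArrayList;
import java.util.List;

public class FlushStartedOnly {
    interface Appender {
        boolean isStarted();
        void flush();
        String name();
    }

    static void flushAll(List<Appender> appenders, List<String> flushed) {
        for (Appender a : appenders) {
            if (a.isStarted()) {       // skip stopped appenders
                a.flush();
                flushed.add(a.name());
            }
        }
    }

    public static void main(String[] args) {
        List<String> flushed = new ArrayList<>();
        Appender started = new Appender() {
            public boolean isStarted() { return true; }
            public void flush() {}
            public String name() { return "file"; }
        };
        Appender stopped = new Appender() {
            public boolean isStarted() { return false; }
            // a stopped FileAppender would fail if flushed anyway
            public void flush() { throw new IllegalStateException("stopped"); }
            public String name() { return "console"; }
        };
        flushAll(List.of(started, stopped), flushed);
        System.out.println(flushed);   // only the started appender was flushed
    }
}
```

`main` prints `[file]`: the stopped appender is never flushed, so its `IllegalStateException` is never triggered.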
Please check PR #457. It addresses multiple issues at once because they all relate to the same code. Let me know what you think.
Closed by #457 |
The `EventHandler` used by the `DelegatingAsyncDisruptorAppender` does the following:

1. Invoke `AppenderAttachable#appendLoopOnAppenders(event)`. This method loops through the attached appenders and calls `Appender#doAppend(event)` on each. If an appender throws an exception, the process is aborted and subsequent appenders are not called. Is this a problem? I mean, wouldn't it be safer to try/catch exceptions and give every appender a chance to process the event? Looking at Logback's `AsyncAppenderBase`, I noticed it behaves the same and doesn't care about exceptions... Could the rationale be that appenders are not supposed to throw exceptions?

2. Flush the output stream of attached `OutputStreamAppender`s. Same remark here: if an exception is thrown when flushing the first output stream, the remaining appenders won't be flushed at all. This case may be slightly different from `doAppend()` in that appenders like the `OutputStreamAppender` won't throw an exception when they have IO issues while processing the event; they log an error status instead and proceed.

In my opinion, the `DelegatingAsyncDisruptorAppender` should do the same: wrap `OutputStream#flush()` calls within a try/catch and log an error status when an `IOException` is thrown.

What do you think?
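The per-appender try/catch from point (1) could be sketched like this. The `Appender` interface and `appendLoop` helper are hypothetical stand-ins, not the real `AppenderAttachable` implementation:

```java
// Sketch: catch each appender's exception so one failing appender
// does not prevent the remaining appenders from seeing the event.
import java.util.ArrayList;
import java.util.List;

public class ResilientAppendLoop {
    interface Appender { void doAppend(String event); }

    static List<String> appendLoop(List<Appender> appenders, String event) {
        List<String> errors = new ArrayList<>();
        for (Appender a : appenders) {
            try {
                a.doAppend(event);
            } catch (RuntimeException e) {
                // record an error status and keep going instead of aborting
                errors.add("appender failed: " + e.getMessage());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        List<String> delivered = new ArrayList<>();
        Appender failing = event -> { throw new RuntimeException("boom"); };
        Appender working = delivered::add;
        List<String> errors = appendLoop(List.of(failing, working), "hello");
        System.out.println(delivered);  // the second appender still got the event
        System.out.println(errors);
    }
}
```

`main` prints `[hello]` and then `[appender failed: boom]`: the working appender receives the event even though an earlier appender threw.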
If you agree with point (2), I can include the modifications in the PR I'm about to submit for #454.