make locking more fine-grained on client close #14027
Conversation
I left a comment about a potential problem. Blocking this until you can check it 👍
@sayden fixed!
Nice! Awesome :)
Force-pushed from 6b6620d to f3b612b
if err != nil {
    return nil, errors.Wrap(err, "error loading pipeline")
}
pm.registerPipeline(pipeline, hashstring)
This pattern looks like a race that could create zombie pipelines. At the least, the if block should be protected, and there should be another check after taking the lock (similar to the double-checked locking pattern).
Does docker synchronize API calls on logging plugins? In that case we might be safe here.
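A minimal sketch of the double-checked locking pattern suggested here, assuming a hypothetical pipelineManager that guards a map of pipelines with a sync.RWMutex (the type and method names are illustrative, not the actual plugin code):

```go
package pipelinemgr

import "sync"

// Pipeline stands in for a libbeat pipeline; the real type lives elsewhere.
type Pipeline struct{}

// loadPipeline is a placeholder for whatever builds a pipeline from config.
func loadPipeline(hashstring string) (*Pipeline, error) {
	return &Pipeline{}, nil
}

type pipelineManager struct {
	mu        sync.RWMutex
	pipelines map[string]*Pipeline
}

// getOrCreate returns an existing pipeline for hashstring, or creates and
// registers one. The write lock is taken only on the slow path, and the map
// is re-checked after acquiring it so two concurrent callers cannot both
// create a pipeline for the same hash.
func (pm *pipelineManager) getOrCreate(hashstring string) (*Pipeline, error) {
	// Fast path: read lock only.
	pm.mu.RLock()
	p, ok := pm.pipelines[hashstring]
	pm.mu.RUnlock()
	if ok {
		return p, nil
	}

	// Slow path: take the write lock and check again (double-checked locking).
	pm.mu.Lock()
	defer pm.mu.Unlock()
	if existing, ok := pm.pipelines[hashstring]; ok {
		return existing, nil
	}
	p, err := loadPipeline(hashstring)
	if err != nil {
		return nil, err
	}
	pm.pipelines[hashstring] = p
	return p, nil
}
```

If docker does serialize these API calls, the second check is redundant but cheap; if it does not, it prevents the zombie pipeline described above.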
What happened to the NOTICE file? Given the change itself, I wonder why we need to update it in this PR.
* make locking more fine-grained on client close
This came out of a discussion with @urso: client and pipeline close operations in libbeat can block depending on WaitClose and other settings, so other client start operations could be blocked while the pipelines drained. This change makes the locking more fine-grained so we don't hold a mutex while queues drain.
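As an illustration of the idea (not the actual libbeat code), a close path like the one below holds the pipeline's mutex only while detaching the client and then honors WaitClose with the lock released, so other clients can connect or close while this one drains; names such as pipelineState and closeClient are hypothetical:

```go
package pipelinesketch

import (
	"sync"
	"time"
)

type client struct {
	id        int
	pending   sync.WaitGroup // tracks in-flight events for this client
	waitClose time.Duration  // how long Close may wait for events to publish
}

type pipelineState struct {
	mu      sync.Mutex
	clients map[int]*client
}

// closeClient keeps the critical section small: it only unregisters the
// client under the lock, then waits for the client's in-flight events
// (up to waitClose) after the lock is released.
func (p *pipelineState) closeClient(c *client) {
	p.mu.Lock()
	delete(p.clients, c.id)
	p.mu.Unlock()

	// Drain outside the lock so a slow WaitClose cannot block other
	// client start/close operations on the same pipeline.
	done := make(chan struct{})
	go func() {
		c.pending.Wait()
		close(done)
	}()
	select {
	case <-done:
	case <-time.After(c.waitClose):
	}
}
```

The trade-off is that the clients map can change while a close is still draining, so anything that later iterates over the clients has to tolerate clients that are mid-shutdown.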