[libbeat] Document / clean up parts of the publisher queue interface #16858
Conversation
Pinging @elastic/integrations-services (Team:Services)
libbeat/publisher/queue/queue.go (outdated)
    Publish(event publisher.Event) bool

    // TryPublish adds an event to the queue, returning immediately if the queue
    // is full, and returns true on success.
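For orientation, here is a sketch of the Producer interface these lines belong to, reconstructed from the diff fragments quoted in this thread (the Cancel signature is inferred from the later discussion and may not match the source exactly):

    // Producer, as in libbeat/publisher/queue/queue.go (sketch).
    type Producer interface {
    	// Publish adds an event to the queue, blocking until the event can be
    	// accepted, and returns true on success.
    	Publish(event publisher.Event) bool

    	// TryPublish adds an event to the queue, returning immediately if the
    	// queue is full, and returns true on success.
    	TryPublish(event publisher.Event) bool

    	// Cancel closes this producer; if the queue is configured to drop its
    	// events on Cancel, the number of dropped events is returned.
    	Cancel() int
    }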
More correct might be: "if the event can not be processed by the queue".
E.g. the memqueue broker also does some cleanup; during this phase it is possible that we can't accept events, although there is some space left.
Well, if possible I'd like something more precise than "can not be processed"; I'm trying to articulate the specific contract for this interface. Can we be more specific about the possible reasons for failure? Whether to call Publish or TryPublish is decided by whether the client config for the pipeline sets PublishMode to DropIfFull. If that setting is actually dropping events for reasons other than a full queue, I'd like to make that more explicit in the description.
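For reference, the client-side configuration being described might look roughly like this (a sketch assuming the beat.Pipeline / beat.ClientConfig API and v7 module paths; not verbatim from the codebase):

    import "github.com/elastic/beats/v7/libbeat/beat"

    // connectNonBlocking is a sketch: a client configured with DropIfFull asks
    // the pipeline to drop events rather than block, which is what causes the
    // pipeline to call TryPublish instead of Publish internally.
    func connectNonBlocking(pipeline beat.Pipeline) (beat.Client, error) {
    	return pipeline.ConnectWith(beat.ClientConfig{
    		PublishMode: beat.DropIfFull,
    	})
    }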
It depends on the queue implementation. The important thing is to not block the caller at all if the event cannot be processed right now. DropIfFull is maybe not a good name to convey this.
One use case is packetbeat. In packetbeat we must not block if the event cannot be consumed right now. Packetbeat would rather drop the event and continue processing the next network packet than drop multiple network packets. This is required because packetbeat accumulates quite a lot of internal state, so packet loss can be much more severe.
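To make the trade-off concrete, a hypothetical capture loop (illustrative only, not packetbeat's actual code; import paths assumed) might use TryPublish like this:

    import (
    	"log"

    	"github.com/elastic/beats/v7/libbeat/publisher"
    	"github.com/elastic/beats/v7/libbeat/publisher/queue"
    )

    // captureLoop must never stall on the queue: dropping a single event is
    // preferable to blocking, which would back up packet capture and lose
    // many packets at once.
    func captureLoop(producer queue.Producer, packets <-chan publisher.Event) {
    	var dropped int
    	for ev := range packets {
    		if !producer.TryPublish(ev) {
    			dropped++ // queue busy or full; keep reading packets anyway
    		}
    	}
    	log.Printf("capture finished; %d events dropped", dropped)
    }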
In the abstract it depends on the queue implementation, but more pragmatically we only have one non-beta implementation of queue.Queue in the entire codebase, so it seems desirable to be as specific as we can about why TryPublish might fail. (The behavior of the whole publisher pipeline depends on the contract for Publish and TryPublish, which is currently entirely implicit, making it hard to safely reason about new queue implementations.)
If the current design doesn't allow for something more specific, how about we explicitly call out that behavior dependency: "TryPublish adds an event to the queue if doing so will not block the caller, otherwise it immediately returns. The reasons a publish attempt might block are defined by the specific queue implementation and its configuration. Returns true if the event was successfully added, false otherwise."
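Applied to the interface, that proposal would read something like:

    // TryPublish adds an event to the queue if doing so will not block the
    // caller, otherwise it immediately returns. The reasons a publish attempt
    // might block are defined by the specific queue implementation and its
    // configuration. Returns true if the event was successfully added,
    // false otherwise.
    TryPublish(event publisher.Event) bool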
SGTM
    // drop its events on Cancel, the number of dropped events is returned.
    // Note: A queue may still send ACK signals even after Cancel is called on
    // the originating Producer. The pipeline client must accept and
    // discard these ACKs.
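A caller honoring this contract might look like the following sketch (names hypothetical):

    import (
    	"log"

    	"github.com/elastic/beats/v7/libbeat/publisher/queue"
    )

    // shutdown cancels the producer and logs any events the queue dropped.
    // Per the note above, ACKs may still arrive asynchronously after Cancel
    // returns, and must simply be discarded.
    func shutdown(producer queue.Producer) {
    	if dropped := producer.Cancel(); dropped > 0 {
    		log.Printf("%d pending events dropped on cancel", dropped)
    	}
    }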
If I recall correctly, the client-based cancellers are closed as well. Communication is async, which means that "during" cancel we still get ACKs, but once close has finished no more ACKs should be received. The pipeline also has a "global" ACK queue, which will still receive the ACKs.
Could you clarify? I'm not sure what "client based cancellers" are in this context. This comment was based on the code in {memqueue, spool}/produce.go:Close(), which only closes the done channel of the producer's openState. Unless you're talking about the producerCancel{Request, Response} handling when dropOnCancel is set?
I see. Yeah, you are right here. The behavior about ACKs is in the beat.Client interface, not the Producer interface.
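A sketch of the pattern being agreed on here, at the beat.Client level (names hypothetical; this is not the actual pipeline code):

    import "sync"

    // ackHandler tolerates ACKs that arrive after the client has closed,
    // since queue-to-client ACK delivery is asynchronous.
    type ackHandler struct {
    	mu     sync.Mutex
    	closed bool
    }

    func (h *ackHandler) onACK(n int) {
    	h.mu.Lock()
    	defer h.mu.Unlock()
    	if h.closed {
    		return // late ACK after Close/Cancel: accept and discard
    	}
    	// ... normal ACK bookkeeping for n events ...
    }

    func (h *ackHandler) close() {
    	h.mu.Lock()
    	h.closed = true
    	h.mu.Unlock()
    }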
I just added a bit more.
changes applied
…lastic#16858) (cherry picked from commit c756ba8)
What does this PR do?
This PR does a few documentation / cleanup tasks:
- Document the queue.Producer and queue.Consumer interfaces
- Clean up openState.isOpen
- Rename ackProducer.cancel to ackProducer.dropOnCancel (see the sketch after this list)
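The rename's intent, sketched (field shape assumed, not verbatim from the PR):

    type ackProducer struct {
    	// dropOnCancel (formerly `cancel`) is a configuration flag: when true,
    	// Cancel() drops the producer's buffered events. The old name read as
    	// an action rather than a setting.
    	dropOnCancel bool
    }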
This PR introduces no user-visible changes.
Checklist
- I have made corresponding changes to the documentation
- I have made corresponding changes to the default configuration files
- I have added tests that prove my fix is effective or that my feature works