This change marks the beginning of the libbeat event publisher pipeline refactoring.

- Central to the publisher pipeline is the broker:
  - the broker implementation can be configured when constructing the pipeline
  - common broker implementation tests live in the `brokertest` package
  - broker features:
    - Fully in control of all published events. In contrast to the old publisher pipeline with many batches in flight, the broker now configures/controls the total number of events stored in the publisher pipeline. Only after ACKs from the outputs does new space become available.
    - the broker returns ACKs to the publisher in the correct order
    - the broker batches up multiple ACKs
    - a producer can only send one event at a time to the broker (push)
    - a consumer can only receive batches of events from the broker (pull)
    - a producer can cancel (remove) active events not yet pulled by a consumer
- broker/output related interfaces are defined in the `publisher` package
- pipeline/client interfaces for use by beats are currently defined in the `publisher/beat` package
- the event structure has been changed to be more compatible with Logstash (see `beat.Event`): Beats can send metadata to libbeat outputs (e.g. `pipeline`) and Logstash by using the `Event.Meta` field. Event fields are stored in `Event.Fields`. Event fields are normalized (for use with processors) before being serialized.
- The old publisher Publish API is moved to `libbeat/publisher/bc/publisher` for now:
  - moved into a new sub-package to fight off circular imports
  - the package implements the old pipeline API on top of the new pipeline
  - Filters/Processors are still executed before pushing events to the new pipeline
- New API:
  - beats client requirements are configured via `beat.ClientConfig`:
    - register async ACK callbacks (currently callbacks will not be triggered after `Client.Close`)
    - configurable sending guarantees (must match ACK support)
    - "wait on close", for beats clients to wait for pending events to be ACKed (only if ACK is configured)
  - the pipeline also supports "wait on close", waiting for pending events (independent of ACK configuration). This can be used by any beat to wait on shutdown until published events have actually been sent.

Event Structure:
----------------

The event structure has been changed a little:

```
type Event struct {
	Timestamp time.Time
	Meta      common.MapStr
	Fields    common.MapStr
}
```

- The timestamp is always required.
- Meta contains additional metadata (hints) a beat can forward to the outputs, for example the `pipeline` or `index` settings for the Elasticsearch output.
- If the output is not Elasticsearch, a `@metadata` field is always written to the JSON document. This way Logstash can take advantage of all `@metadata`, even if the event has been sent via Kafka or Redis.

The new output plugin factory is defined as:

```
type Factory func(beat common.BeatInfo, cfg *common.Config) (Group, error)
```

The package `libbeat/output/mode` is being removed and all its functionality is moved into a single implementation in the publisher pipeline, supporting sync/async clients with failover and load-balancing. In the future, dynamic output discovery might be added as well. This change requires `output.Group` to return some common settings for an active output, to configure the pipeline:

```
// Group configures and combines multiple clients into a load-balanced group
// of clients being managed by the publisher pipeline.
type Group struct {
	Clients   []Client
	BatchSize int
	Retry     int
}
```
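As a rough illustration of how the output factory and `Group` fit together, here is a minimal sketch of a factory building one client per configured host and handing the clients to the pipeline as a load-balanced group. It is written as if it lived inside the output's own package; the `hosts`/`bulk_max_size`/`max_retries` config names and the `newExampleClient` constructor are hypothetical and only show the shape of the API:

```
// Hypothetical output factory sketch, matching the Factory signature above.
// It builds one client per configured host and returns them as a Group.
func makeExampleOutput(beat common.BeatInfo, cfg *common.Config) (Group, error) {
	config := struct {
		Hosts      []string `config:"hosts"`
		BulkSize   int      `config:"bulk_max_size"`
		MaxRetries int      `config:"max_retries"`
	}{BulkSize: 2048, MaxRetries: 3}
	if err := cfg.Unpack(&config); err != nil {
		return Group{}, err
	}

	clients := make([]Client, 0, len(config.Hosts))
	for _, host := range config.Hosts {
		clients = append(clients, newExampleClient(host)) // hypothetical client constructor
	}

	return Group{
		Clients:   clients,
		BatchSize: config.BulkSize,
		Retry:     config.MaxRetries,
	}, nil
}
```

The pipeline then owns batching, retries and load-balancing across the returned clients, so a factory only translates its configuration into a set of clients plus the two pipeline settings.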
Moving functionality from the outputs into the publisher pipeline restricts beats to having only one output type configured. All configured client instances will participate in load-balancing, driven by the publisher pipeline. This removes some intermediate workers used for forwarding batches. Further changes to output groups are planned for the future.

Outputs always operate in batches, implementing only the `Publish` method:

```
// Publish sends events to the client's sink. A client must synchronously or
// asynchronously ACK the given batch once all events have been processed.
// Using Retry/Cancelled, a client can return a batch of unprocessed events to
// the publisher pipeline. The publisher pipeline (if configured by the output
// factory) will take care of retrying/dropping events.
Publish(publisher.Batch) error
```

with:

```
// Batch is used to pass a batch of events to the outputs and to asynchronously
// listen for signals from these outputs. After a batch is processed (completed
// or errored), one of the signal methods must be called.
type Batch interface {
	Events() []Event

	// signals
	ACK()
	Drop()
	Retry()
	RetryEvents(events []Event)
	Cancelled()
	CancelledEvents(events []Event)
}
```

The Batch interface combines events and signaling into one common interface. The main difference between sync and async clients is when `batch.ACK` is called. Batches/events can be processed out of order; the publisher pipeline, which does the batching and load-balancing, guarantees that ACKs are returned to the beat in order and enforces an upper bound on in-flight events. Once the publisher pipeline is 'full', it will block, waiting for ACKs from the outputs. The logic for dropping events on retry and for guaranteed sending is moved to the publisher pipeline as well; outputs are only concerned with publishing and signaling ACK or Retry.
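To make the `Batch` contract concrete, the following is a minimal sketch of a synchronous client's `Publish` implementation; the `client` type and its `write` method are hypothetical and error handling is simplified:

```
// Hypothetical synchronous client: writes each event to some sink and
// signals the batch exactly once when done.
func (c *client) Publish(batch publisher.Batch) error {
	events := batch.Events()

	for i, event := range events {
		if err := c.write(event); err != nil { // hypothetical sink write
			// Hand the not-yet-processed events back to the pipeline; the
			// pipeline (not the output) decides about retrying or dropping.
			batch.RetryEvents(events[i:])
			return err
		}
	}

	// All events were written successfully: signal completion synchronously.
	batch.ACK()
	return nil
}
```

An asynchronous client would return from `Publish` right away and call `batch.ACK` (or one of the retry/cancel signals) later from a completion callback; since the pipeline re-orders ACKs for the beat, outputs are free to complete batches out of order.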
Committed by urso on Jun 23, 2017 · 1 parent: b224a3c · commit: 7f87b55
173 changed files with 6,324 additions and 6,793 deletions.