TL;DR: under load, I want pressure to build up inside the collectors, and data to be dropped if it cannot be consumed due to memory limits on the first-layer collector that serves applications. This protects the instrumented runtime itself from building up pressure, which is problematic. I want to add an opt-in option to the memory limiter processor configuration to drop data instead of refusing it.
Is your feature request related to a problem? Please describe.
I am maintaining the Odigos project, which deploys collectors in Kubernetes environments to set up a telemetry pipeline for collecting, processing, and exporting data to various destinations.
Odigos uses a two-layer collector design:
DaemonSets (Node-Level Collectors): Handle telemetry locally on each node.
Cluster Collectors: Auto-scaled Deployments for centralized processing.
We utilize node-level collectors to ensure local data export and offload concerns like batching, retries, buffering, and cluster-wide networking from users' applications. However, the pipeline can experience pressure under specific conditions:
Downstream Backpressure: If a downstream component refuses data, queues grow, leading to increased memory and CPU usage.
Data Bursts: Sudden traffic spikes may overwhelm node collectors before cluster collectors scale.
Bugs or Configuration Issues: Errors or specific data patterns (e.g., large spans) can cause inefficiencies in handling the load.
Our objective is to buffer and retry within the collectors during transient failures or bursts, preventing backpressure from impacting users' applications. However, if memory pressure builds up, we want to avoid returning retryable errors to applications, which could inadvertently increase their resource usage.
Describe the solution you'd like
To address this, I propose enhancing the Memory Limiter Processor with a new configuration option:
New Option: Introduce a boolean flag to control whether the processor should drop data instead of returning retryable errors during memory pressure.
Default Behavior: Maintain the current behavior (returning retryable errors).
Opt-In Behavior: When enabled, the processor would drop data under memory pressure rather than propagating errors back to applications.
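As a sketch of how this could look in a collector config (the flag name `drop_on_limit` is illustrative, not an existing option):

```yaml
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 400
    spike_limit_mib: 100
    # Proposed opt-in flag (name is a placeholder): under memory
    # pressure, drop data and report success instead of returning a
    # retryable error to the caller. Defaults to false, preserving
    # today's refuse-and-retry behavior.
    drop_on_limit: true
```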
This change involves adding the new configuration option and updating the processor's per-signal consume logic, enabling it to either refuse data or drop it based on the setting.
Describe alternatives you've considered
Wondering if it makes sense to position the memory limiter after a batch processor: if memory is too high, the batch is dropped, but the caller always receives a success response indicating the data was pushed into the batch. The downside is that memory pressure may still build up in the batch processor itself, which can eat into the safety reserves while the pressure is active.
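For illustration, this alternative would invert the usually recommended processor order, roughly:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      # Alternative considered: batch before memory_limiter, so the
      # receiver acks once data enters the batch, and the limiter drops
      # whole batches under pressure instead of refusing upstream.
      processors: [batch, memory_limiter]
      exporters: [otlp]
```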
Additional context
If there is support for this feature, I am willing to contribute by creating a PR.