Feature Request: Memory Limiter Processor opt-in configuration to drop data instead of refusing it #11726

blumamir opened this issue Nov 21, 2024 · 0 comments

Comments

@blumamir
Copy link
Member

blumamir commented Nov 21, 2024

TL;DR: when collectors are under pressure, I want that pressure to build up in the collectors, and I want data to be dropped if it cannot be consumed due to memory limits on the first-layer collector that serves applications. This protects the instrumented runtime itself from accumulating pressure, which is problematic. I want to add an opt-in option to the memory limiter processor configuration to drop data instead of refusing it.

Is your feature request related to a problem? Please describe.

I am maintaining the Odigos project, which deploys collectors in Kubernetes environments to set up a telemetry pipeline for collecting, processing, and exporting data to various destinations.

Odigos uses a two-layer collector design:

  1. DaemonSets (Node-Level Collectors): Handle telemetry locally on each node.
  2. Cluster Collectors: Auto-scaled Deployments for centralized processing.

We utilize node-level collectors to ensure local data export and offload concerns like batching, retries, buffering, and cluster-wide networking from users' applications. However, the pipeline can experience pressure under specific conditions:

  • Downstream Backpressure: If a downstream component refuses data, queues grow, leading to increased memory and CPU usage.
  • Data Bursts: Sudden traffic spikes may overwhelm node collectors before cluster collectors scale.
  • Bugs or Configuration Issues: Errors or specific data patterns (e.g., large spans) can cause inefficiencies in handling the load.

Our objective is to buffer and retry within the collectors during transient failures or bursts, preventing backpressure from impacting users' applications. However, if memory pressure builds up, we want to avoid returning retryable errors to applications, which could inadvertently increase their resource usage.

Describe the solution you'd like

To address this, I propose enhancing the Memory Limiter Processor with a new configuration option:

  • New Option: Introduce a boolean flag to control whether the processor should drop data instead of returning retryable errors during memory pressure.
  • Default Behavior: Maintain the current behavior (returning retryable errors).
  • Opt-In Behavior: When enabled, the processor would drop data under memory pressure rather than propagating errors back to applications.
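To make the proposal concrete, here is a sketch of what the configuration could look like. The flag name `drop_on_limit` is purely illustrative and not part of the current memory limiter configuration; the existing fields shown around it are the processor's documented settings.

```yaml
processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 400
    spike_limit_mib: 100
    # Hypothetical opt-in flag (name illustrative only): when true,
    # drop data under memory pressure and report success to the
    # producer instead of returning a retryable error.
    drop_on_limit: true
```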

This change involves adding the new configuration option and updating the processor's logic here and for other signals, enabling it to either refuse data or drop it based on the setting.
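The decision point in the processor's consume path could be sketched as follows. This is a simplified stand-in, not the actual memory limiter implementation: the type, field, and error names here are hypothetical, and the real processor would make this check per signal (traces, metrics, logs).

```go
package main

import (
	"errors"
	"fmt"
)

// errDataRefused mimics the retryable error the memory limiter
// currently returns when memory usage is above the soft limit.
var errDataRefused = errors.New("data refused due to high memory usage")

// memoryLimiter is a simplified stand-in for the processor. The
// dropOnLimit field is the hypothetical opt-in flag proposed here.
type memoryLimiter struct {
	mustRefuse  bool // set when memory usage exceeds the soft limit
	dropOnLimit bool // proposed opt-in: drop instead of refusing
}

// consume sketches the decision point: under memory pressure, either
// silently drop the batch (reporting success to the caller) or
// propagate the retryable error, depending on the new flag.
func (ml *memoryLimiter) consume(batch string) error {
	if ml.mustRefuse {
		if ml.dropOnLimit {
			// Drop: report success so the producer does not retry
			// and pressure does not build up in the application.
			return nil
		}
		return errDataRefused
	}
	// Normal path: forward to the next consumer (omitted here).
	return nil
}

func main() {
	refusing := &memoryLimiter{mustRefuse: true, dropOnLimit: false}
	dropping := &memoryLimiter{mustRefuse: true, dropOnLimit: true}
	fmt.Println(refusing.consume("batch-1")) // data refused due to high memory usage
	fmt.Println(dropping.consume("batch-1")) // <nil>
}
```

The key property is that with the flag enabled, the producer (ultimately the instrumented application) always sees success, so retry queues never grow on the application side.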

Describe alternatives you've considered

I also wondered whether it makes sense to position the memory limiter after a batch processor, so that when memory is too high the batch is dropped, while the consumer always receives a response indicating that the data was pushed to the batch successfully. The downside is that memory pressure may still build up in the batch processor itself, which can eat into the safety reserves while the memory pressure is active.
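The alternative above would amount to a pipeline ordering like the following. Note this reverses the commonly recommended ordering, in which the memory limiter runs first in the processor chain; the receiver and exporter names are placeholders.

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      # Alternative considered: batch first, then memory_limiter. The
      # producer sees success as soon as data is accepted into the
      # batch, and over-limit batches are dropped downstream. The
      # usual recommendation is the reverse order.
      processors: [batch, memory_limiter]
      exporters: [otlphttp]
```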

Additional context

If there is support for this feature, I am willing to contribute by creating a PR.
