OTel features overwhelmed during high load #2565
Comments
There is a lot of ongoing work to make OTel suitable for this kind of demanding use case. This is an overview/parent issue with lots of insights for both sides:
I'm not sure that there is anything we can/should be doing about this in Spin itself apart from handling overflows gracefully. To that end: maybe we should rate-limit/collapse these OTel errors (and add a metric?) to make sure we aren't blowing out logs?
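A minimal sketch of that collapse-and-count idea (the names here, e.g. `CollapsedOtelErrors`, are hypothetical and not anything that exists in Spin today): every error increments a running total that could back a metric, but at most one log line is emitted per interval.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Mutex;
use std::time::{Duration, Instant};

/// Hypothetical helper: collapses repeated OTel export errors so that at most
/// one log line is emitted per `min_interval`, while every error is still
/// counted (the running total could back an errors counter metric).
pub struct CollapsedOtelErrors {
    total: AtomicU64,
    suppressed: AtomicU64,
    last_logged: Mutex<Option<Instant>>,
    min_interval: Duration,
}

impl CollapsedOtelErrors {
    pub fn new(min_interval: Duration) -> Self {
        Self {
            total: AtomicU64::new(0),
            suppressed: AtomicU64::new(0),
            last_logged: Mutex::new(None),
            min_interval,
        }
    }

    /// Records one error. Returns `Some(n)` (errors suppressed since the last
    /// emitted line) when the caller should log now, `None` to stay quiet.
    pub fn record(&self) -> Option<u64> {
        self.total.fetch_add(1, Ordering::Relaxed);
        let mut last = self.last_logged.lock().unwrap();
        let now = Instant::now();
        if matches!(*last, Some(prev) if now.duration_since(prev) < self.min_interval) {
            self.suppressed.fetch_add(1, Ordering::Relaxed);
            None
        } else {
            *last = Some(now);
            Some(self.suppressed.swap(0, Ordering::Relaxed))
        }
    }

    /// Total errors observed; suitable for exporting as a counter.
    pub fn total(&self) -> u64 {
        self.total.load(Ordering::Relaxed)
    }
}
```

Wherever the export errors surface, the handler would call `record()` and only emit its DEBUG line (including the suppressed count) when it returns `Some`.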
Agreed. I like the idea of tracking errors with a metric. OTel errors are already only emitted at DEBUG level. @lann are you saying you think they should still be rate limited so that if you…
It might be nice, especially for this specific…
Quick learning: lots of the parameters on the batch processor are configurable via env vars. Cranking up a bunch of the parameters prevents us from dropping messages.
I don't think we will want to actually hardcode this into Spin, but certain end users might want to tune it this aggressively. It's something worth documenting more explicitly.
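For reference, the knobs in question are the spec-defined batch span processor environment variables (`OTEL_BSP_*`). The values below are only illustrative of aggressive tuning, not recommended defaults, and the endpoint is just an example:

```sh
# Spec defaults: queue 2048 spans, batch 512 spans, 5000 ms schedule delay,
# 30000 ms export timeout. A larger queue/batch and a shorter delay reduce
# dropped spans at the cost of memory and exporter traffic.
OTEL_BSP_MAX_QUEUE_SIZE=16384 \
OTEL_BSP_MAX_EXPORT_BATCH_SIZE=1024 \
OTEL_BSP_SCHEDULE_DELAY=1000 \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
spin up
```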
When o11y is enabled in Spin (some variation of OTEL_EXPORTER_OTLP_ENDPOINT is set) and a large amount of load is run against Spin, we start to see that the OTel feature gets overloaded.

Possible fixes to explore: