Opentelemetry goes berserker mode #2380
cc @klochowicz / @Restioson
This is interesting. How long do the logs continue, and does it ever stop? The error seems to indicate that batches are piling up in the queue. Maybe this is just a case of adding an exporter timeout or making the queue bigger, but then again maybe not...
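For reference, a minimal sketch of what that could look like, assuming the tracer is installed via the opentelemetry-otlp pipeline. The endpoint, the concrete values, and the exact builder methods are assumptions and may differ between opentelemetry / opentelemetry-otlp versions:

```rust
use std::time::Duration;

use opentelemetry::sdk::trace::BatchConfig;
use opentelemetry_otlp::WithExportConfig;

fn init_tracer() -> Result<opentelemetry::sdk::trace::Tracer, opentelemetry::trace::TraceError> {
    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                // Placeholder endpoint for a local collector/agent.
                .with_endpoint("http://localhost:4317")
                // Give up on a single export instead of letting it hang.
                .with_timeout(Duration::from_secs(5)),
        )
        // Buffer more spans before the batch processor starts dropping them.
        .with_batch_config(
            BatchConfig::default()
                .with_max_queue_size(8_192)
                .with_scheduled_delay(Duration::from_millis(500)),
        )
        .install_batch(opentelemetry::runtime::Tokio)
}
```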
We can change
As for the error circle - wouldn't this be outside of any tracing span? I thought mere usage of
Perhaps, but I don't know if we can rely on knowing from where
I think it will, given that the collector is part of the tracing subscriber.
Indeed, there's currently no other way than overriding the error handler: open-telemetry/opentelemetry-rust#549
I have tested this hypothesis locally; even when the collector channel is closed (during the shutdown), tracing logs from such a custom error handler still work as expected. I'll throw a PR in a sec.
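For anyone finding this later, a rough sketch of that override using `opentelemetry::global::set_error_handler`; the log level and message are assumptions:

```rust
use opentelemetry::global;

fn install_otel_error_handler() {
    // Route OpenTelemetry's internal errors through tracing instead of letting
    // the default handler print them straight to stderr.
    let _ = global::set_error_handler(|error| {
        tracing::warn!(%error, "OpenTelemetry error");
    });
}
```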
I'm seeing these errors again on testnet.
The reason for this is that the agent is not able to keep up with the logs.
Can we change opentelemetry to a polling mode which automatically drops un-polled spans after a few seconds?
Note: this is not a blocker for 0.5.0. If we can't solve this, we just don't enable instrumentation on mainnet.
This is partially a symptom of another problem, which is that at some point logs get really spammy and we get rate-limited... I wonder what the underlying cause of that issue is?
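If the sheer volume is the problem, one option (not something settled on in this thread, just a hedged idea) would be to sample traces rather than export every span. A minimal sketch, assuming the SDK's trace config is passed to whatever pipeline builds the tracer; the 0.1 ratio is an arbitrary placeholder:

```rust
use opentelemetry::sdk::trace::{self, Sampler};

// Keep roughly 10% of root traces; child spans follow the parent's decision.
fn sampled_trace_config() -> trace::Config {
    trace::config()
        .with_sampler(Sampler::ParentBased(Box::new(Sampler::TraceIdRatioBased(0.1))))
}
```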
The docs of
How would the application behave if it can't send the trace to the collector due to a timeout?
I'm not sure - opentelemetry-otlp just notes that it is the "timeout to the collector". I already asked about this on the opentelemetry-rust Gitter room, so I'll add this question too.
I think we can close this.
I had the taker and maker running locally overnight. Now I get spammed with these errors.
2 questions