Describe the bug
When using the sample config, I don't see the metric otelcol_exporter_enqueue_failed_spans exported to the logs in version 0.88.0, but I do see it in 0.87.0.
Steps to reproduce
Run the sample config in version 0.87.0. Wait 10+ seconds for the metrics to be scraped and logged. Exit. Copy and paste the output into a text editor and search for "enqueue_failed". See that it appears.
Sample from logs:
Metric #5
Descriptor:
-> Name: otelcol_exporter_enqueue_failed_spans
-> Description: Number of spans failed to be added to the sending queue.
-> Unit:
-> DataType: Sum
-> IsMonotonic: true
-> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
-> exporter: Str(logging)
-> service_instance_id: Str(9338ca73-219e-4206-861d-dacc7266367b)
-> service_name: Str(otelcol-contrib)
-> service_version: Str(0.87.0)
StartTimestamp: 2024-02-02 20:41:24.026 +0000 UTC
Timestamp: 2024-02-02 20:41:24.026 +0000 UTC
Value: 0.000000
Run the sample config in version 0.88.0. Wait 10+ seconds for the metrics to be scraped and logged. Exit. Copy and paste the output into a text editor and search for "enqueue_failed". See that it does not appear.
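The sample config itself isn't reproduced here, so purely as an illustration, a setup of the kind described above — the collector scraping its own internal metrics from the default localhost:8888 Prometheus endpoint and printing them with the logging exporter — could look something like the sketch below. The otlp receiver, scrape interval, and pipeline layout are my assumptions, not the exact sample config:

receivers:
  otlp:                                    # a span source, so exporter queue metrics have something to report on
    protocols:
      grpc:
  prometheus:
    config:
      scrape_configs:
        - job_name: otelcol
          scrape_interval: 5s
          static_configs:
            - targets: ["localhost:8888"]  # the collector's own internal telemetry endpoint

exporters:
  logging:
    verbosity: detailed                    # prints every data point, as in the log excerpt above

service:
  telemetry:
    metrics:
      address: localhost:8888              # expose internal metrics on the default port
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging]
    metrics:
      receivers: [prometheus]
      exporters: [logging]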
What did you expect to see?
I expected the metric to continue coming through, even after upgrading.
What did you see instead?
The metric is missing in the log statements.
What version did you use?
otelcol-contrib_0.88.0_darwin_arm64 and otelcol-contrib_0.87.0_darwin_arm64
What config did you use?
Environment
OS: macOS Sonoma 14.2.1
Additional context
I found a handful of potentially related posts, but I wasn't able to piece together a cohesive picture of how they relate to this behavior:
This issue seems to report the opposite of what I'm observing: #8673
This reply on an issue (#7454 (comment)) seems to suggest that there is a way to enable a feature flag to get internal metrics from the collector in otel format, but I couldn't find any documentation on how to set it up.
I did spend some time trying to force enqueue failures with a mock HTTP endpoint that always returns a 429 status code and a small queue size, but I wasn't actually able to get the metric to go non-zero on either version. Let me take another attempt at that.
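For concreteness, the exporter side of that experiment would look roughly like the sketch below: an otlphttp exporter pointed at a stand-in endpoint that always answers 429, with the sending queue shrunk to a single slot so new batches get rejected while the stuck one retries. The endpoint, queue size, and retry settings are illustrative assumptions, not my exact setup:

receivers:
  otlp:
    protocols:
      grpc:                                # span source; push load at it with any span generator

exporters:
  otlphttp:
    endpoint: http://localhost:4318        # hypothetical mock server that always returns 429
    sending_queue:
      enabled: true
      num_consumers: 1
      queue_size: 1                        # tiny queue: once one batch is stuck retrying, new batches are rejected
    retry_on_failure:
      enabled: true
      max_elapsed_time: 0                  # never give up, so the stuck batch keeps occupying the queue

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]

With enough spans pushed at the OTLP receiver while the mock endpoint keeps rejecting them, otelcol_exporter_enqueue_failed_spans should eventually go non-zero.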
After repeating my test today, I see what happened on Friday.
0.87.0 never captures a non-zero value for otelcol_exporter_enqueue_failed_spans (see the attached logs), and I had (incorrectly) assumed that meant 0.88.0 would not capture the metric at all.