Collector sending otelcol_exporter_queue_size metric on single exporter #10444
Labels: bug (Something isn't working)
Comments
I'm experiencing exactly the same thing. Is this the intended behavior?
Can confirm I am seeing this as well. I spawned an example collector just to confirm: when curling the collector's metrics endpoint, the queue metric shows up for only one exporter. A minimal sketch of such a test setup is shown below.
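For illustration, a configuration along these lines is enough to exercise two exporter queues; the receiver, exporter names, and backend endpoints are placeholders, not the actual config from this report:

```yaml
# Hypothetical minimal reproduction: two OTLP exporters sharing one pipeline.
# Exporter names and backend endpoints are placeholders.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp/first:
    endpoint: backend-a:4317   # placeholder
  otlp/second:
    endpoint: backend-b:4317   # placeholder

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/first, otlp/second]
```

With the sending queue left at its default (enabled) for both exporters, curling the collector's internal metrics endpoint would be expected to list otelcol_exporter_queue_size once per exporter.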
I am trying this out with collector v0.103.
Hi @dmitryax, do you have any idea about this problem? Thanks a lot!
dmitryax added a commit to dmitryax/opentelemetry-collector that referenced this issue on Jul 8, 2024:
Fix incorrect deduplication of otelcol_exporter_queue_size and otelcol_exporter_queue_capacity metrics if multiple exporters are used. Fixes open-telemetry#10444
Hi all,
I'm having the following issue with the OTel Collector, and I can't find anything in the docs or any config parameter that would avoid it. Feel free to ask for more details if needed, as long as providing them is compatible with the reason some information is redacted below. Thanks in advance for your help.
Describe the bug
The otelcol_exporter_queue_size metric is being sent to Prometheus for only one exporter instead of for each of them.
What did you expect to see?
I expect to see a queue metric for each exporter.
What did you see instead?
I see the aforementioned metric only for the first exporter initialised by the collector at startup. Looking at the timeline in Grafana, each time the container restarts the metric is exposed for a different exporter's queue. There are no related errors in the logs, and every exporter is configured the same way, including the one whose queue metric is exposed after startup.
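To illustrate with hypothetical exporter names (not taken from the redacted config): with two exporters configured, the expectation is one otelcol_exporter_queue_size series per exporter, distinguished by the exporter attribute, whereas what actually appears is a single series for whichever exporter happened to be initialised first:

```
# expected: one series per exporter (names are illustrative)
otelcol_exporter_queue_size{exporter="otlp/first"}  0
otelcol_exporter_queue_size{exporter="otlp/second"} 0

# observed: only one series, for the first exporter initialised at startup
otelcol_exporter_queue_size{exporter="otlp/first"}  0
```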
What version did you use?
ADOT v0.39.1
What config did you use?
Prometheus receiver config:

```yaml
prometheus/
  config:
    scrape_configs:
      - job_name:
        scrape_interval: 1m
        static_configs:
          - targets:
              - '127.0.0.1:8888'
```
Service config:

```yaml
service:
  [...]
  metrics/:
    receivers:
      - prometheus/
    exporters:
      - prometheusremotewrite
```
Prometheus remote write exporter config:

```yaml
exporters:
  prometheusremotewrite:
    endpoint:
    resource_to_telemetry_conversion:
      enabled: true
    add_metric_suffixes: false
    auth:
      authenticator:
```
Metrics and logs levels are already set to maximum verbosity; other parts of the config are omitted on purpose.
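For reference, a sketch of what the maximum-verbosity self-telemetry settings are assumed to look like here (this is the standard service.telemetry block, not the redacted config itself):

```yaml
# Assumed collector self-telemetry settings implied by "maximum verbosity".
service:
  telemetry:
    logs:
      level: debug
    metrics:
      level: detailed
      address: 127.0.0.1:8888   # internal metrics endpoint scraped by the prometheus receiver above
```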
Environment
Docker container of OTel Collector, tagged latest