Metric queueSize twice fails with opentelemetry collector with a prometheus metrics exporter #18194
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Can you please add the full log output?
Updated with the full log output, where you can see the issue with the two queueSize gauges. Sorry, the log is pretty big.
Looks like metrics with the same name from different instrumentation scopes, with different descriptions, are not being handled well. The metrics given to the exporter:
Yep, exactly the problem. This issue has also been encountered in the Java instrumentation project.
I'm having the same problem, and as a workaround decided to drop the queueSize metric.
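The drop-the-metric workaround described above can be expressed with the Collector's filter processor. The sketch below assumes the strict-match `exclude` syntax of the filter processor in the collector-contrib versions mentioned in this issue; the processor name `filter/drop-queuesize` is an arbitrary choice:

```yaml
processors:
  # Drop the conflicting gauge before it reaches the Prometheus exporter.
  filter/drop-queuesize:
    metrics:
      exclude:
        match_type: strict
        metric_names:
          - queueSize
```

The processor must also be added to the metrics pipeline under `service.pipelines.metrics.processors` for it to take effect.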
- Enable a few logs for Ad service and Recommendation service.
- Add OTLP exporters for logs.
- Add the filter processor to prevent an error from the Prometheus exporter for the duplicate queueSize metric, see open-telemetry/opentelemetry-collector-contrib#18194. The filter processor can be removed when the fault gets fixed.

This PR doesn't introduce any logs backend. Instead, logs are output only to the Logging exporter and can be seen in the console (otelcol):

otel-col | 2023-03-17T11:40:22.662Z info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "#logs": 2}

After this PR, different logging backends can be easily tested by configuring an additional exporter.
Thanks for the filter tip, working nicely! Hope a correct solution will be provided by the team.
* What's included?
- Enable logs for Ad service and Recommendation service.
- Add OTLP exporters for logs.
- Add the filter processor to prevent an error from the Prometheus exporter for the duplicate queueSize metric, see open-telemetry/opentelemetry-collector-contrib#18194. The filter processor can be removed when the fault gets fixed.

This PR doesn't introduce any logs backend. Instead, logs are output only to the Logging exporter and can be seen in the console (otelcol):

otel-col | 2023-03-17T11:40:22.662Z info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "#logs": 2}

After this PR, different logging backends can be easily tested by configuring an additional exporter.

* Add changelog and fix lint errors.
* Fix changelog and lint
* Fix lint
* Move protocol env variables to .env
* Update CHANGELOG.md

Co-authored-by: Austin Parker <[email protected]>
Co-authored-by: Juliano Costa <[email protected]>
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments; if you are unsure which component this issue relates to, please ping a maintainer. Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
exporter/prometheus
What happened?
Describe the bug
When setting up an OpenTelemetry Collector with a Prometheus metrics exporter, the exporter fails when scraped if the monitored application is a Java application using the opentelemetry-log4j-appender-2.17 instrumentation, because the application exposes two metrics named queueSize: one for the log record processor and one for the span processor.
Steps to reproduce
A Java application using the OpenTelemetry Java agent log4j2 instrumentation:
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-log4j-appender-2.17</artifactId>
  <version>1.21.0-alpha</version>
  <scope>runtime</scope>
</dependency>
The Java application must emit log statements in a loop, once per second:
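The looping logger can be sketched as below. This is a hypothetical minimal reproducer, using java.util.logging as a stand-in for the log4j2 API, with the loop bounded and the interval made configurable for illustration; the original application loops forever, once per second:

```java
import java.util.logging.Logger;

public class LogLoop {
    private static final Logger LOGGER = Logger.getLogger(LogLoop.class.getName());

    // Emit one log record per iteration, sleeping between records.
    // Returns the number of records emitted.
    static int emit(int n, long intervalMillis) throws InterruptedException {
        int emitted = 0;
        for (int i = 0; i < n; i++) {
            LOGGER.info("heartbeat log " + i);
            emitted++;
            Thread.sleep(intervalMillis);
        }
        return emitted;
    }

    public static void main(String[] args) throws InterruptedException {
        // The real reproducer would loop indefinitely with a 1000 ms interval.
        emit(5, 1000);
    }
}
```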
Start the collector:
docker run -p 4317:4317 -p 9464:9464 -v $(pwd)/otel-collector.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
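A minimal otel-collector.yaml of the sort mounted in the command above might look like the following. This is a sketch, not the reporter's exact configuration (which was not included in the issue); the endpoints match the ports published by the docker run command:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  # Serves scraped metrics at http://localhost:9464/metrics.
  prometheus:
    endpoint: 0.0.0.0:9464

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```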
Start the Java application.
Calling http://localhost:9464/metrics triggers the bug in the OpenTelemetry Collector and produces a half-populated Prometheus output.
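A correct scrape would contain both queueSize series. The check below simulates a saved scrape and counts the series; the spanProcessorType series matches the issue report, while the label name on the log side is an assumption for illustration:

```shell
# Simulated scrape output saved to metrics.txt. In the buggy output,
# the spanProcessorType series is missing.
cat > metrics.txt <<'EOF'
queueSize{processorType="BatchLogRecordProcessor"} 0
queueSize{spanProcessorType="BatchSpanProcessor"} 0
EOF

# Count distinct queueSize series; a complete exposition has two.
grep -c '^queueSize' metrics.txt
```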
What did you expect to see?
A Prometheus output file with output:
What did you see instead?
A badly generated Prometheus output file, missing the series queueSize{spanProcessorType="BatchSpanProcessor"} 0
What version and what artifacts are you using?
opentelemetry-collector 0.68, 0.69, 0.70
opentelemetry-java-agent 1.21.0
Collector version
0.68,0.69,0.70
Environment information
OS: Ubuntu 20.04
OpenTelemetry Collector configuration
Log output
Additional context
This issue is similar to this one : open-telemetry/opentelemetry-java#4382