Cannot get Micrometer custom metrics #5292
Hey @irizzant,
Thank you very much for the hint @mateuszrzeszutek, I'm indeed using my registry without registering it to the global registry. I'm testing the #5200 version right now...
Ok @mateuszrzeszutek, after testing the solution it looks like the change above, plus registering the custom metrics to the global registry only, allowed me to get the custom metrics from both the OTEL Prometheus exporter and the Micrometer endpoint. Thank you very much for your help!
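For reference, a minimal sketch of that setup, assuming the application uses Micrometer's PrometheusMeterRegistry for its own endpoint (the counter name below is made up for illustration):

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Metrics;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

// Application-level Prometheus registry, added to the global composite registry
// so that meters created via Metrics.globalRegistry are mirrored into it.
PrometheusMeterRegistry prometheusMeterRegistry =
    new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
Metrics.addRegistry(prometheusMeterRegistry);

// Custom metrics are registered against the global registry only; the agent's
// Micrometer bridge (also attached to the global registry) picks them up too.
Counter jobCounter = Counter.builder("edimonitor.job.count") // hypothetical meter name
    .description("Number of executed jobs")
    .register(Metrics.globalRegistry);
jobCounter.increment();
```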
Sorry @mateuszrzeszutek, I may have spoken too soon. I found that metrics exposed by OTEL go under different names than the Micrometer ones. E.g. from the Micrometer endpoint I have a metric
which comes from one of my meters.
Also, some other metrics are not available at all: e.g. one is present in Micrometer
but in OTEL it is not defined at all. Is it possible that some metrics coming from Micrometer are renamed or dropped?
Yes, that might be the case. The Micrometer Prometheus registry appends the base unit (e.g. a _seconds suffix) to metric names, while the OTEL exporter does not. You could probably fix that in your application by registering a custom naming convention that adds the suffix; a sketch of that idea follows below.
I skipped the
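A minimal sketch of the naming-convention idea (the _seconds suffix is an assumption, matching the rest of this thread; note that the thread later settles on a MeterFilter instead, shown further down):

```java
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.config.NamingConvention;

// Sketch: append a unit suffix to timer names so the names exported through the
// OTel bridge line up with what the Micrometer Prometheus registry produces.
Metrics.globalRegistry.config().namingConvention(new NamingConvention() {
  @Override
  public String name(String name, Meter.Type type, String baseUnit) {
    if (type == Meter.Type.TIMER || type == Meter.Type.LONG_TASK_TIMER) {
      return name + "_seconds"; // assumed suffix, as discussed in this thread
    }
    return name;
  }
});
```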
Didn't know that, thanks for clarifying @mateuszrzeszutek.
That worked, and now I can see the OTEL metric,
but the problem now is that I get totally different bucket values than Micrometer's! These are the ones from OTEL:
and these are the ones from Micrometer:
As you can see, in OTEL I no longer have only one value returned but two, and the bucket values are different from the ones I had in Micrometer.
That would be great, thanks!
Oh, I got one thing wrong in my previous post - the base time unit used in the Micrometer-OTel bridge is milliseconds, not seconds - so your measurements are not actually in seconds.
It looks like the default OTel buckets for histograms are overriding the Micrometer ones. Can you share a fragment of code that shows how you're registering the timer?
Let me try to sum up the steps:
// 1. Add common tags to the application's own Prometheus registry.
prometheusMeterRegistry.config().commonTags("edimonitorinstance", instance, "application", "edimonitor");

// 2. Add the same common tags to the global registry, and append "_seconds"
//    to timer names so they match the names exposed by the Micrometer
//    Prometheus endpoint.
Metrics.globalRegistry.config()
    .commonTags("edimonitorinstance", instance, "application", "edimonitor")
    .meterFilter(new MeterFilter() {
      @Override
      public Id map(Id id) {
        if (id.getType() == Type.TIMER || id.getType() == Type.LONG_TASK_TIMER) {
          return id.withName(id.getName() + "_seconds");
        }
        return id;
      }
    });

// 3. Hand the Prometheus registry to the application's own scrape endpoint
//    (application-specific helper class).
PrometheusRegistryProvider.setPrometheusMeterRegistry(prometheusMeterRegistry);

// 4. Register the custom timers against the global registry.
runningJobExecutionTime = LongTaskTimer.builder("edimonitor.job.running.executiontime")
    .description("The execution time for a running job")
    .tags(EDI_MONITOR_JOB, context.getJobDetail().getKey().getName())
    .publishPercentiles(0.5, 0.99)
    .register(Metrics.globalRegistry);

jobExecutionTime = Timer.builder("edimonitor.job.executiontime")
    .description("The execution time for a job")
    .tags(EDI_MONITOR_JOB, context.getJobDetail().getKey().getName())
    .publishPercentileHistogram()
    .minimumExpectedValue(Duration.of(40, ChronoUnit.MINUTES))
    .maximumExpectedValue(Duration.of(2, ChronoUnit.HOURS))
    .register(Metrics.globalRegistry);
What I see here is that OTEL is returning 2 values for the same metric: I would have expected a single value 0.0, but instead it also adds 1643905334.100.
Thanks for the example! I'll try to figure something out for your use case.
I think that's just the timestamp coming from Micrometer.
@mateuszrzeszutek The problem is that with OTEL metrics I cannot render the heatmap in Grafana correctly. Here are the values I pull from the bucket:
@irizzant can you summarize any remaining issue(s) after updating to 1.13.1? thx!
Hi @trask, the problem I mentioned here seems to be fixed. For a single job run I now get output
which looks correct. I noticed something for one of the metrics, though;
can you just confirm if this is expected? The only issue I still see is that the buckets configured in the application are not taken into consideration by the instrumentation,
and the resulting buckets are the ones I have listed above.
Based on the tests, it looks like the LongTaskTimer duration should be a DoubleSum instead of a Gauge.
ya, I don't see any handling of that.
As it is now, it is impossible to configure histogram buckets through the metrics API; this has to be done through SDK views. The metric Hint API would solve that problem, but I don't think any actual work on its spec has started yet.
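For context, a rough sketch of overriding histogram buckets with an SDK view is shown below (plain SDK setup; with the javaagent this would have to live in an agent extension, and the exact builder method names vary a bit between SDK versions). The instrument name and the millisecond boundaries, chosen to roughly match the 40-minute to 2-hour range above, are assumptions:

```java
import java.util.List;

import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

// Sketch: override the default histogram buckets for one instrument via a view.
// Boundaries are in the instrument's unit (milliseconds for the Micrometer bridge).
SdkMeterProvider meterProvider =
    SdkMeterProvider.builder()
        .registerView(
            InstrumentSelector.builder()
                .setName("edimonitor.job.executiontime") // assumed instrument name
                .build(),
            View.builder()
                .setAggregation(
                    Aggregation.explicitBucketHistogram(
                        List.of(2_400_000.0, 3_600_000.0, 5_400_000.0, 7_200_000.0)))
                .build())
        .build();
```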
ok so that is something to be checked as well.
where are you seeing that it's a Gauge? thx
As I wrote above, this is the metrics output I get, and the comment states it's a Gauge.
hey @irizzant, I looked into it and I think that this behavior is correct.
Monotonic sums are only meant for values that never decrease, and a long task timer's duration goes back down when a task finishes, so a Gauge fits better here.
Hi @trask, thanks for confirming, so this should be the only remaining issue:
Thanks @irizzant, your example resolved my doubts and I can now send my custom metrics to OpenTelemetry. I registered my custom metrics directly with the global registry and they showed up in the OpenTelemetry metrics. Example code:
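The original snippet isn't reproduced here; a minimal sketch of that approach (the metric name is hypothetical) might look like:

```java
import io.micrometer.core.instrument.Metrics;
import io.micrometer.core.instrument.Timer;

// Register a custom timer directly against the global registry; the agent's
// Micrometer bridge then exports it through OpenTelemetry.
Timer requestTimer = Timer.builder("myapp.request.duration") // hypothetical name
    .description("Time spent handling a request")
    .register(Metrics.globalRegistry);

requestTimer.record(() -> {
  // ... the work being timed ...
});
```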
Describe the bug
I have a Java application instrumented to create custom Micrometer metrics.
The very same Java application uses the OpenTelemetry Java Instrumentation agent to collect metrics using the Prometheus exporter.
I can see Micrometer metrics and OpenTelemetry metrics being exported just fine, but I cannot see Micrometer metrics exported via OpenTelemetry Prometheus exporter.
It's my understanding that from 1.10 the agent should export Micrometer metrics, but maybe I got this wrong.
The agent is configured like this:
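The actual configuration is not shown above; a typical launch for the agent with the Prometheus exporter (the paths and jar name here are only an illustration, 9464 being the exporter's default port) looks roughly like:

```sh
# Hypothetical launch command; 9464 is the Prometheus exporter's default port.
java -javaagent:/path/to/opentelemetry-javaagent.jar \
     -Dotel.metrics.exporter=prometheus \
     -jar app.jar
```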
Querying port 9464 produces OTEL metrics like the following:
Querying the Micrometer port I get the following:
but I cannot find the latter metrics inside the former ones.
Steps to reproduce
What did you expect to see?
Metrics from Micrometer reported in the Prometheus exporter endpoint
What did you see instead?
Metrics from Micrometer are not reported in the Prometheus exporter endpoint
What version are you using?
1.10.1