
Prometheus Exporter Example #887

Closed
raphaelzoellner opened this issue Jul 2, 2020 · 3 comments · Fixed by #903
Comments

raphaelzoellner commented Jul 2, 2020

Description (What happened)

While running the metrics example for the Prometheus exporter from here, I noticed the following:

After all meter.Records in main.go have happened (~10 s), calling the metrics endpoint http://localhost:2222/metrics yields the following expected Prometheus metrics:

# HELP ex_com_one A ValueObserver set to 1.0
# TYPE ex_com_one histogram
ex_com_one_bucket{ex_com_lemons="13",le="+Inf"} 1
ex_com_one_sum{ex_com_lemons="13"} 1
ex_com_one_count{ex_com_lemons="13"} 1
# HELP ex_com_three 
# TYPE ex_com_three counter
ex_com_three{ex_com_lemons="13"} 22
ex_com_three{A="1",B="2",C="3",ex_com_lemons="10"} 12
# HELP ex_com_two 
# TYPE ex_com_two histogram
ex_com_two_bucket{ex_com_lemons="13",le="+Inf"} 1
ex_com_two_sum{ex_com_lemons="13"} 2
ex_com_two_count{ex_com_lemons="13"} 1
ex_com_two_bucket{A="1",B="2",C="3",ex_com_lemons="10",le="+Inf"} 1
ex_com_two_sum{A="1",B="2",C="3",ex_com_lemons="10"} 2
ex_com_two_count{A="1",B="2",C="3",ex_com_lemons="10"} 1

After an additional time interval (~15 s), all values of the metric instruments appear to be increasing.
In particular, the counter ex_com_three{ex_com_lemons="13"} has risen from 22 to 44:

# HELP ex_com_one A ValueObserver set to 1.0
# TYPE ex_com_one histogram
ex_com_one_bucket{ex_com_lemons="13",le="+Inf"} 2
ex_com_one_sum{ex_com_lemons="13"} 2
ex_com_one_count{ex_com_lemons="13"} 2
ex_com_one_bucket{A="1",B="2",C="3",ex_com_lemons="10",le="+Inf"} 1
ex_com_one_sum{A="1",B="2",C="3",ex_com_lemons="10"} 13
ex_com_one_count{A="1",B="2",C="3",ex_com_lemons="10"} 1
# HELP ex_com_three 
# TYPE ex_com_three counter
ex_com_three{ex_com_lemons="13"} 44
ex_com_three{A="1",B="2",C="3",ex_com_lemons="10"} 25
# HELP ex_com_two 
# TYPE ex_com_two histogram
ex_com_two_bucket{ex_com_lemons="13",le="+Inf"} 2
ex_com_two_sum{ex_com_lemons="13"} 4
ex_com_two_count{ex_com_lemons="13"} 2
ex_com_two_bucket{A="1",B="2",C="3",ex_com_lemons="10",le="+Inf"} 2
ex_com_two_sum{A="1",B="2",C="3",ex_com_lemons="10"} 14
ex_com_two_count{A="1",B="2",C="3",ex_com_lemons="10"} 2

All of the example's metrics keep increasing indefinitely, even though no new measurements are recorded for the metric instruments in the example application.
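
To watch the drift without touching the example itself, a small standalone poller along the following lines can be used (a sketch, not part of the example; only the Go standard library and the endpoint shown above are assumed):

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"time"
)

// Scrape the example's metrics endpoint every five seconds and print the
// ex_com_three samples, so the counter can be watched drifting even though
// no new measurements are being recorded.
func main() {
	for {
		resp, err := http.Get("http://localhost:2222/metrics")
		if err != nil {
			fmt.Println("scrape failed:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		scanner := bufio.NewScanner(resp.Body)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "ex_com_three") {
				fmt.Printf("%s  %s\n", time.Now().Format(time.RFC3339), line)
			}
		}
		resp.Body.Close()
		time.Sleep(5 * time.Second)
	}
}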

Expectations

I would have expected the counter not to increase any further, since no new measurements for this counter are recorded in the example application.

Is this expected behavior? If so, what is the reason for the additional increase, and can it be configured?

In the future I would like to employ counters and other metric instruments in a Go gRPC interceptor in some applications, but the described behavior would seem to work against that. Is there perhaps another approach to achieve that?
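
For context, the shape of interceptor I have in mind is roughly the sketch below. The metric call itself is abstracted behind a small adapter, since the exact OpenTelemetry metric API surface differs between 0.x releases; the type and function names here are hypothetical, and only the grpc-go interceptor signature is taken as given.

package interceptor

import (
	"context"

	"google.golang.org/grpc"
)

// AddFunc records one increment for the given RPC method. In practice it
// would wrap an OpenTelemetry counter's Add call (hypothetical adapter; the
// concrete metric API depends on the otel version in use).
type AddFunc func(ctx context.Context, method string)

// UnaryCounter returns a server interceptor that invokes add exactly once
// per handled RPC, after the handler returns.
func UnaryCounter(add AddFunc) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		resp, err := handler(ctx, req)
		add(ctx, info.FullMethod)
		return resp, err
	}
}

With something like this in place I would expect the exported cumulative value to stay flat between RPCs, which is exactly what the example above does not do at the moment.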

Configuration

go version:

  • go1.13.8

Dependencies:

  • go.opentelemetry.io/otel v0.7.0
  • go.opentelemetry.io/otel/exporters/metric/prometheus v0.7.0
@MrAlias added the area: exporter, bug (Something isn't working), and question (Further information is requested) labels on Jul 2, 2020
@jmacd self-assigned this on Jul 2, 2020
jmacd (Contributor) commented Jul 2, 2020

This is not expected behavior. I will take this.

jmacd (Contributor) commented Jul 2, 2020

(I was able to reproduce this.)

jmacd (Contributor) commented Jul 3, 2020

I've identified the problem and have a solution, but in testing this fix I discovered another problem with Prometheus export semantics. The current sdk/metric/processor/basic code addresses the need to maintain cumulative sums from delta metric inputs, and this issue was caused by stale updates being applied again on intervals where no new events had happened.

The immediate fix for this issue addresses the correctness-of-values problem. The other problem, which shows up in the new test, is that values will disappear from Prometheus if they do not change over an interval. Prometheus's concept of "stateful" is different from the one implemented in the processor, and I'm trying to decide whether the processor should change unconditionally or should add a new option to support a cumulative exporter that only reports changed metrics, not all metrics.
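
To make the failure mode concrete, here is a toy model of that delta-to-cumulative step (illustrative only, not the actual sdk/metric/processor/basic code; the numbers are just the ex_com_three values reported above):

package main

import "fmt"

// cumulative folds per-interval deltas into a running sum, the way a
// cumulative exporter such as Prometheus expects. Toy model only.
type cumulative struct {
	sum       float64
	lastDelta float64
}

// collect applies the delta recorded during one collection interval. The bug
// modeled here: on an interval with no new events, the previous (stale)
// delta is applied again instead of nothing.
func (c *cumulative) collect(delta float64, hasNewEvents, stale bool) float64 {
	if hasNewEvents {
		c.lastDelta = delta
		c.sum += delta
	} else if stale {
		c.sum += c.lastDelta // stale update re-applied
	}
	return c.sum
}

func main() {
	buggy, fixed := &cumulative{}, &cumulative{}
	// Interval 1: a delta of 22 is recorded for ex_com_three.
	fmt.Println(buggy.collect(22, true, true), fixed.collect(22, true, false)) // 22 22
	// Interval 2: nothing is recorded.
	fmt.Println(buggy.collect(0, false, true), fixed.collect(0, false, false)) // 44 22
}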
