While running the metrics example for the Prometheus exporter from here, I noticed the following:
After all meter.Record calls in main.go have completed (~10s), calling the metrics endpoint http://localhost:2222/metrics yields the following expected Prometheus metrics:
# HELP ex_com_one A ValueObserver set to 1.0
# TYPE ex_com_one histogram
ex_com_one_bucket{ex_com_lemons="13",le="+Inf"} 1
ex_com_one_sum{ex_com_lemons="13"} 1
ex_com_one_count{ex_com_lemons="13"} 1
# HELP ex_com_three
# TYPE ex_com_three counter
ex_com_three{ex_com_lemons="13"} 22
ex_com_three{A="1",B="2",C="3",ex_com_lemons="10"} 12
# HELP ex_com_two
# TYPE ex_com_two histogram
ex_com_two_bucket{ex_com_lemons="13",le="+Inf"} 1
ex_com_two_sum{ex_com_lemons="13"} 2
ex_com_two_count{ex_com_lemons="13"} 1
ex_com_two_bucket{A="1",B="2",C="3",ex_com_lemons="10",le="+Inf"} 1
ex_com_two_sum{A="1",B="2",C="3",ex_com_lemons="10"} 2
ex_com_two_count{A="1",B="2",C="3",ex_com_lemons="10"} 1
After some additional time (~15s), the values of all metric instruments appear to be increasing.
In particular, the counter ex_com_three{ex_com_lemons="13"} has risen from 22 to 44:
# HELP ex_com_one A ValueObserver set to 1.0
# TYPE ex_com_one histogram
ex_com_one_bucket{ex_com_lemons="13",le="+Inf"} 2
ex_com_one_sum{ex_com_lemons="13"} 2
ex_com_one_count{ex_com_lemons="13"} 2
ex_com_one_bucket{A="1",B="2",C="3",ex_com_lemons="10",le="+Inf"} 1
ex_com_one_sum{A="1",B="2",C="3",ex_com_lemons="10"} 13
ex_com_one_count{A="1",B="2",C="3",ex_com_lemons="10"} 1
# HELP ex_com_three
# TYPE ex_com_three counter
ex_com_three{ex_com_lemons="13"} 44
ex_com_three{A="1",B="2",C="3",ex_com_lemons="10"} 25
# HELP ex_com_two
# TYPE ex_com_two histogram
ex_com_two_bucket{ex_com_lemons="13",le="+Inf"} 2
ex_com_two_sum{ex_com_lemons="13"} 4
ex_com_two_count{ex_com_lemons="13"} 2
ex_com_two_bucket{A="1",B="2",C="3",ex_com_lemons="10",le="+Inf"} 2
ex_com_two_sum{A="1",B="2",C="3",ex_com_lemons="10"} 14
ex_com_two_count{A="1",B="2",C="3",ex_com_lemons="10"} 2
All the metrics of the example keep increasing indefinitely, even though no new measurements for the metric instruments in the example application are recorded.
Expectations
I would have expected the counter not to increase any further, since no new measurement for this counter is recorded in the example application.
Is this expected behavior? If so, what is the reason for the additional increase, and can it be configured?
In the future I would like to use counters and other metric instruments in a Go gRPC interceptor in some applications, but the described behavior would seem to work against that. Is there perhaps another approach to achieve that?
I've identified the problem and have a solution, but in testing this fix I discovered another problem with Prometheus export semantics. The current sdk/metric/processor/basic code addresses the need to maintain cumulative sums from delta metric inputs, and this issue was caused by stale updates being re-applied when no new events had happened.
The immediate fix for this issue addresses the correctness-of-values problem. The other problem, which shows up in the new test, is that values will disappear from Prometheus if they do not change over an interval. Prometheus's concept of "stateful" is different from the one implemented in the processor, and I'm trying to decide whether the processor should change unconditionally or should add a new option to support a cumulative exporter that reports only changed metrics, not all metrics.