Describe the bug
We want to ensure OpenMetrics / Prometheus compatibility in the OpenTelemetry Collector. We have been building compatibility tests to verify that the OpenMetrics spec is fully supported by the OpenTelemetry Collector Prometheus receiver and PRW exporter, as well as by Prometheus itself.
To verify that the Prometheus receiver handles Prometheus-to-OTLP data transformations as expected, issues #6000 and #6151 add tests that validate Prometheus core metrics. Through these tests we found that after a failed scrape, the start_timestamp of histogram and summary metrics is not the same as the start_timestamp of the first scrape. The start_timestamp is used by the OTLP format to record when a metric collection system started.
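For background, here is a minimal Go sketch of where start_timestamp sits on an OTLP data point, using the collector's current pdata API (the module layout differed at v0.37.1); the metric name and times are purely illustrative:

package main

import (
	"fmt"
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

func main() {
	processStart := time.Now().Add(-10 * time.Minute) // when collection began
	scrapeTime := time.Now()                          // when this point was observed

	m := pmetric.NewMetric()
	m.SetName("http_request_duration_seconds")
	dp := m.SetEmptyHistogram().DataPoints().AppendEmpty()
	// Every cumulative point of a series should carry the same
	// start_timestamp; only the (end) timestamp advances per scrape.
	dp.SetStartTimestamp(pcommon.NewTimestampFromTime(processStart))
	dp.SetTimestamp(pcommon.NewTimestampFromTime(scrapeTime))

	fmt.Println(dp.StartTimestamp().AsTime(), dp.Timestamp().AsTime())
}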
Steps to reproduce
Run func TestEndToEnd(t *testing.T).
The validate loop is currently skipped for these tests; re-enable it by removing or commenting out the following lines in func testEndToEnd(...) (lines 1442-1445):
if true {
	t.Log(`Skipping the "up" metric checks as they seem to be spuriously failing after staleness marker insertions`)
	return
}
Note: the test fails in getValidScrapes due to staleness; inspect the metrics to look at the timestamps.
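For reference, the test can be run on its own; the package path below is an assumption based on the contrib repository layout:

go test -run TestEndToEnd ./receiver/prometheusreceiver/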
What did you expect to see?
Referring to the table below, we expected the start_timestamps of the histogram and summary metrics in Target1Scrape2 to be the same as in Target1Scrape1, irrespective of the failed scrape between Target1Scrape1 and Target1Scrape2.
What did you see instead?
Because of the failed scrape between Target1Page1 and Target1Page2, the start_timestamps of Target1Scrape1 and Target1Scrape2 differ for histogram and summary metrics.
Table: start_timestamps observed:
Possible Solution
Modify the existing adjustMetricTimeseries logic to ensure that the start_timestamp of histogram and summary metrics is preserved across a failed scrape, rather than being reset to the time of the next successful scrape.
A complete solution will also depend on the resolution of issue #6400.
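For illustration, a minimal Go sketch of the intended behavior (hypothetical names; this is not the receiver's actual adjustMetricTimeseries code): cache the first-observed start_timestamp per series and keep returning it, moving it forward only when the cumulative count indicates a true target restart.

package adjuster

import "time"

// timeseriesInfo caches the first-observed start timestamp for a series.
type timeseriesInfo struct {
	startTimestamp time.Time // start_timestamp of the first scrape
	previousCount  uint64    // cumulative count seen on the last scrape
}

// adjustHistogram pins the series' start_timestamp to the first scrape's
// value unless the cumulative count actually reset (a true counter reset),
// so a failed scrape in between does not move the start_timestamp.
func adjustHistogram(info *timeseriesInfo, scrapeStart time.Time, count uint64) time.Time {
	if info.startTimestamp.IsZero() {
		// First scrape for this series: remember its start timestamp.
		info.startTimestamp = scrapeStart
	} else if count < info.previousCount {
		// Cumulative count went backwards: the target restarted, so the
		// start_timestamp legitimately moves forward.
		info.startTimestamp = scrapeStart
	}
	info.previousCount = count
	return info.startTimestamp
}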
What version did you use?
Collector Contrib: v0.37.1
Additional context
Related to: open-telemetry/prometheus-interoperability-spec#57
Issues: #6000, #6400
cc: @alolita @Aneurysm9