
inconsistent timestamps on metric points for metric up #22096

Closed
shenlvcheng opened this issue May 19, 2023 · 8 comments

@shenlvcheng

shenlvcheng commented May 19, 2023

Component(s)

cmd/otelcontribcol

Describe the issue you're reporting

OTel Collector config:

prometheus:
    config:
      scrape_configs:
        - job_name: 'federate'
          scrape_interval: 15s
          honor_labels: true
          metrics_path: '/federate'
          params:
            'match[]':
              - '{job="prometheus"}'
              - '{job="mysql"}'
          static_configs:
            - targets:
              - '192.168.2.245:9090'

Prometheus metrics (from the /federate endpoint):
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="+Inf"} 6.3285439e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="13568.999999999998"} 6.3056218e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="144.99999999999997"} 5.9483332e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="1536.9999999999998"} 6.2455961e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="24.999999999999996"} 3.3698032e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="27264.999999999996"} 6.3218909e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="320.99999999999994"} 6.1425144e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="3200.9999999999995"} 6.2638457e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="64.99999999999999"} 4.6509709e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="6528.999999999999"} 6.29979e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="704.9999999999999"} 6.1907236e+07 1684481521360
go_gc_heap_allocs_by_size_bytes_total_bucket{instance="192.168.2.245:9104",job="mysql",le="8.999999999999998"} 6.137389e+06 1684481521360

**Prometheus URL:** http://192.168.2.245:9090/federate?match%5B%5D=%7Bjob%3D%22mysql%22%7D

**OTel Collector error:** Appending scrape report failed {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_pool": "federate", "target": "http://192.168.2.245:9090/federate?match%5B%5D=%7Bjob%3D%22mysql%22%7D", "error": "inconsistent timestamps on metric points for metric up"}

How can I solve this problem? Is this a bug?

shenlvcheng added the needs triage label May 19, 2023
@crobert-1
Member

crobert-1 commented May 24, 2023

Note: This is a potential duplicate of #14453

andrzej-stencel added the receiver/prometheus label Jun 29, 2023
@github-actions
Contributor

Pinging code owners for receiver/prometheus: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@dashpole
Contributor

Sorry for the slow response. That looks like a bug. My first guess is that the up metric from the federated endpoint collides with the up metric generated by scraping the federation endpoint itself. Adding a static label using metric_relabel_configs should fix the issue if that's the case.
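A minimal sketch of what that suggestion might look like, applied to the scrape config from the issue description (the label name source and the value federate are illustrative placeholders, not anything the receiver requires):

prometheus:
    config:
      scrape_configs:
        - job_name: 'federate'
          scrape_interval: 15s
          honor_labels: true
          metrics_path: '/federate'
          params:
            'match[]':
              - '{job="prometheus"}'
              - '{job="mysql"}'
          static_configs:
            - targets:
              - '192.168.2.245:9090'
          metric_relabel_configs:
            # With no source_labels, the default regex (.*) matches everything,
            # so the replacement is applied to every sample returned by /federate.
            # Relabeling does not apply to the automatically generated up series
            # for the scrape itself, so the two up series no longer collide.
            - action: replace
              target_label: source   # illustrative label name
              replacement: federate  # illustrative label value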

dashpole removed the needs triage label Jul 26, 2023
@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

github-actions bot added the Stale label Sep 25, 2023
crobert-1 added the bug label Sep 25, 2023
github-actions bot removed the Stale label Sep 26, 2023
github-actions bot added the Stale label Nov 27, 2023
crobert-1 added the never stale label and removed the Stale label Nov 27, 2023
@dashpole
Contributor

To help us investigate, can you provide the contents of the /federate endpoint on the target being scraped? In particular, I'd like to see the up metrics being reported on that endpoint.
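For reference, one way to pull just those series from the target would be something like the following (the address is taken from the config in the issue description; the match[] selector is illustrative):

curl -G 'http://192.168.2.245:9090/federate' \
  --data-urlencode 'match[]={__name__="up"}'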

dashpole self-assigned this Jan 31, 2024
dashpole added the waiting for author label and removed the never stale label Jan 31, 2024
github-actions bot commented Apr 1, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity.
@github-actions
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned May 31, 2024
4 participants