
Scraped Prometheus Metric Metadata Missing #13849

Closed

awiddersheim opened this issue Sep 2, 2022 · 18 comments

@awiddersheim

Describe the bug
We use the otel-collector to scrape metrics that have metadata attached to them. The best example of this is metrics that come from HAProxy's Prometheus exporter. When the otel-collector ingests these and then sends them to Prometheus using the prometheusremotewrite exporter, the metadata doesn't seem to make it through. That said, the metadata does appear when using the logging exporter.

Steps to reproduce
Scrape metrics that have metadata associated with them; the metadata will not show up in the backend. For example, the following appears in the collector logs:

Descriptor:
     -> Name: haproxy_frontend_http_requests_total
     -> Description: Total number of HTTP requests processed by this object since the worker process started
     -> Unit:
     -> DataType: Sum
     -> IsMonotonic: true

You won't see this in Grafana for example:

[screenshot: Grafana view of the metric]

What did you expect to see?

I expected to see something similar to what is described in this Grafana article:

[screenshot from the Grafana article]

What did you see instead?

[screenshot: Grafana view without metadata]

What version did you use?

aws-otel-collector:v0.20.0

What config did you use?

A trimmed-down look at the basics of the config we are using:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 15s
          static_configs:
            - targets:
              - sometarget

exporters:
   # log level, defaults to info automatically on all outputs
  logging/metrics:
    loglevel: debug

  prometheusremotewrite:
    endpoint: <some_amazon_managed_prometheus_workspace>/api/v1/remote_write
    auth:
      authenticator: sigv4auth
    resource_to_telemetry_conversion:
      enabled: true

service:
  extensions:
    - sigv4auth
  pipelines:
    metrics:
      receivers:
        - prometheus
      exporters:
        - logging/metrics
        - prometheusremotewrite

Environment

Amazon Linux


@github-actions
Contributor

github-actions bot commented Sep 7, 2022

Pinging code owners: @Aneurysm9. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.


@github-actions github-actions bot added the Stale label Nov 10, 2022
@Aneurysm9 Aneurysm9 removed the Stale label Nov 14, 2022
@kovrus
Member

kovrus commented Jan 5, 2023

So the issue seems to be on both sides: the OTel Collector does not send metadata, and the Prometheus remote-write API does not handle metadata in remote-write requests.

On the OTel Collector side, I guess we should make metadata sending configurable and send it either periodically or every n requests?

@github-actions github-actions bot added the Stale label Mar 7, 2023
@kovrus kovrus removed the Stale label Mar 7, 2023
@jmichalek132
Contributor

Seems like there is support for ingesting this, at least with Mimir. I would be interested in working on implementing support for sending the metadata.

@kovrus
Member

kovrus commented Mar 24, 2023

@jmichalek132 nice! Shall I assign this issue to you then?

@jmichalek132
Contributor

@jmichalek132 nice! Shall I assign this issue to you then?

Yes please.

@github-actions github-actions bot added the Stale label May 24, 2023
@jmichalek132
Contributor

🤞 I'll make some progress on this during the weekend.

@github-actions github-actions bot removed the Stale label May 26, 2023
@jmichalek132
Contributor

I made some progress today; I have a PoC, but I will need to discuss the implementation details with someone.

@jmichalek132
Contributor

So I opened a draft PR #23585 hoping to get some feedback before I spend more time on this.

@jmichalek132
Contributor

It might be worth delaying the implementation, since it seems like there will be changes to how the metadata is sent as part of the remote-write request. There is a document discussing the potential changes.

@jmichalek132
Contributor

It might be worth delaying the implementation, since it seems like there will be changes to how the metadata is sent as part of the remote-write request. There is a document discussing the potential changes.

Hi @kovrus, any thoughts on this?

@jmichalek132
Contributor

I reached out to the prometheus-dev channel to ask whether it makes sense to delay the implementation of this. The answer was yes, so for now I won't work on this until it's clear whether the spec changes or not.
https://cloud-native.slack.com/archives/C01AUBA4PFE/p1691579373854289?thread_ts=1691516888.962179&cid=C01AUBA4PFE


@jmichalek132
Contributor

So after chatting with @gouthamve at PromCon, he suggested I should just choose a simple option for implementing this and put it behind a config option. This is what I came up with: #27565. I would like some feedback on it before I add tests. Basically, the same way we batch the metrics into a number of requests that are each below the max size of a single request, we do the same for metadata and send it as a separate set of requests.

@github-actions github-actions bot removed the Stale label Oct 12, 2023
jpkrohling pushed a commit that referenced this issue Nov 22, 2023
…ption to send metadata (#27565)

**Description:**

This PR adds an option to send metric metadata to a Prometheus-compatible
backend (disabled by default). The metadata contains information such as the
metric description, type, unit, and name.

**Link to tracking Issue:** #13849

**Testing:**

Tested in our testing environment with a locally built image.

---------

Co-authored-by: Antoine Toulme <[email protected]>
Co-authored-by: Anthony Mirabella <[email protected]>
@jmichalek132
Contributor

With #27565 merged, there is a new feature (disabled by default) which can be enabled by setting send_metadata to true. When enabled, for every request containing metrics, an extra request containing the metadata is sent. This should be available in the next release of the otel contrib collector.

This will change in the future with Prometheus remote write 1.1.
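
For illustration, a minimal sketch of what enabling this could look like, reusing the prometheusremotewrite exporter config from the issue description (the endpoint and auth values are placeholders from that example):

exporters:
  prometheusremotewrite:
    endpoint: <some_amazon_managed_prometheus_workspace>/api/v1/remote_write
    auth:
      authenticator: sigv4auth
    resource_to_telemetry_conversion:
      enabled: true
    # Option added by #27565: also send metric metadata (name, type, unit,
    # description) as separate remote-write requests. Disabled by default.
    send_metadata: true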

RoryCrispin pushed a commit to ClickHouse/opentelemetry-collector-contrib that referenced this issue Nov 24, 2023
…ption to send metadata (open-telemetry#27565)

@jpkrohling
Member

Thanks, I believe this should be closed then, right?
