
Naming difference for metrics received from the same node exporter in Prometheus and the OpenTelemetry Collector #22067

Closed
KelvinTanJF opened this issue May 18, 2023 · 4 comments
Labels
bug Something isn't working receiver/prometheus Prometheus receiver

Comments

@KelvinTanJF

Component(s)

receiver/prometheus

What happened?

Description

I have a node exporter collecting metrics, and I have tried to push the collected data to Splunk using both Prometheus and the OpenTelemetry Collector. I noticed that the metric names pushed to Splunk through Prometheus are the same as in the node exporter, but they differ when pushed through the OpenTelemetry Collector.

Steps to Reproduce

I configured a .yaml file for Prometheus and one for the OpenTelemetry Collector to scrape the same node exporter instance, then ran each binary with its configuration file (the collector start command is sketched below). Please refer to the OpenTelemetry Collector configuration below for details.
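
For reference, the collector was started roughly like this (a sketch; "otelcol-contrib" and "config.yaml" are placeholders for the actual binary and configuration file names):

./otelcol-contrib --config=config.yaml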

Expected Result

Metrics displayed in node exporter instance:
node_mountstats_nfs_operations_received_bytes_total{export="",mountaddr="",operation="",protocol=""} 2520

Metrics displayed in Splunk through the Prometheus server:
node_mountstats_nfs_operations_received_bytes_total

Metrics displayed in Splunk through the OpenTelemetry Collector:
node_mountstats_nfs_operations_received_bytes_total

Actual Result

Metrics displayed in node exporter instance:
node_mountstats_nfs_operations_received_bytes_total{export="",mountaddr="",operation="",protocol=""} 2520

Metrics displayed in Splunk through the Prometheus server:
node_mountstats_nfs_operations_received_bytes_total

Metrics displayed in Splunk through the OpenTelemetry Collector:
The metric is not collected when the filter specifies the name "node_mountstats_nfs_operations_received_bytes_total", but it is collected when the filter specifies "node_mountstats_nfs_operations_received_bytes". Both refer to the same metric; only the name differs (the _total suffix is dropped).

Collector version

v0.77.0

Environment information

Environment

OS: x86-64_linux_4.12_ImageSLES12SP5

OpenTelemetry Collector configuration

receivers:
  prometheus/mountstats:
    config:
      scrape_configs:
      - job_name: '<job_name>'
        scrape_interval: 5m
        static_configs:
        - targets: ['<target_instance>']

processors:
  batch:

  filter/mountstats:
    metrics:
      include:
        match_type: strict
        metric_names:
          - node_mountstats_nfs_operations_received_bytes

exporters:
  logging:
    verbosity: detailed

  file:
    path: ./logs_2

  splunk_hec/mountstats:
    token: "<Splunk HEC token>"
    endpoint: "http://<splunk_instance>:<Port>/services/collector"

service:
  telemetry:
    metrics:
      level: detailed
      address: 0.0.0.0:8886

  pipelines:
    metrics:
      receivers: [prometheus/mountstats]
      processors: [filter/mountstats]
      exporters: [logging, splunk_hec/mountstats, file]

Log output

No response

Additional context

Can I ask why there is a naming difference for metrics collected from the same node exporter?
Is there a way to check the correct name to use in the filter processor in the OpenTelemetry Collector so that I can filter the metrics I want?
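
In the meantime, a workaround I am considering is a regexp-based include rule that tolerates both spellings (a sketch, assuming the filter processor's regexp match type behaves as described in its README):

  filter/mountstats:
    metrics:
      include:
        match_type: regexp
        metric_names:
          - ^node_mountstats_nfs_operations_received_bytes(_total)?$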

Thank you very much.

Best Regards,
Kelvin Tan Jun Feng.

@KelvinTanJF KelvinTanJF added bug Something isn't working needs triage New item requiring triage labels May 18, 2023
@github-actions github-actions bot added the receiver/prometheus Prometheus receiver label May 18, 2023
@github-actions
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@atoulme atoulme removed the needs triage New item requiring triage label May 18, 2023
@swiatekm
Contributor

I believe this is due to full metrics normalization having been enabled in 0.76.0, see #21743 for reference.
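
If you are on an affected version, one possible workaround (a sketch, assuming your build still exposes the pkg.translator.prometheus.NormalizeName feature gate; the binary and file names are placeholders) is to disable the gate at startup:

./otelcol-contrib --config=config.yaml --feature-gates=-pkg.translator.prometheus.NormalizeName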

@KelvinTanJF
Author

Hi @swiatekm-sumo,

Thanks for the information provided; it is really useful.

Best Regards,
Kelvin Tan Jun Feng.

@dashpole
Contributor

The feature gate was reverted in the subsequent release. This should no longer be an issue.
