
Value of kafkametricsreceiver not correctly transformed on otlp/elastic #4432

Closed
realwoobinlee opened this issue Aug 3, 2021 · 5 comments
Labels
bug Something isn't working

Comments

realwoobinlee commented Aug 3, 2021

Describe the bug
Values from the Kafka metrics scrapers are not correctly transformed when using the otlp/elastic exporter, so the results of the logging exporter do not match what is shown in Kibana.
Example: logging exporter: Value: 486 => otlp/elastic exporter: "kafka.consumer_group.offset_sum": [0]

Steps to reproduce

  • Result: Logging Exporter
  Metric #8
  Descriptor:
       -> Name: kafka.consumer_group.offset_sum
       -> Description: 
       -> Unit: 
       -> DataType: Gauge
  NumberDataPoints #0
  Data point labels:
       -> group: some_group
       -> topic: order
  StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
  Timestamp: 2021-08-03 11:14:46.096965308 +0000 UTC
  Value: 486
  • Result: Kibana JSON (a query sketch for retrieving such a document follows this list)
  {
    "_index": "apm-7.13.3-metric-000001",
    "_type": "_doc",
    "_id": "fiC5C3sBERpgGFE8IEG9",
    "_version": 1,
    "_score": null,
    "fields": {
      "service.name": [
        "unknown"
      ],
      "processor.name": [
        "metric"
      ],
      "observer.version_major": [
        7
      ],
      "kafka.consumer_group.offset_sum": [
        0
      ],
      "observer.hostname": [
        "demoelk-apm-server-6848b5c789-gxsvd"
      ],
      "service.language.name": [
        "unknown"
      ],
      "kafka.consumer_group.lag_sum": [
        0
      ],
      "metricset.name": [
        "app"
      ],
      "event.ingested": [
        "2021-08-03T11:14:47.100Z"
      ],
      "labels.group": [
        ""
      ],
      "@timestamp": [
        "2021-08-03T11:14:46.096Z"
      ],
      "ecs.version": [
        "1.8.0"
      ],
      "observer.type": [
        "apm-server"
      ],
      "observer.version": [
        "7.13.3"
      ],
      "labels.topic": [
        "order"
      ],
      "processor.event": [
        "metric"
      ],
      "agent.name": [
        "otlp"
      ],
      "agent.version": [
        "unknown"
      ]
    },
    "sort": [
      1627989286096
    ]
  }
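
For reference, a document like the JSON above can be pulled directly from Elasticsearch to check the ingested value. A minimal sketch; the index name apm-7.13.3-metric-000001 is taken from the document above, while the port-forwarded Elasticsearch on localhost:9200 and the elastic/changeme credentials are assumptions:

    # Fetch one metric document containing kafka.consumer_group.offset_sum
    # and show only the fields relevant to the mismatch
    curl -s -u elastic:changeme \
      "http://localhost:9200/apm-7.13.3-metric-000001/_search?pretty" \
      -H 'Content-Type: application/json' \
      -d '{
            "size": 1,
            "query": { "exists": { "field": "kafka.consumer_group.offset_sum" } },
            "_source": ["kafka.consumer_group.offset_sum", "labels.group", "labels.topic", "@timestamp"]
          }'

The value stored in kafka.consumer_group.offset_sum should match the Value: 486 reported by the logging exporter, but here it comes back as 0.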

What did you expect to see?
The same value from both exporters (in the example above, 486).

What did you see instead?
Different values, probably due to a bug in handling metrics with an empty unit (Metric_Gauge*).

What version did you use?
Version: v0.31.0

What config did you use?

    receivers:
      otlp:
        protocols: 
          grpc:
          http:
      kafkametrics:
        brokers: demo-kafka-cp-kafka.demo.svc.cluster.local:9092
        protocol_version: 2.0.0
        scrapers:
          - brokers
          - topics
          - consumers
        collection_interval: 10s
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
      otlp/elastic:
        endpoint: "demoelk-apm-http.obs.svc.cluster.local:8200"
        insecure: true
        headers:
          Authorization: "Bearer somepassword"
      jaeger:
        endpoint: jaeger-collector.obs.svc.cluster.local:14250
        insecure: true
    extensions:
      health_check:
      pprof:
      zpages:
    service:
      extensions: [health_check, pprof, zpages]
      pipelines:
        metrics:
          receivers:
            - otlp
            - kafkametrics
          exporters:
            - logging
            - otlp/elastic
        traces:
          receivers: 
            - otlp
          processors: 
            - batch
          exporters: 
            - otlp/elastic
            - jaeger
        logs:
          receivers:
            - otlp
          exporters:
            - logging
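
For completeness, the collector was started against this configuration in the usual way. A sketch; the contrib distribution binary name and the config path /etc/otelcol/config.yaml are assumptions, not taken from the issue:

    # Run the OpenTelemetry Collector Contrib (v0.31.0) with the config above
    otelcontribcol --config /etc/otelcol/config.yaml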

Environment
OS: "Ubuntu 20.04" but with Kind (local k8s)

@realwoobinlee realwoobinlee added the bug Something isn't working label Aug 3, 2021
@github-actions github-actions bot added the Stale label Aug 11, 2021
@bogdandrutu bogdandrutu removed the Stale label Aug 11, 2021
@cyrille-leclerc
Member

Hello @realwoobinlee, can you please indicate which version of Elastic Observability you are using, by executing apm-server version? cc @axw
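
For example, from inside the APM Server pod; a sketch, assuming the obs namespace from the exporter endpoint and a demoelk-apm-server deployment matching the observer hostname above:

    # Print the APM Server version from the running pod
    kubectl exec -n obs deploy/demoelk-apm-server -- apm-server version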

@leewoobin789
Contributor

@cyrille-leclerc
The entire cluster is based on version 7.13.3.

@axw
Contributor

axw commented Aug 18, 2021

Thanks for the details @realwoobinlee / @leewoobin789.

I've created elastic/apm-server#5960, as the bug is most likely there. The collector is just forwarding OTLP on to Elastic APM; translation to Elasticsearch docs happens there. Let's close this and continue over in the new issue.

@cyrille-leclerc
Member

Hello @leewoobin789, this bug on metrics was fixed in Elastic 7.15; please upgrade your Elastic cluster to get metrics properly ingested.

Fixed in Elastic 7.15.0, see https://www.elastic.co/guide/en/apm/server/current/release-notes-7.15.html

@leewoobin789
Contributor

Hello @cyrille-leclerc, I already left a comment on the fixed issue: elastic/apm-server#6185.
Closing this issue. Thanks!

hex1848 pushed a commit to hex1848/opentelemetry-collector-contrib that referenced this issue Jun 2, 2022