
Unable to set X-Scope-OrgID dynamically with headerssetterextension to send metrics to Grafana Mimir #27901

Closed
jriguera opened this issue Oct 22, 2023 · 15 comments

@jriguera
Contributor

jriguera commented Oct 22, 2023

Component(s)

extension/headerssetter

What happened?

Description

We are trying to use the collector as a K8S daemonset to discover pods with annotations
and send metrics to Grafana Mimir setting the tenant from the namespace or from another
K8S annotation.

We are deploying the open-telemetry/opentelemetry-collector with Helm as a daemonset
(see configuration below).

We are deploying podinfo as a workload with the following annotations:

podAnnotations: {
  "o11y-monitoring/scrape": "true",
  "o11y-monitoring/port": "9898",
  "o11y-monitoring/path": "/metrics"
}

Steps to Reproduce

Given this collector configuration:

extensions:
  health_check: {}
  memory_ballast: {}
  k8s_observer:
    auth_type: serviceAccount
    node: ${env:K8S_NODE_NAME}
    observe_pods: true
    observe_nodes: true
  headers_setter:
    headers:
      # Trying to get it from resource attributes
      - action: upsert
        key: X-Scope-OrgID
        from_context: o11y_tenant
      # Trying to get it from processor.resource.attributes
      - action: upsert
        key: X-Scope-OrgID0
        from_context: tenant0
      # Trying to get it from processor.transform.metrics_statements [context.resource]
      - action: upsert
        key: X-Scope-OrgID1
        from_context: tenant1
      # Trying to get it from processor.transform.metrics_statements [context.datapoint]
      - action: upsert
        key: X-Scope-OrgID2
        from_context: tenant2
      # Fixed value
      - action: upsert
        key: X-Scope-OrgID3
        value: tenant3

receivers:
  jaeger: null
  prometheus: null
  zipkin: null
  otlp: null
  receiver_creator/o11y:
    # Name of the extensions to watch for endpoints to start and stop.
    watch_observers: [k8s_observer]
    receivers:
      prometheus_simple:
        # Configure prometheus scraping if standard prometheus annotations are set on the pod.
        rule: type == "pod" && annotations["o11y-monitoring/scrape"] == "true"
        config:
          metrics_path: '`"o11y-monitoring/path" in annotations ? annotations["o11y-monitoring/path"] : "/metrics"`'
          endpoint: '`endpoint`:`"o11y-monitoring/port" in annotations ? annotations["o11y-monitoring/port"] : 8080`'
        resource_attributes:
          o11y_tenant: '`"o11y-monitoring/tenant" in annotations ? annotations["o11y-monitoring/tenant"] : namespace`'
          o11y_namespace: '`namespace`'

processors:
  # Not needed
  k8sattributes:
    auth_type: "serviceAccount"
    passthrough: false
    filter:
      node_from_env_var: K8S_NODE_NAME
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.pod.start_time
    pod_association:
    - sources:
      - from: resource_attribute
        name: k8s.pod.uid
    - sources:
      - from: resource_attribute
        name: k8s.pod.name
      - from: resource_attribute
        name: k8s.namespace.name

  resource:
    attributes:
    - action: insert
      key: tenant0
      from_attribute: k8s.namespace.name

  transform:
    error_mode: propagate
    metric_statements:
    - context: resource
      statements:
      - set(attributes["tenant1"], attributes["k8s.namespace.name"])
    - context: datapoint
      statements:
      - set(attributes["tenant2"], resource.attributes["k8s.namespace.name"])


exporters:
  prometheusremotewrite/mimir:
    endpoint: "http://mockserver:1080"
    resource_to_telemetry_conversion:
      enabled: true       # Convert resource attributes to metric labels
    auth:
      authenticator: headers_setter

  otlphttp/mimir:
    endpoint: "http://mockserver:1080"
    timeout: 30s
    tls:
      insecure_skip_verify: true

  debug:
    verbosity: detailed

service:
  pipelines:
    logs: null
    traces: null
    metrics:
      receivers: [receiver_creator/o11y]
      processors: [k8sattributes, resource, transform]
      exporters: [debug, prometheusremotewrite/mimir]
  extensions: 
  - k8s_observer
  - health_check
  - memory_ballast
  - headers_setter

Then deploy a pod (podinfo) in Kubernetes with the defined annotations and wait for the metrics to flow.
Metrics are successfully sent to the prometheusremotewrite/mimir endpoint (the otlphttp/mimir exporter also works),
but the tenant header (X-Scope-OrgID) expected by Grafana Mimir is not set in the HTTP request.

Expected Result

The header should be set dynamically based on a resource attribute.
As described in the README of the Headers Setter extension, the intended use case
is to enable multi-tenancy for observability backends such as Tempo, Mimir, and Loki.

The expectation is to see an X-Scope-OrgID header with the value of the K8S namespace.

Actual Result

Only the header X-Scope-OrgID3 is set, from the static value tenant3.
The other X-Scope-OrgID* headers are not set dynamically.

     {"kind": "exporter", "data_type": "metrics", "name": "debug"}
 Resource SchemaURL:
 Resource attributes:
      -> service.name: Str(prometheus_simple/10.89.5.210:9898)
      -> net.host.name: Str(10.89.5.210)
      -> service.instance.id: Str(10.89.5.210:9898)
      -> net.host.port: Str(9898)
      -> http.scheme: Str(http)
      -> o11y_namespace: Str(o11y-dev-innovation-podinfo)
      -> k8s.pod.name: Str(o11y-dev-podinfo-5f88d6c5f7-gx8bt)
      -> k8s.pod.uid: Str(9bbe733e-1e0c-47f6-80b2-c640913176c2)
      -> k8s.namespace.name: Str(o11y-dev-innovation-podinfo)
      -> o11y_tenant: Str(o11y-dev-innovation-podinfo)
      -> k8s.pod.start_time: Str(2023-10-20 12:55:21 +0000 UTC)
      -> k8s.node.name: Str(gke-ee-k8s-nap-e2-standard-8-1pwrsmx6-3627fcd0-nhb3)
      -> tenant0: Str(o11y-dev-innovation-podinfo)
      -> tenant1: Str(o11y-dev-innovation-podinfo)
 ScopeMetrics #0
 ScopeMetrics SchemaURL:
 InstrumentationScope otelcol/prometheusreceiver 0.87.0
 Metric #0
 Descriptor:
      -> Name: process_start_time_seconds
      -> Description: Start time of the process since unix epoch in seconds.
      -> Unit:
      -> DataType: Gauge
 NumberDataPoints #0
 Data point attributes:
      -> tenant2: Str(o11y-dev-innovation-podinfo)
 StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
 Timestamp: 2023-10-22 21:10:25.281 +0000 UTC
 Value: 1697806521.420000
 Metric #1
 Descriptor:
      -> Name: promhttp_metric_handler_requests_in_flight
      -> Description: Current number of scrapes being served.
      -> Unit:
      -> DataType: Gauge
 NumberDataPoints #0
 Data point attributes:
      -> tenant2: Str(o11y-dev-innovation-podinfo)

Log output from mockserver (see the headers section):

 2023-10-22 21:14:45 5.14.0 INFO 1080 received request:
   {
     "method" : "POST",
     "path" : "/",
     "headers" : {
       "content-length" : [ "6736" ],
       "content-encoding" : [ "snappy" ],
       "X-Scope-Orgid3" : [ "tenant3" ],
       "X-Scope-Orgid2" : [ "" ],
       "X-Scope-Orgid1" : [ "" ],
       "X-Scope-Orgid0" : [ "" ],
       "X-Scope-Orgid" : [ "" ],
       "X-Prometheus-Remote-Write-Version" : [ "0.1.0" ],
       "User-Agent" : [ "opentelemetry-collector-contrib/0.87.0" ],
       "Host" : [ "mockserver:1080" ],
       "Content-Type" : [ "application/x-protobuf" ],
       "Content-Encoding" : [ "snappy" ],
       "Accept-Encoding" : [ "gzip" ]
     },
     "keepAlive" : true,
     "secure" : false,
     "localAddress" : "10.89.5.61:1080",
     "remoteAddress" : "10.89.5.64",
     "body" : "gfEE8H0KuQYKIwoIX19uYW1lX18SF2dvX ..."
   }

Collector version

Image: otel/opentelemetry-collector-contrib:0.87.0

Environment information

Environment

Deployed in K8S with the official helm chart from https://open-telemetry.github.io/opentelemetry-helm-charts

OpenTelemetry Collector configuration

Helm configuration:

mode: daemonset

extraEnvs:
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName

ports:
  otlp:
    enabled: false
  otlp-http:
    enabled: false
  jaeger-compact:
    enabled: false
  jaeger-thrift:
    enabled: false
  jaeger-grpc:
    enabled: false
  zipkin:
    enabled: false

clusterRole:
  create: true
  rules: 
  - apiGroups: [""]
    resources: ["nodes", "pods", "namespaces"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]

config:
  extensions:
    health_check: {}
    memory_ballast: {}
    k8s_observer:
      auth_type: serviceAccount
      node: ${env:K8S_NODE_NAME}
      observe_pods: true
      observe_nodes: true
    headers_setter:
      headers:
        # Trying to get it from resource attributes
        - action: upsert
          key: X-Scope-OrgID
          from_context: o11y_tenant
        # Trying to get it from processor.resource.attributes
        - action: upsert
          key: X-Scope-OrgID0
          from_context: tenant0
        # Trying to get it from processor.transform.metrics_statements [context.resource]
        - action: upsert
          key: X-Scope-OrgID1
          from_context: tenant1
        # Trying to get it from processor.transform.metrics_statements [context.datapoint]
        - action: upsert
          key: X-Scope-OrgID2
          from_context: tenant2
        # Fixed value
        - action: upsert
          key: X-Scope-OrgID3
          value: tenant3
  
  receivers:
    jaeger: null
    prometheus: null
    zipkin: null
    otlp: null
    receiver_creator/o11y:
      # Name of the extensions to watch for endpoints to start and stop.
      watch_observers: [k8s_observer]
      receivers:
        prometheus_simple:
          # Configure prometheus scraping if standard prometheus annotations are set on the pod.
          rule: type == "pod" && annotations["o11y-monitoring/scrape"] == "true"
          config:
            metrics_path: '`"o11y-monitoring/path" in annotations ? annotations["o11y-monitoring/path"] : "/metrics"`'
            endpoint: '`endpoint`:`"o11y-monitoring/port" in annotations ? annotations["o11y-monitoring/port"] : 8080`'
          resource_attributes:
            o11y_tenant: '`"o11y-monitoring/tenant" in annotations ? annotations["o11y-monitoring/tenant"] : namespace`'
            o11y_namespace: '`namespace`'
  
  processors:
    # Not needed
    k8sattributes:
      auth_type: "serviceAccount"
      passthrough: false
      filter:
        node_from_env_var: K8S_NODE_NAME
      extract:
        metadata:
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.deployment.name
          - k8s.namespace.name
          - k8s.node.name
          - k8s.pod.start_time
      pod_association:
      - sources:
        - from: resource_attribute
          name: k8s.pod.uid
      - sources:
        - from: resource_attribute
          name: k8s.pod.name
        - from: resource_attribute
          name: k8s.namespace.name
  
    resource:
      attributes:
      - action: insert
        key: tenant0
        from_attribute: k8s.namespace.name
  
    transform:
      error_mode: propagate
      metric_statements:
      - context: resource
        statements:
        - set(attributes["tenant1"], attributes["k8s.namespace.name"])
      - context: datapoint
        statements:
        - set(attributes["tenant2"], resource.attributes["k8s.namespace.name"])
  
  
  exporters:
    prometheusremotewrite/mimir:
      endpoint: "http://mockserver:1080"
      resource_to_telemetry_conversion:
        enabled: true       # Convert resource attributes to metric labels
      auth:
        authenticator: headers_setter
  
    otlphttp/mimir:
      endpoint: "http://mockserver:1080"
      timeout: 30s
      tls:
        insecure_skip_verify: true
  
    debug:
      verbosity: detailed
  
  service:
    pipelines:
      logs: null
      traces: null
      metrics:
        receivers: [receiver_creator/o11y]
        processors: [k8sattributes, resource, transform]
        exporters: [debug, prometheusremotewrite/mimir]
    extensions: 
    - k8s_observer
    - health_check
    - memory_ballast
    - headers_setter

Log output

No response

Additional context

No response

@jriguera added the bug and needs triage labels on Oct 22, 2023
@github-actions
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@jpkrohling
Member

The main problem is that extensions don't get access to the telemetry data, and the headers setter extension was mostly designed for tenancy information coming from the connection (usually HTTP headers) rather than from the telemetry itself.

That said, I believe we have an issue tracking this feature request, although there's no ETA for implementing it. I can commit to reviewing a PR if you decide to contribute a fix for that.
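
For reference, the connection-based pattern the extension was designed around looks roughly like this (a sketch only; the endpoint and header names are placeholders): an upstream agent forwards the tenant as a request header and the receiver keeps the incoming metadata in the client context.

extensions:
  headers_setter:
    headers:
      - action: upsert
        key: X-Scope-OrgID
        # read from the incoming request metadata (client context),
        # not from resource attributes
        from_context: X-Scope-OrgID

receivers:
  otlp:
    protocols:
      http:
        # keep incoming HTTP headers in the client context
        include_metadata: true

exporters:
  otlphttp/mimir:
    endpoint: http://mimir:8080/otlp
    auth:
      authenticator: headers_setter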

@jpkrohling added the enhancement label and removed the bug and needs triage labels on Oct 23, 2023
@jriguera
Contributor Author

Thanks for the answer @jpkrohling !

First of all, I am quite new to OTel, so maybe I am missing something. I have a couple of questions:

  1. Regarding how to solve my initial requirement: I've seen configurations from other people -who wanted to achieve something similar- defining a finite number of (Mimir) exporters, one per tenant, and using a routing processor to select which exporter to send metrics to (a sketch of that pattern follows below). In our case we do not want to hardcode tenants (they are "dynamic"). Are you aware of some component or configuration chain that solves this (with OTel agent configuration only, not involving external software)?

  2. Regarding a PR, I would need to have a look at the code first. But according to what you said, an extension is not the way to go for this use case. It seems to me it needs to be fixed at the exporters, in the same way the Loki exporter does it with the loki.tenant hint. So prometheusremotewrite and/or otlphttp would need to be extended to look for a tenant resource attribute (probably not hardcoded) whose value would be used for the X-Scope-OrgID header. Does that make sense, or is there a better way to implement it?

Thanks!
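
For context, the per-tenant-exporter pattern mentioned in the first question looks roughly like this (a sketch only; the tenant names and endpoint are placeholders):

processors:
  routing:
    # route on a resource attribute instead of the request context
    attribute_source: resource
    from_attribute: k8s.namespace.name
    table:
      - value: tenant-a
        exporters: [prometheusremotewrite/tenant-a]
      - value: tenant-b
        exporters: [prometheusremotewrite/tenant-b]

exporters:
  prometheusremotewrite/tenant-a:
    endpoint: http://mimir:9009/api/v1/push
    headers:
      X-Scope-OrgID: tenant-a
  prometheusremotewrite/tenant-b:
    endpoint: http://mimir:9009/api/v1/push
    headers:
      X-Scope-OrgID: tenant-b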

@jpkrohling
Member

jpkrohling commented Oct 24, 2023

  1. the routing processor would work fine if you have a pre-defined list of tenants, but the original problem remains: you'd need something that would bring information from the pipeline (resource attributes) into the connection (context).

  2. Right now, the only solution I can think of is a new processor, which would take entries from the resource attributes and place them into a context, splitting the incoming batches into a batch per tenant (and thus, one context per batch). I might be wrong, but I can't think of an existing processor that would be suitable for this.
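
As a rough sketch of that idea (the processor name and its options below are hypothetical; nothing like this exists in contrib today):

processors:
  # hypothetical processor: copies a resource attribute into the client
  # metadata (context) and splits incoming batches so that each outgoing
  # batch carries exactly one tenant value
  tenantcontext:
    from_attribute: k8s.namespace.name
    metadata_key: X-Scope-OrgID

The headers_setter extension could then read that metadata key on the exporter side with from_context: X-Scope-OrgID.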

@jriguera
Contributor Author

For us, the first point is not a solution. Maybe it is fine for other people, but we do not want to deal with "static" tenants (and we can't, anyway).

Regarding the second point, I am wondering if it is correct (or elegant) for a processor to use the context in that way. The context seems to hold information about the HTTP connection, so does it make sense to add data there that was never part of it, just because there is an extension which can use it later? Maybe one of these options is better:

  1. Implement a specific Mimir exporter, using logic similar to what you described for the processor (which I think is what the Loki exporter is doing).
  2. Add some logic to otlphttp (or prometheusremotewrite) so they can set headers with values taken from resource attributes.

Of course, the advantage of the processor is that it can be used for any kind of headers and exporters, but I am not sure it would be the correct way...

@jpkrohling
Member

The context isn't necessarily the network context, but the "execution context". At some point, we need to split the batches per tenant, and when doing so, it's OK to mark the context as belonging to that specific tenant.

About your alternatives, I believe the first wouldn't be desirable, as we want users of other distributions to be able to send data to Mimir with the regular OTLP exporter. Having a specific exporter for Mimir doesn't prevent that, but sends the wrong signal, IMO. The second one could work as well, but it still requires "something" to split the batches into connections, as each connection going to Mimir should have its own X-Scope-OrgID.

@jriguera
Contributor Author

Ok, I see your point and I agree with you.

I have been looking at all the available components to see the different options, and I have found these:

  1. There is a batchprocessor which is able to create batches grouped by metadata keys. I would need to investigate further what a metadata key is, but it seems the main part of the job can be done with this processor (see the sketch after this list).
  2. There is also a groupbyattrsprocessor. The docs say: "It is recommended to use the groupbyattrs processor together with the batch processor, as a consecutive step, as this will reduce the fragmentation of data (by grouping records together under matching Resource/Instrumentation Library)."
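
A minimal sketch of that batching option, assuming an earlier step has already copied the tenant into the client metadata under the key X-Scope-OrgID (metadata keys refer to entries in the client context, not to resource attributes):

processors:
  batch:
    # create a separate batcher (and therefore separate exports)
    # per distinct value of this client-metadata key
    metadata_keys:
      - X-Scope-OrgID
    # limit on the number of distinct metadata-key combinations
    metadata_cardinality_limit: 100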

With those two pieces, plus a third one which takes the resource attribute used for batching and grouping, we get what is needed, right? We would need to create a new processor, attributetocontextprocessor, which moves the attribute used for batching/grouping into the context so that it becomes available to the headerssetterextension. Another option would be to extend the transformprocessor with some action to perform this task.

So, the pipeline would look something like this (notice the new attributetocontextprocessor):

  service:
    pipelines:
      metrics:
        receivers: [receiver_creator/o11y]
        processors: [memory_limiter, batch, groupbyattrs, attributetocontext]
        exporters: [debug, prometheusremotewrite/mimir]
    extensions: 
    - k8s_observer
    - health_check
    - memory_ballast
    - headers_setter

Do you think this makes sense, and will it work? I cannot promise that I will have time for a PR right now, but maybe in the near future I can do it, or somebody else can take this idea.

@jpkrohling
Member

jpkrohling commented Oct 26, 2023

I believe it would indeed work, thank you for looking into this! If you decide to work on this, please follow these guidelines for adding a new component: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/CONTRIBUTING.md#adding-new-components

@jriguera
Contributor Author

Hi @jpkrohling. I have managed to create a prototype and it is working. I will work on it a bit more (to do the same for traces and logs) and follow the guide to add a new component. Most likely I will get in contact with you about the process, but probably not this week.
I think this issue can be closed.
Thanks for your support!

@brunokoeferli

@jriguera Have you developed your prototype further?

I have the same requirement to set the X-Scope-OrgID dynamically on metrics (and traces/logs in the future).

@jriguera
Contributor Author

Yes, I did; we have it working in production, but I have had no time to compile it against the latest version (it is working with version 0.91.0).
There are instructions in the repo about how to build and use it: https://github.com/springernature/o11y-otel-contextprocessor

@jriguera
Contributor Author

I can provide a working configuration if you need it. We assign the tenant value from the namespace (that is the default) or, if there is a specific annotation on the pods, it takes the tenant value from there. It works for metrics, traces, and logs.

@brunokoeferli

Thank you very much for your quick reply. The contextprocessor's README should be fine for the moment.

@dfi470

dfi470 commented Feb 29, 2024

Yes, I did; we have it working in production, but I have had no time to compile it against the latest version (it is working with version 0.91.0). There are instructions in the repo about how to build and use it: https://github.com/springernature/o11y-otel-contextprocessor

Hi @jriguera - glad to see that you have added the "contextprocessor". Is this now part of the latest otel version 0.95.0 too?

@jriguera
Contributor Author

jriguera commented Mar 7, 2024

No, this is not part of the standard OTel Collector distribution. You have to compile it yourself (instructions are in the repo) and get the binary.

I do not think the processor itself can be part of the standard distribution; I think the functionality provided by contextprocessor should instead be exposed as OTTL functions, so users can do this with the transform processor. Unfortunately I do not have time to implement that yet.
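
Something like the following (purely hypothetical; no such OTTL function exists today) would then cover the use case:

processors:
  transform:
    metric_statements:
      - context: resource
        statements:
          # hypothetical OTTL function that copies a value into the client
          # metadata so the headers_setter extension can read it later
          - set_client_metadata("X-Scope-OrgID", attributes["k8s.namespace.name"])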
