
Conditions with dots in label name does not work as expected #1284

Closed
barkbay opened this issue Sep 23, 2022 · 8 comments
Labels: bug (Something isn't working)

barkbay (Contributor) commented Sep 23, 2022

For confirmed bugs, please report:

  • Version: 8.4.1
  • Operating System: Agent in GKE + Stack on Elastic Cloud

Autodiscover conditions do not seem to work if there is a / in a K8s Pod's label name used in the condition.

In my use case I want to collect Prometheus metrics from Pods with the app.kubernetes.io/name: global-controllers label:

k get pod -l app.kubernetes.io/name=global-controllers -A -o wide
NAMESPACE        NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE                                                  NOMINATED NODE   READINESS GATES
elastic-system   global-controllers-fd45757dd-xtrmr   1/1     Running   0          72m   10.20.49.18   gke-michael-dev-cluster--default-pool-d3f94c80-tqlr   <none>           <none>

Here is the standalone config I'm using:

apiVersion: v1
kind: ConfigMap
metadata:
  name: elastic-agent-cluster-metrics-cm
  labels:
    k8s-app: elastic-agent-cluster-metrics
data:
  elastic-agent-cm.yml: |-
    outputs:
      metrics:
        type: elasticsearch
        hosts:
          - >-
            ${ELASTICSEARCH_METRICS_HOST}
        username: ${ELASTICSEARCH_METRICS_USERNAME}
        password: ${ELASTICSEARCH_METRICS_PASSWORD}
    providers:
      kubernetes_leaderelection:
        enabled: true
      kubernetes:
        scope: cluster
        resources:
          pod:
            enabled: true
    inputs:
      - id: prometheus/metrics-prometheus-18fc9e8d-7243-449f-8c54-9407a1972014
        name: prometheus-1
        revision: 1
        type: prometheus/metrics
        use_output: metrics
        meta:
          package:
            name: prometheus
            version: 0.13.0
        data_stream:
          namespace: default
        streams:
          - id: >-
              prometheus/metrics-prometheus.collector-18fc9e8d-7243-449f-8c54-9407a1972014
            data_stream:
              dataset: prometheus.collector
              type: metrics
            metricsets:
              - collector
            hosts:
              - '${kubernetes.pod.ip}:9090'
            metrics_filters.exclude: null
            metrics_filters.include: null
            metrics_path: /metrics
            period: 10s
            condition: ${kubernetes.labels.app.kubernetes.io/name} == 'global-controllers'
            rate_counters: true
            use_types: true

If I use a label name without a / (for example app: foo), and I update the condition to condition: ${kubernetes.labels.app} == 'foo', then Agent collects the metrics as expected.

barkbay added the bug label on Sep 23, 2022
cmacknz (Member) commented Sep 23, 2022

CC @elastic/obs-cloudnative-monitoring

gsantoro self-assigned this on Sep 23, 2022
gsantoro (Contributor) commented

Hello @barkbay,
I have just tested this behaviour and managed to collect logs from Prometheus using the condition ${kubernetes.annotations.prometheus.io/scrape} == 'true' in a similar environment (Agent in GKE + Stack on Elastic Cloud) with the same collector. The only difference is that my Elastic Agent was running in Fleet-managed mode.

Have you tried with a different condition (still with /)?
You should be able to try with ${kubernetes.labels.app.kubernetes.io/name} == 'kube-state-metrics' (if you have kube-state-metrics running).

barkbay (Contributor, Author) commented Sep 26, 2022

It's really odd:

  1. If I try with a different but similar label, it still fails to collect the metrics ❌
$ k label pod/my-pod -n elastic-system app.kubernetes.io/foo=foo
pod/my-pod labeled

with

condition: ${kubernetes.labels.app.kubernetes.io/foo} == 'foo'
  2. If I use a simpler one, but still with a / inside, it works again ✅
k label pod/my-pod -n elastic-system bar/foo=test

with

condition: ${kubernetes.labels.bar/foo} == 'test'
  3. Going back to condition: ${kubernetes.labels.app.kubernetes.io/foo} == 'foo', it fails again to collect the metrics ❌


So, yes, it does not seem to be related to the /; something else prevents Agent from discovering the Pod.

I would really appreciate it if you could try to reproduce this on your end in standalone mode.
Thanks

MichaelKatsoulis self-assigned this on Sep 26, 2022
gsantoro removed their assignment on Sep 26, 2022
MichaelKatsoulis (Contributor) commented

@barkbay can you try and replace the condition with condition: ${kubernetes.labels.app_kubernetes_io/name} == 'global-controllers' and see if it works?

MichaelKatsoulis (Contributor) commented

The reason is that labels are dedotted by default, meaning a label containing a dot (.), like app.kubernetes.io/name, is changed to app_kubernetes_io/name so that it is not stored as a nested object in Elasticsearch. The condition check only takes place after this dedotting.

The problem is that this is not documented, and annotations do not follow the same approach, as @gsantoro mentioned in #1284 (comment).
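
For illustration, here is the relevant stream from the config above with the condition rewritten against the dedotted label name (a minimal sketch; only the condition line differs from the original stream, and a few fields are omitted for brevity):

    streams:
      - id: >-
          prometheus/metrics-prometheus.collector-18fc9e8d-7243-449f-8c54-9407a1972014
        data_stream:
          dataset: prometheus.collector
          type: metrics
        metricsets:
          - collector
        hosts:
          - '${kubernetes.pod.ip}:9090'
        metrics_path: /metrics
        period: 10s
        # Dots in the label name are replaced with underscores before the
        # condition is evaluated, so match on app_kubernetes_io/name:
        condition: ${kubernetes.labels.app_kubernetes_io/name} == 'global-controllers'
        rate_counters: true
        use_types: true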

barkbay changed the title from "Conditions with slash character in label name does not seem to work as expected" to "Conditions with dots in label name does not work as expected" on Sep 27, 2022
barkbay (Contributor, Author) commented Sep 27, 2022

Thanks for your help. I moved to an annotation in the meantime, and everything is now working as expected (using dots). I'm not sure I'll have the time to test replacing . with _ in labels, but your explanation makes sense.

Feel free to close this issue or keep it open as a way to track that this behaviour should be reflected in the documentation.

Thanks again 🙇

MichaelKatsoulis (Contributor) commented Sep 28, 2022

You are welcome. I will keep this open because, apart from the missing documentation around this, we would also like the behaviour between labels and annotations to be the same.

I believe it makes more sense for users to write conditions against the labels/annotations exactly as they added them on the pod/resource, instead of against the dedotted form in which they are stored in Elasticsearch.

@elastic/obs-cloudnative-monitoring, @gizas
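
As a rough sketch of that desired behaviour (not what 8.4.1 does today), a condition written with the label name exactly as it appears on the Pod would match, the same way the annotation condition from the earlier comment already does:

    # Label, written exactly as set on the Pod (desired behaviour):
    condition: ${kubernetes.labels.app.kubernetes.io/name} == 'global-controllers'

    # Annotation, which already matches as written today:
    condition: ${kubernetes.annotations.prometheus.io/scrape} == 'true'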

MichaelKatsoulis (Contributor) commented

Fixed by #1398 in elastic-agent and elastic/beats#33240 in beats.
