
k8sprocessor: cannot parse config * '' has invalid keys: pod_association #2719

Closed · rockb1017 opened this issue Mar 16, 2021 · 3 comments
Labels: bug (Something isn't working)

@rockb1017 (Contributor)

Describe the bug
The agent fails to parse the configuration for the k8s processor.

Steps to reproduce
Run the collector with the config below.

What did you expect to see?
The collector to start and apply the configuration.

What did you see instead?
The pod fails with this error message:

2021/03/12 03:07:05 application run finished with error: cannot load configuration: error reading processors configuration for k8s_tagger: 1 error(s) decoding:
* '' has invalid keys: pod_association

What version did you use?
0.22.0

What config did you use?
Config:

exporters:
  logging:
    loglevel: debug
    sampling_initial: 5
    sampling_thereafter: 200
  splunk_hec:
    disable_compression: true
    endpoint: https://172.31.18.227:8088/services/collector
    index: k8s_log
    insecure_skip_verify: true
    max_connections: 2000
    source: otel
    sourcetype: otel
    timeout: 10s
    token: XXX
extensions:
  health_check: {}
processors:
  batch: {}
  k8s_tagger:
    auth_type: kubeConfig
    extract:
      annotations:
      - key: splunk.com/index
      labels:
      - key: hello
      metadata:
      - podName
      - podUID
      - deployment
      - cluster
      - namespace
      - node
      - startTime
    filter:
      node_from_env_var: KUBE_NODE_NAME
    passthrough: false
    pod_association:
    - from: resource_attribute
      name: k8s.pod.uid
  memory_limiter:
    ballast_size_mib: 204
    check_interval: 5s
    limit_mib: 409
    spike_limit_mib: 128
receivers:
  filelog:
    exclude:
    - /var/log/pods/default_otel-opentelemetry-collector-agent-*_*/opentelemetry-collector/*.log
    include:
    - /var/log/pods/*/*/*.log
    include_file_name: false
    include_file_path: true
    operators:
    - id: parser-docker
      output: extract_metadata_from_filepath
      timestamp:
        layout: '%Y-%m-%dT%H:%M:%S.%LZ'
        parse_from: time
      type: json_parser
    - id: extract_metadata_from_filepath
      parse_from: $$labels.file_path
      regex: ^\/var\/log\/pods\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[^\/]+)\/(?P<container_name>[^\._]+)\/(?P<run_id>\d+)\.log$
      type: regex_parser
    - attributes:
        k8s.container.name: EXPR($.container_name)
        k8s.namespace.name: EXPR($.namespace)
        k8s.pod.name: EXPR($.pod_name)
        k8s.pod.uid: EXPR($.uid)
        run_id: EXPR($.run_id)
        stream: EXPR($.stream)
      resource:
        k8s.pod.uid: EXPR($.uid)
      type: metadata
    - id: clean-up-log-record
      ops:
      - remove: logtag
      - remove: stream
      - remove: container_name
      - remove: namespace
      - remove: pod_name
      - remove: run_id
      - remove: uid
      type: restructure
    start_at: beginning
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
  otlp:
    protocols:
      grpc: null
      http: null
  prometheus:
    config:
      scrape_configs:
      - job_name: opentelemetry-collector
        scrape_interval: 10s
        static_configs:
        - targets:
          - ${MY_POD_IP}:8888
  zipkin:
    endpoint: 0.0.0.0:9411
service:
  extensions:
  - health_check
  pipelines:
    logs:
      exporters:
      - logging
      - splunk_hec
      processors:
      - batch
      - k8s_tagger
      receivers:
      - filelog
    metrics:
      exporters:
      - logging
      processors:
      - memory_limiter
      - batch
      receivers:
      - prometheus
    traces:
      exporters:
      - logging
      processors:
      - memory_limiter
      - batch
      receivers:
      - jaeger
      - zipkin
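
(The collector is started with this file via its --config flag; the binary name and path below are illustrative and depend on the image/build being used.)

otelcontribcol --config /etc/otel/config.yaml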

Environment
docker
compiler: go version go1.15.5 darwin/amd64

rockb1017 added the bug label on Mar 16, 2021
@pmatyjasek-sumo (Contributor)

Hey @rockb1017, I've checked your scenario on versions 0.22.0 and 0.21.0. Version 0.22.0 worked properly; on 0.21.0 I got this error:

2021-03-17T08:31:43.695Z	info	service/service.go:411	Starting OpenTelemetry Contrib Collector...	{"Version": "v0.21.0", "GitHash": "bbb76e92", "NumCPU": 8}
2021-03-17T08:31:43.695Z	info	service/service.go:255	Setting up own telemetry...
2021-03-17T08:31:43.697Z	info	service/telemetry.go:102	Serving Prometheus metrics	{"address": ":8888", "level": 0, "service.instance.id": "bbb81539-9626-4632-9543-398e9dfce7d7"}
2021-03-17T08:31:43.697Z	info	service/service.go:292	Loading configuration...
Error: cannot load configuration: error reading processors configuration for k8s_tagger: 1 error(s) decoding:

* '' has invalid keys: pod_association
2021/03/17 08:31:43 application run finished with error: cannot load configuration: error reading processors configuration for k8s_tagger: 1 error(s) decoding:

* '' has invalid keys: pod_association

This is because the pod_association config key was introduced in 0.22.0. Could you please double-check the collector version in your container?
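
For reference, this is the fragment of the posted config that only parses on 0.22.0 and later; on 0.21.0 the processor rejects pod_association as an unknown key:

processors:
  k8s_tagger:
    pod_association:
    - from: resource_attribute
      name: k8s.pod.uid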

@rockb1017 (Contributor, Author)

So it should work when I build the Docker image from the main branch, right? I'm building it with make docker-otelcontribcol.
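
(Roughly, building a local contrib image from main looks like the following; the resulting image name/tag comes from the repository's Makefile and may differ between versions:)

git clone https://github.com/open-telemetry/opentelemetry-collector-contrib.git
cd opentelemetry-collector-contrib
make docker-otelcontribcol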

@rockb1017 (Contributor, Author)

Yes, it works now. Thank you!
