k8sattributes is not working in EKS 1.26 #22036
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Anything in the collector logs? |
Nothing. k8sattributes launches, but there is nothing about it in the logs. |
Did this configuration work with a previous collector or EKS version? |
I don't know, I have only used this version, but in my opinion it may not work in EKS in general. I used your example and it's also not working, because you extract the pod name and namespace from the file path, so k8sattributes does nothing. |
|
You can remove the "move" operators from the filelog receiver and leave only k8sattributes, and you will see it.
|
By default, the k8sattributes processor identifies the Pod by looking at the IP of the remote that sent the data. This works if the data is sent directly from instrumentation, but if you want to use it in a different context (for example, a DaemonSet collecting logs), you need to tell the processor how to identify the Pod for a given resource. For the configuration you posted, you need:

pod_association:
  - sources:
      - from: resource_attribute
        name: k8s.pod.name
      - from: resource_attribute
        name: k8s.namespace.name

I think your current config is missing this. |
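For context, here is a minimal sketch of how pod_association and extract fit together in the k8sattributes processor; the metadata entries and the label key shown are illustrative, not taken from the reporter's config:

processors:
  k8sattributes:
    auth_type: serviceAccount
    # pod_association tells the processor how to find the Pod a resource belongs to
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.name
          - from: resource_attribute
            name: k8s.namespace.name
    # extract lists what to attach once the Pod has been identified
    extract:
      metadata:
        - k8s.deployment.name
        - k8s.namespace.name
      labels:
        - from: pod
          key: app.kubernetes.io/name   # example label key, replace with your own
          tag_name: app.name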
Okay, but how must I configure this for pod labels? In the example, it only adds the label part and everything is fine, but in my case it's not working. |
I'm not sure I follow what exactly you're seeing at this point. Can you post your current configuration? |
This is the current configuration.
As you can see, I added k8s.deployment.name in k8sattributes, but nothing appears in the log context. @swiatekm-sumo |
This:

pod_association:
  - sources:
      - from: resource_attribute
        name: k8s.pod.start_time
      - from: resource_attribute
        name: k8s.deployment.name

should instead be:

pod_association:
  - sources:
      - from: resource_attribute
        name: k8s.pod.name
      - from: resource_attribute
        name: k8s.namespace.name

To be clear, these sources shouldn't be the same as the attributes you have specified under extract. As an aside, you should be careful about using "global" receivers like k8sevents in a DaemonSet context. You're going to get the same events out of every collector Pod, whereas you only want them once per cluster. The same is true of the cluster receiver, and probably of the AWS CloudWatch receiver. |
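To illustrate that aside, a minimal sketch of keeping cluster-wide sources out of the DaemonSet: run them in a separate, single-replica collector with their own pipeline (the k8s_events receiver and logging exporter here are placeholders for whichever cluster-scoped components you use):

receivers:
  k8s_events: {}          # cluster-wide events; collect them once, not per node
exporters:
  logging: {}
service:
  pipelines:
    logs/cluster:
      receivers:
        - k8s_events
      exporters:
        - logging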
What is the difference between the two pod_association blocks for me? I need the deployment name and also labels, but it's not extracting metadata or labels from the pod. |
i |
|
Yes, but maybe you know why k8sattributes cannot extract the deployment name or other attributes. Maybe it conflicts with another processor. |
Are there any updates on why the processor cannot extract the pod label? @swiatekm-sumo |
I'm honestly a bit lost as to the current state of your setup @AndriySidliarskiy. Can you be clearer about:
|
@swiatekm-sumo |
I see the problem now: you have the identifying information in record attributes instead of resource attributes. They need to be at the resource level. In your filelog receiver configuration, change:

- type: move
  from: attributes.namespace
  to: attributes["k8s.namespace.name"]
- type: move
  from: attributes.restart_count
  to: attributes["k8s.pod.restart_count"]
- type: move
  from: attributes.pod_name
  to: attributes["k8s.pod.name"]
- type: move
  from: attributes.container_name
  to: attributes["k8s.container.name"]

to:

- type: move
  from: attributes.namespace
  to: resource["k8s.namespace.name"]
- type: move
  from: attributes.restart_count
  to: resource["k8s.pod.restart_count"]
- type: move
  from: attributes.pod_name
  to: resource["k8s.pod.name"]
- type: move
  from: attributes.container_name
  to: resource["k8s.container.name"] |
@swiatekm-sumo But the main problem is extracting the pod label, and for me this solution is not working. I cannot extract metadata and put it in the log.
|
So you do see |
And after this #22036 (comment), I have k8s.pod.name inside the resource, but the log doesn't have the deployment name and labels that should be extracted by k8sattributes. @swiatekm-sumo
|
Have you also implemented the changes from #22036 (comment)? |
Yes. In my opinion, k8sattributes is not working; according to the OpenTelemetry logs it launched, but it cannot extract metadata. |
Yes, I can see it's not working, I'm trying to figure out what's wrong with your configuration that's causing it. Can you post your current configuration again? If you're looking at collector logs, can you post those as well? |
@swiatekm-sumo
|
@swiatekm-sumo I added this for test purposes and it's also not working. |
Also, this:
should have |
If that doesn't help, please enable debug logging by setting:

service:
  telemetry:
    logs:
      level: DEBUG

and post the collector logs you see. There are probably going to be a lot of them, so it would help if you only posted the logs from the k8sattributes processor. |
@swiatekm-sumo Hi. New logs from DEBUG:
|
Thanks! That confirms my hypothesis that the problem lies in identifying the Pod for the given resource. These log lines:
mean that we can't find the Pod identifier. Can you confirm the following facts for me:
|
I'm using the filelog receiver to extract the namespace name and pod name from the file path, but we can test this with k8s.deployment.name. The configuration now looks like this:
|
@swiatekm-sumo But also, when I committed the part that moves k8s.pod.name etc. from attributes to resource, I got the same errors.
|
And also with this configuration I get the same error:
|
That should work. At the very least, the k8sattributes processor should compute the right identifier. Even with the above config, do you see the same logs? |
yes |
Would you have time to test this in an EKS environment? @swiatekm-sumo |
I don't think this has anything to do with the specific K8s distribution in play, but I will test your specific config in a KinD cluster. |
@swiatekm-sumo Thanks, but how much time could the test take? |
Just to be clear, I'm not going to commit to any timelines here; any assistance offered in this issue is on a best-effort basis. With that said, I tested the following configurations:
and this worked as expected:
So there must be something in your actual configuration that doesn't match what you've posted here. |
@swiatekm-sumo Where did you launch it? Was it EKS, AKS, local Kubernetes, or something else? |
And could you please clarify how k8sattributes extracts data? Does it call an endpoint, or how does it work? |
And could you please provide the full configuration that you used? Thanks a lot. |
In a local KinD cluster.
Are you asking how it gets metadata from the K8s apiserver? It establishes a Watch for the necessary resources (mostly Pods) and maintains a local cache of them via the standard client-go mechanism of informers. For the issue you're experiencing, the problem isn't that metadata though, it's that the processor can't tell which Pod your log records come from. That's what the logs about identifying the Pod mean. |
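As a sketch of how that Watch is typically scoped in a DaemonSet deployment (the KUBE_NODE_NAME variable is whatever you expose from the Pod spec, as in the manifest further down this thread):

processors:
  k8sattributes:
    auth_type: serviceAccount
    passthrough: false
    filter:
      # Only watch Pods scheduled on the same node as this collector instance,
      # which keeps the informer cache small on each DaemonSet member.
      node_from_env_var: KUBE_NODE_NAME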
Yeah, I understand, but it's interesting why this processor cannot identify the Pod. |
Here's a stripped-down manifest where the Pod is identified correctly:

apiVersion: apps/v1
kind: DaemonSet
metadata:
name: otelcol-logs-collector
spec:
selector:
matchLabels:
app.kubernetes.io/name: otelcol-logs-collector
template:
metadata:
labels:
app.kubernetes.io/name: otelcol-logs-collector
spec:
securityContext:
fsGroup: 0
runAsGroup: 0
runAsUser: 0
containers:
- args:
- --config=/etc/otelcol/config.yaml
image: "otel/opentelemetry-collector-contrib:0.77.0"
name: otelcol
volumeMounts:
- mountPath: /etc/otelcol
name: otelcol-config
- mountPath: /var/log/pods
name: varlogpods
readOnly: true
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
volumes:
- configMap:
defaultMode: 420
items:
- key: config.yaml
path: config.yaml
name: otelcol-logs-collector
name: otelcol-config
- hostPath:
path: /var/log/pods
type: ""
name: varlogpods
---
# Source: sumologic/templates/logs/collector/otelcol/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: otelcol-logs-collector
labels:
app: otelcol-logs-collector
data:
config.yaml: |
exporters:
logging:
processors:
k8sattributes:
auth_type: serviceAccount
extract:
annotations:
- from: pod
key: monitoring
tag_name: monitoring
labels:
- from: pod
key: c2i.pipeline.execution
tag_name: c2i.pipeline.execution
- from: pod
key: c2i.pipeline.project
tag_name: c2i.pipeline.project
metadata:
- k8s.pod.name
- k8s.pod.uid
- k8s.deployment.name
- k8s.namespace.name
filter:
node_from_env_var: KUBE_NODE_NAME
passthrough: false
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.name
- from: resource_attribute
name: k8s.namespace.name
receivers:
filelog/containers:
include:
- /var/log/pods/*/*/*.log
include_file_name: false
include_file_path: true
operators:
- id: parser-containerd
output: merge-cri-lines
parse_to: body
regex: ^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*)( |)(?P<log>.*)$
timestamp:
layout: '%Y-%m-%dT%H:%M:%S.%LZ'
parse_from: body.time
type: regex_parser
- combine_field: body.log
combine_with: ""
id: merge-cri-lines
is_last_entry: body.logtag == "F"
overwrite_with: newest
source_identifier: attributes["log.file.path"]
type: recombine
- id: extract-metadata-from-filepath
parse_from: attributes["log.file.path"]
parse_to: attributes
regex: ^.*\/(?P<namespace>[^_]+)_(?P<pod_name>[^_]+)_(?P<uid>[a-f0-9\-]+)\/(?P<container_name>[^\._]+)\/(?P<run_id>\d+)\.log$
type: regex_parser
- from: attributes.container_name
to: resource["k8s.container.name"]
type: move
- from: attributes.namespace
to: resource["k8s.namespace.name"]
type: move
- from: attributes.pod_name
to: resource["k8s.pod.name"]
type: move
- field: attributes.run_id
type: remove
- field: attributes.uid
type: remove
- field: attributes["log.file.path"]
type: remove
- from: body.log
to: body
type: move
service:
pipelines:
logs/containers:
exporters:
- logging
processors:
- k8sattributes
receivers:
- filelog/containers
telemetry:
logs:
      level: debug

Note that k8sattributes doesn't add metadata here, as it doesn't have the required RBAC. But it does identify Pods correctly, which you can confirm in the debug logs. |
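For reference, a minimal sketch of the RBAC the processor typically needs in order to actually add metadata; the object names are placeholders, and the replicasets rule is only needed if you extract k8s.deployment.name:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otelcol-k8sattributes        # placeholder name
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]       # needed to resolve k8s.deployment.name
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otelcol-k8sattributes        # placeholder name
subjects:
  - kind: ServiceAccount
    name: otelcol-logs-collector     # bind to whichever ServiceAccount the collector runs as
    namespace: default
roleRef:
  kind: ClusterRole
  name: otelcol-k8sattributes
  apiGroup: rbac.authorization.k8s.io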
So I tried to use another configuration and it works, but now I have this error for the metrics pipeline: |
@AndriySidliarskiy was your original problem fixed, then? If you have a different one, please close this issue and open a new one, with more information pertaining to the new problem with metrics. |
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners. Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
This issue has been closed as inactive because it has been stale for 120 days with no activity. |
this issue is still happening to me in EKS |
Component(s)
processor/k8sattributes
What happened?
Description
I have an OpenTelemetry Collector configuration with k8sattributes, but in the log context I cannot see any of the metadata that k8sattributes should add.
Steps to Reproduce
Configure k8sattributes in EKS 1.26 with the OpenTelemetry Collector Helm chart.
Expected Result
Actual Result
nothing
Collector version
0.77.0
Environment information
Environment
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
OpenTelemetry Collector configuration
Log output
No response
Additional context
No response