
multi-container-pods can cause problems with useLabelsForResourceAttributes #3495

Closed
zeitlinger opened this issue Nov 25, 2024 · 3 comments · Fixed by #3497
Labels
enhancement New feature or request needs triage

Comments

@zeitlinger
Member

zeitlinger commented Nov 25, 2024

Component(s)

auto-instrumentation

Related slack

https://cloud-native.slack.com/archives/C041APFBYQP/p1732635742710849

Is your feature request related to a problem? Please describe.

If you have a multi-container pod that uses OTel in combination with useLabelsForResourceAttributes, then all of those applications (in all containers) end up sharing the same service.instance.id and service.name.

Why is this a problem for service.instance.id?

The value must be unique (https://opentelemetry.io/docs/specs/semconv/attributes-registry/service/) - but all apps (in all containers) would get the same value.
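To illustrate the failure mode, here is a hypothetical two-container pod (all names and labels below are made up). Because labels live on the pod, not on individual containers, useLabelsForResourceAttributes would derive the same label-based resource attributes for every container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout-7f9c                    # hypothetical pod name
  labels:
    # Pod-level label; with useLabelsForResourceAttributes this feeds
    # the resource attributes for *every* container in the pod.
    app.kubernetes.io/name: checkout
spec:
  containers:
    - name: app       # both containers see the same pod labels,
      image: app:1.0
    - name: sidecar   # so both would report identical values
      image: sidecar:1.0
```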

Describe the solution you'd like

Given that the default (namespace name + pod name + container name) is good as it is, we could remove the logic for service.instance.id.
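The default scheme stays unique because it includes the container name. A minimal sketch of that idea (the exact format and separator used by the operator are an assumption here, for illustration only):

```python
def default_service_instance_id(namespace: str, pod: str, container: str) -> str:
    """Sketch of a namespace + pod + container instance ID.

    Including the container name makes the ID unique per container,
    which is what the semantic conventions require.
    """
    # Illustrative join; the operator's real separator may differ.
    return f"{namespace}.{pod}.{container}"

# Two containers in the same pod get two distinct IDs.
ids = {
    default_service_instance_id("prod", "checkout-7f9c", c)
    for c in ("app", "sidecar")
}
assert len(ids) == 2
```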

Is this also a problem for service.name?

It's not great, but you still have service.instance.id to differentiate.

Is this a breaking change?

Yes - but I think we should accept this and prevent users from making accidental mistakes that can be hard to understand.

@diurnalist

Do you have an example of the user pain/confusion that can arise from the way it works now? In our environment, we made the choice that all containers within a pod logically constitute a service, so it's good that they are consistently tagged with one service.instance.id (which is usually the pod name/uid).

I can understand how that might be a problem if for some reason team A is on call for container A and team B for container B, or other interesting organization topologies :)

@zeitlinger
Member Author

Do you have an example of the user pain/confusion

You can have multiple containers that each report memory or CPU usage under the same metric name - this is the motivation for the semantic convention.

Adhering to semantic conventions is almost always a good idea - because other tools are, or will be, based on them.

@diurnalist

We have used k8s.container.name as the differentiating attribute for that use-case. It's a different container, same service instance ID.
