
[receiver/kubeletstats] Include sidecar initContainers to pod/container utilization metrics #29712

Closed
jinja2 opened this issue Dec 8, 2023 · 6 comments

Comments

@jinja2
Contributor

jinja2 commented Dec 8, 2023

Component(s)

receiver/kubeletstats

Is your feature request related to a problem? Please describe.

Running initContainers as sidecars is beta in Kubernetes v1.29. A few metrics in the kubeletstats receiver need to be updated, since it is now possible for initContainers (restartPolicy == Always) to keep running for the entirety of the pod's lifecycle.
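
For reference, a sidecar initContainer is an initContainer that declares `restartPolicy: Always`. A minimal sketch in Go, assuming the upstream `k8s.io/api/core/v1` types (available since Kubernetes 1.28); the `isSidecar` helper is illustrative, not part of the receiver:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isSidecar reports whether an initContainer is a restartable sidecar,
// i.e. it declares restartPolicy: Always and therefore keeps running for
// the pod's entire lifetime instead of exiting before the main containers start.
func isSidecar(c corev1.Container) bool {
	return c.RestartPolicy != nil && *c.RestartPolicy == corev1.ContainerRestartPolicyAlways
}

func main() {
	always := corev1.ContainerRestartPolicyAlways
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				{Name: "setup"},                                 // plain init container
				{Name: "log-forwarder", RestartPolicy: &always}, // sidecar
			},
		},
	}
	for _, c := range pod.Spec.InitContainers {
		fmt.Printf("%s sidecar=%t\n", c.Name, isSidecar(c))
	}
}
```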

Describe the solution you'd like

The K8s pod/container utilization metrics (with respect to requests/limits) should also iterate over initContainers so that any sidecars are included.

The pod metrics below should be updated so that they also consider sidecar initContainers' limits/requests when deciding whether to compute them, and the metric values should add any sidecar container's usage to the calculation (see the sketch after this list).

k8s.pod.cpu_limit_utilization
k8s.pod.cpu_request_utilization
k8s.pod.memory_limit_utilization
k8s.pod.memory_request_utilization
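
A minimal sketch of the intended pod-level aggregation, assuming the `k8s.io/api/core/v1` and `k8s.io/apimachinery/pkg/api/resource` types; `podCPULimit` is a hypothetical helper, not the receiver's actual code, and the other three metrics would follow the same pattern with requests and memory:

```go
package sidecarmetrics

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podCPULimit sums CPU limits over every container that runs for the pod's
// lifetime: regular containers plus sidecar (restartPolicy: Always)
// initContainers. The bool result is false when any of those containers has
// no CPU limit, in which case k8s.pod.cpu_limit_utilization should not be
// emitted.
func podCPULimit(pod corev1.Pod) (float64, bool) {
	containers := append([]corev1.Container{}, pod.Spec.Containers...)
	for _, c := range pod.Spec.InitContainers {
		if c.RestartPolicy != nil && *c.RestartPolicy == corev1.ContainerRestartPolicyAlways {
			containers = append(containers, c)
		}
	}

	total := resource.NewQuantity(0, resource.DecimalSI)
	for _, c := range containers {
		limit, ok := c.Resources.Limits[corev1.ResourceCPU]
		if !ok {
			return 0, false
		}
		total.Add(limit)
	}
	return total.AsApproximateFloat64(), true
}
```

The metric value would then be the pod's CPU usage reported by the kubelet divided by this total, so a running sidecar's usage naturally contributes to it.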

The metrics below, currently computed only for main containers, should also be computed for sidecar containers (see the sketch after this list):

k8s.container.cpu_limit_utilization
k8s.container.cpu_request_utilization
k8s.container.memory_limit_utilization
k8s.container.memory_request_utilization
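
The per-container calculation itself would stay the same; a sketch under the assumption that it is simply usage divided by the configured limit (or request), now applied to sidecar initContainers as well (`containerLimitUtilization` is an illustrative name):

```go
package sidecarmetrics

// containerLimitUtilization returns usage/limit for a single container, e.g.
// CPU usage in cores against the CPU limit in cores for
// k8s.container.cpu_limit_utilization. The bool result is false when no limit
// is set, meaning the metric should be skipped for that container.
func containerLimitUtilization(usage, limit float64) (float64, bool) {
	if limit <= 0 {
		return 0, false
	}
	return usage / limit, true
}
```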

PS - Should we calculate these (k8s.container.*_utilization) for non-sidecar initContainers while they are running? The receiver already reports raw usage metrics for initContainers, so this is an inconsistency we might want to fix.

Describe alternatives you've considered

No response

Additional context

No response

@jinja2 jinja2 added enhancement New feature or request needs triage New item requiring triage labels Dec 8, 2023
Contributor

github-actions bot commented Dec 8, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@TylerHelmuth TylerHelmuth added priority:p2 Medium and removed needs triage New item requiring triage labels Dec 8, 2023
@TylerHelmuth
Member

PS - Should we calculate these (k8s.container.*_utilization) for non-sidecar initContainers while they are running? The receiver already reports raw usage metrics for initContainers, so this is an inconsistency we might want to fix.

I think we should since they are contributing to the pod-level request/limit utilization metrics.

@bryan-aguilar
Contributor

I believe this is a duplicate of (or very similar to) #29623

@jinja2
Contributor Author

jinja2 commented Dec 12, 2023

Similar, but this issue tracks the work for the kubeletstats receiver; #29623 is for the k8scluster receiver.

Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Feb 12, 2024
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned Apr 12, 2024