play kube: memory limit for pod kind behaves differently than podman's '-m' option and also differently than k8s #13102
Comments
@umohnani8 PTAL
A friendly reminder that this issue had no activity for 30 days.
@umohnani8 Any chance you have looked at this?
yup, working on it.
@umohnani8 Ping.
@cdoern PTAL
not sure if this is a bug or just a quirk of different implementations.
@cdoern what do you think we should do?
The memory limit being per-container looks correct - that's what the YAML is requesting. The actual bug appears to be that OOMKilled is not being set properly, from what I'm reading? Also, potentially, our virtual memory consumption is in excess of the requested amount - which is probably an issue of mapping a K8s memory limit to our resource limit primitives.
/kind bug
Description
play kube: memory limit for pod kind behaves differently than podman's '-m' option and also differently than k8s
Steps to reproduce the issue:
pod YAML:
1. Use the pod YAML above to create a pod with 'podman play kube'
2. Use the pod YAML above to create a pod in a k8s cluster
3. Run a container with:
podman run -m 60m -d --entrypoint '["/stam/a.out","200"]' quay.io/ydayagi/memoryeater:latest
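The pod YAML referenced above is not reproduced in this report. As a rough sketch only, a spec of the kind being discussed might look like the following; the pod name, container name, and the 60Mi limit are assumptions chosen to mirror the `-m 60m` flag used in step 3:

```yaml
# Hypothetical pod spec; the actual YAML from the report is not shown.
apiVersion: v1
kind: Pod
metadata:
  name: memoryeater
spec:
  containers:
  - name: eater
    image: quay.io/ydayagi/memoryeater:latest
    command: ["/stam/a.out", "200"]
    resources:
      limits:
        memory: "60Mi"   # assumed to correspond to 'podman run -m 60m'
```

In Kubernetes, a container exceeding `resources.limits.memory` is killed by the kernel OOM killer and the pod reports an OOMKilled status, which is the behavior described in result 2 below.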
Describe the results you received:
For 'play kube', the container's RSS is limited to approximately the limit in the YAML; however, virtual memory grows to whatever the container consumes.
For the k8s pod, the cluster kills the pod and sets the 'OOMKilled' status.
For 'podman run', the container exits with exit code 137, but OOMKilled is false.
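Exit code 137 is consistent with an OOM kill even though the flag is not set: by shell convention it decodes as 128 + 9, i.e. the process was terminated by signal 9 (SIGKILL), which is the signal the kernel OOM killer delivers. A small sketch of the decoding (the container name in the commented inspect call is hypothetical):

```shell
# Decode exit code 137: codes above 128 mean "killed by signal (code - 128)".
code=137
sig=$((code - 128))
echo "killed by signal $sig"   # signal 9 is SIGKILL

# One way to read the flag podman records for a container (name is hypothetical):
# podman inspect --format '{{.State.OOMKilled}}' memoryeater
```

So the reported discrepancy is that podman observes the SIGKILL exit code but does not mark the container as OOMKilled.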
Describe the results you expected:
I expect options 2 and 3 to behave the same; I see no reason for them to differ. After all, 'play kube' and 'run' are just two different input methods for the same flow/operation.
Output of podman version:
I am using the latest podman code.
Output of podman info --debug: