OOMKilled pod on master node #823
Comments
Related: #785
Looks like he was already running with 4GiB on the masters, but yeah, there's some possible-consumer discussion there. My impression is that on most runs, memory usage is reasonably stable under 4GiB for the masters, but that sometimes something happens that causes a memory spike and things get OOMed. E.g. see this CI run. It hasn't happened while I've been watching, though.
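For anyone trying to catch one of these spikes in the act, commands along these lines should help (a sketch; `oc adm top nodes` assumes the cluster metrics stack is actually running):

```sh
# Node-level memory usage (requires cluster metrics to be available):
oc adm top nodes

# Recent OOM/eviction events, which usually point at the culprit:
oc get events --all-namespaces | grep -iE 'oom|evict'
```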
I haven't seen this in a while now, and it may have been fixed by some openshift/origin changes or similar. Can you still reproduce? If not, I think we should close this and we can re-open if someone hits it again.
Closing due to inactivity. |
@crawford: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Installed yesterday (using d127242), and there were two OutOfMemory pods in the list. Moreover, leaving the cluster running overnight ends up like this:
I'm not sure what all the installer-* pods do, but this looks very suspicious (note the cluster is not running any real workload and just falls apart on its own...). All I did was
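For reference, commands along these lines will surface such pods and the kubelet's stated termination reason (`<pod>` and `<namespace>` are placeholders):

```sh
# Pods across all namespaces whose status mentions an OOM condition:
oc get pods --all-namespaces | grep -iE 'OOMKilled|OutOfMemory'

# Last termination reason for each container in a suspect pod:
oc get pod <pod> -n <namespace> \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.lastState.terminated.reason}{"\n"}{end}'
```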
Version
Platform (aws|libvirt|openstack):
libvirt
What happened?
After a successful installation, I tried to look around the cluster.
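A sketch of the kind of read-only checks meant here (illustrative, not the exact session):

```sh
# Basic post-install sanity checks:
oc get nodes
oc get pods --all-namespaces -o wide
```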
What you expected to happen?
No OOMKilled pods.
How to reproduce it (as minimally and precisely as possible)?
I followed the README on how to install on libvirt.
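For those unfamiliar with it, the libvirt flow boiled down to something like the following (a rough sketch; the build script name and tags are from memory and may differ between installer versions):

```sh
# Build the installer with libvirt support enabled, then run an install
# (sketch of the README flow; exact steps may have changed since):
TAGS=libvirt hack/build.sh
bin/openshift-install create cluster
```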
Anything else we need to know?
The hypervisor machine doesn't look that memory-stressed:
Neither does the master node itself:
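(Checks along these lines; outputs omitted here, and host/domain names are examples:)

```sh
# On the hypervisor host:
free -h
virsh dommemstat <master-domain>   # per-domain memory stats

# On the master node itself:
ssh core@<master-ip> free -h
```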
I wonder whether the installation itself gets too memory-hungry at some point: this seems to have had no other visible consequence, but it certainly looks strange.
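One way to test that hypothesis would be to log master memory for the duration of an install, e.g.:

```sh
# Sample master memory every 30 seconds while the install runs
# (host is a placeholder; stop with Ctrl-C):
while true; do
  date +%T
  ssh core@<master-ip> free -m
  sleep 30
done
```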