Change default cgroup driver to systemd and verify parity w/docker on preflight #1394
Comments
we already have docs for setting systemd as the driver for both ubuntu and centos: i think this preflight check should be warning only.
Actually, there is no mention there of what we do or why we do it.
yes, we need an expert to write a paragraph on why it's really needed and what could break.
/assign @mauilion
@neolit123: GitHub didn't allow me to assign the following users: mauilion. Note that only kubernetes members and repo collaborators can be assigned and that issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
When systemd is chosen as the init system for a Linux distribution, the init process generates and consumes a root cgroup and acts as a cgroup manager. systemd has tight integration with cgroups and will allocate cgroups per process. While it's possible to configure docker and the kubelet to use cgroupfs, this means there will then be two different cgroup managers. At the end of the day, cgroups are used to allocate and constrain the resources assigned to processes. A single cgroup manager simplifies the view of what resources are being allocated and will by default have a more consistent view of the resources available and in use. When we have two managers, we end up with two views of those available resources.

We have seen cases in the field where nodes that are configured to use cgroupfs for the kubelet and docker, and systemd for the rest of the processes, become unstable under resource pressure. Changing the settings such that docker and the kubelet use systemd as the cgroup driver stabilized the systems. There is an issue opened with systemd that discusses this at some length:
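As a quick way to see this on an existing node, you can compare what PID 1 is with the cgroup driver that docker reports; this is only a sketch, and the kubelet file paths below are the usual kubeadm locations rather than something this issue prescribes, so adjust them for your setup:

    # confirm that systemd is the init process (PID 1) on this node
    ps -p 1 -o comm=

    # ask docker which cgroup driver it is using (cgroupfs by default)
    docker info --format '{{.CgroupDriver}}'

    # look for the kubelet's configured driver in the usual kubeadm files
    grep -is cgroup /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml

If the last two disagree, the node is running with the two cgroup managers described above.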
/assign @timothysc please take a look
lgtm, @mauilion did you want to make the PR?
@timothysc @mauilion i will do the PR later tonight. |
/assign |
why use systemd? the kubelet default is cgroupfs
please read here:
See kubernetes/kubeadm#1394 for details.
@mauilion Thanks for the information about the cgroup driver. But I have a question about your statement "We have seen cases in the field where nodes that are configured to use cgroupfs for kubelet and docker and systemd for the rest can become unstable under resource pressure." Why is using both unstable? I think that, from the kernel's point of view, a cgroup created via the cgroupfs driver looks the same as a cgroup created by systemd. Are there any keywords I can use to track this issue? Or could you share the history or status of this issue? Thanks
And what are the steps for Ubuntu 20? I've watched this page, but nothing I do works for me.
@mauilion great explanation! All seems to be correct except the
If we replace
The problem is because the cgroups manager is not only a
By writing to
Therefore, allowing users to have the cgroupfs driver on systemd machines isn't only not recommended but should be, in my humble opinion, prohibited by the kubelet with a preflight check.
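To make that concrete, here is a minimal illustration of the two paths, assuming a systemd host with cgroup v1 and the cpu controller mounted at the usual location; the group/unit name mygroup is made up:

    # roughly what the cgroupfs driver does: manipulate the hierarchy
    # directly, behind systemd's back
    sudo mkdir /sys/fs/cgroup/cpu/mygroup
    sleep 60 &
    echo $! | sudo tee /sys/fs/cgroup/cpu/mygroup/cgroup.procs

    # roughly what the systemd driver does: ask systemd to create and
    # track the cgroup as a transient unit
    sudo systemd-run --unit=mygroup sleep 60
    systemd-cgls | grep -A1 mygroup

Only the second path leaves systemd aware of the cgroup, so only then does a single manager own the full picture of the hierarchy.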
Default installs of docker still use cgroupfs, and most of our supported user base is on systemd systems; we should change the defaults and update the instructions.
New Installs:
1 - Update the instructions for installation of docker to use
--exec-opt native.cgroupdriver=systemd
2 - Verify that the kubelet.service file passes the systemd cgroup driver flag to the kubelet (see the sketch below).
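A sketch of what those two steps might look like on a node, using docker's daemon.json instead of the --exec-opt flag and the Debian-style kubelet extra-args file; the exact file locations are assumptions rather than the final instructions:

    # 1 - make docker use the systemd cgroup driver
    #     (same effect as --exec-opt native.cgroupdriver=systemd;
    #      note that this overwrites any existing daemon.json)
    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker

    # 2 - point the kubelet at the same driver (RPM-based systems use
    #     /etc/sysconfig/kubelet instead of /etc/default/kubelet)
    echo 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd' | sudo tee /etc/default/kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet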
Upgrades:
TBD
Preflight:
1 - Verify, on systemd systems, that the docker flags are correct; if they are not, start with a warning only (see the sketch below)
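A rough, warning-only version of that parity check could look like the shell sketch below; the docker info field is real, but the kubelet file locations are assumptions:

    #!/bin/sh
    # warn (do not fail) when docker and the kubelet disagree on the
    # cgroup driver on a systemd host
    if [ "$(ps -p 1 -o comm=)" = "systemd" ]; then
        docker_driver="$(docker info --format '{{.CgroupDriver}}' 2>/dev/null)"
        kubelet_driver="$(grep -hso 'cgroup-driver=[a-z]*' \
            /var/lib/kubelet/kubeadm-flags.env /etc/default/kubelet 2>/dev/null |
            head -n1 | cut -d= -f2)"
        if [ "$docker_driver" != "systemd" ] || [ "$kubelet_driver" != "systemd" ]; then
            echo "WARNING: cgroup driver mismatch: docker='$docker_driver' kubelet='$kubelet_driver'" >&2
        fi
    fi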