docker cgroup driver discussion - cgroupfs or systemd #1435
The original reason for the change was the poor interaction between Docker and cAdvisor. If I remember correctly, the "docker" cgroup driver was creating a hierarchy that cAdvisor didn't recognize. If that is in fact the issue, then I think we can reverse that configuration change.
I've spent a little time testing the docker cgroup driver on the latest CoreOS alpha with the v1.3 coreos-kubernetes repo. So far I haven't been able to reproduce some of the possibly related issues. I don't have enough to make the call on this yet, but the next step is running e2e tests against a cluster set up with coreos-kubernetes on CoreOS and with the docker cgroup driver. I'll be out of office for a few days starting now. Possibly related: kubernetes/kubernetes#27383 (comment)
The (CoreOS) default of systemd might have some problems. See also coreos/bugs#1435, moby/moby#21444, and moby/moby#21678.
Spent some more time with this and talked to some people. I think the consensus from our team is to ship it. No tests I've been able to run have surfaced issues, nor have I seen anything in the logs that worries me or that differs from the systemd driver. I've done this testing on a recent alpha and Kubernetes v1.3.4.
Hi, how can I know if this has made its way into CoreOS stable?
@manojlds Last time I tried the latest stable, it wasn't there.
@manojlds it can be a bit of a rat's nest to follow, but I'll show you how I figured it out in this case. If you look at my pull request which fixed the issue, you'll notice that Docker was renamed from
The current beta channel build (1185.1.0) includes
What is the current status of this? For docker 1.12.6 and systemd 231, which cgroup driver should be used by docker and the kubelet?
Docker's default, cgroupfs, is now used.
I found that without switching both docker and the kubelet to systemd, the kubelet checks for and complains about non-existent containers, or maybe containers it can't reach.
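In case it helps, here's a minimal sketch of how to check which driver Docker is using and point the kubelet at the same one. The drop-in path, unit name, and the use of KUBELET_EXTRA_ARGS are illustrative assumptions on my part, not something from this thread:

```bash
# Show which cgroup driver the Docker daemon is actually using.
docker info 2>/dev/null | grep -i "cgroup driver"

# The kubelet's --cgroup-driver flag should match Docker's answer
# (cgroupfs or systemd). One hypothetical way to set it is via a
# systemd drop-in for the kubelet unit:
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-cgroup-driver.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"
EOF
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```

If the two drivers disagree, the kubelet can misread or fail to find container cgroups, which may be what produces the symptoms described above.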
Could somebody tell us which cgroup driver should be used by docker and the kubelet, please? 😄
How do I change the cgroup driver in Docker from systemd to cgroupfs?
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
...
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
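Filling in the gap above with my own guess (the daemon.json contents below are an assumption, not what the original comment contained), switching Docker to the cgroupfs driver generally looks like this:

```bash
# Assumed daemon.json: only the exec-opts entry matters for the cgroup driver.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker so the new daemon.json is picked up.
systemctl daemon-reload
systemctl restart docker

# Verify which driver is active.
docker info | grep -i "cgroup driver"
```

Note that if the docker unit already passes --exec-opt native.cgroupdriver=... on the command line, setting the same option in daemon.json will conflict and the daemon won't start, so only one of the two should set it.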
Sorry to resurrect the dead here, but this thread seems to disagree with what appears to be a more recent consensus, and I'd love to know more about the recommended go-forward configuration. As mentioned in kubernetes/kubeadm#1394, we are indeed seeing these situations. Have the cgroup driver configuration recommendations changed since this thread? What is the current recommendation, or is there a different place to find more information on cgroup driver configuration recommendations for CoreOS?
I received some answers from the CoreOS mailing thread. An important consideration here is that CoreOS is now in maintenance mode, and some otherwise critical changes like Docker upgrades simply won't happen (#2624 (comment)). EoL hasn't been announced, but it sure feels like that's what this is (watch the mailing list if you're interested). Regarding my questions above: cgroupfs was chosen as the cgroup driver for docker for backward compatibility. Changing it should be possible, but there doesn't seem to be much information out there on how to do it. At this point, if you require deeper customization of your nodes, it's prudent to consider an alternative operating system.
Background
CoreOS currently ships docker with a non-default configuration of `--exec-opt native.cgroupdriver=systemd`. This option manages Docker containers' cgroups with systemd instead of the `cgroupfs` driver. I don't know the full history of why this configuration was chosen, but I believe it was related to interacting well with Kubernetes.
Problem
I believe that this change might no longer make sense for CoreOS to ship. Below, I've enumerated reasons why we might want to use each driver:
Reasons to use cgroupfs
- `init.scope` in systemd 226+ (current issue in CoreOS alpha/beta) (runc issue, cadvisor issue)

(important reasons starred)
Reasons to use systemd
- Integration with `systemctl status` (but I'm not sure that actually does integrate better)

Conclusion
I can't claim that any of the reasons for either choice make one inarguably better, but I think there are enough issues here that it's worth having the discussion. This will also help guide some related issues (how effort is directed on some K8s + CoreOS issues etc).
cc @philips @mischief @vishh @timstclair @crawford @marineam @aaronlevy