Cannot start a container with systemd 232 #1175
Comments
*scream internally* While this should be okay to handle, I really wish they hadn't implemented that. The cgroup migration code (having processes in some …
Just want to confirm that the workaround to use …
Too many things don't get along with the unified hierarchy yet:
* opencontainers/runc#1175
* moby/moby#28109
* lxc/lxc#1280
So revert the default to the legacy hierarchy for now. Developers of the above software can opt into the unified hierarchy with "systemd.legacy_systemd_cgroup_controller=0".
It looks like systemd/systemd#4628 is going to revert the regression. From the linked discussion, the solution is for us to mount a fake …
systemd/systemd#4628 reverted this for now, but you can still boot with …
I can confirm that this is still an issue as of systemd-233, and the above workaround still works.
The latest version of runc includes #1266, which should fix this problem in the hybrid mode that systemd 232 shipped. AFAIK the newest Docker release should contain that fix.
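To see which mode a machine is in, you can inspect how the systemd hierarchy is mounted. The snippet below pulls the filesystem type out of a mountinfo-style line; the sample line is constructed for illustration (it is not taken from this issue), and on a real system you would read /proc/self/mountinfo instead:

```shell
# In mountinfo, fields after the "-" separator are fstype, source, options (see proc(5)).
# Sample line modeled on systemd 232's hybrid layout — an assumption, not real output.
line='30 25 0:26 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec shared:9 - cgroup2 cgroup2 rw'
fstype=$(printf '%s\n' "$line" | awk '{for (i = 7; i <= NF; i++) if ($i == "-") { print $(i + 1); exit }}')
echo "$fstype"   # "cgroup2" here is what makes pre-rc3 runc fail with "no subsystem for mount"
```

A `cgroupfs` type on that mount point would indicate the legacy v1 hierarchy instead.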
With runc 1.0.0-rc2 on Container Linux 1465, kube-spawn init hangs forever with a message like "Created API client, waiting for the control plane to become ready". That's because the docker daemon cannot execute runc, which returns an error like "no subsystem for mount". See also: opencontainers/runc#1175 (comment) This issue was apparently resolved in runc 1.0.0-rc3, so in theory runc 1.0.0-rc3 should work fine with Docker 17.05. Unfortunately, on Container Linux it's not trivial to replace only the runc binary with a custom one, because Container Linux uses torcx to provide both docker and runc: /run/torcx/unpack is sealed and mounted read-only, so it's simply not possible to swap those binaries at run time. As a workaround, we should change the cgroup driver for docker and kubelet from systemd to cgroupfs. Then the init process will succeed without hanging forever. See also #45
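A minimal sketch of the Docker side of that workaround, assuming a writable /etc/docker (the `daemon.json` path and the `exec-opts` key are standard dockerd configuration; kubelet's driver flag must match Docker's):

```shell
# Sketch: switch Docker's cgroup driver from systemd to cgroupfs.
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl restart docker
# kubelet must use the same driver, e.g. start it with: --cgroup-driver=cgroupfs
```

If Docker and kubelet disagree on the cgroup driver, kubelet refuses to start pods, so both sides have to be changed together.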
Too many things don't get along with the unified hierarchy yet:
* opencontainers/runc#1175
* moby/moby#28109
* lxc/lxc#1280
So revert the default to the legacy hierarchy for now. Developers of the above software can opt into the unified hierarchy with "elogind.legacy_elogind_cgroup_controller=0".
I know this is an old thread, but just to make it pop up in Google searches a bit more: upon upgrading my Ubuntu Server from 17.04 to 17.10 I ran headlong into this issue as well (17.10 uses systemd 234, if it's any help). I'm about to try the later fix of changing the cgroup driver before I use the initial workaround of changing the kernel boot parameters, which could break other things that expect the new cgroup implementation on 17.10.
Hi,

With systemd 232, they now mount /sys/fs/cgroup/systemd with cgroup2, aka the unified hierarchy (systemd/systemd#3965). This breaks runc with the error:

no subsystem for mount

I had to use the systemd.legacy_systemd_cgroup_controller=yes boot parameter to fix it.
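For reference, one way to apply that boot parameter persistently on a GRUB-based distribution — a sketch, not the poster's exact steps; file paths and the regeneration command vary by distro:

```shell
# Prepend the legacy-hierarchy parameter to the kernel command line (GRUB-based distros).
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.legacy_systemd_cgroup_controller=yes /' /etc/default/grub
sudo update-grub    # on Fedora/openSUSE: grub2-mkconfig -o /boot/grub2/grub.cfg
```

A reboot is required for the parameter to take effect.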