
[Tracking Issue] Support for cgroups v2 #78

Closed
7 tasks done
utam0k opened this issue Jun 9, 2021 · 46 comments

@utam0k
Member

utam0k commented Jun 9, 2021

This issue is for tracking the implementation of cgroups v2.
Since the cpu and cpuset controllers have already been implemented, you can refer to them while implementing the others.
If you are interested, comment on this issue and I will assign it to you.

Devices are special and will be handled separately from this issue.
#230

Goal

There is no integration test for cgroups v2 yet, so please verify the behavior manually and make sure the unit tests pass.

Reference

@utam0k utam0k added the good first issue label Jun 9, 2021
@utam0k utam0k pinned this issue Jun 9, 2021
@Furisto
Collaborator

Furisto commented Jun 9, 2021

Note: Devices will be more challenging to implement than the other controllers because there is no direct support for them in cgroups v2 yet. This requires an eBPF-based solution.

@tsturzl
Collaborator

tsturzl commented Jun 9, 2021

@Furisto runc claims to support v2, so I think we can probably pick apart what they are doing. In fact, quickly glancing through their implementation, it seems Rust would be more capable in this regard than Go, so we'll probably have a better and less hacky time with this implementation than they did. That said, you're certainly right that it's a much larger effort.

Useful links:

@tsturzl
Collaborator

tsturzl commented Jun 12, 2021

I'd like to start on the memory controller.

@utam0k
Member Author

utam0k commented Jun 12, 2021

@tsturzl Sure. I've assigned it to you.

@lizhemingi
Contributor

I'd like to take freezer v2 too.

And once this is finished, should we also add pause and resume commands?

@utam0k
Member Author

utam0k commented Jun 19, 2021

@duduainankai

> I'd like to take freezer v2 too.

Sure! I've assigned it to you.

> And once this is finished, should we also add pause and resume commands?

The pause and resume commands are not described in the OCI spec, so I'm not familiar with them. However, I think they can be implemented using the freezer. If you are interested, why don't you try implementing them? Perhaps you can refer to runc.
I think @Furisto knows more about this area.
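For reference, the freezer is built into cgroups v2: freezing a cgroup is a write to its cgroup.freeze interface file ("1" freezes, "0" thaws). A minimal sketch of how pause/resume could sit on top of that, with the cgroup path as a parameter (the helper name and demo path here are illustrative, not youki's actual API):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Freeze or thaw a cgroup v2 hierarchy by writing to its
/// `cgroup.freeze` interface file ("1" freezes, "0" thaws).
/// `cgroup_dir` would be e.g. /sys/fs/cgroup/<container-cgroup>.
fn set_frozen(cgroup_dir: &Path, frozen: bool) -> io::Result<()> {
    let file = cgroup_dir.join("cgroup.freeze");
    fs::write(file, if frozen { "1" } else { "0" })
}

fn main() -> io::Result<()> {
    // Demonstrate against a temporary directory instead of a real
    // cgroup, so this sketch runs without root or a mounted cgroupfs.
    let dir = std::env::temp_dir().join("freeze-demo");
    fs::create_dir_all(&dir)?;
    set_frozen(&dir, true)?;  // "pause"
    assert_eq!(fs::read_to_string(dir.join("cgroup.freeze"))?, "1");
    set_frozen(&dir, false)?; // "resume"
    assert_eq!(fs::read_to_string(dir.join("cgroup.freeze"))?, "0");
    println!("ok");
    Ok(())
}
```

A real pause command would additionally check the container state and wait for `cgroup.events` to report `frozen 1` before reporting success.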

@lizhemingi
Contributor

> The commands pause and resume are not described in oci-spec, so I don't understand these two commands. However, I think these two commands can be implemented by using freezer. If you are interested, why don't you try to implement these commands? Perhaps you can refer to runc.

Yep, I already checked in runc; it's implemented using the freezer.

@utam0k
Member Author

utam0k commented Jun 19, 2021

@duduainankai
I have created an issue regarding these commands. Could you please comment on that issue so I can assign it to you?
#99

@bobsongplus
Contributor

I'd like to start on the pids controller.
/assign pids

@tsturzl
Collaborator

tsturzl commented Jun 30, 2021

@TinySong thanks for your interest! I'll assign you.

@bobsongplus
Contributor

io controller /assign
@utam0k

@utam0k
Member Author

utam0k commented Jul 2, 2021

@TinySong Sure! I'll assign you.

@kmpzr
Contributor

kmpzr commented Jul 5, 2021

Hi @utam0k

I could take hugetlb. You can assign it to me if it is still available.

@utam0k
Member Author

utam0k commented Jul 5, 2021

@0xdco Of course! I have assigned you.

@bobsongplus
Contributor

If the device controller is not assigned yet, I would like to take on the challenge of finishing it.
@utam0k

@utam0k
Member Author

utam0k commented Jul 10, 2021

@TinySong Sure! I am looking forward to your PR!

@utam0k
Member Author

utam0k commented Aug 17, 2021

@Furisto Are you interested in devices? This is probably not a good first issue.

@Furisto
Collaborator

Furisto commented Aug 17, 2021

@utam0k I do not have experience with eBPF; it could be an interesting challenge though. My current understanding is that none of the Rust eBPF projects support generating an eBPF program on the fly at the moment. This would be required because the device configuration can be different for every container. We would need to write a library that can emit eBPF bytecode directly; maybe we can interest one of the existing Rust eBPF projects in adding this.
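For context: on cgroups v2, device access is mediated by a BPF_PROG_TYPE_CGROUP_DEVICE program attached to the container's cgroup, and the generated bytecode essentially encodes the OCI device rules as compare-and-branch checks. The plain-Rust model below (illustrative types and names, not youki code, and assuming last-matching-rule-wins with a default deny) shows the decision logic such a generated program would implement:

```rust
/// Illustrative model (not youki code) of the check an eBPF
/// BPF_PROG_TYPE_CGROUP_DEVICE program would encode: each OCI
/// device rule matches on device type, major/minor, and access mode.
#[derive(Clone, Copy, PartialEq)]
enum DevType { Char, Block }

struct DeviceRule {
    allow: bool,
    dev_type: Option<DevType>, // None matches any device type
    major: Option<u64>,        // None matches any major number
    minor: Option<u64>,        // None matches any minor number
    read: bool,
    write: bool,
}

/// Last matching rule wins, default deny -- the semantics this sketch
/// assumes the generated bytecode would implement.
fn device_allowed(rules: &[DeviceRule], dev_type: DevType,
                  major: u64, minor: u64, write: bool) -> bool {
    let mut allowed = false;
    for r in rules {
        let type_ok = r.dev_type.map_or(true, |t| t == dev_type);
        let major_ok = r.major.map_or(true, |m| m == major);
        let minor_ok = r.minor.map_or(true, |m| m == minor);
        let access_ok = if write { r.write } else { r.read };
        if type_ok && major_ok && minor_ok && access_ok {
            allowed = r.allow;
        }
    }
    allowed
}

fn main() {
    // Allow read/write to /dev/null (char 1:3); everything else is denied.
    let rules = [DeviceRule { allow: true, dev_type: Some(DevType::Char),
        major: Some(1), minor: Some(3), read: true, write: true }];
    assert!(device_allowed(&rules, DevType::Char, 1, 3, true));
    assert!(!device_allowed(&rules, DevType::Block, 8, 0, false));
    println!("ok");
}
```

The hard part Furisto describes is emitting this logic as actual eBPF instructions per container, since the rule set differs between containers.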

@utam0k
Member Author

utam0k commented Aug 17, 2021

@Furisto The eBPF Foundation has recently been established, and I think this will be an interesting challenge. Do you want to start another issue?

@Furisto
Collaborator

Furisto commented Aug 17, 2021

@utam0k I can take it up.

@utam0k
Member Author

utam0k commented Aug 18, 2021

@Furisto I've assigned you.

@MoZhonghua

@Furisto @utam0k

I created a PoC pr for cgroup v2 devices controller, see #208

I'm a newbie to Rust and want to learn it by writing some code for a real project. This PR is not expected to be merged (the code is not polished), but it should be helpful for you when implementing this feature.

@utam0k
Member Author

utam0k commented Aug 20, 2021

@MoZhonghua Excellent! If you're interested, you can have @Furisto continue this if he hasn't already started developing it. It may be a little challenging for a first-time contributor.

@Furisto
Collaborator

Furisto commented Aug 20, 2021

This is great @MoZhonghua. I will take a look at this over the weekend.

@Furisto
Collaborator

Furisto commented Aug 24, 2021

@MoZhonghua I think your code is fine. I have made changes so that it passes CI and some other stuff (will update your PR later), but basically I think we should take this, hide it behind a feature flag and then work on the remaining FIXMEs. Are you interested?
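Gating the experimental devices support behind a compile-time feature could look roughly like the sketch below (the feature name is an assumption for illustration; Cargo.toml would declare `[features]` with `cgroupsv2_devices = []`):

```rust
// Sketch of hiding experimental devices support behind a Cargo
// feature flag (the feature name "cgroupsv2_devices" is illustrative).

/// Returns true only when the crate is built with the feature enabled.
fn devices_support_enabled() -> bool {
    cfg!(feature = "cgroupsv2_devices")
}

// Compiled in only when the feature is enabled.
#[cfg(feature = "cgroupsv2_devices")]
fn apply_device_rules() {
    // eBPF-based device filtering would be wired in here.
}

fn main() {
    if devices_support_enabled() {
        #[cfg(feature = "cgroupsv2_devices")]
        apply_device_rules();
    } else {
        println!("built without cgroupsv2_devices");
    }
}
```

This lets the PoC land on the main branch while the remaining FIXMEs are worked through, without affecting default builds.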

@MoZhonghua

> @MoZhonghua I think your code is fine. I have made changes so that it passes CI and some other stuff (will update your PR later), but basically I think we should take this, hide it behind a feature flag and then work on the remaining FIXMEs. Are you interested?

@Furisto I'm glad to contribute more to this feature. You can create a list of tasks and assign some of them to me. Should we create a dedicated tracking issue for this feature?

@utam0k
Member Author

utam0k commented Aug 25, 2021

> @MoZhonghua I think your code is fine. I have made changes so that it passes CI and some other stuff (will update your PR later), but basically I think we should take this, hide it behind a feature flag and then work on the remaining FIXMEs. Are you interested?

> @Furisto I'm glad to contribute more to this feature. You can create a list of tasks and assign some of them to me. Should we create a dedicated tracking issue for this feature?

@MoZhonghua
Can I ask you to create a separate issue for devices, since it is too different from the other resource controllers?

@utam0k
Member Author

utam0k commented Nov 19, 2021

@Furisto Since the devices controller is the only special implementation, can we split it out into a separate issue about handling devices and mark this issue complete?

@Furisto
Collaborator

Furisto commented Nov 21, 2021

@utam0k Yes, we have this issue for devices.

@utam0k
Member Author

utam0k commented Nov 28, 2021

Special Thanks @Furisto @tsturzl @0xdco @TinySong @duduainankai

@utam0k utam0k closed this as completed Nov 28, 2021
@utam0k utam0k unpinned this issue Nov 28, 2021
@gattytto

gattytto commented Dec 22, 2021

This is what I get when I try the youki Rust runtime in the CRI-O engine for Kubernetes, running a native Rust-only binary pod that selects the runtime via runtimeClassName in the pod YAML:

apiVersion: node.k8s.io/v1  # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
  name: youki  # The name the RuntimeClass will be referenced by
  # RuntimeClass is a non-namespaced resource
handler: youki  # The name of the corresponding CRI configuration

This is the pod template:

apiVersion: v1
kind: Pod
metadata:
  name: rustest
  labels:
    name: rust
spec:
  runtimeClassName: youki
  containers:
  - name: rust
    image: quay.io/gattytto/rst:29c8045

This is the source for the container image.

These are the relevant sections of crio.conf:

[crio.runtime.runtimes.youki]
runtime_path = "/usr/bin/youki"
runtime_type ="oci"
runtime_root = "/run/youki"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
kubectl describe pod/rustest
Name:         rustest
Namespace:    default
Priority:     0
Node:         sol4/2001:----:----:----:----:----:----:ff13
Start Time:   Wed, 22 Dec 2021 16:23:45 -0300
Labels:       name=rust
Annotations:  cni.projectcalico.org/containerID: 294d201a4b507f3b67871f5e42e15ec03835a5283c51797dba102d12629e9406
              cni.projectcalico.org/podIP: 1100:200::78:5240/128
              cni.projectcalico.org/podIPs: 1100:200::78:5240/128
Status:       Pending
IP:           1100:200::78:5240
IPs:
  IP:  1100:200::78:5240
Containers:
  rust:
    Container ID:
    Image:          quay.io/gattytto/rst:29c8045
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hfbj5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-hfbj5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  26s   default-scheduler  Successfully assigned default/rustest to sol4
  Normal   Pulling    25s   kubelet            Pulling image "quay.io/gattytto/rst:29c8045"
  Normal   Pulled     2s    kubelet            Successfully pulled image "quay.io/gattytto/rst:29c8045" in 23.797168146s
  Warning  Failed     1s    kubelet            Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-22T16:24:10.949959886-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63/userdata", "--pid-file", "/run/containers/storage/overlay-containers/0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63/userdata/pidfile", "0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-22T16:24:10.963460485-03:00 container directory will be "/run/youki/0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-22T16:24:10.963531332-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63" } in "/run/youki/0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-22T16:24:10.963752694-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:206] 2021-12-22T16:24:10.963818616-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-22T16:24:10.963974423-03:00 Set OOM score to 1000
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-22T16:24:10.964830286-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-22T16:24:10.964906149-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-22T16:24:10.967299156-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply cpu resource restrictions
    2: failed to write to "/sys/fs/cgroup/kubepods/besteffort/pod1d77c698-5ae1-49e8-92a3-d12b6ea41aff/crio-0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63/cpu.max"
    3: Invalid argument (os error 22)
[INFO crates/libcgroups/src/common.rs:206] 2021-12-22T16:24:10.968453669-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-22T16:24:10.968495644-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod1d77c698-5ae1-49e8-92a3-d12b6ea41aff/crio-0bdf8c885c67f72917adef6a3911fac11e78b5df44b5ea3d606527aabf038d63"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Normal  Pulled  1s  kubelet  Container image "quay.io/gattytto/rst:29c8045" already present on machine

@Furisto
Collaborator

Furisto commented Dec 23, 2021

@gattytto Thanks for the detailed report. The pod is specified without resource restrictions, but the config.json that CRI-O creates for youki still sets a value for quota/period, and somehow this causes a value outside the accepted range to be written to the cgroup file. Can you build youki with the changes from here? I would like to know what we are trying to write here.
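For context: the cgroup v2 cpu.max file expects "$MAX $PERIOD", where $MAX is either a positive quota in microseconds or the literal "max" (unlimited). A zero quota is rejected by the kernel with EINVAL, which matches the error above. A hedged sketch of a conversion that avoids this (the helper name is illustrative, not necessarily youki's actual fix):

```rust
/// Convert an OCI cpu quota/period pair into the string expected by
/// the cgroup v2 `cpu.max` file: "$MAX $PERIOD", where $MAX is either
/// a positive microsecond quota or the literal "max" (unlimited).
/// A non-positive quota, as CRI-O can send for a pod without limits,
/// must map to "max" -- writing "0 100000" fails with EINVAL.
fn cpu_max_value(quota: i64, period: u64) -> String {
    if quota > 0 {
        format!("{} {}", quota, period)
    } else {
        format!("max {}", period)
    }
}

fn main() {
    assert_eq!(cpu_max_value(0, 100_000), "max 100000");        // unlimited
    assert_eq!(cpu_max_value(-1, 100_000), "max 100000");       // unlimited
    assert_eq!(cpu_max_value(50_000, 100_000), "50000 100000"); // 0.5 CPU
    println!("ok");
}
```

runc performs an equivalent quota-to-"max" translation when converting v1-style quota/period values to v2, which is why the same pod spec works there.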

@gattytto

> @gattytto Thanks for the detailed report. The pod is specified without resource restrictions, but the config.json that CRI-O creates for youki still sets a value for quota/period, and somehow this causes a value outside the accepted range to be written to the cgroup file. Can you build youki with the changes from here? I would like to know what we are trying to write here.

 kubectl describe pod/rustest
Name:         rustest
Namespace:    default
Priority:     0
Node:         beloved-oryx/2001:----:----:----:----:----:----:91f5
Start Time:   Fri, 24 Dec 2021 13:14:52 -0300
Labels:       name=rust
Annotations:  cni.projectcalico.org/containerID: 67ee4f8aafea297cbcf7046d9bd64e522d0b8b0413a19ae6a433796df3a25ce8
              cni.projectcalico.org/podIP: 1100:200::fa:fc82/128
              cni.projectcalico.org/podIPs: 1100:200::fa:fc82/128
Status:       Pending
IP:           1100:200::fa:fc82
IPs:
  IP:  1100:200::fa:fc82
Containers:
  rust:
    Container ID:
    Image:          quay.io/gattytto/rst:29c8045
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jnt9g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-jnt9g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  5m37s  default-scheduler  Successfully assigned default/rustest to beloved-oryx
  Warning  Failed     5m37s  kubelet            Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-24T13:14:53.168007314-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18/userdata", "--pid-file", "/run/containers/storage/overlay-containers/8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18/userdata/pidfile", "8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-24T13:14:53.174469147-03:00 container directory will be "/run/youki/8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-24T13:14:53.174535649-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18" } in "/run/youki/8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-24T13:14:53.174822568-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:14:53.174894136-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-24T13:14:53.174946117-03:00 Set OOM score to 1000
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:14:53.175587777-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:14:53.175641589-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-24T13:14:53.176346261-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply cpu resource restrictions
    2: failed to write 0 100000 to "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18/cpu.max"
    3: Invalid argument (os error 22)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:14:53.177124402-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-24T13:14:53.177174265-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-8c419a4cda428b8ed9dc3e80105797da165b3179010bfa332cfba0ab8b556a18"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-24T13:15:37.327619102-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-6df10ce94730ea6973209fbf3bd451915a8ec4881bdc8abff413716f64b27f50"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Warning  Failed  4m42s  kubelet  Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-24T13:15:48.288602158-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971/userdata", "--pid-file", "/run/containers/storage/overlay-containers/58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971/userdata/pidfile", "58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-24T13:15:48.298739405-03:00 container directory will be "/run/youki/58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-24T13:15:48.298805042-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971" } in "/run/youki/58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-24T13:15:48.299000608-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:15:48.299059276-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-24T13:15:48.299109716-03:00 Set OOM score to 1000
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:15:48.299810440-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:15:48.299863432-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-24T13:15:48.318147456-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply cpu resource restrictions
    2: failed to write 0 100000 to "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971/cpu.max"
    3: Invalid argument (os error 22)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:15:48.318922094-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-24T13:15:48.318961056-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-58133dacaaa8cddc9f19e1e8ea9f144e830e9902b9be2504fe921c2674e42971"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Warning  Failed  4m28s  kubelet  Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-24T13:16:02.771862592-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6/userdata", "--pid-file", "/run/containers/storage/overlay-containers/b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6/userdata/pidfile", "b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-24T13:16:02.778693663-03:00 container directory will be "/run/youki/b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-24T13:16:02.778768341-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6" } in "/run/youki/b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-24T13:16:02.779027389-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:16:02.779099193-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-24T13:16:02.779196757-03:00 Set OOM score to 1000
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:16:02.780030258-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:16:02.780088365-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-24T13:16:02.798124513-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply cpu resource restrictions
    2: failed to write 0 100000 to "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6/cpu.max"
    3: Invalid argument (os error 22)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:16:02.799037404-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-24T13:16:02.799106472-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-b6d83bb0804fc1038482ccf36c9f7e4951abc95c5f3c05d90624912e8b3c29c6"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Warning  Failed  4m14s  kubelet  Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-24T13:16:16.292715158-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3/userdata", "--pid-file", "/run/containers/storage/overlay-containers/deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3/userdata/pidfile", "deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-24T13:16:16.300474788-03:00 container directory will be "/run/youki/deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-24T13:16:16.300555303-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3" } in "/run/youki/deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-24T13:16:16.300773318-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:16:16.300838510-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-24T13:16:16.300890836-03:00 Set OOM score to 1000
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:16:16.301568227-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:16:16.301634839-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-24T13:16:16.322795092-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply cpu resource restrictions
    2: failed to write 0 100000 to "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3/cpu.max"
    3: Invalid argument (os error 22)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:16:16.323684020-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-24T13:16:16.323760977-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-deff443eda98d0e212c1f5a7f4bc1ccfbf56fdbb5651fc04f769311c9bbd9fe3"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Warning  Failed  3m37s (x3 over 4m1s)  kubelet  (combined from similar events): Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-24T13:16:53.304737030-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498/userdata", "--pid-file", "/run/containers/storage/overlay-containers/46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498/userdata/pidfile", "46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-24T13:16:53.311219842-03:00 container directory will be "/run/youki/46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-24T13:16:53.311383211-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498" } in "/run/youki/46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-24T13:16:53.311639540-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:16:53.311777752-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-24T13:16:53.311887035-03:00 Set OOM score to 1000
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:16:53.312721638-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-24T13:16:53.312785461-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-24T13:16:53.326117360-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply cpu resource restrictions
    2: failed to write 0 100000 to "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498/cpu.max"
    3: Invalid argument (os error 22)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-24T13:16:53.327927594-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-24T13:16:53.327989626-03:00 remove cgroup "/sys/fs/cgroup/kubepods/besteffort/pod11e5cb2c-c067-49ca-91d1-e2bc09338f5c/crio-46d967e42146cfc90d524df3a7f88b42ed7e554ec34df91bf7c73f8e47f14498"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Normal  Pulled  32s (x26 over 5m37s)  kubelet  Container image "quay.io/gattytto/rst:29c8045" already present on machine
youki version 0.0.1
commit: 0.0.1-0-4dc8863

@Furisto
Collaborator

Furisto commented Dec 25, 2021

@gattytto You have hit one of these areas where the runtime spec does not specify the expected behavior and leaves it up to the interpretation of the implementer. I have already noted down a few of these areas while implementing the runtime tests for cgroup v2. Once I am finished, I plan to raise them on the runtime spec repository so that they can be incorporated into the spec.

Some background info: The cpu.max file that we try to write to consists of two values: the quota (the amount of cpu time, in microseconds, that processes in the cgroup can use during one period) and the period (the length of one period, in microseconds). Processes in the cgroup can also be unrestricted (i.e. they can use as much cpu time as they want), in which case the quota is displayed as 'max'.

The runtime spec defines quota as a signed integer, which we interpret as follows:

  • If quota is not specified do nothing
  • If quota is a negative value write 'max' to the cgroup file
  • If quota is zero or a positive value just write the value to the file

The problem you have observed happens because the cpu controller requires that quota and period are at least 1ms (1000 microseconds), so any value from 0 to 999 will not be accepted. I do not handle these invalid values in any special way, as my options are either to fail when I detect them (which happens anyway when the value is written to the cgroup file, as you have encountered) or to set some default value.

Runc currently uses the latter approach and sets quota to 'max' if zero is specified. I did not do this, as I figured that someone might be tempted to suspend processes in a cgroup by specifying a quota of zero, unaware that this is not the way to do that. Returning an error, instead of silently setting the quota to 'max' (which is really the opposite of the desired outcome), seemed the better option to me.

Considering that CRI-O depends on runc's behavior here, and our current approach would break any pod specified without resource restrictions, we should follow runc, at least until this has been clarified in the spec. Until I have implemented the new behavior, you can specify resource restrictions for the pod, which should prevent this problem.
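To make the interpretation above concrete, here is a minimal sketch with a hypothetical helper name (not youki's actual code) that produces the string written to cpu.max. Note that with this logic a quota of 0 yields "0 100000", which the kernel rejects with EINVAL, exactly the "Invalid argument (os error 22)" seen in the logs above:

```rust
// Hypothetical sketch of the quota interpretation described above.
// Returns the string to write to cpu.max, or None if quota is unspecified.
fn cpu_max_value(quota: Option<i64>, period: Option<u64>) -> Option<String> {
    // cgroup v2 expects "<quota> <period>"; the kernel default period is 100000 us.
    let period = period.unwrap_or(100_000);
    match quota {
        None => None,                                        // not specified: do nothing
        Some(q) if q < 0 => Some(format!("max {}", period)), // negative: unrestricted
        Some(q) => Some(format!("{} {}", q, period)),        // zero or positive: write as-is
    }
}
```

The runc-style fallback would additionally map `Some(0)` to `"max"` instead of writing the literal zero.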

@gattytto


Thank you so much for explaining. I will try with pod resource restrictions and come back.

@gattytto

I have tried both "1200ms" and "1" as cpu values, with the same results:

apiVersion: v1
kind: Pod
metadata:
  name: rustest
  labels:
    name: rust
spec:
  runtimeClassName: youki
  containers:
  - name: rust
    image: quay.io/gattytto/rst:29c8045
    resources:
      requests:
        memory: "64Mi"
        cpu: 1
      limits:
        memory: "128Mi"
        cpu: 1

@Furisto
Collaborator

Furisto commented Dec 27, 2021

@gattytto That's odd. The only explanation I can think of at the moment is that CRI-O is creating a pause container for the pod and that container does not have resource restrictions. Can you try it with #569?

@gattytto

kubectl describe pod/rustest

Name:         rustest
Namespace:    default
Priority:     0
Node:         sweeping-bulldog/2001:----:----:----:----:----:----:8fc1
Start Time:   Tue, 28 Dec 2021 22:45:53 -0300
Labels:       name=rust
Annotations:  cni.projectcalico.org/containerID: f871c998580ef65a8571602bf324025424372c053f220a166135afa80faadc24
              cni.projectcalico.org/podIP: 1100:200::c2:6f48/128
              cni.projectcalico.org/podIPs: 1100:200::c2:6f48/128
Status:       Pending
IP:           1100:200::c2:6f48
IPs:
  IP:  1100:200::c2:6f48
Containers:
  rust:
    Container ID:
    Image:          quay.io/gattytto/rst:29c8045
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  128Mi
    Requests:
      cpu:        1
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m4dpq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-m4dpq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  26s   default-scheduler  Successfully assigned default/rustest to sweeping-bulldog
  Warning  Failed     25s   kubelet            Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-28T22:45:53.946342499-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f/userdata", "--pid-file", "/run/containers/storage/overlay-containers/b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f/userdata/pidfile", "b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-28T22:45:53.952851511-03:00 container directory will be "/run/youki/b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-28T22:45:53.952911359-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f" } in "/run/youki/b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-28T22:45:53.953094156-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-28T22:45:53.953148700-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-28T22:45:53.953203875-03:00 Set OOM score to 983
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-28T22:45:53.953892966-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-28T22:45:53.953948025-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcgroups/src/v2/hugetlb.rs:16] 2021-12-28T22:45:53.955352343-03:00 Apply hugetlb cgroup v2 config
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-28T22:45:53.955448602-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply hugetlb resource restrictions
    2: failed to open "/sys/fs/cgroup/kubepods/burstable/pod7e75cc29-b776-4b74-a78a-f76c6718f6cd/crio-b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f/hugetlb.2MB.limit_in_bytes"
    3: No such file or directory (os error 2)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-28T22:45:53.956397477-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-28T22:45:53.956449146-03:00 remove cgroup "/sys/fs/cgroup/kubepods/burstable/pod7e75cc29-b776-4b74-a78a-f76c6718f6cd/crio-b3adcbee75de7b5061a0192ec1bbbe2bfca110d447f51c618d4f00a5e216ae4f"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Warning  Failed  24s  kubelet  Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-28T22:45:55.143457765-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241/userdata", "--pid-file", "/run/containers/storage/overlay-containers/99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241/userdata/pidfile", "99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-28T22:45:55.151946532-03:00 container directory will be "/run/youki/99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-28T22:45:55.152071574-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241" } in "/run/youki/99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-28T22:45:55.152426505-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-28T22:45:55.153108500-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-28T22:45:55.153218001-03:00 Set OOM score to 983
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-28T22:45:55.154087491-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-28T22:45:55.154151527-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcgroups/src/v2/hugetlb.rs:16] 2021-12-28T22:45:55.155547152-03:00 Apply hugetlb cgroup v2 config
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-28T22:45:55.155707909-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply hugetlb resource restrictions
    2: failed to open "/sys/fs/cgroup/kubepods/burstable/pod7e75cc29-b776-4b74-a78a-f76c6718f6cd/crio-99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241/hugetlb.2MB.limit_in_bytes"
    3: No such file or directory (os error 2)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-28T22:45:55.156568462-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-28T22:45:55.156604357-03:00 remove cgroup "/sys/fs/cgroup/kubepods/burstable/pod7e75cc29-b776-4b74-a78a-f76c6718f6cd/crio-99e77e16ea6330932fb71ef9b33c7a94118eaa8bfa6fa256afd6de9a46e2c241"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
  Normal   Pulled  11s (x3 over 26s)  kubelet  Container image "quay.io/gattytto/rst:29c8045" already present on machine
  Warning  Failed  10s                kubelet  Error: container create failed: [DEBUG crates/youki/src/main.rs:92] 2021-12-28T22:46:09.122208345-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e/userdata", "--pid-file", "/run/containers/storage/overlay-containers/792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e/userdata/pidfile", "792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-28T22:46:09.129530335-03:00 container directory will be "/run/youki/792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-28T22:46:09.129603052-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e" } in "/run/youki/792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-28T22:46:09.129804631-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-28T22:46:09.129859059-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-28T22:46:09.129917478-03:00 Set OOM score to 983
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-28T22:46:09.130584468-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-28T22:46:09.130638589-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcgroups/src/v2/hugetlb.rs:16] 2021-12-28T22:46:09.155361888-03:00 Apply hugetlb cgroup v2 config
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-28T22:46:09.155487801-03:00 failed to run fork: failed to apply cgroups

Caused by:
    0: failed to apply resource limits to cgroup
    1: failed to apply hugetlb resource restrictions
    2: failed to open "/sys/fs/cgroup/kubepods/burstable/pod7e75cc29-b776-4b74-a78a-f76c6718f6cd/crio-792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e/hugetlb.2MB.limit_in_bytes"
    3: No such file or directory (os error 2)
[INFO crates/libcgroups/src/common.rs:207] 2021-12-28T22:46:09.156413188-03:00 cgroup manager V2 will be used
[DEBUG crates/libcgroups/src/v2/manager.rs:129] 2021-12-28T22:46:09.156455311-03:00 remove cgroup "/sys/fs/cgroup/kubepods/burstable/pod7e75cc29-b776-4b74-a78a-f76c6718f6cd/crio-792be1ed9a80d17826baa8e26a0f9f7b456d61026ce19a3ea93732a55fc6cb8e"
Error: failed to create container

Caused by:
    0: failed to receive a message from the intermediate process
    1: channel connection broken
youki --version
youki version 0.0.1
commit: 0.0.1-0-21c9d09

@Furisto
Collaborator

Furisto commented Dec 30, 2021

@gattytto The name of the interface file was wrong. Fixed with #579.
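For context on the fix: the failing path in the log used the cgroup v1 file name (`hugetlb.2MB.limit_in_bytes`), while cgroup v2 exposes the limit as `hugetlb.<pagesize>.max`. A hypothetical sketch of building the v2 path (helper name is illustrative, not youki's actual code):

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper: build the cgroup v2 hugetlb interface file path
// ("hugetlb.<pagesize>.max") instead of the v1 name
// ("hugetlb.<pagesize>.limit_in_bytes") that failed in the log above.
fn hugetlb_v2_limit_file(cgroup_path: &Path, page_size: &str) -> PathBuf {
    cgroup_path.join(format!("hugetlb.{}.max", page_size))
}
```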

@gattytto

gattytto commented Dec 31, 2021

Thank you for your answer. Now it gives a different outcome.

@gattytto The name of the interface file was wrong. Fixed with #579.

logs:


kubectl describe pod/rustest
Name:         rustest
Namespace:    default
Priority:     0
Node:         driven-lizard/2001:----:----:----:----:----:----:1bda
Start Time:   Fri, 31 Dec 2021 14:43:16 -0300
Labels:       name=rust
Annotations:  cni.projectcalico.org/containerID: c269c49be507d20f07dc7ecdecd78db2b382cb6d4d16cfd16114d2d09b10a795
              cni.projectcalico.org/podIP: 1100:200::3e:2340/128
              cni.projectcalico.org/podIPs: 1100:200::3e:2340/128
Status:       Running
IP:           1100:200::3e:2340
IPs:
  IP:  1100:200::3e:2340
Containers:
  rust:
    Container ID:   cri-o://e06d3df94150a466707beca53cb3840d8b0b3373eba66bfbb092cb76601ccd0b
    Image:          quay.io/gattytto/rst:29c8045
    Image ID:       quay.io/gattytto/rst@sha256:c3aac85ed499108dbbed0f6c297d7f766b984c2367c5588e49ab60a3a5b44b62
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 31 Dec 2021 14:43:37 -0300
      Finished:     Fri, 31 Dec 2021 14:43:37 -0300
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:     1
      memory:  128Mi
    Requests:
      cpu:        1
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xdwtg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-xdwtg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/rustest to driven-lizard
  Normal   Pulling    34s                kubelet            Pulling image "quay.io/gattytto/rst:29c8045"
  Normal   Pulled     15s                kubelet            Successfully pulled image "quay.io/gattytto/rst:29c8045" in 19.361145627s
  Normal   Created    14s (x2 over 14s)  kubelet            Created container rust
  Normal   Pulled     14s                kubelet            Container image "quay.io/gattytto/rst:29c8045" already present on machine
  Normal   Started    13s (x2 over 14s)  kubelet            Started container rust
  Warning  BackOff    12s (x2 over 13s)  kubelet            Back-off restarting failed container



################################
################################
####kubectl logs pod/rustest###
################################
################################

[DEBUG crates/youki/src/main.rs:92] 2021-12-31T14:43:55.223858793-03:00 started by user 0 with ArgsOs { inner: ["/usr/bin/youki", "--root=/run/youki", "create", "--bundle", "/run/containers/storage/overlay-containers/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601/userdata", "--pid-file", "/run/containers/storage/overlay-containers/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601/userdata/pidfile", "cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601"] }
[DEBUG crates/libcontainer/src/container/init_builder.rs:94] 2021-12-31T14:43:55.231029905-03:00 container directory will be "/run/youki/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601"
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-31T14:43:55.231082509-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601", status: Creating, pid: None, bundle: "/run/containers/storage/overlay-containers/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601/userdata", annotations: Some({}), created: None, creator: None, use_systemd: None }, root: "/run/youki/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601" } in "/run/youki/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601"
[DEBUG crates/libcontainer/src/rootless.rs:50] 2021-12-31T14:43:55.231328266-03:00 This is NOT a rootless container
[INFO crates/libcgroups/src/common.rs:207] 2021-12-31T14:43:55.231384218-03:00 cgroup manager V2 will be used
[DEBUG crates/libcontainer/src/container/builder_impl.rs:87] 2021-12-31T14:43:55.231434156-03:00 Set OOM score to 978
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-31T14:43:55.232052462-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-31T14:43:55.232102612-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcgroups/src/v2/hugetlb.rs:16] 2021-12-31T14:43:55.248156252-03:00 Apply hugetlb cgroup v2 config
[DEBUG crates/libcgroups/src/v2/io.rs:21] 2021-12-31T14:43:55.248258705-03:00 Apply io cgroup v2 config
[DEBUG crates/libcgroups/src/v2/pids.rs:17] 2021-12-31T14:43:55.248288706-03:00 Apply pids cgroup v2 config
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-31T14:43:55.248364965-03:00 Controller rdma is not yet implemented.
[WARN crates/libcgroups/src/v2/util.rs:41] 2021-12-31T14:43:55.248397320-03:00 Controller misc is not yet implemented.
[DEBUG crates/libcontainer/src/namespaces.rs:65] 2021-12-31T14:43:55.248437761-03:00 unshare or setns: LinuxNamespace { typ: Pid, path: None }
[DEBUG crates/libcontainer/src/process/channel.rs:52] 2021-12-31T14:43:55.248673957-03:00 sending init pid (Pid(31045))
[DEBUG crates/libcontainer/src/namespaces.rs:65] 2021-12-31T14:43:55.249624262-03:00 unshare or setns: LinuxNamespace { typ: Uts, path: Some("/var/run/utsns/8d7fc62f-718c-4ab6-acf3-e8089764ac3c") }
[DEBUG crates/libcontainer/src/namespaces.rs:65] 2021-12-31T14:43:55.249697505-03:00 unshare or setns: LinuxNamespace { typ: Ipc, path: Some("/var/run/ipcns/8d7fc62f-718c-4ab6-acf3-e8089764ac3c") }
[DEBUG crates/libcontainer/src/namespaces.rs:65] 2021-12-31T14:43:55.249723808-03:00 unshare or setns: LinuxNamespace { typ: Network, path: Some("/var/run/netns/8d7fc62f-718c-4ab6-acf3-e8089764ac3c") }
[DEBUG crates/libcontainer/src/namespaces.rs:65] 2021-12-31T14:43:55.249744874-03:00 unshare or setns: LinuxNamespace { typ: Mount, path: None }
[DEBUG crates/libcontainer/src/namespaces.rs:65] 2021-12-31T14:43:55.249813376-03:00 unshare or setns: LinuxNamespace { typ: Cgroup, path: None }
[DEBUG crates/libcontainer/src/rootfs/rootfs.rs:38] 2021-12-31T14:43:55.249834864-03:00 Prepare rootfs: "/etc/containers/storage/driven-lizard/overlay/f445b1472b515a65e86ddd97b4cdb7068d7d7f310078669559206d032721ff6b/merged"
[DEBUG crates/libcontainer/src/rootfs/rootfs.rs:59] 2021-12-31T14:43:55.252266089-03:00 mount root fs "/etc/containers/storage/driven-lizard/overlay/f445b1472b515a65e86ddd97b4cdb7068d7d7f310078669559206d032721ff6b/merged"
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252309015-03:00 Mounting Mount { destination: "/proc", typ: Some("proc"), source: Some("proc"), options: Some(["nosuid", "noexec", "nodev"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252452830-03:00 Mounting Mount { destination: "/dev", typ: Some("tmpfs"), source: Some("tmpfs"), options: Some(["nosuid", "strictatime", "mode=755", "size=65536k"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252556693-03:00 Mounting Mount { destination: "/dev/pts", typ: Some("devpts"), source: Some("devpts"), options: Some(["nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252638548-03:00 Mounting Mount { destination: "/dev/mqueue", typ: Some("mqueue"), source: Some("mqueue"), options: Some(["nosuid", "noexec", "nodev"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252689048-03:00 Mounting Mount { destination: "/sys", typ: Some("sysfs"), source: Some("sysfs"), options: Some(["nosuid", "noexec", "nodev", "ro"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252758427-03:00 Mounting Mount { destination: "/sys/fs/cgroup", typ: Some("cgroup"), source: Some("cgroup"), options: Some(["nosuid", "noexec", "nodev", "relatime", "ro"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:266] 2021-12-31T14:43:55.252795374-03:00 Mounting cgroup v2 filesystem
[DEBUG crates/libcontainer/src/rootfs/mount.rs:274] 2021-12-31T14:43:55.252813100-03:00 Mount { destination: "/sys/fs/cgroup", typ: Some("cgroup2"), source: Some("cgroup"), options: Some([]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252863028-03:00 Mounting Mount { destination: "/dev/shm", typ: Some("bind"), source: Some("/run/containers/storage/overlay-containers/c269c49be507d20f07dc7ecdecd78db2b382cb6d4d16cfd16114d2d09b10a795/userdata/shm"), options: Some(["rw", "bind"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.252932483-03:00 Mounting Mount { destination: "/etc/resolv.conf", typ: Some("bind"), source: Some("/run/containers/storage/overlay-containers/c269c49be507d20f07dc7ecdecd78db2b382cb6d4d16cfd16114d2d09b10a795/userdata/resolv.conf"), options: Some(["rw", "bind", "nodev", "nosuid", "noexec"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.253056981-03:00 Mounting Mount { destination: "/etc/hostname", typ: Some("bind"), source: Some("/run/containers/storage/overlay-containers/c269c49be507d20f07dc7ecdecd78db2b382cb6d4d16cfd16114d2d09b10a795/userdata/hostname"), options: Some(["rw", "bind"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.253142886-03:00 Mounting Mount { destination: "/etc/hosts", typ: Some("bind"), source: Some("/var/lib/kubelet/pods/278b7de9-61d0-430f-aa67-1e0f88a860b9/etc-hosts"), options: Some(["rw", "rbind", "rprivate", "bind"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.253224861-03:00 Mounting Mount { destination: "/dev/termination-log", typ: Some("bind"), source: Some("/var/lib/kubelet/pods/278b7de9-61d0-430f-aa67-1e0f88a860b9/containers/rust/17b3e2ac"), options: Some(["rw", "rbind", "rprivate", "bind"]) }
[DEBUG crates/libcontainer/src/rootfs/mount.rs:50] 2021-12-31T14:43:55.253297279-03:00 Mounting Mount { destination: "/var/run/secrets/kubernetes.io/serviceaccount", typ: Some("bind"), source: Some("/var/lib/kubelet/pods/278b7de9-61d0-430f-aa67-1e0f88a860b9/volumes/kubernetes.io~projected/kube-api-access-xdwtg"), options: Some(["ro", "rbind", "rprivate", "bind"]) }
[DEBUG crates/libcontainer/src/process/container_init_process.rs:124] 2021-12-31T14:43:55.253806957-03:00 readonly path "/proc/bus" mounted
[DEBUG crates/libcontainer/src/process/container_init_process.rs:124] 2021-12-31T14:43:55.253836447-03:00 readonly path "/proc/fs" mounted
[DEBUG crates/libcontainer/src/process/container_init_process.rs:124] 2021-12-31T14:43:55.253857709-03:00 readonly path "/proc/irq" mounted
[DEBUG crates/libcontainer/src/process/container_init_process.rs:124] 2021-12-31T14:43:55.253877810-03:00 readonly path "/proc/sys" mounted
[DEBUG crates/libcontainer/src/process/container_init_process.rs:124] 2021-12-31T14:43:55.253899506-03:00 readonly path "/proc/sysrq-trigger" mounted
[WARN crates/libcontainer/src/process/container_init_process.rs:140] 2021-12-31T14:43:55.253968378-03:00 masked path "/proc/latency_stats" not exist
[WARN crates/libcontainer/src/process/container_init_process.rs:140] 2021-12-31T14:43:55.253995193-03:00 masked path "/proc/timer_stats" not exist
[WARN crates/libcontainer/src/process/container_init_process.rs:140] 2021-12-31T14:43:55.254012268-03:00 masked path "/proc/sched_debug" not exist
[DEBUG crates/libcontainer/src/capabilities.rs:128] 2021-12-31T14:43:55.254107018-03:00 reset all caps
[DEBUG crates/libcontainer/src/capabilities.rs:128] 2021-12-31T14:43:55.254162817-03:00 reset all caps
[DEBUG crates/libcontainer/src/capabilities.rs:135] 2021-12-31T14:43:55.254201415-03:00 dropping bounding capabilities to Some({DacOverride, Setuid, NetBindService, Kill, Fsetid, Fowner, Setgid, Chown, Setpcap})
[WARN crates/libcontainer/src/syscall/linux.rs:139] 2021-12-31T14:43:55.254266346-03:00 CAP_BPF is not supported.
[WARN crates/libcontainer/src/syscall/linux.rs:139] 2021-12-31T14:43:55.254290257-03:00 CAP_CHECKPOINT_RESTORE is not supported.
[WARN crates/libcontainer/src/syscall/linux.rs:139] 2021-12-31T14:43:55.254301861-03:00 CAP_PERFMON is not supported.
[DEBUG crates/libcontainer/src/process/container_main_process.rs:90] 2021-12-31T14:43:55.254581233-03:00 init pid is Pid(31045)
[DEBUG crates/libcontainer/src/container/container.rs:191] 2021-12-31T14:43:55.254632621-03:00 Save container status: Container { state: State { oci_version: "v1.0.2", id: "cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601", status: Created, pid: Some(31045), bundle: "/run/containers/storage/overlay-containers/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601/userdata", annotations: Some({"io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.cri-o.ResolvPath": "/run/containers/storage/overlay-containers/c269c49be507d20f07dc7ecdecd78db2b382cb6d4d16cfd16114d2d09b10a795/userdata/resolv.conf", "io.kubernetes.cri-o.TTY": "false", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.cri-o.Stdin": "false", "io.kubernetes.container.name": "rust", "io.kubernetes.container.hash": "8200690c", "io.kubernetes.cri-o.ImageRef": "d61b000cca08f105c6675916613dc295c707965b75c2f7880615b47a1fbee4dd", "io.kubernetes.cri-o.IP.0": "1100:200::3e:2340", "io.kubernetes.cri-o.MountPoint": "/etc/containers/storage/driven-lizard/overlay/f445b1472b515a65e86ddd97b4cdb7068d7d7f310078669559206d032721ff6b/merged", "io.kubernetes.cri-o.Annotations": "{\"io.kubernetes.container.hash\":\"8200690c\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}", "io.kubernetes.cri-o.SandboxName": "k8s_rustest_default_278b7de9-61d0-430f-aa67-1e0f88a860b9_0", "io.kubernetes.cri-o.SeccompProfilePath": "", "io.kubernetes.cri-o.StdinOnce": "false", "io.kubernetes.cri-o.Name": "k8s_rust_rustest_default_278b7de9-61d0-430f-aa67-1e0f88a860b9_2", "io.kubernetes.cri-o.Labels": 
"{\"io.kubernetes.container.name\":\"rust\",\"io.kubernetes.pod.name\":\"rustest\",\"io.kubernetes.pod.namespace\":\"default\",\"io.kubernetes.pod.uid\":\"278b7de9-61d0-430f-aa67-1e0f88a860b9\"}", "io.kubernetes.cri-o.LogPath": "/var/log/pods/default_rustest_278b7de9-61d0-430f-aa67-1e0f88a860b9/rust/2.log", "io.kubernetes.cri-o.SandboxID": "c269c49be507d20f07dc7ecdecd78db2b382cb6d4d16cfd16114d2d09b10a795", "io.kubernetes.pod.namespace": "default", "io.container.manager": "cri-o", "io.kubernetes.cri-o.ContainerType": "container", "io.kubernetes.cri-o.Image": "d61b000cca08f105c6675916613dc295c707965b75c2f7880615b47a1fbee4dd", "io.kubernetes.pod.name": "rustest", "kubernetes.io/config.seen": "2021-12-31T14:43:16.285819888-03:00", "kubernetes.io/config.source": "api", "io.kubernetes.container.restartCount": "2", "io.kubernetes.cri-o.Volumes": "[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/278b7de9-61d0-430f-aa67-1e0f88a860b9/etc-hosts\",\"readonly\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/278b7de9-61d0-430f-aa67-1e0f88a860b9/containers/rust/17b3e2ac\",\"readonly\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/278b7de9-61d0-430f-aa67-1e0f88a860b9/volumes/kubernetes.io~projected/kube-api-access-xdwtg\",\"readonly\":true}]", "io.kubernetes.pod.terminationGracePeriod": "30", "io.kubernetes.cri-o.ContainerID": "cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601", "io.kubernetes.cri-o.Metadata": "{\"name\":\"rust\",\"attempt\":2}", "kubectl.kubernetes.io/last-applied-configuration": 
"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"rust\"},\"name\":\"rustest\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"quay.io/gattytto/rst:29c8045\",\"name\":\"rust\",\"resources\":{\"limits\":{\"cpu\":1,\"memory\":\"128Mi\"},\"requests\":{\"cpu\":1,\"memory\":\"64Mi\"}}}],\"runtimeClassName\":\"youki\"}}\n", "io.kubernetes.pod.uid": "278b7de9-61d0-430f-aa67-1e0f88a860b9", "io.kubernetes.cri-o.Created": "2021-12-31T14:43:55.178340341-03:00", "io.kubernetes.cri-o.ImageName": "quay.io/gattytto/rst:29c8045"}), created: Some(2021-12-31T17:43:55.254626102Z), creator: Some(0), use_systemd: Some(false) }, root: "/run/youki/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601" } in "/run/youki/cfc87eda48734d53b66b9a7bbe44dab1cd435f8a88d1f17fc27769fa821be601"
[DEBUG crates/libcontainer/src/notify_socket.rs:43] 2021-12-31T14:43:55.341628903-03:00 received: start container
[DEBUG crates/libcontainer/src/process/fork.rs:16] 2021-12-31T14:43:55.341786911-03:00 failed to run fork: EACCES: Permission denied
youki version 0.0.1
commit: 0.0.1-0-597a0f0

@utam0k
Member Author

utam0k commented Jan 1, 2022

@Furisto @gattytto Hi! I'm sorry for the delay in taking care of this problem. Since this problem is very interesting and exciting for youki, I created a dedicated issue for it. To make it easier for other people interested in youki and cri-o to find and follow later, we can continue the discussion in that issue. If you have any ideas on how to reproduce this using kind or other tools, please comment on that issue.
#584
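The pod status dump above selects youki through `runtimeClassName: youki`. As a starting point for reproducing this outside the reporter's cluster, a minimal sketch of the Kubernetes wiring, under the assumption that cri-o already registers a runtime handler named `youki` (e.g. a `[crio.runtime.runtimes.youki]` table in crio.conf pointing at the youki binary; paths are illustrative):

```yaml
# Sketch of the RuntimeClass and pod used in this thread's reproduction.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: youki
handler: youki   # must match the runtime name configured in cri-o
---
apiVersion: v1
kind: Pod
metadata:
  name: rustest
  labels:
    name: rust
spec:
  runtimeClassName: youki
  containers:
    - name: rust
      image: quay.io/gattytto/rst:29c8045
      resources:
        requests:
          cpu: 1
          memory: 64Mi
        limits:
          cpu: 1
          memory: 128Mi
```

This matches the spec embedded in the `kubectl.kubernetes.io/last-applied-configuration` annotation in the log above, so it should trigger the same code paths (cgroup v2 resource limits applied by youki via cri-o).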
