WSL2 ERROR: failed to create cluster #2323

Closed
piyushvj opened this issue Jun 22, 2021 · 46 comments · Fixed by #2465
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@piyushvj

What happened:
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0622 15:16:13.468494 216 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"

What is expected:
The cluster should be created without any errors.

How to reproduce it:
Run the following command:
$ kind create cluster

Anything else we need to know?:
I recently installed Ubuntu as a WSL2 virtual machine on Windows 10. I am running Ubuntu in Windows Terminal as an admin user, and I installed Docker and configured it to run as a non-root user. Environment details are below.

ENVIRONMENT:

Ubuntu
Command used: $ lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal

Kubectl Installation
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management
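
For reference, the native package-management route on that page looked roughly like this at the time (the repository URL and keyring path come from the linked docs, not this thread; treat it as a sketch rather than a substitute for the docs):

sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubectl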

Kubectl Version
Command used: $ kubectl version --client

Client Version:
version.Info{
Major:"1", Minor:"21",
GitVersion:"v1.21.2",
GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7",
GitTreeState:"clean",
BuildDate:"2021-06-16T12:59:11Z",
GoVersion:"go1.16.5",
Compiler:"gc",
Platform:"linux/amd64"
}

kind Installation:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

kind version:
Command used: $ kind version

kind v0.11.1 go1.16.4 linux/amd64

docker info

Command used: $ docker info

Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
scan: Docker Scan (Docker Inc., v0.8.0)

Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 2
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
Default Runtime: runc
Init Binary: docker-init
containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.4.72-microsoft-standard-WSL2
Operating System: Ubuntu 20.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 6.133GiB
Name: LAPTOP-TN6NO0LS
ID: JDCK:NRQ2:ML5P:EUMK:OBYG:76PM:5SXD:FMYK:KHCX:NDTB:IQ4R:KIBJ
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

@piyushvj added the kind/bug label Jun 22, 2021
@BenTheElder
Member

Hi, since this is WSL2 have you followed https://kind.sigs.k8s.io/docs/user/using-wsl2/ ?

@BenTheElder BenTheElder changed the title ERROR: failed to create cluster WSL2 ERROR: failed to create cluster Jun 22, 2021
@BenTheElder
Member

Backing Filesystem: extfs

I seem to recall we may need to detect extfs and mount devicemapper; see #2149.

also:

running ubuntu on windows terminal as admin user, also installed docker and set it as non root user.

Rootless still comes with some issues to be aware of. Officially, Kubernetes does not support rootless yet, but kind does anyhow, with some limitations and workarounds.

https://kind.sigs.k8s.io/docs/user/rootless/
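
As of this issue, that page also requires cgroup v2 and systemd delegation on the host for rootless; a minimal sketch of the delegation drop-in it describes (assuming a systemd host):

# allow the user session to manage its own cgroups (per the rootless guide)
sudo mkdir -p /etc/systemd/system/user@.service.d
cat <<EOF | sudo tee /etc/systemd/system/user@.service.d/delegate.conf
[Service]
Delegate=yes
EOF
sudo systemctl daemon-reload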

@BenTheElder BenTheElder changed the title WSL2 ERROR: failed to create cluster WSL2 [rootless] ERROR: failed to create cluster Jun 22, 2021
@piyushvj
Author

Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0622 15:16:13.468494 216 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
I0622 15:16:13.475155 216 certs.go:110] creating a new certificate authority for ca
[init] Using Kubernetes version: v1.21.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
I0622 15:16:13.568985 216 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0622 15:16:13.886452 216 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0622 15:16:14.040123 216 certs.go:487] validating certificate period for front-proxy-ca certificate
I0622 15:16:14.104052 216 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
I0622 15:16:14.305692 216 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0622 15:16:14.777293 216 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0622 15:16:14.889612 216 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0622 15:16:14.976515 216 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0622 15:16:15.053991 216 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0622 15:16:15.228481 216 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0622 15:16:15.415550 216 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0622 15:16:15.470608 216 manifests.go:96] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0622 15:16:15.471048 216 certs.go:487] validating certificate period for CA certificate
I0622 15:16:15.471173 216 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0622 15:16:15.471177 216 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0622 15:16:15.471180 216 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0622 15:16:15.471182 216 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0622 15:16:15.471185 216 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0622 15:16:15.475824 216 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0622 15:16:15.475857 216 manifests.go:96] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0622 15:16:15.476114 216 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0622 15:16:15.476135 216 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0622 15:16:15.476140 216 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0622 15:16:15.476144 216 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0622 15:16:15.476147 216 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0622 15:16:15.476151 216 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0622 15:16:15.476183 216 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0622 15:16:15.476931 216 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0622 15:16:15.476959 216 manifests.go:96] [control-plane] getting StaticPodSpecs
I0622 15:16:15.477225 216 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0622 15:16:15.477807 216 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0622 15:16:15.479260 216 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0622 15:16:15.479287 216 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
I0622 15:16:15.479914 216 loader.go:372] Config loaded from file: /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0622 15:16:15.482342 216 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 1 milliseconds
[... ~80 near-identical healthz polls, one every 500 ms, omitted for brevity ...]
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... healthz polling continues, omitted ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... healthz polling continues, omitted ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... healthz polling continues, omitted ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... healthz polling continues for the rest of the 4-minute wait, omitted ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
            timed out waiting for the condition

    This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
            - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:225
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1371

@BenTheElder
Member

Hi, can you please answer my previous questions?

I can see that the kubelet is timing out reaching the API server, but that does not tell me:

  • whether you have followed the guides in our docs, which touch on known issues with this environment
  • what actually failed; unfortunately this is complex and not easy for the tool to surface. As the kubeadm output says, there are many possible failure modes leading to this symptom.

If not for those, please run kind create cluster --retain (--retain prevents cleanup on failure), then kind export logs, and upload the logs to this issue so we can see what the system component logs say.
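
That is, roughly:

kind create cluster --retain   # keep the node container around after a failure
kind export logs ./kind-logs   # collect node and system component logs

(./kind-logs is just an example destination; kind export logs writes to a temp directory if none is given.)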

@AkihiroSuda
Member

I'm confused. The issue title seems about rootless, but docker info in the OP seems rootful.
Also, Cgroup Version: 1 is not supported for kind with rootless docker.
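
For anyone hitting this, a quick way to check which cgroup version the host is actually running (the first form assumes Docker 20.10+):

docker info --format '{{.CgroupVersion}}'   # prints 1 or 2
stat -fc %T /sys/fs/cgroup/                 # cgroup2fs => v2, tmpfs => v1

On WSL2, a commonly reported way to force cgroup v2 is setting kernelCommandLine = cgroup_no_v1=all under [wsl2] in %UserProfile%\.wslconfig and restarting WSL; that workaround comes from community reports rather than from this thread.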

@BenTheElder
Member

I tagged it rootless due to

running ubuntu on windows terminal as admin user, also installed docker and set it as non root user.

but in general we don't have enough info to go on for this issue yet, and the phrasing is ambiguous

@BenTheElder BenTheElder changed the title WSL2 [rootless] ERROR: failed to create cluster WSL2 ERROR: failed to create cluster Jun 24, 2021
@jshbrntt

Experiencing the same issue, and yes I followed the instructions on the WSL2 page.
https://kind.sigs.k8s.io/docs/user/using-wsl2/

docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 42
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.16.3-microsoft-standard-WSL2
 Operating System: Ubuntu 20.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 7.719GiB
 Name: diplodocus
 ID: 7TPX:P36J:ZH3U:VJ4K:SUI5:AWYR:CTIM:TDSU:QJE4:2SGH:LBUT:LPEF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: joshuakodify
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support

kind create cluster --retain

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0625 15:33:05.057449     216 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.21.1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0625 15:33:05.071811     216 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0625 15:33:05.305597     216 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0625 15:33:05.716332     216 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0625 15:33:05.962382     216 certs.go:487] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0625 15:33:06.118136     216 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0625 15:33:06.406380     216 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0625 15:33:07.287068     216 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0625 15:33:07.521787     216 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0625 15:33:07.721355     216 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0625 15:33:07.908779     216 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0625 15:33:08.043041     216 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0625 15:33:08.109474     216 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0625 15:33:08.238200     216 manifests.go:96] [control-plane] getting StaticPodSpecs
I0625 15:33:08.238618     216 certs.go:487] validating certificate period for CA certificate
I0625 15:33:08.238735     216 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0625 15:33:08.238776     216 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0625 15:33:08.238787     216 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0625 15:33:08.238799     216 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0625 15:33:08.238840     216 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0625 15:33:08.245347     216 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0625 15:33:08.245395     216 manifests.go:96] [control-plane] getting StaticPodSpecs
I0625 15:33:08.245737     216 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0625 15:33:08.245774     216 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0625 15:33:08.245781     216 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0625 15:33:08.245786     216 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0625 15:33:08.245792     216 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0625 15:33:08.245798     216 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0625 15:33:08.245804     216 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0625 15:33:08.246613     216 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0625 15:33:08.246655     216 manifests.go:96] [control-plane] getting StaticPodSpecs
I0625 15:33:08.246931     216 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0625 15:33:08.247465     216 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0625 15:33:08.248321     216 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0625 15:33:08.248358     216 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
I0625 15:33:08.249910     216 loader.go:372] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0625 15:33:08.252973     216 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[... ~80 near-identical healthz polls (15:33:08 - 15:33:47, one every 500ms) elided ...]
I0625 15:33:47.754598     216 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... 11 more healthz polls (15:33:48 - 15:33:53) elided ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... 20 more healthz polls (15:33:53 - 15:34:03) elided ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... 40 more healthz polls (15:34:03 - 15:34:23) elided ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[... 80 more healthz polls (15:34:23 - 15:35:03) elided ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:225
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
        /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:225
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1371

kind export logs

Exporting logs for cluster "kind" to:
/tmp/757334508

757334508.zip
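For anyone reproducing this, a minimal sketch of the retain-and-export workflow that produces logs like the attachment above (the output directory is arbitrary; kind picks a temp dir like the one shown if you omit it):

# keep the node containers around after a failed create
kind create cluster --retain
# export each node's logs (serial.log, kubelet/containerd journals, etc.) to a directory
kind export logs /tmp/kind-logs
# clean up when done
kind delete cluster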

@ntx-ben commented Jun 29, 2021

Using Alpine under WSL2 on my Windows 10 box.

I had this same issue when I upgraded to kind-node v1.21.1 but forgot to also update the kind CLI to 0.11.1.

@benc-uk commented Jun 29, 2021

Same problem, also following the steps on the WSL2 page
kind v0.11.1 with image kindest/node:v1.21.1

Kind used to work in the past on WSL2 :'(

@BenTheElder (Member)

I had this same issue when I upgraded to kind-node v1.21.1 but forgot to also update the kind CLI to 0.11.1.

NOTE: This is not, in general, guaranteed to be supported. For pre-built images, please see the release notes for your release.
We do our best to ensure things just work across versions, but, as in the case of v1.21, we had to adapt to upstream changes, requiring a new kind version. We are looking at how to avoid that more generally and to make image compatibility clearer.

I just had a chance to look at the logs in #2323 (comment)

Do you all have:

Backing Filesystem: extfs

in your docker info output?

If so, try #1945 (comment)

If that resolves it, we probably do still need to follow up around mounting this when we detect this backing filesystem. I've not yet found a good reference for which filesystems actually use this (and it is not always present at all), so we've previously added specific cases for known ones (zfs, btrfs).

From the serial.log for the node:

Failed to create symlink /sys/fs/cgroup/cpu: File exists
Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists

This may or may not be a problem. We are attempting to recreate a more natural hierarchy from the node's POV, but it can typically function without this.

@benc-uk commented Jun 29, 2021

Yep, I have extfs too:

$ docker info | grep Backing
  Backing Filesystem: extfs

It's not something I've ever even thought about until now. I'm trying the suggested config and still not having any luck.

This is the config I'm using with kind create cluster --config=cluster.yaml

cluster.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 30000
        protocol: TCP
    extraMounts:
      - hostPath: /dev/mapper
        containerPath: /dev/mapper

still getting the issue on WSL2 :(

@BenTheElder (Member)

Looking around: it seems distros (Ubuntu) under WSL2 are not using systemd even when they otherwise would (Alpine typically doesn't, period).

We've had trouble with this before; Kubernetes and most of our developers on Linux are largely supporting/using systemd (which relates to cgroup setup), see #2156 (comment).

d777456 was meant to resolve some of this, but that's in v0.11+ images.

@benc-uk commented Jun 30, 2021

In WSL2 I'm running Ubuntu 20.04

Running the v1.17.17 image finally worked for me:
kind create cluster --config=cluster.yaml --image kindest/node:v1.17.17

@BenTheElder (Member)

Without pinning the digest, it's possible this was an older cached iteration of that image. For v0.11.1 the digests are here: https://github.com/kubernetes-sigs/kind/releases/tag/v0.11.1
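A hedged sketch of pinning by digest (the sha256 value below is a placeholder, not a real digest; copy the actual one for kindest/node:v1.17.17 from the release notes):

# create the cluster from an exact, digest-pinned node image
kind create cluster --config=cluster.yaml \
  --image kindest/node:v1.17.17@sha256:<digest-from-the-release-notes>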

@jshbrntt commented Jul 2, 2021

In WSL2 I'm running Ubuntu 20.04

Running 1.17.17 image finally worked for me
kind create cluster --config=cluster.yaml --image kindest/node:v1.17.17

Also works for me, thanks!

@DanielJoyce commented Jul 15, 2021

I'm seeing the same behaviour on a Pop!_OS (Ubuntu) laptop. It stops at the same point as the OP's, and the cluster fails to come up. This was working until I installed the latest batch of OS updates.

localdev on  master [!?] 
❯ uname -a
Linux pop-os 5.11.0-7620-generic #21~1624379747~20.10~3abeff8-Ubuntu SMP Wed Jun 23 02:23:59 UTC  x86_64 x86_64 x86_64 GNU/Linux

localdev on  master [!?] 
❯ cat /etc/lsb-release
DISTRIB_ID=Pop
DISTRIB_RELEASE=20.10
DISTRIB_CODENAME=groovy
DISTRIB_DESCRIPTION="Pop!_OS 20.10"

Will try the workaround.

EDIT: No dice.

@networkop (Contributor)

I think I may have found the problem. Starting the control plane fails because the kubelet on the worker nodes fails to start, because /var/lib/kubelet/config.yaml is not present, because the API server on the control-plane node is not running, because the kubelet on the control-plane node is not running, because the cgroup-root "kubelet" is not found, which is because of this line.

Since this is baked into the image, the best workaround I've found is to create a version of the file without cgroup-root:

# https://github.com/kubernetes/kubernetes/blob/ba8fcafaf8c502a454acd86b728c857932555315/build/debs/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
# On cgroup v1, the /kubelet cgroup is created in the entrypoint script before running systemd.
# On cgroup v2, the /kubelet cgroup is created here. (See the comments in the entrypoint script for the reason.)
ExecStartPre=/bin/sh -euc "if [ -f /sys/fs/cgroup/cgroup.controllers ]; then create-kubelet-cgroup-v2; fi"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

and modify kind.yaml to mount this over the baked-in file:

nodes:
- role: control-plane
  extraMounts:
    - hostPath: hacks/10-kubeadm.conf
      containerPath: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
- role: worker
  extraMounts:
    - hostPath: hacks/10-kubeadm.conf
      containerPath: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
- role: worker
  extraMounts:
    - hostPath: hacks/10-kubeadm.conf
      containerPath: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Seems to work, but I'm not sure why this flag was added and what the consequences of removing it are.
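If you try this workaround, a quick sanity check that the override was mounted and that kubelet came up (assuming the default control-plane node name):

# confirm the drop-in inside the node is the one without --cgroup-root
docker exec kind-control-plane cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# check kubelet health on the node
docker exec kind-control-plane systemctl status kubelet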

@aojea (Contributor) commented Jul 24, 2021

Great investigation. The cgroup-root=/kubelet is needed to run kind inside of Kubernetes: 8c68b60

The attached logs clearly confirm this is the problem:

Jun 25 15:33:33 kind-control-plane kubelet[293]: E0625 15:33:33.327088 293 server.go:292] "Failed to run kubelet" err="failed to run Kubelet: invalid configuration: cgroup-root ["kubelet"] doesn't exist"

The serial.log file has these entries:

systemd 247.3-3ubuntu3 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization wsl.
Detected architecture x86-64.
Failed to create symlink /sys/fs/cgroup/cpu: File exists
Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists
Failed to create symlink /sys/fs/cgroup/net_prio: File exists

Is the bind mounting not working on WSL?
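One way to check, assuming a node retained with --retain and cgroup v1: each v1 controller should have a /kubelet subtree if the entrypoint's bind mounts worked.

# list the per-controller /kubelet cgroups inside the node, if any
docker exec kind-control-plane sh -c 'ls -d /sys/fs/cgroup/*/kubelet'
# show the /kubelet bind mounts as the node sees them
docker exec kind-control-plane sh -c 'grep /kubelet /proc/self/mountinfo'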

@networkop (Contributor)

It does work with bind mounts on WSL; I just wanted to make sure it doesn't break anything unexpectedly. Based on the referenced PR, the workaround should work fine.

@aojea (Contributor) commented Jul 24, 2021

cgroup-root ["kubelet"] doesn't exist"

but ... why doesn't the /kubelet cgroup exist?

@networkop (Contributor)

I'm not sure why. I couldn't find where it's supposed to be created. I can try to dig a bit deeper; I need to have a look at that PR you referenced.

@networkop (Contributor)

OK, so I dug a bit deeper and it looks like it could be a bug in kind's entrypoint. Normally, with cgroup v2, /sys/fs/cgroup/kubelet is created by this function during the ExecStartPre of the kubelet service.
For every other system that doesn't support cgroup v2, e.g. WSL, this branch of code is executed, and nowhere in this branch is /sys/fs/cgroup/kubelet created.
I'm not really strong on cgroups, so I don't know if simply creating the path would solve the problem. @BenTheElder can you advise? The RCA is in this comment.
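As a purely hypothetical sketch (not the actual fix), "simply creating the path" on cgroup v1 would look something like this, relying on the fact that mkdir under a mounted v1 controller creates a child cgroup:

# pre-create a /kubelet group under every mounted cgroup v1 controller,
# so kubelet's --cgroup-root=/kubelet can resolve
for d in /sys/fs/cgroup/*/; do
  mkdir -p "${d}kubelet"
done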

@aojea (Contributor) commented Jul 25, 2021

What I usually do to debug these problems is to log into a running kind node (even though the create fails, you can use --retain to keep it alive) and execute the /usr/local/bin/entrypoint script manually with -x set.
There is some logic in the script to map the cgroups, because it differs depending on the environment; I will verify that the problem is not here:

local cgroup_mounts
# xref: https://github.com/kubernetes/minikube/pull/9508
# Example inputs:
#
# Docker: /docker/562a56986a84b3cd38d6a32ac43fdfcc8ad4d2473acf2839cbf549273f35c206 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:143 master:23 - cgroup devices rw,devices
# podman: /libpod_parent/libpod-73a4fb9769188ae5dc51cb7e24b9f2752a4af7b802a8949f06a7b2f2363ab0e9 ...
# Cloud Shell: /kubepods/besteffort/pod3d6beaa3004913efb68ce073d73494b0/accdf94879f0a494f317e9a0517f23cdd18b35ff9439efd0175f17bbc56877c4 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,memory
# GitHub actions #9304: /actions_job/0924fbbcf7b18d2a00c171482b4600747afc367a9dfbeac9d6b14b35cda80399 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:263 master:24 - cgroup cgroup rw,memory
cgroup_mounts=$(grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo || true)
if [[ -n "${cgroup_mounts}" ]]; then
  local mount_root
  mount_root=$(head -n 1 <<<"${cgroup_mounts}" | cut -d' ' -f1)
  for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2); do
    # bind mount each mount_point to mount_point + mount_root
    # mount --bind /sys/fs/cgroup/cpu /sys/fs/cgroup/cpu/docker/fb07bb6daf7730a3cb14fc7ff3e345d1e47423756ce54409e66e01911bab2160
    local target="${mount_point}${mount_root}"
    if ! findmnt "${target}"; then
      mkdir -p "${target}"
      mount --bind "${mount_point}" "${target}"
    fi
  done
fi
# kubelet will try to manage cgroups / pods that are not owned by it when
# "nesting" clusters, unless we instruct it to use a different cgroup root.
# We do this, and when doing so we must fixup this alternative root
# currently this is hardcoded to be /kubelet
mount --make-rprivate /sys/fs/cgroup
echo "${cgroup_subsystems}" |
  while IFS= read -r subsystem; do
    mount_kubelet_cgroup_root "/kubelet" "${subsystem}"
  done
}
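A minimal sketch of that debug loop (test.sh is an arbitrary scratch name holding fix_cgroup and mount_kubelet_cgroup_root copied out of the entrypoint):

kind create cluster --retain        # the node container survives the failed create
docker exec -it kind-control-plane bash
# inside the node: paste the carved-out functions into a scratch script, then trace it
bash -x ./test.sh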

@networkop (Contributor)

This is exactly what I did. I carved out the entire fix_cgroup() function together with mount_kubelet_cgroup_root and ran them separately.

Here's the output

root@k8s-guide-control-plane:/# ./test.sh
+ set -e
+ fix_cgroup
+ [[ -f /sys/fs/cgroup/cgroup.controllers ]]
+ echo 'INFO: detected cgroup v1'
INFO: detected cgroup v1
+ echo 'INFO: fix cgroup mounts for all subsystems'
INFO: fix cgroup mounts for all subsystems
+ local current_cgroup
++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
++ cut -d: -f3
+ current_cgroup=/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_subsystems
++ findmnt -lun -o source,target -t cgroup
++ grep /docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
++ awk '{print $2}'
+ cgroup_subsystems='/sys/fs/cgroup/cpuset
/sys/fs/cgroup/cpu
/sys/fs/cgroup/cpuacct
/sys/fs/cgroup/blkio
/sys/fs/cgroup/memory
/sys/fs/cgroup/devices
/sys/fs/cgroup/freezer
/sys/fs/cgroup/net_cls
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/net_prio
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/cpuset/kubelet
/sys/fs/cgroup/cpu/kubelet
/sys/fs/cgroup/cpuacct/kubelet
/sys/fs/cgroup/blkio/kubelet
/sys/fs/cgroup/memory/kubelet
/sys/fs/cgroup/devices/kubelet
/sys/fs/cgroup/freezer/kubelet
/sys/fs/cgroup/net_cls/kubelet
/sys/fs/cgroup/perf_event/kubelet
/sys/fs/cgroup/net_prio/kubelet
/sys/fs/cgroup/hugetlb/kubelet
/sys/fs/cgroup/pids/kubelet'
+ local cgroup_mounts
++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
+ cgroup_mounts='/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpuset/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpu/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpuacct/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/blkio/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/memory/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/devices/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/freezer/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/net_cls/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/perf_event/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/net_prio/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/hugetlb/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/pids/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup'
+ [[ -n /docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpuset/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpu/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpuacct/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/blkio/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/memory/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/devices/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/freezer/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/net_cls/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/perf_event/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/net_prio/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/hugetlb/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/pids/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup ]]
+ local mount_root
++ head -n 1
++ cut '-d ' -f1
+ mount_root=/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
++ echo '/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpuset/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpu/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/cpuacct/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/blkio/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/memory/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/devices/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/freezer/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/net_cls/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/perf_event/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/net_prio/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/hugetlb/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup
/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/kubelet /sys/fs/cgroup/pids/kubelet rw,nosuid,nodev,noexec,relatime - cgroup cgroup'
++ cut '-d ' -f 2
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                        SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,cpuset
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                     SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,cpu
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                         SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,cpuacct
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                       SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,blkio
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                        SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,memory
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                         SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,devices
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                         SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,freezer
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                         SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,net_cls
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                            SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,perf_event
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                          SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,net_prio
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                         SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
TARGET                                                                                      SOURCE                                                                           FSTYPE OPTIONS
/sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c cgroup[/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c] cgroup rw,nosuid,nodev,noexec,relatime,pids
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpuset/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpuset/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/cpuset/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/cpuset/kubelet /sys/fs/cgroup/cpuset/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpu/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpu/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/cpu/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/cpu/kubelet /sys/fs/cgroup/cpu/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/cpuacct/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/cpuacct/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/cpuacct/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/cpuacct/kubelet /sys/fs/cgroup/cpuacct/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/blkio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/blkio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/blkio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/blkio/kubelet /sys/fs/cgroup/blkio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/memory/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/memory/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/memory/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/memory/kubelet /sys/fs/cgroup/memory/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/devices/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/devices/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/devices/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/devices/kubelet /sys/fs/cgroup/devices/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/freezer/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/freezer/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/freezer/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/freezer/kubelet /sys/fs/cgroup/freezer/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/net_cls/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/net_cls/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/net_cls/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/net_cls/kubelet /sys/fs/cgroup/net_cls/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/perf_event/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/perf_event/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/perf_event/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/perf_event/kubelet /sys/fs/cgroup/perf_event/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/net_prio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/net_prio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/net_prio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/net_prio/kubelet /sys/fs/cgroup/net_prio/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/hugetlb/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/hugetlb/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/hugetlb/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/hugetlb/kubelet /sys/fs/cgroup/hugetlb/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
+ local target=/sys/fs/cgroup/pids/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ findmnt /sys/fs/cgroup/pids/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mkdir -p /sys/fs/cgroup/pids/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --bind /sys/fs/cgroup/pids/kubelet /sys/fs/cgroup/pids/kubelet/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ mount --make-rprivate /sys/fs/cgroup
+ echo '/sys/fs/cgroup/cpuset
/sys/fs/cgroup/cpu
/sys/fs/cgroup/cpuacct
/sys/fs/cgroup/blkio
/sys/fs/cgroup/memory
/sys/fs/cgroup/devices
/sys/fs/cgroup/freezer
/sys/fs/cgroup/net_cls
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/net_prio
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/pids
/sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
/sys/fs/cgroup/cpuset/kubelet
/sys/fs/cgroup/cpu/kubelet
/sys/fs/cgroup/cpuacct/kubelet
/sys/fs/cgroup/blkio/kubelet
/sys/fs/cgroup/memory/kubelet
/sys/fs/cgroup/devices/kubelet
/sys/fs/cgroup/freezer/kubelet
/sys/fs/cgroup/net_cls/kubelet
/sys/fs/cgroup/perf_event/kubelet
/sys/fs/cgroup/net_prio/kubelet
/sys/fs/cgroup/hugetlb/kubelet
/sys/fs/cgroup/pids/kubelet'
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpuset
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpuset//kubelet
+ '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
+ cat /sys/fs/cgroup/cpuset/cpuset.cpus
+ cat /sys/fs/cgroup/cpuset/cpuset.mems
+ mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpu
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpu//kubelet
+ '[' /sys/fs/cgroup/cpu == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpu//kubelet /sys/fs/cgroup/cpu//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpuacct
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpuacct//kubelet
+ '[' /sys/fs/cgroup/cpuacct == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpuacct//kubelet /sys/fs/cgroup/cpuacct//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/blkio
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/blkio//kubelet
+ '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/memory
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/memory//kubelet
+ '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/devices
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/devices//kubelet
+ '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/freezer
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/freezer//kubelet
+ '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/net_cls
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/net_cls//kubelet
+ '[' /sys/fs/cgroup/net_cls == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/net_cls//kubelet /sys/fs/cgroup/net_cls//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/perf_event
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/perf_event//kubelet
+ '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/net_prio
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/net_prio//kubelet
+ '[' /sys/fs/cgroup/net_prio == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/net_prio//kubelet /sys/fs/cgroup/net_prio//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/hugetlb
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/hugetlb//kubelet
+ '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/pids
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/pids//kubelet
+ '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/cpuset/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/cpu/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/cpuacct/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/blkio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/memory/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/devices/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/freezer/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/net_cls/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/perf_event/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/net_prio/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/hugetlb/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ '[' /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet /sys/fs/cgroup/pids/docker/b54f7ec25f7cfcd9edea8523c936e9c8926cb44362a10af3f1a7d102ecb7638c//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpuset/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpuset/kubelet//kubelet
+ '[' /sys/fs/cgroup/cpuset/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpuset/kubelet//kubelet /sys/fs/cgroup/cpuset/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpu/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpu/kubelet//kubelet
+ '[' /sys/fs/cgroup/cpu/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpu/kubelet//kubelet /sys/fs/cgroup/cpu/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuacct/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/cpuacct/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/cpuacct/kubelet//kubelet
+ '[' /sys/fs/cgroup/cpuacct/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/cpuacct/kubelet//kubelet /sys/fs/cgroup/cpuacct/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/blkio/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/blkio/kubelet//kubelet
+ '[' /sys/fs/cgroup/blkio/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/blkio/kubelet//kubelet /sys/fs/cgroup/blkio/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/memory/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/memory/kubelet//kubelet
+ '[' /sys/fs/cgroup/memory/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/memory/kubelet//kubelet /sys/fs/cgroup/memory/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/devices/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/devices/kubelet//kubelet
+ '[' /sys/fs/cgroup/devices/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/devices/kubelet//kubelet /sys/fs/cgroup/devices/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/freezer/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/freezer/kubelet//kubelet
+ '[' /sys/fs/cgroup/freezer/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/freezer/kubelet//kubelet /sys/fs/cgroup/freezer/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/net_cls/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/net_cls/kubelet//kubelet
+ '[' /sys/fs/cgroup/net_cls/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/net_cls/kubelet//kubelet /sys/fs/cgroup/net_cls/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/perf_event/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/perf_event/kubelet//kubelet
+ '[' /sys/fs/cgroup/perf_event/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/perf_event/kubelet//kubelet /sys/fs/cgroup/perf_event/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_prio/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/net_prio/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/net_prio/kubelet//kubelet
+ '[' /sys/fs/cgroup/net_prio/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/net_prio/kubelet//kubelet /sys/fs/cgroup/net_prio/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/hugetlb/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/hugetlb/kubelet//kubelet
+ '[' /sys/fs/cgroup/hugetlb/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/hugetlb/kubelet//kubelet /sys/fs/cgroup/hugetlb/kubelet//kubelet
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids/kubelet
+ local cgroup_root=/kubelet
+ local subsystem=/sys/fs/cgroup/pids/kubelet
+ '[' -z /kubelet ']'
+ mkdir -p /sys/fs/cgroup/pids/kubelet//kubelet
+ '[' /sys/fs/cgroup/pids/kubelet == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/pids/kubelet//kubelet /sys/fs/cgroup/pids/kubelet//kubelet
+ IFS=
+ read -r subsystem
root@k8s-guide-control-plane:/# echo $?
0
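
For anyone skimming the trace above: this is kind's entrypoint performing its cgroup v1 fix-up inside the node container. Condensed into a sketch (helper and variable names mirror the trace; this is an illustrative reconstruction under those assumptions, not the verbatim entrypoint source), the logic is roughly:

#!/usr/bin/env bash
# Sketch of the cgroup fix-up traced above; must run as root in the container.
set -euo pipefail

# 1. Enumerate cgroup v1 mounts: "<cgroup-root> <mount-point>" per line.
#    (' - cgroup ' matches the separator + fstype fields in mountinfo.)
cgroup_mounts=$(grep ' - cgroup ' /proc/self/mountinfo | awk '{print $4, $5}')

# The container's own cgroup path, e.g. /docker/<container-id>.
mount_root=$(echo "${cgroup_mounts}" | head -n 1 | cut -d' ' -f1)

# 2. For each subsystem, bind-mount the subsystem root onto
#    <mount-point>/<container-path> so cgroup paths resolve the same
#    inside and outside the container (skipped if already mounted).
for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f2); do
  target="${mount_point}${mount_root}"
  if ! findmnt "${target}" > /dev/null 2>&1; then
    mkdir -p "${target}"
    mount --bind "${mount_point}" "${target}"
  fi
done

mount --make-rprivate /sys/fs/cgroup

# 3. Give the kubelet a stable cgroup root under every subsystem.
mount_kubelet_cgroup_root() {
  local cgroup_root="$1" subsystem="$2"
  [ -z "${cgroup_root}" ] && return 0
  mkdir -p "${subsystem}${cgroup_root}"
  if [ "${subsystem}" == /sys/fs/cgroup/cpuset ]; then
    # cpuset needs cpus/mems populated before tasks can join it; the
    # trace's bare `cat` lines do this (set -x does not show redirections).
    cat "${subsystem}/cpuset.cpus" > "${subsystem}${cgroup_root}/cpuset.cpus"
    cat "${subsystem}/cpuset.mems" > "${subsystem}${cgroup_root}/cpuset.mems"
  fi
  # Bind-mounting the directory onto itself pins it as a mount point.
  mount --bind "${subsystem}${cgroup_root}" "${subsystem}${cgroup_root}"
}
# Called once per enumerated subsystem path, e.g.:
#   mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset

The repetitive `+` lines in the trace are just `set -x` echoing each of these steps once per cgroup subsystem, and the final `echo $?` returning 0 shows the fix-up itself completed without error, so the failure lies elsewhere.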

@aojea (Contributor) commented Jul 26, 2021

hmm, this is how it looks on my working system (Linux); maybe we can spot some differences?

root@kind-worker2:/# mount | grep kubelet
cgroup on /sys/fs/cgroup/systemd/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/net_cls,net_prio/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/devices/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset/kubelet type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
tmpfs on /var/lib/kubelet/pods/e455b257-22f4-4afc-b853-f08cd1eaeb37/volumes/kubernetes.io~projected/kube-api-access-jqqrm type tmpfs (rw,relatime)
tmpfs on /var/lib/kubelet/pods/529c1dea-d845-405b-af44-bcb5480bc810/volumes/kubernetes.io~projected/kube-api-access-fhslg type tmpfs (rw,relatime)
tmpfs on /var/lib/kubelet/pods/4f03fd8b-b706-4326-a12a-a94b0e80aff2/volumes/kubernetes.io~projected/kube-api-access-h9pll type tmpfs (rw,relatime)
root@kind-worker2:/# findmnt | grep  "\[" | grep kubelet                                                                                                                         
| | | `-/sys/fs/cgroup/systemd/kubelet                                                                                           cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
| | | `-/sys/fs/cgroup/net_cls,net_prio/kubelet                                                                                  cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
| | | `-/sys/fs/cgroup/pids/kubelet                                                                                              cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,pids
| | | `-/sys/fs/cgroup/hugetlb/kubelet                                                                                           cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,hugetlb
| | | `-/sys/fs/cgroup/devices/kubelet                                                                                           cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,devices
| | | `-/sys/fs/cgroup/memory/kubelet                                                                                            cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,memory
| | | `-/sys/fs/cgroup/cpu,cpuacct/kubelet                                                                                       cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
| | | `-/sys/fs/cgroup/freezer/kubelet                                                                                           cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,freezer
| | | `-/sys/fs/cgroup/blkio/kubelet                                                                                             cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,blkio
| | | `-/sys/fs/cgroup/perf_event/kubelet                                                                                        cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,perf_event
| |   `-/sys/fs/cgroup/cpuset/kubelet                                                                                            cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet]                       cgroup    rw,nosuid,nodev,noexec,relatime,cpuset

@aojea
Contributor

aojea commented Jul 26, 2021

you don't have systemd/kubelet:
| | | `-/sys/fs/cgroup/systemd/kubelet cgroup[/docker/f4946973e1e85d92908c487d55ca2ca822028599f1bd9571f6926f8d5f2fe222/kubelet] cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
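
To confirm that on an affected setup, something like this should show the missing hierarchy (a sketch; the default control-plane container name is assumed):

# from the WSL2 host: is the name=systemd hierarchy mounted in the node?
docker exec kind-control-plane mount | grep name=systemd
# and does it contain the kubelet cgroup? (absent on WSL2 before the fix)
docker exec kind-control-plane ls /sys/fs/cgroup/systemd/kubelet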

@networkop
Contributor

yep, I can confirm that if I mkdir -p /sys/fs/cgroup/systemd/kubelet and add the --cgroup-root=/kubelet flag back, the kubelet restarts and runs fine.
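
Concretely, the manual workaround looks something like this (a minimal sketch run inside the node container; restoring the kubelet flag is assumed to happen via its usual flags file):

# create the missing cgroup in the name=systemd hierarchy by hand
mkdir -p /sys/fs/cgroup/systemd/kubelet
# with --cgroup-root=/kubelet restored in the kubelet flags, restart it
systemctl restart kubelet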

I think it fails because cgroup_subsystems does not include systemd:

current_cgroup=$(grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup | cut -d: -f3)
cgroup_subsystems=$(findmnt -lun -o source,target -t cgroup | grep "${current_cgroup}" | awk '{print $2}')

 echo $cgroup_subsystems
/sys/fs/cgroup/cpuset /sys/fs/cgroup/cpu /sys/fs/cgroup/cpuacct /sys/fs/cgroup/blkio /sys/fs/cgroup/memory /sys/fs/cgroup/devices /sys/fs/cgroup/freezer /sys/fs/cgroup/net_cls /sys/fs/cgroup/perf_event /sys/fs/cgroup/net_prio /sys/fs/cgroup/hugetlb /sys/fs/cgroup/pids /sys/fs/cgroup/cpuset/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/cpu/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/cpuacct/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/blkio/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/memory/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/devices/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/freezer/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/net_cls/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/perf_event/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/net_prio/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/hugetlb/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/pids/docker/2720eb016b0506637a46ae603967699b4a7db65e2a1fb9cc086f50f2931d42e5 /sys/fs/cgroup/cpuset/kubelet /sys/fs/cgroup/cpu/kubelet /sys/fs/cgroup/cpuacct/kubelet /sys/fs/cgroup/blkio/kubelet /sys/fs/cgroup/memory/kubelet /sys/fs/cgroup/devices/kubelet /sys/fs/cgroup/freezer/kubelet /sys/fs/cgroup/net_cls/kubelet /sys/fs/cgroup/perf_event/kubelet /sys/fs/cgroup/net_prio/kubelet /sys/fs/cgroup/hugetlb/kubelet /sys/fs/cgroup/pids/kubelet
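
That matches how the grep behaves: it keys off the /proc/self/cgroup line that lists the cpu controller, while the name=systemd hierarchy sits on its own line with no controllers, so it can never match. Illustratively (paths abridged):

# /proc/self/cgroup on a cgroup v1 system contains lines like:
#   12:cpu,cpuacct:/docker/2720eb01...
#   1:name=systemd:/docker/2720eb01...
# the regex only matches the cpu controller line, so the derived mount
# list can never include /sys/fs/cgroup/systemd
grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup | cut -d: -f3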

could it be because WSL doesn't run systemd itself?

@aojea
Contributor

aojea commented Jul 27, 2021

yep, I can confirm that if I mkdir -p /sys/fs/cgroup/systemd/kubelet and add the --cgroup-root=/kubelet flag back, the kubelet restarts and runs fine.

yep, and this issue on WSL confirms it: microsoft/WSL#4189 (comment)

@networkop can you submit a patch to add

mkdir -p /sys/fs/cgroup/systemd/kubelet

to the entrypoint script?

# kubelet will try to manage cgroups / pods that are not owned by it when
# "nesting" clusters, unless we instruct it to use a different cgroup root.
# We do this, and when doing so we must fixup this alternative root
# currently this is hardcoded to be /kubelet

please add a comment explaining the reason, referencing this issue.
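
Something along these lines, a sketch of the suggested change rather than the exact patch that landed:

# The name=systemd hierarchy has no controllers, so the subsystem discovery
# above misses it; on hosts without systemd (e.g. WSL2) nothing else creates
# the kubelet cgroup there, so create it explicitly.
# See https://github.com/kubernetes-sigs/kind/issues/2323
mkdir -p /sys/fs/cgroup/systemd/kubelet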

@BenTheElder
Member

could it be because WSL doesn't run systemd itself?

kind is expected to work on systems without systemd (see e.g. #2091), but for practical reasons the ecosystem (Kubernetes, containerd, kind, etc.) is tested on systemd.

@BenTheElder
Member

I think the issue here may be the interaction between the containerized systemd + kubelet and a host that does not itself run systemd. Containerized systemd expects either to do all of the mounting itself or to have the hierarchy already fully mounted, IIRC, so it wouldn't mount this. We should probably create that path ourselves and ensure it is configured the way a systemd host would configure it (see the comment on the PR).

For every other system that doesn't support cgroupv2, e.g. WSL, this branch of code is executed, and nowhere in this branch is /sys/fs/cgroup/kubelet created.

We're nearly always running without cgroupv2 (though the kind project does have a CI job to test cgroupv2 specifically); this may change in the future, but currently it also applies to Kubernetes core. Running without cgroupv2 should be fine.

We do, however, need the cgroup-root workaround, as we often run CI inside of Kubernetes; that is how the Kubernetes project largely operates currently.

@BenTheElder
Member

sorry, this is still blocked on some issues pushing new images w/ buildkit. #2390 should hopefully fix this when it lands... @aojea has a meta-PR to ship a new base image with this and other fixes; we've just had some trouble building the images (#2465).

@Jitsusama

@BenTheElder: I was wondering if you would mind letting me know which release of kind might include this fix? I'm on 0.11.1 on WSL2 without Docker Desktop installed, and I'm getting the dreaded kubelet cgroup error during kubelet bootstrap.

@aojea
Contributor

aojea commented Nov 11, 2021

next version, 0.12.0, or you can use kind from HEAD

@Jitsusama

@aojea; first, thanks for your prompt response! Second, do you have any rough idea when 0.12.0 might be released?

@aojea
Contributor

aojea commented Nov 11, 2021

we want to do it soon, but I can't promise anything; unfortunately we are very busy these days

@Johnz86

Johnz86 commented Nov 15, 2021

I went to install 'kind' because some articles recommended using it instead of minikube on WSL.
They pointed out that minikube requires shell scripts or genie because of the missing systemd.
Now I fail on a basic hello-world setup, because of systemd.
I will postpone my question of why this is needed and ask this instead:

Which Kubernetes implementation does not require systemd and runs well inside WSL Ubuntu, so that I do not need to use Docker Desktop for Windows?

@aojea
Contributor

aojea commented Nov 15, 2021

next version, 0.12.0, or you can use kind from HEAD

@BenTheElder
Member

This is a regression in v0.11.X; it is fixed in the latest sources and is not present in previous releases.

You can use an older release or build from the latest sources (clone the repo in WSL/Linux/macOS and run make build; the binary will be in bin/kind).
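
For example, assuming the standard repo URL:

git clone https://github.com/kubernetes-sigs/kind
cd kind
make build
./bin/kind version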

All Kubernetes distros I'm aware of use systemd. Here we had a bug triggered when systemd was not used on the host machine.

After the regression fix, systemd is not required in the host environment, but it is and will continue to be used within the Kubernetes node containers by KIND, and within the Kubernetes nodes by every other major Kubernetes distro, because it is free software that performs the init / PID 1 task well.

Nearly all major Linux distros use systemd, and Kubernetes upstream is developed exclusively with systemd, so tricky system interactions like this can go uncaught.

Testing every possible combination of system software is prohibitive, and Kubernetes's CI and GitHub Actions lack support for Windows (nested) virtualization, so WSL2 CI is not available; see #1529 if you're interested in solving this.

@jaredweinfurtner

for those still experiencing this issue on kind v0.11.1, the following worked for me: kind create cluster --image kindest/node:v1.23.0

L1ghtman2k added a commit to ScoreTrak/helm-charts that referenced this issue Dec 31, 2021
thisisibrahimd added a commit to ScoreTrak/helm-charts that referenced this issue Dec 31, 2021
* fix kubernetes-sigs/kind#2323 (comment) for wsl2.
@wolf99

wolf99 commented Feb 3, 2022

I've hit this bug again after some months.
Any chance of a release containing this fix?

(Yes, I know I could clone and build HEAD, but that's unnecessary hassle when you have a build CI.)

Forgive me, I don't doubt that you are busy, but it seems a bit silly to be busy with development if that development does not get released (it has been 9 months now)?

Or maybe I am wrong and there is no CI to make a release simple?

@aojea
Contributor

aojea commented Feb 3, 2022

2 outstanding issues for the release: https://github.com/kubernetes-sigs/kind/milestone/15 :(

@BenTheElder
Member

I've hit this bug again after some months.
Any chance of a release containing this fix?

Forgive me, I don't doubt that you are busy, but it seems a bit silly to be busy with development if that development does not get released (it has been 9 months now)?

@aojea and I are the two active maintainers right now. There are only a few recurring contributors and maintainers.

Of those, none of us work on KIND exclusively / full-time at this point. KIND is not commercial software.

I myself also chair and tech-lead Kubernetes SIG Testing, co- or solo-maintain many other parts of the Kubernetes project, and I have other work at my job.
I've been out some since the last release due to sickness and deaths in my family.
I've tried to make myself available to review PRs and potentially cut the release when it's ready, but I haven't been able to finish fixing the outstanding regressions.

Unfortunately, throughout the Kubernetes project you will find that the bugs / issues and work required outpace the consistently available developer time. I've had to pick up the slack more in other critical areas, and I'm working on finding people to take over some of those.

Or maybe I am wrong and there is not CI to make a release simple ?

This is not the issue; there is build tooling. We do more than just build, though: we organize and write up the changes, emphasizing the most notable ones to make upgrades easier. That takes maybe an afternoon.

In this repo we track releases in milestones, like much of the rest of the Kubernetes organization.

There are outstanding regressions that will notably affect users, so we have not released.

Incoming changes that "fix" things have caused more regressions. It is not so simple to cover supporting all these things; as mentioned the last time I commented in this thread, we do not have CI for Windows and nobody has helped resolve that (#1529). As a result, regressions in e.g. Windows quirks are difficult to catch (or, in this case, rootless, which we did add CI for, but now Linux without /dev/fuse is hit due to the rootless fixes...).

As Antonio linked above, the v0.12.0 milestone shows what remains before we're ready for a general v0.12.0 release.

If you're happy with the current state and won't be affected by these, it is trivial to obtain a binary, but it will not be a supported release on our part due to these outstanding issues.

I have only been accepting fix PRs and documentation improvements, but we're not there yet, and there is no ETA.

If folks are interested in helping work on these problems, we have a detailed contributor guide.

@wolf99

wolf99 commented Feb 3, 2022

I hope you are feeling well again; sorry to hear that you and your family had some sadness 😞.

Personally I wouldn't even begin to know where to start with #1529.
But thank you for the link to the milestone - I will take a look.
(I don't know Go yet, but everyone has to start somewhere 🙂 )

@lrbnew

lrbnew commented Feb 18, 2022

for those still experiencing this issue on kind v0.11.1, the following worked for me: kind create cluster --image kindest/node:v1.23.0

Works for me (WSL2, Ubuntu 20.04, kind 0.11).

@EltonBraz

In WSL2 I'm running Ubuntu 20.04.

Running the 1.17.17 image finally worked for me: kind create cluster --config=cluster.yaml --image kindest/node:v1.17.17

That was perfect. You have saved my life, man! 👍
