
[BUG] Cluster fails to start on cgroup v2 #493

Closed
derricms opened this issue Feb 12, 2021 · 41 comments · Fixed by #579
Labels: help wanted (Extra attention is needed), k3s (This is likely an issue with k3s not k3d itself), priority/high, runtime (Issue with the container runtime (docker))


derricms commented Feb 12, 2021

What did you do

Start a minimal cluster on Kali Linux 2020.4
* How was the cluster created?
  * k3d cluster create
* What did you do afterwards?
  * I inspected the error, saw it had something to do with cgroups, and noticed that the latest kernel update to Kali switched the cgroup file hierarchy from v1 to v2 (a quick check for this is sketched below).
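For reference, a quick way to confirm which hierarchy the host is using (a minimal check; the same command appears again later in this thread):

stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" means the unified (v2) hierarchy, "tmpfs" means v1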

What did you expect to happen

That a minimal cluster would start

Screenshots or terminal output

{"log":"time=\"2021-02-10T15:54:15.154488575Z\" level=info msg=\"Containerd is now running\"\n","stream":"stderr","time":"2021-02-10T15:54:15.154604054Z"}
{"log":"time=\"2021-02-10T15:54:15.276436029Z\" level=info msg=\"Connecting to proxy\" url=\"wss://127.0.0.1:6443/v1-k3s/connect\"\n","stream":"stderr","time":"2021-02-10T15:54:15.276584849Z"}
{"log":"time=\"2021-02-10T15:54:15.344809810Z\" level=info msg=\"Handling backend connection request [k3d-minimal-default-server-0]\"\n","stream":"stderr","time":"2021-02-10T15:54:15.344941507Z"}
{"log":"time=\"2021-02-10T15:54:15.383483103Z\" level=warning msg=\"**Disabling CPU quotas due to missing cpu.cfs_period_us**\"\n","stream":"stderr","time":"2021-02-10T15:54:15.383600244Z"}
{"log":"time=\"2021-02-10T15:54:15.383649950Z\" level=warning msg=\"**Disabling pod PIDs limit feature due to missing cgroup pids support**\"\n","stream":"stderr","time":"2021-02-10T15:54:15.383683752Z"}
{"log":"time=\"2021-02-10T15:54:15.383773636Z\" level=info msg=\"Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --cgroups-per-qos=false --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --enforce-node-allocatable= --eviction-hard=imagefs.available\u003c5%,nodefs.available\u003c5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=SupportPodPidsLimit=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-minimal-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key\"\n","stream":"stderr","time":"2021-02-10T15:54:15.383842163Z"}
{"log":"time=\"2021-02-10T15:54:15.384645964Z\" level=info msg=\"Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-minimal-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables\"\n","stream":"stderr","time":"2021-02-10T15:54:15.38471723Z"}
{"log":"Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.\n","stream":"stderr","time":"2021-02-10T15:54:15.387483943Z"}
{"log":"Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.\n","stream":"stderr","time":"2021-02-10T15:54:15.387594058Z"}
{"log":"F0210 15:54:15.387923       7 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true\n","stream":"stderr","time":"2021-02-10T15:54:15.387966646Z"}
{"log":"goroutine 3978 [running]:\n","stream":"stderr","time":"2021-02-10T15:54:15.549704084Z"}

Which OS & Architecture

 * Linux Kali 2020.4, amd64 (x86_64)

Which version of k3d

 * output of `k3d version`
   $ k3d version
   k3d version v4.2.0
   k3s version v1.20.0-k3s1 (default)

Which version of docker

 * output of `docker version` and `docker info`
   $ docker version
   Client: 
   Version:           20.10.2+dfsg1
   API version:       1.41
   Go version:        go1.15.6
   Git commit:        2291f61
   Built:            Fri Jan 8 07:08:51 2021
   OS/Arch:           linux/amd64
   Experimental:      true

Server:
 Engine:
  Version:          20.10.2+dfsg1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.15.6
  Git commit:       8891c58
  Built:            Fri Jan 8 07:08:51 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3~ds1
  GitCommit:        1.4.3~ds1-1+b1
 runc:
  Version:          1.0.0-rc92+dfsg1
  GitCommit:        1.0.0-rc92+dfsg1-5+b1
 docker-init:
  Version:          0.19.0
  GitCommit:


SuperQ commented Feb 15, 2021

I think I'm seeing the same or a similar issue. When I roll back to rancher/k3s:v1.19.7-k3s1, the cluster starts fine.

F0215 11:59:24.389048       6 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true
Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:21 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       feb6e8a9b5
  Built:            Mon Nov  2 04:17:19 2020
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683


fernandoacorreia commented Feb 24, 2021

Same issue on Fedora 33:

❯ k3d version
k3d version v4.2.0
k3s version v1.20.2-k3s1 (default)

❯ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 4
 Server Version: 20.10.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.10.16-200.fc33.x86_64
 Operating System: Fedora 33 (Cloud Edition)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.34GiB
 Name: ip-172-31-15-82.us-west-2.compute.internal
 ID: J7KD:DU6M:ESY2:7Z7C:JQF4:4DDA:PN4V:YAH3:RGYS:YDRC:LFCG:SHGR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: Support for cgroup v2 is experimental

Logs:

time="2021-02-24T01:09:17.112114425Z" level=info msg="Starting k3s v1.20.2+k3s1 (1d4adb03)"
time="2021-02-24T01:09:17.119709683Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-02-24T01:09:17.119758494Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-02-24T01:09:17.123875742Z" level=info msg="Database tables and indexes are up to date"
time="2021-02-24T01:09:17.125089386Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-02-24T01:09:17.140581290Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.141329140Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.142008240Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.142748155Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.143466897Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.144108543Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.144808998Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.145932675Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.147053204Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.148192915Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.148832224Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.149941869Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.504108606Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.504625136Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=63817433C7020D7097F94041647F7EF794694F36]"
time="2021-02-24T01:09:17.508648371Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0224 01:09:17.509687       7 server.go:659] external host was not specified, using 172.18.0.2
I0224 01:09:17.509892       7 server.go:196] Version: v1.20.2+k3s1
I0224 01:09:17.925297       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0224 01:09:17.926455       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0224 01:09:17.926469       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0224 01:09:17.927652       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0224 01:09:17.927668       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0224 01:09:17.955930       7 instance.go:289] Using reconciler: lease
I0224 01:09:17.994318       7 rest.go:131] the default service ipfamily for this cluster is: IPv4
W0224 01:09:18.301788       7 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
W0224 01:09:18.310771       7 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.320194       7 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.328529       7 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.332445       7 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.338364       7 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.341107       7 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.346281       7 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
W0224 01:09:18.346301       7 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
I0224 01:09:18.355798       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0224 01:09:18.355818       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-02-24T01:09:18.365852269Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-02-24T01:09:18.365885385Z" level=info msg="Waiting for API server to become available"
time="2021-02-24T01:09:18.366717179Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-02-24T01:09:18.377087749Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-02-24T01:09:18.377374699Z" level=info msg="To join node to cluster: k3s agent -s https://172.18.0.2:6443 -t ${NODE_TOKEN}"
time="2021-02-24T01:09:18.378913604Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-02-24T01:09:18.379385488Z" level=info msg="Run: k3s kubectl"
time="2021-02-24T01:09:18.379733584Z" level=info msg="Module overlay was already loaded"
time="2021-02-24T01:09:18.379831478Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-02-24T01:09:18.379908677Z" level=info msg="Module br_netfilter was already loaded"
time="2021-02-24T01:09:18.380027128Z" level=info msg="Module iptable_nat was already loaded"
time="2021-02-24T01:09:18.407443193Z" level=info msg="Cluster-Http-Server 2021/02/24 01:09:18 http: TLS handshake error from 127.0.0.1:34152: remote error: tls: bad certificate"
time="2021-02-24T01:09:18.412452420Z" level=info msg="Cluster-Http-Server 2021/02/24 01:09:18 http: TLS handshake error from 127.0.0.1:34158: remote error: tls: bad certificate"
time="2021-02-24T01:09:18.431100320Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:18 +0000 UTC"
time="2021-02-24T01:09:18.450385357Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:18 +0000 UTC"
time="2021-02-24T01:09:18.506694294Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-02-24T01:09:18.506916025Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-02-24T01:09:19.509429187Z" level=info msg="Containerd is now running"
time="2021-02-24T01:09:19.521239706Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-02-24T01:09:19.523780251Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-02-24T01:09:19.524892860Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-02-24T01:09:19.524916381Z" level=warning msg="Disabling pod PIDs limit feature due to missing cgroup pids support"
time="2021-02-24T01:09:19.524972024Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --cgroups-per-qos=false --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --enforce-node-allocatable= --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=SupportPodPidsLimit=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-02-24T01:09:19.525664896Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
W0224 01:09:19.528187       7 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0224 01:09:19.528712       7 proxier.go:651] Failed to read file /lib/modules/5.10.16-200.fc33.x86_64/modules.builtin with error open /lib/modules/5.10.16-200.fc33.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.529343       7 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.529826       7 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.530268       7 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.530671       7 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.531084       7 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
F0224 01:09:19.532466       7 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true


mj41-gdc commented Mar 2, 2021

k3d cluster create --verbose --trace --image rancher/k3s:v1.19.8-k3s1 wsop
doesn't work on Fedora 33, as I get a different error there:

> docker logs --follow k3d-wsop-server-0 2>&1 
...
time="2021-03-03T11:52:51.432307543Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"


mj41-gdc commented Mar 3, 2021

Seems like

k3d cluster create --verbose --trace --timestamps -v /dev/mapper:/dev/mapper --image rancher/k3s:v1.20.4-k3s1 wso

started, but the server container keeps restarting for some reason.

~ $ docker ps && echo && kubectl get all -A
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS                            PORTS                             NAMES
cef7e29af72d   rancher/k3d-proxy:v4.2.0   "/bin/sh -c nginx-pr…"   9 minutes ago   Up 9 minutes                      80/tcp, 0.0.0.0:37461->6443/tcp   k3d-wsop-serverlb
5c0ee211ec2e   rancher/k3s:v1.20.4-k3s1   "/bin/k3s server --t…"   9 minutes ago   Restarting (255) 57 seconds ago                                     k3d-wsop-server-0

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  9m54s
kube-system   service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   9m52s
kube-system   service/metrics-server   ClusterIP   10.43.217.47   <none>        443/TCP                  9m52s
/sys/fs/cgroup $ docker logs --follow k3d-wsop-server-0 2>&1 | grep -i error -B 4 | tail -n 25
--
time="2021-03-03T16:19:44.510502112Z" level=info msg="Module overlay was already loaded"
time="2021-03-03T16:19:44.510562036Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-03-03T16:19:44.510584246Z" level=info msg="Module br_netfilter was already loaded"
time="2021-03-03T16:19:44.510603312Z" level=info msg="Module iptable_nat was already loaded"
time="2021-03-03T16:19:44.532234242Z" level=info msg="Cluster-Http-Server 2021/03/03 16:19:44 http: TLS handshake error from 127.0.0.1:55982: remote error: tls: bad certificate"
time="2021-03-03T16:19:44.539567218Z" level=info msg="Cluster-Http-Server 2021/03/03 16:19:44 http: TLS handshake error from 127.0.0.1:55988: remote error: tls: bad certificate"
--
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W0303 16:19:45.632310       7 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0303 16:19:45.632948       7 server.go:412] Version: v1.20.4+k3s1
W0303 16:19:45.633098       7 proxier.go:651] Failed to read file /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin with error open /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
--
I0303 16:19:47.130403       7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0303 16:19:47.137979       7 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
time="2021-03-03T16:19:47.651998207Z" level=info msg="Waiting for node k3d-wsop-server-0 CIDR not assigned yet"
W0303 16:19:47.760260       7 handler_proxy.go:102] no RequestInfo found in the context
E0303 16:19:47.760396       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
--
I0303 16:19:49.914319       7 request.go:655] Throttling request took 1.047944521s, request: GET:https://127.0.0.1:6444/apis/k3s.cattle.io/v1?timeout=32s
time="2021-03-03T16:19:50.629767889Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2021-03-03T16:19:50.629843807Z" level=info msg="Connecting to proxy" url="wss://172.28.0.2:6443/v1-k3s/connect"
time="2021-03-03T16:19:50.629865458Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-03-03T16:19:50.630026513Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"

@iwilltry42 iwilltry42 self-assigned this Mar 8, 2021
@iwilltry42 iwilltry42 added k3s This is likely an issue with k3s not k3d itself runtime Issue with the container runtime (docker) labels Mar 8, 2021
@iwilltry42
Member

Hi @derricms, thanks for opening this issue, and hi to the others who joined in 👋
Seems like we're seeing many issues in this thread, but the original one is caused by the fact that k3s didn't support cgroup v2 until recently.
k3s-io/k3s#2844 landed in the v1.20.4-k3s1 release two weeks ago -> https://github.com/k3s-io/k3s/releases/tag/v1.20.4%2Bk3s1

K3s now supports cgroupv2 (#2844)
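For anyone wanting to try that release right away, a sketch of how one might pin the newer k3s image when creating the cluster (as later comments in this thread show, this alone was not enough on every cgroup v2 setup):

k3d cluster create --image rancher/k3s:v1.20.4-k3s1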

@mj41-gdc I guess you also saw an issue with a filesystem like zfs or btrfs, right (/dev/mapper)?


mj41-gdc commented Mar 8, 2021

btrfs

Yes, btrfs is the Fedora 33 default filesystem. My disk is also encrypted (LUKS). SELinux is enabled too.

I tried --image rancher/k3s:v1.20.4-k3s1 with k3d but it didn't help. I surrendered after a few hours of fighting. At least k3s 1.20.4 alone (no k3d) works fine for me.

@iwilltry42
Member

@mj41-gdc wait...

I tried --image rancher/k3s:v1.20.4-k3s1 with k3d but it didn't help. I surrendered after a few hours of fighting. A least k3s 1.20.4 alone (no k3d) works fine for me.

So k3s v1.20.4 works without issues but in k3d it throws the cgroup v2 incompatibility error? 🤔

Actually, on Fedora, you may also have issues with firewalld and the docker bridge network.


mj41-gdc commented Mar 8, 2021

@iwilltry42 My cluster started, but the container was restarting every few seconds, and I was not able to debug the root cause. I'm a newbie here.

@iwilltry42
Member

That's weird... @mj41-gdc, can you try docker logs k3d-k3s-default-server-0 (change the name if applicable) to get the logs of the server container and then paste the output here?

mj41-gdc added a commit to mj41-gdc/k3d-debug that referenced this issue Mar 9, 2021
mj41-gdc added a commit to mj41-gdc/k3d-debug that referenced this issue Mar 9, 2021

mj41-gdc commented Mar 9, 2021

Hi @iwilltry42,

I did this (in short)

k3d cluster create default --image rancher/k3s:v1.20.4-k3s1
docker logs --timestamps --details k3d-default-server-0

Full detail of what I did:

    # setup
    mkdir -p ~/devel/k3d
    cd ~/devel/k3d
    pwd
    ls -al

    # console 1
    # check logs and send them to github if all seems ok
    ls -als log*.txt
    echo '```' ; head -n 10 log*.txt ; echo '```'
    # cleanup previous run
    k3d cluster delete default
    rm ~/devel/k3d/log-*.txt
    ls -al
    cat README.md

    # console 2
    echo "#Start: `date --rfc-3339=ns`" > log-docker-events.txt ; docker events | tee -a log-docker-events.txt

    # console 3
    echo "#Start: `date --rfc-3339=ns`" > log-start-k3d.txt ; k3d cluster create default --image rancher/k3s:v1.20.4-k3s1 2>&1 | tee -a log-start-k3d.txt ; echo "#End: `date --rfc-3339=ns`" >> log-start-k3d.txt

    # console 4
    # a few times run these till you see the first restart
    echo "#Start_ps: `date --rfc-3339=ns`" | tee -a log-docker-ps.txt ; docker ps | tee -a log-docker-ps.txt

    # console 5
    echo "#Start_logs: `date --rfc-3339=ns`" | tee -a log-docker-logs.txt ; docker logs --timestamps --details k3d-default-server-0 2>&1 | tee -a log-docker-logs.txt

    # console 2
    # Press ctrl+c

    # check logs, repeat if needed

Full logs are a few MBs so I put them here
https://github.com/mj41-gdc/k3d-debug/tree/k3d-issues-493-mj1

A few lines from each here:

==> log-docker-events.txt <==
#Start: 2021-03-09 10:46:59.070339807+01:00
2021-03-09T10:47:03.054607479+01:00 network create db56467763196702e67db3b02c7820f3ade46eaf7e90273898f1a461c1d52166 (name=k3d-default, type=bridge)
2021-03-09T10:47:03.075469598+01:00 volume create k3d-default-images (driver=local)
2021-03-09T10:47:04.148884301+01:00 volume create k3d-default-images (driver=local)
2021-03-09T10:47:04.187260447+01:00 volume create 048c11ad75dcb8f4eaee3e986c7c34166660b9d42c7d0ac4029083eccda6c863 (driver=local)
2021-03-09T10:47:04.208496571+01:00 volume create e2b8bccb0b20ef56b620d6abef14f8972292730b39923ebdb5126b2eacf17045 (driver=local)
2021-03-09T10:47:04.227034484+01:00 volume create dbcdf8e954300ab8acbf3933eaab2e7c6247449460e8025ea10a7922dfc69a8a (driver=local)
2021-03-09T10:47:04.247326552+01:00 volume create bbd986ef44472f296dc9e2668f28997a1306cd60dc36d53dfe295e1b1c6300a8 (driver=local)
2021-03-09T10:47:04.274343577+01:00 container create 121e1c23a56aced769de358b9f1029cb94faa50c4f1434e93b169ac7f99b53a6 (app=k3d, image=rancher/k3s:v1.20.4-k3s1, k3d.cluster=default, k3d.cluster.imageVolume=k3d-default-images, k3d.cluster.network=k3d-default, k3d.cluster.network.external=false, k3d.cluster.network.id=db56467763196702e67db3b02c7820f3ade46eaf7e90273898f1a461c1d52166, k3d.cluster.token=IHaRPCtasCzMsmjvoLyF, k3d.cluster.url=https://k3d-default-server-0:6443, k3d.role=server, k3d.server.api.host=0.0.0.0, k3d.server.api.hostIP=0.0.0.0, k3d.server.api.port=38477, k3d.version=v4.2.0, name=k3d-default-server-0, org.label-schema.build-date=2021-02-22T19:51:15Z, org.label-schema.schema-version=1.0, org.label-schema.vcs-ref=838a906ab5eba62ff529d6a3a746384eba810758, org.label-schema.vcs-url=https://github.com/k3s-io/k3s.git)
2021-03-09T10:47:04.340660411+01:00 container create 4c64b179f55e66b3da735cfda03b13dc9a82e094ebbb52bb1fcec2e1c6b79cd2 (app=k3d, image=docker.io/rancher/k3d-proxy:v4.2.0, k3d.cluster=default, k3d.cluster.imageVolume=k3d-default-images, k3d.cluster.network=k3d-default, k3d.cluster.network.external=false, k3d.cluster.network.id=db56467763196702e67db3b02c7820f3ade46eaf7e90273898f1a461c1d52166, k3d.cluster.token=IHaRPCtasCzMsmjvoLyF, k3d.cluster.url=https://k3d-default-server-0:6443, k3d.role=loadbalancer, k3d.version=v4.2.0, maintainer=NGINX Docker Maintainers <[email protected]>, name=k3d-default-serverlb, org.label-schema.build-date=2021-02-09T17:09:34Z, org.label-schema.schema-version=1.0, org.label-schema.vcs-ref=a30c1e61fac6c53447cac5085c1c5ac6e473b241, org.label-schema.vcs-url=https://github.com/rancher/k3d.git)

==> log-docker-logs.txt <==
#Start_logs: 2021-03-09 10:47:27.777587861+01:00
2021-03-09T09:47:05.202790704Z  time="2021-03-09T09:47:05.202560085Z" level=info msg="Starting k3s v1.20.4+k3s1 (838a906a)"
2021-03-09T09:47:05.217443440Z  time="2021-03-09T09:47:05.217284060Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
2021-03-09T09:47:05.217487441Z  time="2021-03-09T09:47:05.217338118Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
2021-03-09T09:47:05.225290970Z  time="2021-03-09T09:47:05.225114199Z" level=info msg="Database tables and indexes are up to date"
2021-03-09T09:47:05.226424228Z  time="2021-03-09T09:47:05.226296487Z" level=info msg="Kine listening on unix://kine.sock"
2021-03-09T09:47:05.244774126Z  time="2021-03-09T09:47:05.244639261Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1615283225: notBefore=2021-03-09 09:47:05 +0000 UTC notAfter=2022-03-09 09:47:05 +0000 UTC"
2021-03-09T09:47:05.245864711Z  time="2021-03-09T09:47:05.245716366Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1615283225: notBefore=2021-03-09 09:47:05 +0000 UTC notAfter=2022-03-09 09:47:05 +0000 UTC"
2021-03-09T09:47:05.247305204Z  time="2021-03-09T09:47:05.247042738Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1615283225: notBefore=2021-03-09 09:47:05 +0000 UTC notAfter=2022-03-09 09:47:05 +0000 UTC"
2021-03-09T09:47:05.248697716Z  time="2021-03-09T09:47:05.248501879Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1615283225: notBefore=2021-03-09 09:47:05 +0000 UTC notAfter=2022-03-09 09:47:05 +0000 UTC"

==> log-docker-ps.txt <==
#Start_ps: 2021-03-09 10:47:04.477008561+01:00
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
#Start_ps: 2021-03-09 10:47:06.080563257+01:00
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS        PORTS     NAMES
121e1c23a56a   rancher/k3s:v1.20.4-k3s1   "/bin/k3s server --t…"   2 seconds ago   Up 1 second             k3d-default-server-0
#Start_ps: 2021-03-09 10:47:08.978519181+01:00
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS     NAMES
121e1c23a56a   rancher/k3s:v1.20.4-k3s1   "/bin/k3s server --t…"   5 seconds ago   Up 4 seconds             k3d-default-server-0
#Start_ps: 2021-03-09 10:47:10.543779096+01:00
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS     NAMES

==> log-start-k3d.txt <==
#Start: 2021-03-09 10:47:02.499774877+01:00
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-default'                
INFO[0000] Created volume 'k3d-default-images'          
INFO[0001] Creating node 'k3d-default-server-0'         
INFO[0001] Creating LoadBalancer 'k3d-default-serverlb' 
INFO[0001] Starting cluster 'default'                   
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-default-server-0'         
INFO[0007] Starting agents...                           

Let me know what I should try next. And thank you very much for your time.

@iwilltry42
Member

Thanks for the input @mj41-gdc !
This part of the logs is interesting:

2021-03-09T09:47:12.303920929Z  E0309 09:47:12.303765       7 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": failed to get device for dir "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": could not find device with major: 0, minor: 34 in cached partitions map.
2021-03-09T09:47:12.303930987Z  E0309 09:47:12.303805       7 kubelet.go:1292] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
...
2021-03-09T09:47:12.318837808Z  W0309 09:47:12.318720       7 fs.go:570] stat failed on /dev/mapper/luks-dede0fc2-4b41-4164-8732-c87f8283d22d with error: no such file or directory
2021-03-09T09:47:12.318848913Z  F0309 09:47:12.318733       7 kubelet.go:1368] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 34 in cached partitions map

So the issue is with your filesystem.
I guess it could be one of https://k3d.io/faq/faq/#issues-with-btrfs & https://k3d.io/faq/faq/#issues-with-zfs
I think you mentioned that you're on btrfs, so you would need to follow the advice of the first link by adding -v /dev/mapper:/dev/mapper to the k3d cluster create command (see the example below).
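A sketch of what that looks like (assuming the default cluster name; the exact invocation that eventually worked on Fedora appears at the end of this thread):

k3d cluster create default -v /dev/mapper:/dev/mapper   # mounts /dev/mapper so the kubelet can resolve the LUKS/btrfs device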

mj41-gdc added a commit to mj41-gdc/k3d-debug that referenced this issue Mar 9, 2021

mj41-gdc commented Mar 9, 2021

One step at a time :-). Now I got Failed to read file /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin ...

2021-03-09T11:38:34.039478723Z  W0309 11:38:34.039361       8 proxier.go:651] Failed to read file /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin with error open /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

Full logs here
https://github.com/mj41-gdc/k3d-debug/tree/k3d-issues-493-mj2

mj41-gdc added a commit to mj41-gdc/k3d-debug that referenced this issue Mar 9, 2021
@iwilltry42
Member

That's a warning that you can ignore for now 👍
This is the line giving a hint on the restart cause:
2021-03-09T11:38:31.070373571Z F0309 11:38:31.070274 7 kubelet.go:1368] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state
... so cgroups again


mj41-gdc commented Mar 9, 2021

I tried to switch docker to cgroupfs

~/devel/k3d [k3d-issues-493-mj4 L|✚ 4]$ docker info | grep -i cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
  cgroupns
WARNING: Support for cgroup v2 is experimental

per kubernetes/kubeadm#1394 (comment)

[root@mjlaptop ~]# grep cgroupfs /etc/systemd/system/multi-user.target.wants/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=cgroupfs

But I still get the same error:

2021-03-09T13:37:41.972575734Z  F0309 13:37:41.972448       7 kubelet.go:1368] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state

Per https://unix.stackexchange.com/questions/480747/how-to-find-out-if-systemd-uses-legacy-hybrid-or-unified-mode-cgroupsv1-vs-cgr and the great
https://systemd.io/CGROUP_DELEGATION/

~/devel/k3d [k3d-issues-493-mj4 L|✔]$ [ $(stat -fc %T /sys/fs/cgroup/) = "cgroup2fs" ] && echo "unified" || ( [ -e /sys/fs/cgroup/unified/ ] && echo "hybrid" || echo "legacy")
unified

mj41-gdc added a commit to mj41-gdc/k3d-debug that referenced this issue Mar 9, 2021

mj41-gdc commented Mar 10, 2021

@iwilltry42
Member

I'm really lost here and have no idea at the moment what could be the issue 🤔
I remember that @birdiesanders in #417 and @Ka0o0 in #427 have/had issues on Fedora as well; maybe they have some input here.

@iwilltry42 iwilltry42 added the help wanted Extra attention is needed label Mar 10, 2021

fr33ky commented Mar 10, 2021

Hi,
I'm not sure this will help; anyway, I'm facing the same problem (k3d-k3s-default-server-0 restart), with the same error message, running on Debian Sid.
I launched k3d cluster create, which ran fine, but server-0 entered a restart loop.
Checking docker logs:

[…]
2021-03-10T19:46:28.657738663Z  E0310 19:46:28.657616       7 node_container_manager_linux.go:57] Failed to create ["kubepods"] cgroup
2021-03-10T19:46:28.657843134Z  F0310 19:46:28.657796       7 kubelet.go:1368] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state
[…]
$ k3d version
k3d version v4.3.0
k3s version v1.20.4-k3s1 (default)
$ docker version
Client: Docker Engine - Community
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:17:50 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:15:47 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Regards

@iwilltry42
Member

Hi @fr33ky , thanks for your input.
I guess when you run docker info, you see that Cgroup Version is 2? 🤔
I'm on the same docker version on Ubuntu, but using cgroup v1... no problems here 🤔

@iwilltry42
Member

Looks like @AkihiroSuda did a good job fixing issues with cgroup v2 in kind (see kubernetes-sigs/kind#2014). This could be a good starting point for us as well, even though our issue seems to be slightly different.

@iwilltry42 iwilltry42 changed the title Running into an issue starting up a minimal k3d cluster on Kali Linux w/ cgroup v2 [BUG] Cluster fails to start on cgroup v2 Mar 11, 2021
@Tchoupinax

Hello,

I have the same issue on Arch Linux. I also have cgroup v2:

あ→ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc.)

Server:
 Containers: 7
  Running: 3
  Paused: 0
  Stopped: 4
 Images: 7
 Server Version: 20.10.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.11.11-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 23.35GiB


wdv4758h commented Apr 6, 2021

I get exactly the same errors with cgroup v2. Any hint on how to fix it?


fr33ky commented Apr 6, 2021

I get exactly the same errors with cgroup v2. Any hint on how to fix it?

Using Debian Sid, in the meantime, I personally switched back to cgroup v1.
I added systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT (/etc/default/grub) and then ran update-grub.
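In case it helps others, a minimal sketch of that change (assuming a GRUB-based setup; keep whatever options are already in the variable, "quiet" below is just a placeholder):

# /etc/default/grub -- append the parameter to the existing value
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

# regenerate the GRUB config (update-grub on Debian/Ubuntu) and reboot
sudo update-grub
sudo reboot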


akkie commented Apr 6, 2021

I get exactly the same errors with cgroup v2. Any hint on how to fix it?

Using Debian Sid, in the meantime, I personally switched back to cgroup v1.
I added systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT (/etc/default/grub) and then ran update-grub.

Works for me on Arch by executing grub-mkconfig -o /boot/grub/grub.cfg after adding the same to my /etc/default/grub file.


benley commented Apr 6, 2021

For anyone running into this on NixOS, setting systemd.enableUnifiedCgroupHierarchy = false; in your configuration.nix ought to help. (See NixOS/nixpkgs#111835)


wdv4758h commented Apr 7, 2021

I get exactly the same errors with cgroup v2. Any hint on how to fix it?

Using Debian Sid, in the meantime, I personally switched back to cgroup v1.
I added systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT (/etc/default/grub) and then ran update-grub.

Works for me on Arch by executing grub-mkconfig -o /boot/grub/grub.cfg after adding the same to my /etc/default/grub file.

Thanks. Switching back to cgroup v1 works.

@metalmatze

For Arch Linux users who now run systemd v248+ and use systemd-boot, here's how I fixed it on my system:
vim /boot/loader/entries/arch.conf

...
-options	root=/dev/mapper/root
+options	root=/dev/mapper/root systemd.unified_cgroup_hierarchy=0

Then I verified with ls /sys/fs/cgroup that there's a blkio/ folder (among others) again, as described by https://wiki.archlinux.org/index.php/cgroups#Switching_to_cgroups_v2


nemonik commented Apr 13, 2021

On Arch, the latest rancher-k3d-bin (v4.2.0) would just loop trying to start the servers...

I followed what @wdv4758h suggested above.

By executing grub-mkconfig -o /boot/grub/grub.cfg after adding systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT in my /etc/default/grub file.

This reverted me back to cgroup v1 (verified by docker info) and k3d ran fine.


ejose19 commented Apr 21, 2021

Issue still persists on k3d v4.4.2 with k3s v1.20.6-k3s1.

It would be good if the docs listed that k3d is not yet compatible with cgroup v2, so users would know in advance whether they need to adjust kernel opts.


umeat commented Apr 26, 2021

I have the same issue on NixOS (unstable channel).

k3d version v4.4.2
k3s version v1.20.6-k3s1 (default)
Client:
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.16.3
 Git commit:        v20.10.2
 Built:             Thu Jan  1 00:00:00 1970
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.2
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.2
  Git commit:       v20.10.2
  Built:            Tue Jan  1 00:00:00 1980
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.4
  GitCommit:        v1.4.4
 runc:
  Version:          1.0.0-rc92
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit: 

Also using cgroups v2. Just figuring out how to switch it to v1 with NixOS and I'll report back if it works.

@AkihiroSuda

That's a warning that you can ignore for now 👍
This is the line giving a hint on the restart cause:
2021-03-09T11:38:31.070373571Z F0309 11:38:31.070274 7 kubelet.go:1368] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state
... so cgroups again

For cgroup v2, k3s/k3d needs to have logic to evacuate the init process from the top-level cgroup to somewhere else, like this: https://github.com/moby/moby/blob/e0170da0dc6e660594f98bc66e7a98ce9c2abb46/hack/dind#L28-L37
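For readers following along, a minimal sketch of what that evacuation logic does (modelled on the linked moby dind snippet; assumes it runs as the container entrypoint with the cgroup v2 unified hierarchy mounted at /sys/fs/cgroup):

#!/bin/sh
# only act on a cgroup v2 (unified) hierarchy
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    # move all processes out of the root cgroup (including PID 1) into a child cgroup;
    # otherwise enabling controllers via subtree_control fails with EBUSY
    mkdir -p /sys/fs/cgroup/init
    xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
    # with the root cgroup empty, enable all controllers for child cgroups
    sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
        > /sys/fs/cgroup/cgroup.subtree_control
fi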

@iwilltry42
Member

Thanks for the hint @AkihiroSuda, I actually found a way to re-use your linked moby source and the changes you did in kind to make this work 😃
Am I OK to re-use this in k3s?

@AkihiroSuda

@iwilltry42 Yes, thanks


benley commented Apr 27, 2021

I have the same issue on NixOS (unstable channel). [...] Also using cgroups v2. Just figuring out how to switch it to v1 with NixOS and I'll report back if it works.

#493 (comment) is most likely what you're after

@iwilltry42
Member

You can give this a try now on cgroup v2: k3d cluster create test --image iwilltry42/k3s:dev-20210427.2 --verbose. The image is custom but only contains the new entrypoint from k3s-io/k3s#3237.
There's a discussion about moving this entrypoint script's functionality into the k3s agent, so we'll have to wait for that.
iwilltry42/k3s:dev-20210427.2 is built from the current rancher/k3s:latest (sha256-17d1cc189d289649d309169f25cee5e2c2e6e25ecf5b84026c3063c6590af9c8), which is v1.21.0+k3s1.

I tested it without issues on Ubuntu 20.10 with cgroupv1 and cgroupv2 (systemd).


ejose19 commented Apr 27, 2021

@iwilltry42 I confirm that the image works correctly with cgroup v2 on Arch Linux.

EDIT: I also confirm it works correctly with https://github.com/rancher/k3d/releases/tag/v4.4.3-dev.0 using the environment variable.

@iwilltry42
Member

I just created a (temporary) fix/workaround using the entrypoint script that we can use until it is fixed upstream (in k3s). See PR #579.
There's a dev release out already: https://github.com/rancher/k3d/releases/tag/v4.4.3-dev.0
Please test it with the environment variable K3D_FIX_CGROUPV2=1 set to enable the workaround, for example as shown below.
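A sketch of what that invocation could look like (the cluster name is just an example; the exact command that later worked on Fedora is shown further down in this thread):

K3D_FIX_CGROUPV2=1 k3d cluster create test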
Feedback welcome :)

@no-reply

@iwilltry42 I'm able to confirm that this adds v2 support on my system. Thank you!

@iwilltry42 iwilltry42 modified the milestones: Backlog, v4.4.3 Apr 29, 2021
@iwilltry42
Member

Fixed by #579 (should not interfere with k3s-io/k3s#3242 later)
Will be released in v4.4.3 ✔️
Thanks for all the input, folks! And special thanks to @AkihiroSuda :)

@mj41-gdc

export K3D_FIX_CGROUPV2=1 ; k3d cluster create default -v /dev/mapper:/dev/mapper

works on Fedora 33 with Docker and cgroup v2. Great work. Thank you @iwilltry42, @AkihiroSuda, and others.

~/devel/k3d [k3d-issues-493-mj7 L|✔]$ uname -a ; k3d version ; kubectl config use-context k3d-default ; kubectl cluster-info ; kubectl get all -A
Linux mjlaptop 5.11.15-200.fc33.x86_64 #1 SMP Fri Apr 16 13:41:20 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
k3d version v4.4.3
k3s version v1.20.6-k3s1 (default)
Switched to context "k3d-default".
Kubernetes control plane is running at https://0.0.0.0:38781
CoreDNS is running at https://0.0.0.0:38781/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:38781/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/coredns-854c77959c-8x6q9                  1/1     Running     0          9m49s
kube-system   pod/metrics-server-86cbb8457f-st8z7           1/1     Running     0          9m49s
kube-system   pod/local-path-provisioner-5ff76fc89d-qr64c   1/1     Running     0          9m49s
kube-system   pod/helm-install-traefik-2vnjc                0/1     Completed   0          9m49s
kube-system   pod/svclb-traefik-j56c2                       2/2     Running     0          9m19s
kube-system   pod/traefik-6f9cbd9bd4-hrxj2                  1/1     Running     0          9m19s

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>        443/TCP                      10m
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       10m
kube-system   service/metrics-server       ClusterIP      10.43.80.104    <none>        443/TCP                      10m
kube-system   service/traefik-prometheus   ClusterIP      10.43.239.35    <none>        9100/TCP                     9m20s
kube-system   service/traefik              LoadBalancer   10.43.214.123   172.27.0.2    80:30827/TCP,443:30933/TCP   9m20s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         1       1            1           <none>          9m19s

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                  1/1     1            1           10m
kube-system   deployment.apps/metrics-server           1/1     1            1           10m
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           10m
kube-system   deployment.apps/traefik                  1/1     1            1           9m20s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-854c77959c                  1         1         1       9m49s
kube-system   replicaset.apps/metrics-server-86cbb8457f           1         1         1       9m49s
kube-system   replicaset.apps/local-path-provisioner-5ff76fc89d   1         1         1       9m49s
kube-system   replicaset.apps/traefik-6f9cbd9bd4                  1         1         1       9m19s

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           30s        10m
~/devel/k3d [k3d-issues-493-mj7 L|✔]$ 

Detailed logs: https://github.com/mj41-gdc/k3d-debug/tree/k3d-issues-493-mj7
