Minikube 1.16.0 Fedora 33 (podman + cri-o) doesn't start #10182

Closed
mrizzi opened this issue Jan 20, 2021 · 20 comments
Labels
co/podman-driver (podman driver issues), co/runtime/crio (CRIO related issues), kind/bug (Categorizes issue or PR as related to a bug), os/linux, priority/awaiting-more-evidence (Lowest priority. Possibly useful, but not yet enough support to actually get it done.)

mrizzi commented Jan 20, 2021

Steps to reproduce the issue:

  1. $ minikube start --driver=podman --container-runtime=cri-o --alsologtostderr
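
For reference, the host environment matches what the log below reports (Fedora 33, podman 2.2.1, cgroups v2). It can be double-checked with standard host CLIs; this is only a quick sketch and the exact output formatting may differ between podman versions:

  $ cat /etc/fedora-release                     # Fedora release 33
  $ podman version                              # Version: 2.2.1
  $ sudo podman info | grep -i cgroupversion    # cgroupVersion: v2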

Full output of failed command:

I0120 10:10:07.692919   34725 out.go:221] Setting OutFile to fd 1 ...
I0120 10:10:07.693207   34725 out.go:273] isatty.IsTerminal(1) = true
I0120 10:10:07.693217   34725 out.go:234] Setting ErrFile to fd 2...
I0120 10:10:07.693224   34725 out.go:273] isatty.IsTerminal(2) = true
I0120 10:10:07.693305   34725 root.go:280] Updating PATH: /home/mrizzi/.minikube/bin
W0120 10:10:07.693390   34725 root.go:255] Error reading config file at /home/mrizzi/.minikube/config/config.json: open /home/mrizzi/.minikube/config/config.json: no such file or directory
I0120 10:10:07.693726   34725 out.go:228] Setting JSON to false
I0120 10:10:07.706938   34725 start.go:104] hostinfo: {"hostname":"fedora-p1","uptime":50503,"bootTime":1611083304,"procs":443,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.7-200.fc33.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"2a0ffbe8-79f8-479f-b627-66a4d7b9718b"}
I0120 10:10:07.707432   34725 start.go:114] virtualization:  
I0120 10:10:07.707738   34725 out.go:119] 😄  minikube v1.16.0 on Fedora 33
😄  minikube v1.16.0 on Fedora 33
I0120 10:10:07.707846   34725 driver.go:303] Setting default libvirt URI to qemu:///system
I0120 10:10:07.707906   34725 notify.go:126] Checking for updates...
I0120 10:10:07.781589   34725 podman.go:118] podman version: 2.2.1
I0120 10:10:07.781701   34725 out.go:119] ✨  Using the podman (experimental) driver based on user configuration
✨  Using the podman (experimental) driver based on user configuration
I0120 10:10:07.781716   34725 start.go:277] selected driver: podman
I0120 10:10:07.781722   34725 start.go:686] validating driver "podman" against <nil>
I0120 10:10:07.781737   34725 start.go:697] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0120 10:10:07.781879   34725 cli_runner.go:111] Run: sudo -n podman system info --format json
I0120 10:10:07.873838   34725 info.go:273] podman info: {Host:{BuildahVersion:1.18.0 CgroupVersion:v2 Conmon:{Package:conmon-2.0.21-3.fc33.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab} Distribution:{Distribution:fedora Version:33} MemFree:22332567552 MemTotal:33410228224 OCIRuntime:{Name:crun Package:crun-0.16-3.fc33.x86_64 Path:/usr/bin/crun Version:crun version 0.16
commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:4294963200 SwapTotal:4294963200 Arch:amd64 Cpus:12 Eventlogger:journald Hostname:fedora-p1 Kernel:5.10.7-200.fc33.x86_64 Os:linux Rootless:false Uptime:14h 1m 43.28s (Approximately 0.58 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com registry.centos.org docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:2} RunRoot:/var/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0120 10:10:07.873928   34725 start_flags.go:235] no existing cluster config was found, will generate one from the flags 
I0120 10:10:07.874581   34725 start_flags.go:253] Using suggested 7900MB memory alloc based on sys=31862MB, container=31862MB
I0120 10:10:07.874682   34725 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I0120 10:10:07.874707   34725 cni.go:74] Creating CNI manager for ""
I0120 10:10:07.874713   34725 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 10:10:07.874725   34725 start_flags.go:362] Found "CNI" CNI - setting NetworkPlugin=cni
I0120 10:10:07.874733   34725 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 10:10:07.874844   34725 out.go:119] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0120 10:10:07.874858   34725 cache.go:112] Driver isn't docker, skipping base image download
I0120 10:10:07.874864   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 10:10:08.103660   34725 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:10:08.103728   34725 cache.go:54] Caching tarball of preloaded images
I0120 10:10:08.103796   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 10:10:08.308135   34725 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:10:08.308471   34725 out.go:119] 💾  Downloading Kubernetes v1.20.0 preload ...
💾  Downloading Kubernetes v1.20.0 preload ...
I0120 10:10:08.308741   34725 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 -> /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
    > preloaded-images-k8s-v8-v1....: 555.86 MiB / 555.86 MiB  100.00% 8.23 MiB
I0120 10:11:16.885251   34725 preload.go:160] saving checksum for preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 10:11:17.123192   34725 preload.go:177] verifying checksumm of /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 10:11:18.110836   34725 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.0 on crio
I0120 10:11:18.111034   34725 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 10:11:18.111055   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/config.json: {Name:mk473a46e0a7385fc7b1c17eee8567719c4a2678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:18.111277   34725 cache.go:185] Successfully downloaded all kic artifacts
I0120 10:11:18.111300   34725 start.go:314] acquiring machines lock for minikube: {Name:mk6d494bfb92177bc8505684a7c42000ca387cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 10:11:18.111346   34725 start.go:318] acquired machines lock for "minikube" in 32.849µs
I0120 10:11:18.111365   34725 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0120 10:11:18.111409   34725 start.go:127] createHost starting for "" (driver="podman")
I0120 10:11:18.111516   34725 out.go:119] 🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
I0120 10:11:18.111629   34725 start.go:164] libmachine.API.Create for "minikube" (driver="podman")
I0120 10:11:18.111648   34725 client.go:165] LocalClient.Create starting
I0120 10:11:18.111670   34725 main.go:119] libmachine: Creating CA: /home/mrizzi/.minikube/certs/ca.pem
I0120 10:11:18.201203   34725 main.go:119] libmachine: Creating client certificate: /home/mrizzi/.minikube/certs/cert.pem
I0120 10:11:18.386075   34725 cli_runner.go:111] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0120 10:11:18.462551   34725 network_create.go:59] Found existing network {name:minikube subnet:0xc0002d8480 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0120 10:11:18.462587   34725 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I0120 10:11:18.462659   34725 cli_runner.go:111] Run: sudo -n podman ps -a --format {{.Names}}
I0120 10:11:18.534680   34725 cli_runner.go:111] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0120 10:11:18.622635   34725 oci.go:102] Successfully created a podman volume minikube
I0120 10:11:18.622695   34725 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -d /var/lib
I0120 10:11:19.142364   34725 oci.go:106] Successfully prepared a podman volume minikube
I0120 10:11:19.142404   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
W0120 10:11:19.142406   34725 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0120 10:11:19.142428   34725 oci.go:201] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0120 10:11:19.142580   34725 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:11:19.142593   34725 kic.go:159] Starting extracting preloaded images to volume ...
I0120 10:11:19.142697   34725 cli_runner.go:111] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
I0120 10:11:19.142699   34725 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir
W0120 10:11:19.237592   34725 cli_runner.go:149] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0120 10:11:19.237790   34725 cli_runner.go:111] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4
I0120 10:11:19.763960   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Running}}
I0120 10:11:19.852462   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 10:11:19.934538   34725 cli_runner.go:111] Run: sudo -n podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0120 10:11:20.256688   34725 oci.go:246] the created container "minikube" has a running status.
I0120 10:11:20.256710   34725 kic.go:190] Creating ssh key for kic: /home/mrizzi/.minikube/machines/minikube/id_rsa...
I0120 10:11:20.388662   34725 kic_runner.go:187] podman (temp): /home/mrizzi/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0120 10:11:20.388890   34725 kic_runner.go:217] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset879966068 minikube:/home/docker/.ssh/authorized_keys
I0120 10:11:20.693112   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 10:11:20.773164   34725 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0120 10:11:20.773216   34725 kic_runner.go:114] Args: [sudo -n podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0120 10:11:22.411940   34725 cli_runner.go:155] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.269208312s)
I0120 10:11:22.411973   34725 kic.go:168] duration metric: took 3.269382 seconds to extract preloaded images to volume
I0120 10:11:22.412052   34725 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 10:11:22.489611   34725 machine.go:88] provisioning docker machine ...
I0120 10:11:22.489645   34725 ubuntu.go:169] provisioning hostname "minikube"
I0120 10:11:22.489762   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:22.559720   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:22.634695   34725 main.go:119] libmachine: Using SSH client type: native
I0120 10:11:22.634857   34725 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 38549 <nil> <nil>}
I0120 10:11:22.634873   34725 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0120 10:11:22.635051   34725 main.go:119] libmachine: Error dialing TCP: dial tcp 127.0.0.1:38549: connect: connection refused
I0120 10:11:25.767898   34725 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0120 10:11:25.768114   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:25.843664   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:25.918664   34725 main.go:119] libmachine: Using SSH client type: native
I0120 10:11:25.918860   34725 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 38549 <nil> <nil>}
I0120 10:11:25.918881   34725 main.go:119] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0120 10:11:26.045737   34725 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0120 10:11:26.045810   34725 ubuntu.go:175] set auth options {CertDir:/home/mrizzi/.minikube CaCertPath:/home/mrizzi/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mrizzi/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mrizzi/.minikube/machines/server.pem ServerKeyPath:/home/mrizzi/.minikube/machines/server-key.pem ClientKeyPath:/home/mrizzi/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mrizzi/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mrizzi/.minikube}
I0120 10:11:26.045888   34725 ubuntu.go:177] setting up certificates
I0120 10:11:26.045910   34725 provision.go:83] configureAuth start
I0120 10:11:26.046065   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 10:11:26.128641   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 10:11:26.202588   34725 provision.go:137] copyHostCerts
I0120 10:11:26.202652   34725 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/ca.pem --> /home/mrizzi/.minikube/ca.pem (1078 bytes)
I0120 10:11:26.202761   34725 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/cert.pem --> /home/mrizzi/.minikube/cert.pem (1119 bytes)
I0120 10:11:26.202838   34725 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/key.pem --> /home/mrizzi/.minikube/key.pem (1679 bytes)
I0120 10:11:26.202889   34725 provision.go:111] generating server cert: /home/mrizzi/.minikube/machines/server.pem ca-key=/home/mrizzi/.minikube/certs/ca.pem private-key=/home/mrizzi/.minikube/certs/ca-key.pem org=mrizzi.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0120 10:11:26.301469   34725 provision.go:165] copyRemoteCerts
I0120 10:11:26.301515   34725 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 10:11:26.301576   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:26.371741   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:26.446591   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:26.546541   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 10:11:26.594719   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0120 10:11:26.627907   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 10:11:26.643211   34725 provision.go:86] duration metric: configureAuth took 597.279487ms
I0120 10:11:26.643286   34725 ubuntu.go:193] setting minikube options for container-runtime
I0120 10:11:26.643662   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:26.718687   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:26.792613   34725 main.go:119] libmachine: Using SSH client type: native
I0120 10:11:26.792742   34725 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 38549 <nil> <nil>}
I0120 10:11:26.792757   34725 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0120 10:11:26.939546   34725 main.go:119] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0120 10:11:26.939672   34725 machine.go:91] provisioned docker machine in 4.450039663s
I0120 10:11:26.939708   34725 client.go:168] LocalClient.Create took 8.828047593s
I0120 10:11:26.939746   34725 start.go:172] duration metric: libmachine.API.Create for "minikube" took 8.82811025s
I0120 10:11:26.939770   34725 start.go:268] post-start starting for "minikube" (driver="podman")
I0120 10:11:26.939787   34725 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 10:11:26.939907   34725 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 10:11:26.940050   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.010578   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.086604   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:27.185515   34725 ssh_runner.go:149] Run: cat /etc/os-release
I0120 10:11:27.192618   34725 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 10:11:27.192694   34725 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 10:11:27.192733   34725 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 10:11:27.192755   34725 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I0120 10:11:27.192783   34725 filesync.go:118] Scanning /home/mrizzi/.minikube/addons for local assets ...
I0120 10:11:27.192929   34725 filesync.go:118] Scanning /home/mrizzi/.minikube/files for local assets ...
I0120 10:11:27.193018   34725 start.go:271] post-start completed in 253.229663ms
I0120 10:11:27.193855   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 10:11:27.270695   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 10:11:27.344643   34725 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 10:11:27.344899   34725 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 10:11:27.344948   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.412744   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.488611   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:27.576580   34725 start.go:130] duration metric: createHost completed in 9.465149655s
I0120 10:11:27.576636   34725 start.go:81] releasing machines lock for "minikube", held for 9.465274756s
I0120 10:11:27.576921   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 10:11:27.660607   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 10:11:27.736755   34725 ssh_runner.go:149] Run: systemctl --version
I0120 10:11:27.736813   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.736755   34725 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0120 10:11:27.736880   34725 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 10:11:27.810626   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.862682   34725 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 10:11:27.891524   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:27.939638   34725 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:38549 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 10:11:28.122052   34725 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 10:11:28.153457   34725 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
I0120 10:11:28.189199   34725 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 10:11:28.196452   34725 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0120 10:11:28.203119   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0120 10:11:28.211968   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
I0120 10:11:28.218728   34725 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 10:11:28.223138   34725 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 10:11:28.227372   34725 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0120 10:11:28.301867   34725 ssh_runner.go:149] Run: sudo systemctl start crio
I0120 10:11:28.465403   34725 ssh_runner.go:149] Run: crio --version
I0120 10:11:28.505287   34725 out.go:119] 🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
I0120 10:11:28.505351   34725 cli_runner.go:111] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} minikube
I0120 10:11:28.583617   34725 ssh_runner.go:149] Run: grep <nil>	host.minikube.internal$ /etc/hosts
I0120 10:11:28.585831   34725 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "<nil>	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 10:11:28.592190   34725 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 10:11:28.592224   34725 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 10:11:28.592262   34725 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 10:11:28.625212   34725 crio.go:345] all images are preloaded for cri-o runtime.
I0120 10:11:28.625230   34725 crio.go:260] Images already preloaded, skipping extraction
I0120 10:11:28.625280   34725 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 10:11:28.635701   34725 crio.go:345] all images are preloaded for cri-o runtime.
I0120 10:11:28.635719   34725 cache_images.go:74] Images are preloaded, skipping loading
I0120 10:11:28.635768   34725 ssh_runner.go:149] Run: crio config
I0120 10:11:28.676303   34725 cni.go:74] Creating CNI manager for ""
I0120 10:11:28.676321   34725 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 10:11:28.676333   34725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 10:11:28.676345   34725 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0120 10:11:28.676438   34725 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 192.168.49.2:10249

I0120 10:11:28.676559   34725 kubeadm.go:862] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0120 10:11:28.676617   34725 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 10:11:28.681597   34725 binaries.go:44] Found k8s binaries, skipping transfer
I0120 10:11:28.681640   34725 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 10:11:28.686507   34725 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
I0120 10:11:28.696472   34725 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0120 10:11:28.705973   34725 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1843 bytes)
I0120 10:11:28.715510   34725 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0120 10:11:28.717371   34725 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 10:11:28.723448   34725 certs.go:52] Setting up /home/mrizzi/.minikube/profiles/minikube for IP: 192.168.49.2
I0120 10:11:28.723494   34725 certs.go:173] generating minikubeCA CA: /home/mrizzi/.minikube/ca.key
I0120 10:11:28.968184   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/ca.crt ...
I0120 10:11:28.968203   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.crt: {Name:mke03e9a1920afba460c060be5f4b6769ef644b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:28.968462   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/ca.key ...
I0120 10:11:28.968472   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.key: {Name:mkb240f7f8e6f82e4d610aab52b47468a1329330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:28.968559   34725 certs.go:173] generating proxyClientCA CA: /home/mrizzi/.minikube/proxy-client-ca.key
I0120 10:11:29.156962   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/proxy-client-ca.crt ...
I0120 10:11:29.156981   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.crt: {Name:mk4174df0f1b4beaf8e5a275fbdf42244be71f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.157136   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/proxy-client-ca.key ...
I0120 10:11:29.157146   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.key: {Name:mk5e6950da80fd9764adae2b6dd79810410ec3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.157252   34725 certs.go:277] generating minikube-user signed cert: /home/mrizzi/.minikube/profiles/minikube/client.key
I0120 10:11:29.157260   34725 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/client.crt with IP's: []
I0120 10:11:29.238721   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/client.crt ...
I0120 10:11:29.238744   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.crt: {Name:mk2ff7788ac9d0de0cd174f0617feb2f1dd707c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.238881   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/client.key ...
I0120 10:11:29.238891   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.key: {Name:mkedf501c0d6a07a0aa78a08660f8e8e7cc0c918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.238986   34725 certs.go:277] generating minikube signed cert: /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0120 10:11:29.238994   34725 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0120 10:11:29.341821   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0120 10:11:29.341842   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk422858b15bd0eaea2b6fcba46c45cc115c0286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.341983   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0120 10:11:29.341997   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk0658a97766b6658717586fb5056c92e38378bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.342087   34725 certs.go:288] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.crt
I0120 10:11:29.342171   34725 certs.go:292] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.key
I0120 10:11:29.342239   34725 certs.go:277] generating aggregator signed cert: /home/mrizzi/.minikube/profiles/minikube/proxy-client.key
I0120 10:11:29.342248   34725 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0120 10:11:29.442250   34725 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt ...
I0120 10:11:29.442270   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt: {Name:mka2338a78f50214ee1948cd9bf268c531eaa3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.442402   34725 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/proxy-client.key ...
I0120 10:11:29.442410   34725 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.key: {Name:mk969b8bdb9a7c95302616c350453daaad785fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 10:11:29.442554   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca-key.pem (1679 bytes)
I0120 10:11:29.442581   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca.pem (1078 bytes)
I0120 10:11:29.442597   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/cert.pem (1119 bytes)
I0120 10:11:29.442615   34725 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/key.pem (1679 bytes)
I0120 10:11:29.443251   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0120 10:11:29.457143   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 10:11:29.470424   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 10:11:29.483690   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 10:11:29.495539   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 10:11:29.508794   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0120 10:11:29.520759   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 10:11:29.533025   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0120 10:11:29.546061   34725 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 10:11:29.558122   34725 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0120 10:11:29.567914   34725 ssh_runner.go:149] Run: openssl version
I0120 10:11:29.571701   34725 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 10:11:29.576915   34725 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 10:11:29.578926   34725 certs.go:393] hashing: -rw-r--r--. 1 root root 1111 Jan 20 09:11 /usr/share/ca-certificates/minikubeCA.pem
I0120 10:11:29.578961   34725 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 10:11:29.582173   34725 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 10:11:29.586946   34725 kubeadm.go:364] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 10:11:29.586994   34725 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0120 10:11:29.587036   34725 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 10:11:29.597079   34725 cri.go:76] found id: ""
I0120 10:11:29.597161   34725 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 10:11:29.603001   34725 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 10:11:29.607835   34725 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 10:11:29.607877   34725 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 10:11:29.612767   34725 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 10:11:29.612799   34725 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 10:11:29.790705   34725 out.go:140]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...| I0120 10:11:31.863066   34725 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...\ W0120 10:13:26.882652   34725 out.go:181] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

I0120 10:13:26.882820   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
| I0120 10:13:28.235433   34725 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.352592269s)
I0120 10:13:28.235493   34725 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
/ I0120 10:13:28.244390   34725 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
I0120 10:13:28.244451   34725 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 10:13:28.256278   34725 cri.go:76] found id: ""
I0120 10:13:28.256313   34725 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 10:13:28.256389   34725 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 10:13:28.262111   34725 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 10:13:28.262144   34725 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 10:13:28.435841   34725 out.go:140]     ▪ Generating certificates and keys ...

    ▪ Generating certificates and keys ...
I0120 10:13:29.019428   34725 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...
I0120 10:15:24.039004   34725 kubeadm.go:366] StartCluster complete in 3m54.452045986s
I0120 10:15:24.039040   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0120 10:15:24.039148   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 10:15:24.051246   34725 cri.go:76] found id: ""
I0120 10:15:24.051265   34725 logs.go:206] 0 containers: []
W0120 10:15:24.051277   34725 logs.go:208] No container was found matching "kube-apiserver"
I0120 10:15:24.051291   34725 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0120 10:15:24.051339   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0120 10:15:24.063070   34725 cri.go:76] found id: ""
I0120 10:15:24.063091   34725 logs.go:206] 0 containers: []
W0120 10:15:24.063103   34725 logs.go:208] No container was found matching "etcd"
I0120 10:15:24.063113   34725 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0120 10:15:24.063162   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0120 10:15:24.073915   34725 cri.go:76] found id: ""
I0120 10:15:24.073933   34725 logs.go:206] 0 containers: []
W0120 10:15:24.073944   34725 logs.go:208] No container was found matching "coredns"
I0120 10:15:24.073955   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0120 10:15:24.074003   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 10:15:24.084882   34725 cri.go:76] found id: ""
I0120 10:15:24.084904   34725 logs.go:206] 0 containers: []
W0120 10:15:24.084915   34725 logs.go:208] No container was found matching "kube-scheduler"
I0120 10:15:24.084930   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0120 10:15:24.084973   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 10:15:24.102385   34725 cri.go:76] found id: ""
I0120 10:15:24.102464   34725 logs.go:206] 0 containers: []
W0120 10:15:24.102476   34725 logs.go:208] No container was found matching "kube-proxy"
I0120 10:15:24.102500   34725 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 10:15:24.102574   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 10:15:24.122481   34725 cri.go:76] found id: ""
I0120 10:15:24.122536   34725 logs.go:206] 0 containers: []
W0120 10:15:24.122553   34725 logs.go:208] No container was found matching "kubernetes-dashboard"
I0120 10:15:24.122572   34725 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0120 10:15:24.122681   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 10:15:24.142397   34725 cri.go:76] found id: ""
I0120 10:15:24.142422   34725 logs.go:206] 0 containers: []
W0120 10:15:24.142435   34725 logs.go:208] No container was found matching "storage-provisioner"
I0120 10:15:24.142444   34725 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0120 10:15:24.142554   34725 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 10:15:24.168968   34725 cri.go:76] found id: ""
I0120 10:15:24.169070   34725 logs.go:206] 0 containers: []
W0120 10:15:24.169143   34725 logs.go:208] No container was found matching "kube-controller-manager"
I0120 10:15:24.169194   34725 logs.go:120] Gathering logs for kubelet ...
I0120 10:15:24.169277   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 10:15:24.236790   34725 logs.go:120] Gathering logs for dmesg ...
I0120 10:15:24.236837   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 10:15:24.256094   34725 logs.go:120] Gathering logs for describe nodes ...
I0120 10:15:24.256127   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0120 10:15:24.345646   34725 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0120 10:15:24.345681   34725 logs.go:120] Gathering logs for CRI-O ...
I0120 10:15:24.345702   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0120 10:15:24.416703   34725 logs.go:120] Gathering logs for container status ...
I0120 10:15:24.416772   34725 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0120 10:15:24.444499   34725 out.go:294] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
W0120 10:15:24.444840   34725 out.go:181] 

W0120 10:15:24.445143   34725 out.go:181] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

W0120 10:15:24.445372   34725 out.go:181] 

W0120 10:15:24.445418   34725 out.go:181] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W0120 10:15:24.445481   34725 out.go:181] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0120 10:15:24.447800   34725 out.go:119] 


W0120 10:15:24.448113   34725 out.go:181] ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

W0120 10:15:24.452245   34725 out.go:181] 💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0120 10:15:24.452310   34725 out.go:181] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0120 10:15:24.452338   34725 out.go:119] 
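
As a possible next step I will follow minikube's own 💡 suggestion above and recreate the cluster with the systemd cgroup driver passed to the kubelet (Fedora 33 runs cgroups v2 with systemd). This is only a retry sketch based on that hint, not a confirmed fix; the flags are existing minikube options, but whether they resolve this failure is an assumption:

# delete the broken node container, then retry with the suggested kubelet cgroup driver
$ minikube delete
$ minikube start --driver=podman --container-runtime=cri-o \
    --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr

# if the kubelet still does not come up, read its journal from inside the node container
$ minikube ssh -- sudo journalctl -u kubelet -n 100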

Full output of minikube logs command:

==> CRI-O <==
-- Logs begin at Wed 2021-01-20 09:11:25 UTC, end at Wed 2021-01-20 09:16:46 UTC. --
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.419778143Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=c143b1a7-7629-41cf-ae35-6604b4661000 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.420843854Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/pause\"],\"WorkingDir\":\"/\"},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770\"]},\"history\":[{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ARG ARCH\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ADD bin/pause-amd64 /pause # buildkit\",\"comment\":\"buildkit.dockerfile.v0\"},{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ENTRYPOINT [\\\"/pause\\\"]\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true}]}},},}" id=c143b1a7-7629-41cf-ae35-6604b4661000 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.425544638Z" level=info msg="Checking image status: k8s.gcr.io/etcd:3.4.13-0" id=82276258-4fb9-46f8-8add-4a1f85e32393 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.428568383Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,RepoTags:[k8s.gcr.io/etcd:3.4.13-0],RepoDigests:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a],Size_:254662613,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-08-27T13:47:36.718716443Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"ExposedPorts\":{\"2379/tcp\":{},\"2380/tcp\":{},\"4001/tcp\":{},\"7001/tcp\":{}},\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\"SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt\"],\"WorkingDir\":\"/\"},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d72a74c56330b347f7d18b64d2effd93edd695fde25dc301d52c37efbcf4844e\",\"sha256:d61c79b2929916dd31e6d4aa48d30587f63a3192ab0418db8e7fcbea1ad654b9\",\"sha256:1a4e46412eb09db65f559c3921e4b39ab2dfb059482ebe416bcb740c10769ab3\",\"sha256:bfa5849f3d098e8f222dacc4d682250340a9cab32590d052b6922f0956ccaa04\",\"sha256:bb63b9467928d4b064be1ccbb88d0f4ec868ce4aa4a7dd44338090528838b79e\"]},\"history\":[{\"created\":\"1970-01-01T00:00:00Z\",\"created_by\":\"bazel build ...\",\"author\":\"Bazel\"},{\"created\":\"2020-08-27T13:47:31.271664261Z\",\"created_by\":\"/bin/sh -c #(nop) WORKDIR /\",\"empty_layer\":true},{\"created\":\"2020-08-27T13:47:31.436965941Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:93201c93ac7e6e5b3976190c2d70671eb6576373537fda9ac1bd50d90e342ed1 in /bin/ \"},{\"created\":\"2020-08-27T13:47:31.550192267Z\",\"created_by\":\"/bin/sh -c #(nop)  EXPOSE 2379 2380 4001 7001\",\"empty_layer\":true},{\"created\":\"2020-08-27T13:47:34.464243112Z\",\"created_by\":\"/bin/sh -c #(nop) COPY multi:db2195e6dcec23938ed1dcaf030f0ec72e3ae97af5ef0c8a74c72a2a097ec8fd in /usr/local/bin/ \"},{\"created\":\"2020-08-27T13:47:36.357785715Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:cf93caea4c1e5a0eaaa9cf9147de2dd27a8545620caa35f0a592e42099d44ed0 in /bin/ \"},{\"created\":\"2020-08-27T13:47:36.718716443Z\",\"created_by\":\"/bin/sh -c #(nop) COPY multi:a1881dd50cdbd92225791143eb662674b0a4155ae2577453cd6fae7dab43f859 in /usr/local/bin/ \"}]}},},}" id=82276258-4fb9-46f8-8add-4a1f85e32393 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.433103303Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=db0139da-c693-41ce-8f15-bf5318c06e6d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:28 minikube crio[351]: time="2021-01-20 09:13:28.434567427Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-06-18T00:55:59.462921357Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"ExposedPorts\":{\"53/udp\":{},\"53/tcp\":{}},\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/coredns\"]},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:225df95e717ceb672de0e45aa49f352eace21512240205972aca0fccc9612722\",\"sha256:96d17b0b58a73f2d35707e37e5911f65cca8b4467dc54420b811d07784caee64\"]},\"history\":[{\"created\":\"2019-07-28T20:18:27.224802511Z\",\"created_by\":\"/bin/sh -c #(nop) COPY dir:0284c6efacdcf29cb632136811b7130fbe84998aefe3d1c36a0570424c7a2c92 in /etc/ssl/certs \"},{\"created\":\"2020-06-18T00:55:58.768320531Z\",\"created_by\":\"/bin/sh -c #(nop) ADD file:a39148838cdb612e6ae2cfd5672098607e86503673395922b6521249a1edbf6a in /coredns \"},{\"created\":\"2020-06-18T00:55:59.195850503Z\",\"created_by\":\"/bin/sh -c #(nop)  EXPOSE 53 53/udp\",\"empty_layer\":true},{\"created\":\"2020-06-18T00:55:59.462921357Z\",\"created_by\":\"/bin/sh -c #(nop)  ENTRYPOINT [\\\"/coredns\\\"]\",\"empty_layer\":true}]}},},}" id=db0139da-c693-41ce-8f15-bf5318c06e6d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:35 minikube crio[351]: time="2021-01-20 09:13:35.741500466Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=72927076-80e3-4624-8f8b-b451607dd3bc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:35 minikube crio[351]: time="2021-01-20 09:13:35.743212723Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=72927076-80e3-4624-8f8b-b451607dd3bc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:42 minikube crio[351]: time="2021-01-20 09:13:42.944566021Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2e7ff016-583b-4103-8f80-d2bc458c8a83 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:42 minikube crio[351]: time="2021-01-20 09:13:42.947114859Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e7ff016-583b-4103-8f80-d2bc458c8a83 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:50 minikube crio[351]: time="2021-01-20 09:13:50.222266459Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e4df4d5f-00f5-41c2-8816-17aa3cdcf80d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:50 minikube crio[351]: time="2021-01-20 09:13:50.223955381Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e4df4d5f-00f5-41c2-8816-17aa3cdcf80d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:57 minikube crio[351]: time="2021-01-20 09:13:57.463740576Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8e9d76d0-a1e2-4e7b-baf8-f1349e938cdd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:13:57 minikube crio[351]: time="2021-01-20 09:13:57.465470724Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8e9d76d0-a1e2-4e7b-baf8-f1349e938cdd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:04 minikube crio[351]: time="2021-01-20 09:14:04.692850595Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2cba5fc0-63c3-42db-be75-e56af2274c48 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:04 minikube crio[351]: time="2021-01-20 09:14:04.695926108Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2cba5fc0-63c3-42db-be75-e56af2274c48 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:11 minikube crio[351]: time="2021-01-20 09:14:11.963037978Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=5b158a6d-7c7d-4254-9d15-6baa602e220f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:11 minikube crio[351]: time="2021-01-20 09:14:11.965113740Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5b158a6d-7c7d-4254-9d15-6baa602e220f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:19 minikube crio[351]: time="2021-01-20 09:14:19.173060475Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=ef61a749-afa4-4a05-aa29-dec885496617 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:19 minikube crio[351]: time="2021-01-20 09:14:19.174695245Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ef61a749-afa4-4a05-aa29-dec885496617 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:26 minikube crio[351]: time="2021-01-20 09:14:26.479249436Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=335fe976-7171-4578-820f-0324341cda71 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:26 minikube crio[351]: time="2021-01-20 09:14:26.482346510Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=335fe976-7171-4578-820f-0324341cda71 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:33 minikube crio[351]: time="2021-01-20 09:14:33.723104028Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2686a37e-a142-439c-b558-77f9a3b65329 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:33 minikube crio[351]: time="2021-01-20 09:14:33.724867701Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2686a37e-a142-439c-b558-77f9a3b65329 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:40 minikube crio[351]: time="2021-01-20 09:14:40.865064180Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e2f08879-ee4c-458f-aa50-1ce96cda3a34 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:40 minikube crio[351]: time="2021-01-20 09:14:40.866856487Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e2f08879-ee4c-458f-aa50-1ce96cda3a34 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:48 minikube crio[351]: time="2021-01-20 09:14:48.197141018Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d5aae54f-58b6-4f7e-9c50-a855c182e83b name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:48 minikube crio[351]: time="2021-01-20 09:14:48.199036098Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d5aae54f-58b6-4f7e-9c50-a855c182e83b name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:55 minikube crio[351]: time="2021-01-20 09:14:55.440276964Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=c27ca990-3475-4289-985a-f27553b49281 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:14:55 minikube crio[351]: time="2021-01-20 09:14:55.442227891Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c27ca990-3475-4289-985a-f27553b49281 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:02 minikube crio[351]: time="2021-01-20 09:15:02.674854399Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2080cecc-d7f5-4229-9556-297ed924b970 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:02 minikube crio[351]: time="2021-01-20 09:15:02.676503253Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2080cecc-d7f5-4229-9556-297ed924b970 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:09 minikube crio[351]: time="2021-01-20 09:15:09.964515922Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8106490d-ec3b-4511-86ce-8be4cfe280dc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:09 minikube crio[351]: time="2021-01-20 09:15:09.966450337Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8106490d-ec3b-4511-86ce-8be4cfe280dc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:17 minikube crio[351]: time="2021-01-20 09:15:17.187216639Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=1020507f-6a59-41fb-b6bb-8f1df6a2d08c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:17 minikube crio[351]: time="2021-01-20 09:15:17.189133592Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1020507f-6a59-41fb-b6bb-8f1df6a2d08c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:24 minikube crio[351]: time="2021-01-20 09:15:24.450267532Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=9726db19-bdac-4108-a04c-eca8d27c3cd5 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:24 minikube crio[351]: time="2021-01-20 09:15:24.453788633Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9726db19-bdac-4108-a04c-eca8d27c3cd5 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:31 minikube crio[351]: time="2021-01-20 09:15:31.733362540Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d36661b8-e730-4e0d-a131-b491d2190902 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:31 minikube crio[351]: time="2021-01-20 09:15:31.735310911Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d36661b8-e730-4e0d-a131-b491d2190902 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:38 minikube crio[351]: time="2021-01-20 09:15:38.914004287Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=0d3da55e-f22b-43de-94cc-dc48c1951cac name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:38 minikube crio[351]: time="2021-01-20 09:15:38.915797186Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0d3da55e-f22b-43de-94cc-dc48c1951cac name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:46 minikube crio[351]: time="2021-01-20 09:15:46.189616711Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=138d2fb3-8ddb-4aa0-aa72-8aba0755a271 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:46 minikube crio[351]: time="2021-01-20 09:15:46.191544853Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=138d2fb3-8ddb-4aa0-aa72-8aba0755a271 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:53 minikube crio[351]: time="2021-01-20 09:15:53.422787267Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=bddffcbe-20b8-413d-b870-45c1801b03ca name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:15:53 minikube crio[351]: time="2021-01-20 09:15:53.424587191Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bddffcbe-20b8-413d-b870-45c1801b03ca name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:00 minikube crio[351]: time="2021-01-20 09:16:00.674148788Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=44f9501a-19ea-4886-b1ab-be476eb5c551 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:00 minikube crio[351]: time="2021-01-20 09:16:00.676378040Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=44f9501a-19ea-4886-b1ab-be476eb5c551 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:07 minikube crio[351]: time="2021-01-20 09:16:07.918510351Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2367c594-5a12-47f7-b0b6-45d0185b5d8a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:07 minikube crio[351]: time="2021-01-20 09:16:07.921542749Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2367c594-5a12-47f7-b0b6-45d0185b5d8a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:15 minikube crio[351]: time="2021-01-20 09:16:15.220587918Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=7230d8d4-ca5b-414c-ba5e-050d5edff0c9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:15 minikube crio[351]: time="2021-01-20 09:16:15.222502700Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7230d8d4-ca5b-414c-ba5e-050d5edff0c9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:22 minikube crio[351]: time="2021-01-20 09:16:22.464335672Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8f407728-5591-439f-8386-479d066c225f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:22 minikube crio[351]: time="2021-01-20 09:16:22.466506601Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8f407728-5591-439f-8386-479d066c225f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:29 minikube crio[351]: time="2021-01-20 09:16:29.728772336Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=4a4a7852-8aa9-45fe-8439-9056101df44d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:29 minikube crio[351]: time="2021-01-20 09:16:29.730742153Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4a4a7852-8aa9-45fe-8439-9056101df44d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:36 minikube crio[351]: time="2021-01-20 09:16:36.979665190Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=da76ac1b-a75c-4136-9080-87d92e09984c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:36 minikube crio[351]: time="2021-01-20 09:16:36.981318464Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=da76ac1b-a75c-4136-9080-87d92e09984c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:44 minikube crio[351]: time="2021-01-20 09:16:44.187176071Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=07b89e5a-6adf-4bee-b1b6-8245281d1049 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 09:16:44 minikube crio[351]: time="2021-01-20 09:16:44.188969236Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=07b89e5a-6adf-4bee-b1b6-8245281d1049 name=/runtime.v1alpha2.ImageService/ImageStatus

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

==> describe nodes <==
E0120 10:16:46.103955   45918 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
[Jan19 19:08] x86/cpu: VMX (outside TXT) disabled by BIOS
[  +0.023630] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.792548] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.208865] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000038] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000109] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000071] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000048] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000063] acpi PNP0C14:07: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000086] acpi PNP0C14:08: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.016163] usb: port power management may be unreliable
[  +0.110762] nvme nvme0: missing or invalid SUBNQN field.
[ +14.894224] kauditd_printk_skb: 18 callbacks suppressed
[  +0.817494] systemd-sysv-generator[995]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000049] systemd-sysv-generator[995]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.069329] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.342651] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.074614] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
[  +0.112511] resource sanity check: requesting [mem 0xfed10000-0xfed15fff], which spans more than pnp 00:07 [mem 0xfed10000-0xfed13fff]
[  +0.000009] caller snb_uncore_imc_init_box+0x6a/0xa0 [intel_uncore] mapping multiple BARs
[  +0.034138] r8152 4-2.1.2:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr
[  +0.331260] thermal thermal_zone13: failed to read out thermal zone (-61)
[  +0.179322] sof-audio-pci 0000:00:1f.3: ASoC: Parent card not yet available, widget card binding deferred
[  +0.257875] snd_hda_codec_realtek ehdaudio0D0: ASoC: sink widget AIF1TX overwritten
[  +0.000005] snd_hda_codec_realtek ehdaudio0D0: ASoC: source widget AIF1RX overwritten
[  +0.000180] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi3 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi2 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi1 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Codec Output Pin1 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Codec Input Pin1 overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Analog Codec Playback overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Digital Codec Playback overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Alt Analog Codec Playback overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Analog Codec Capture overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Digital Codec Capture overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Alt Analog Codec Capture overwritten
[  +0.005502] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[  +0.005862] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[ +16.142052] usb 3-2.1.1.2: 1:1: cannot get freq at ep 0x81
[Jan19 19:09] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 0000000005a5d522 port 1: DPCD read on addr 0x4b0 for 1 bytes NAKed
[  +0.030189] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 0000000005a5d522 port 3: DPCD read on addr 0x4b0 for 1 bytes NAKed
[Jan19 19:23] IRQ 166: no longer affine to CPU1
[  +0.004626] IRQ 167: no longer affine to CPU2
[  +0.005180] IRQ 168: no longer affine to CPU3
[  +0.004011] IRQ 169: no longer affine to CPU4
[  +0.004128] IRQ 170: no longer affine to CPU5
[  +0.004502] IRQ 171: no longer affine to CPU6
[  +0.002426] IRQ 172: no longer affine to CPU7
[  +0.001989] IRQ 173: no longer affine to CPU8
[  +0.002057] IRQ 174: no longer affine to CPU9
[  +0.002182] IRQ 175: no longer affine to CPU10
[  +0.007428] smpboot: Scheduler frequency invariance went wobbly, disabling!
[  +1.710710] usb 4-2: Disable of device-initiated U1 failed.
[  +0.000011] usb 4-2: Disable of device-initiated U2 failed.
[  +0.874553] usb 4-2.1: Disable of device-initiated U1 failed.
[  +0.010342] usb 4-2.1: Disable of device-initiated U2 failed.
[  +4.489934] done.
[  +0.879613] r8152 4-2.1.2:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr

==> kernel <==
 09:16:46 up 14:08,  0 users,  load average: 0.68, 0.56, 0.64
Linux minikube 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kubelet <==
-- Logs begin at Wed 2021-01-20 09:11:25 UTC, end at Wed 2021-01-20 09:16:46 UTC. --
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 398 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0xc0003466c0, 0xc00077bb00, 0x5f5e100, 0xc000714200)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:536 +0x127
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc0003466c0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:494 +0x25a
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 628 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start.func1(0xc0011ba3c0, 0xc000d72ea0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:91 +0x125
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:89 +0x477
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 629 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers.func1(0xc000e00a00, 0xc000a8a910, 0xc000642ae0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1164 +0xe5
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1162 +0x21d
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 630 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc000e00a00, 0xc000c0d560)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:385 +0x145
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x585
Jan 20 09:16:44 minikube kubelet[6945]: goroutine 631 [select]:
Jan 20 09:16:44 minikube kubelet[6945]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000e00a00, 0xc000c0d5c0)
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Jan 20 09:16:44 minikube kubelet[6945]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 09:16:44 minikube kubelet[6945]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
Jan 20 09:16:45 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27.
Jan 20 09:16:45 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 20 09:16:45 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 20 09:16:45 minikube kubelet[7084]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:16:45 minikube kubelet[7084]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.089876    7084 server.go:416] Version: v1.20.0
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.090080    7084 server.go:837] Client rotation is on, will bootstrap in background
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.091552    7084 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.092214    7084 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.092221    7084 manager.go:159] Cannot detect current cgroup on cgroup v2
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.138965    7084 fs.go:208] stat failed on /dev/mapper/luks-04d26ab7-d155-44f4-906f-c64d950aa812 with error: no such file or directory
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154757    7084 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154880    7084 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154895    7084 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154937    7084 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154942    7084 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.154946    7084 container_manager_linux.go:315] Creating device plugin manager: true
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.154992    7084 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155012    7084 remote_runtime.go:62] parsed scheme: ""
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155018    7084 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155035    7084 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155040    7084 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jan 20 09:16:45 minikube kubelet[7084]: W0120 09:16:45.155081    7084 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155091    7084 remote_image.go:50] parsed scheme: ""
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155095    7084 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155101    7084 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155106    7084 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155129    7084 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.155148    7084 kubelet.go:273] Watching apiserver
Jan 20 09:16:45 minikube kubelet[7084]: E0120 09:16:45.155806    7084 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 09:16:45 minikube kubelet[7084]: E0120 09:16:45.155820    7084 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 09:16:45 minikube kubelet[7084]: E0120 09:16:45.155855    7084 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 09:16:45 minikube kubelet[7084]: I0120 09:16:45.159969    7084 kuberuntime_manager.go:216] Container runtime cri-o initialized, version: 1.19.0, apiVersion: v1alpha1

❗  unable to fetch logs for: describe nodes

Tools versions

$ podman version
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 15:37:50 2020
OS/Arch:      linux/amd64

$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1

$ cat /etc/redhat-release 
Fedora release 33 (Thirty Three)

$ uname -a
Linux fedora-p1 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Notes
The test was done with cgroups v2.
I also tried cgroups v1, but it doesn't start either (see the check sketched below).
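
For reference, a quick way to confirm which cgroup hierarchy is active (illustrative commands, run on the Fedora host):

$ stat -fc %T /sys/fs/cgroup/          # prints "cgroup2fs" on cgroups v2, "tmpfs" on v1
$ sudo podman info | grep -i cgroup    # shows the CgroupVersion podman detected

Switching between the two on Fedora 33 is typically done via a kernel argument (sketch; reboot afterwards):

$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"   # 0 = cgroups v1, remove the arg for v2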

Thanks

@afbjorklund afbjorklund added co/podman-driver podman driver issues co/runtime/crio CRIO related issues kind/bug Categorizes issue or PR as related to a bug. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Jan 20, 2021
@afbjorklund

afbjorklund commented Jan 20, 2021

Looks like some kind of CRI-O issue. Not sure if upgrading from 1.19 to 1.20 would help:

docker@minikube:~$ more /etc/apt/sources.list.d/devel\:kubic\:libcontainers\:stable\:cri-o\:1.18.list 
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.19/xUbuntu_20.04/ /
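
If it helps, a hedged sketch of bumping that repo inside the node container to 1.20 (assuming kubic publishes a 1.20 repository under the same layout; the file name is the one shown above):

docker@minikube:~$ sudo sed -i 's|cri-o:/1.19|cri-o:/1.20|' /etc/apt/sources.list.d/devel\:kubic\:libcontainers\:stable\:cri-o\:1.18.list
docker@minikube:~$ sudo apt-get update && sudo apt-get install -y cri-o   # pulls the 1.20 build if it is published
docker@minikube:~$ sudo systemctl restart crio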

It does work on Ubuntu 20.04 (with podman and crio), so something specific to Fedora...

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
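
Since the node here is a podman container, those checks can also be run from the Fedora host; a sketch (the container is named minikube, as in the logs above):

$ sudo podman exec -it minikube systemctl status kubelet
$ sudo podman exec -it minikube journalctl -u kubelet --no-pager | tail -n 50
$ sudo podman exec -it minikube crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a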

@afbjorklund

afbjorklund commented Jan 20, 2021

We don't test minikube on Fedora, so we need community help with that: #3552

Notes
The test was done with cgroups v2.
I also tried cgroups v1, but it doesn't start either.

Have you disabled SELinux? Also check sysctl net.bridge.bridge-nf-call-iptables.
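
A sketch of checking/adjusting both on the Fedora host (illustrative commands):

$ getenforce                                    # Enforcing / Permissive / Disabled
$ sudo setenforce 0                             # switch to permissive for the current boot only
$ sysctl net.bridge.bridge-nf-call-iptables
$ sudo modprobe br_netfilter                    # the sysctl only exists once this module is loaded
$ sudo sysctl -w net.bridge.bridge-nf-call-iptables=1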

@mrizzi

mrizzi commented Jan 20, 2021

Sure, happy to help with #3552: do you think that, if we find a working configuration for Fedora 33 with podman + cri-o, it could become the CI testing environment for covering Minikube on Fedora?

Sorry, about "Not sure if upgrading from 1.19 to 1.20 would help": do you want me to try that? Or is it something you already tried?

Checked:

$ sysctl net.bridge.bridge-nf-call-iptables 
net.bridge.bridge-nf-call-iptables = 1

I then tried with SELinux in both permissive mode and fully disabled, but it still didn't start.
Below is the output with SELinux disabled, in case it helps.

Full output of failed command:

I0120 13:18:07.104199   21226 out.go:221] Setting OutFile to fd 1 ...
I0120 13:18:07.104283   21226 out.go:273] isatty.IsTerminal(1) = true
I0120 13:18:07.104291   21226 out.go:234] Setting ErrFile to fd 2...
I0120 13:18:07.104298   21226 out.go:273] isatty.IsTerminal(2) = true
I0120 13:18:07.104363   21226 root.go:280] Updating PATH: /home/mrizzi/.minikube/bin
W0120 13:18:07.104439   21226 root.go:255] Error reading config file at /home/mrizzi/.minikube/config/config.json: open /home/mrizzi/.minikube/config/config.json: no such file or directory
I0120 13:18:07.104688   21226 out.go:228] Setting JSON to false
I0120 13:18:07.117981   21226 start.go:104] hostinfo: {"hostname":"fedora-p1","uptime":759,"bootTime":1611144328,"procs":447,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.7-200.fc33.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"2a0ffbe8-79f8-479f-b627-66a4d7b9718b"}
I0120 13:18:07.118430   21226 start.go:114] virtualization:  
I0120 13:18:07.118746   21226 out.go:119] 😄  minikube v1.16.0 on Fedora 33
😄  minikube v1.16.0 on Fedora 33
I0120 13:18:07.118860   21226 driver.go:303] Setting default libvirt URI to qemu:///system
I0120 13:18:07.118909   21226 notify.go:126] Checking for updates...
I0120 13:18:07.188668   21226 podman.go:118] podman version: 2.2.1
I0120 13:18:07.188775   21226 out.go:119] ✨  Using the podman (experimental) driver based on user configuration
✨  Using the podman (experimental) driver based on user configuration
I0120 13:18:07.188788   21226 start.go:277] selected driver: podman
I0120 13:18:07.188792   21226 start.go:686] validating driver "podman" against <nil>
I0120 13:18:07.188804   21226 start.go:697] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0120 13:18:07.188885   21226 cli_runner.go:111] Run: sudo -n podman system info --format json
I0120 13:18:07.275082   21226 info.go:273] podman info: {Host:{BuildahVersion:1.18.0 CgroupVersion:v2 Conmon:{Package:conmon-2.0.21-3.fc33.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab} Distribution:{Distribution:fedora Version:33} MemFree:25840971776 MemTotal:33410228224 OCIRuntime:{Name:crun Package:crun-0.16-3.fc33.x86_64 Path:/usr/bin/crun Version:crun version 0.16
commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:4294963200 SwapTotal:4294963200 Arch:amd64 Cpus:12 Eventlogger:journald Hostname:fedora-p1 Kernel:5.10.7-200.fc33.x86_64 Os:linux Rootless:false Uptime:12m 38.69s} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com registry.centos.org docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:2} RunRoot:/var/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0120 13:18:07.275164   21226 start_flags.go:235] no existing cluster config was found, will generate one from the flags 
I0120 13:18:07.275873   21226 start_flags.go:253] Using suggested 7900MB memory alloc based on sys=31862MB, container=31862MB
I0120 13:18:07.275997   21226 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I0120 13:18:07.276022   21226 cni.go:74] Creating CNI manager for ""
I0120 13:18:07.276027   21226 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 13:18:07.276039   21226 start_flags.go:362] Found "CNI" CNI - setting NetworkPlugin=cni
I0120 13:18:07.276047   21226 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 13:18:07.276182   21226 out.go:119] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0120 13:18:07.276197   21226 cache.go:112] Driver isn't docker, skipping base image download
I0120 13:18:07.276207   21226 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 13:18:07.469307   21226 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 13:18:07.469397   21226 cache.go:54] Caching tarball of preloaded images
I0120 13:18:07.469455   21226 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 13:18:07.614995   21226 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 13:18:07.615439   21226 out.go:119] 💾  Downloading Kubernetes v1.20.0 preload ...
💾  Downloading Kubernetes v1.20.0 preload ...
I0120 13:18:07.615677   21226 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 -> /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
    > preloaded-images-k8s-v8-v1....: 555.86 MiB / 555.86 MiB  100.00% 8.32 MiB
I0120 13:19:15.328499   21226 preload.go:160] saving checksum for preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 13:19:15.533874   21226 preload.go:177] verifying checksumm of /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 13:19:16.513596   21226 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.0 on crio
I0120 13:19:16.513817   21226 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 13:19:16.513837   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/config.json: {Name:mk473a46e0a7385fc7b1c17eee8567719c4a2678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:16.513988   21226 cache.go:185] Successfully downloaded all kic artifacts
I0120 13:19:16.514009   21226 start.go:314] acquiring machines lock for minikube: {Name:mk6d494bfb92177bc8505684a7c42000ca387cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 13:19:16.514043   21226 start.go:318] acquired machines lock for "minikube" in 25.303µs
I0120 13:19:16.514058   21226 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0120 13:19:16.514103   21226 start.go:127] createHost starting for "" (driver="podman")
I0120 13:19:16.514193   21226 out.go:119] 🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
I0120 13:19:16.514303   21226 start.go:164] libmachine.API.Create for "minikube" (driver="podman")
I0120 13:19:16.514324   21226 client.go:165] LocalClient.Create starting
I0120 13:19:16.514346   21226 main.go:119] libmachine: Creating CA: /home/mrizzi/.minikube/certs/ca.pem
I0120 13:19:16.602101   21226 main.go:119] libmachine: Creating client certificate: /home/mrizzi/.minikube/certs/cert.pem
I0120 13:19:16.778621   21226 cli_runner.go:111] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0120 13:19:16.852751   21226 network_create.go:59] Found existing network {name:minikube subnet:0xc000c210e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0120 13:19:16.852786   21226 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I0120 13:19:16.852843   21226 cli_runner.go:111] Run: sudo -n podman ps -a --format {{.Names}}
I0120 13:19:16.921802   21226 cli_runner.go:111] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0120 13:19:16.997641   21226 oci.go:102] Successfully created a podman volume minikube
I0120 13:19:16.997711   21226 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -d /var/lib
I0120 13:19:17.499761   21226 oci.go:106] Successfully prepared a podman volume minikube
W0120 13:19:17.499803   21226 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0120 13:19:17.499818   21226 oci.go:201] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0120 13:19:17.499864   21226 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 13:19:17.499893   21226 cli_runner.go:111] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
I0120 13:19:17.499894   21226 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 13:19:17.499907   21226 kic.go:159] Starting extracting preloaded images to volume ...
I0120 13:19:17.499951   21226 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir
W0120 13:19:17.586433   21226 cli_runner.go:149] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0120 13:19:17.586528   21226 cli_runner.go:111] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4
I0120 13:19:18.099616   21226 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Running}}
I0120 13:19:18.186047   21226 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 13:19:18.276899   21226 cli_runner.go:111] Run: sudo -n podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0120 13:19:18.458665   21226 oci.go:246] the created container "minikube" has a running status.
I0120 13:19:18.458688   21226 kic.go:190] Creating ssh key for kic: /home/mrizzi/.minikube/machines/minikube/id_rsa...
I0120 13:19:18.785381   21226 kic_runner.go:187] podman (temp): /home/mrizzi/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0120 13:19:18.785490   21226 kic_runner.go:217] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset068568544 minikube:/home/docker/.ssh/authorized_keys
I0120 13:19:19.083801   21226 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 13:19:19.162881   21226 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0120 13:19:19.162908   21226 kic_runner.go:114] Args: [sudo -n podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0120 13:19:20.709168   21226 cli_runner.go:155] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.209189748s)
I0120 13:19:20.709190   21226 kic.go:168] duration metric: took 3.209283 seconds to extract preloaded images to volume
I0120 13:19:20.709266   21226 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 13:19:20.781780   21226 machine.go:88] provisioning docker machine ...
I0120 13:19:20.781814   21226 ubuntu.go:169] provisioning hostname "minikube"
I0120 13:19:20.781865   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:20.850772   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:20.924820   21226 main.go:119] libmachine: Using SSH client type: native
I0120 13:19:20.924990   21226 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 40111 <nil> <nil>}
I0120 13:19:20.925005   21226 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0120 13:19:21.066928   21226 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0120 13:19:21.067130   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:21.139701   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:21.212789   21226 main.go:119] libmachine: Using SSH client type: native
I0120 13:19:21.212922   21226 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 40111 <nil> <nil>}
I0120 13:19:21.212947   21226 main.go:119] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0120 13:19:21.333062   21226 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0120 13:19:21.333114   21226 ubuntu.go:175] set auth options {CertDir:/home/mrizzi/.minikube CaCertPath:/home/mrizzi/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mrizzi/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mrizzi/.minikube/machines/server.pem ServerKeyPath:/home/mrizzi/.minikube/machines/server-key.pem ClientKeyPath:/home/mrizzi/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mrizzi/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mrizzi/.minikube}
I0120 13:19:21.333158   21226 ubuntu.go:177] setting up certificates
I0120 13:19:21.333180   21226 provision.go:83] configureAuth start
I0120 13:19:21.333341   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 13:19:21.414803   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 13:19:21.488713   21226 provision.go:137] copyHostCerts
I0120 13:19:21.488760   21226 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/key.pem --> /home/mrizzi/.minikube/key.pem (1675 bytes)
I0120 13:19:21.488848   21226 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/ca.pem --> /home/mrizzi/.minikube/ca.pem (1078 bytes)
I0120 13:19:21.488900   21226 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/cert.pem --> /home/mrizzi/.minikube/cert.pem (1123 bytes)
I0120 13:19:21.488939   21226 provision.go:111] generating server cert: /home/mrizzi/.minikube/machines/server.pem ca-key=/home/mrizzi/.minikube/certs/ca.pem private-key=/home/mrizzi/.minikube/certs/ca-key.pem org=mrizzi.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0120 13:19:21.623554   21226 provision.go:165] copyRemoteCerts
I0120 13:19:21.623611   21226 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 13:19:21.623666   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:21.691859   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:21.764725   21226 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:40111 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 13:19:21.859797   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 13:19:21.907201   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0120 13:19:21.923643   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0120 13:19:21.936858   21226 provision.go:86] duration metric: configureAuth took 603.655974ms
I0120 13:19:21.936876   21226 ubuntu.go:193] setting minikube options for container-runtime
I0120 13:19:21.937085   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:22.007794   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:22.080804   21226 main.go:119] libmachine: Using SSH client type: native
I0120 13:19:22.080940   21226 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 40111 <nil> <nil>}
I0120 13:19:22.080963   21226 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0120 13:19:22.221948   21226 main.go:119] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0120 13:19:22.221997   21226 machine.go:91] provisioned docker machine in 1.440196489s
I0120 13:19:22.222037   21226 client.go:168] LocalClient.Create took 5.707706284s
I0120 13:19:22.222066   21226 start.go:172] duration metric: libmachine.API.Create for "minikube" took 5.707757265s
I0120 13:19:22.222083   21226 start.go:268] post-start starting for "minikube" (driver="podman")
I0120 13:19:22.222097   21226 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 13:19:22.222197   21226 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 13:19:22.222355   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:22.293814   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:22.368140   21226 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:40111 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 13:19:22.466324   21226 ssh_runner.go:149] Run: cat /etc/os-release
I0120 13:19:22.470297   21226 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 13:19:22.470328   21226 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 13:19:22.470342   21226 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 13:19:22.470349   21226 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I0120 13:19:22.470362   21226 filesync.go:118] Scanning /home/mrizzi/.minikube/addons for local assets ...
I0120 13:19:22.470408   21226 filesync.go:118] Scanning /home/mrizzi/.minikube/files for local assets ...
I0120 13:19:22.470438   21226 start.go:271] post-start completed in 248.342898ms
I0120 13:19:22.470729   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 13:19:22.548787   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 13:19:22.622781   21226 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 13:19:22.622993   21226 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 13:19:22.623038   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:22.691822   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:22.765725   21226 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:40111 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 13:19:22.846819   21226 start.go:130] duration metric: createHost completed in 6.332704163s
I0120 13:19:22.846844   21226 start.go:81] releasing machines lock for "minikube", held for 6.332792146s
I0120 13:19:22.846975   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 13:19:22.923745   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 13:19:22.995814   21226 ssh_runner.go:149] Run: systemctl --version
I0120 13:19:22.995879   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:22.995914   21226 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0120 13:19:22.995964   21226 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 13:19:23.066774   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:23.118788   21226 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 13:19:23.144790   21226 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:40111 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 13:19:23.195735   21226 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:40111 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 13:19:23.426927   21226 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 13:19:23.454942   21226 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
I0120 13:19:23.492551   21226 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 13:19:23.500923   21226 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0120 13:19:23.507879   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0120 13:19:23.518300   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
I0120 13:19:23.524569   21226 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 13:19:23.529469   21226 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 13:19:23.534005   21226 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0120 13:19:23.597989   21226 ssh_runner.go:149] Run: sudo systemctl start crio
I0120 13:19:23.750578   21226 ssh_runner.go:149] Run: crio --version
I0120 13:19:23.788942   21226 out.go:119] 🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
I0120 13:19:23.789022   21226 cli_runner.go:111] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} minikube
I0120 13:19:23.861833   21226 ssh_runner.go:149] Run: grep <nil>	host.minikube.internal$ /etc/hosts
I0120 13:19:23.864018   21226 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "<nil>	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 13:19:23.870511   21226 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 13:19:23.870562   21226 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 13:19:23.870598   21226 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 13:19:23.902203   21226 crio.go:345] all images are preloaded for cri-o runtime.
I0120 13:19:23.902220   21226 crio.go:260] Images already preloaded, skipping extraction
I0120 13:19:23.902258   21226 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 13:19:23.911722   21226 crio.go:345] all images are preloaded for cri-o runtime.
I0120 13:19:23.911739   21226 cache_images.go:74] Images are preloaded, skipping loading
I0120 13:19:23.911791   21226 ssh_runner.go:149] Run: crio config
I0120 13:19:23.952107   21226 cni.go:74] Creating CNI manager for ""
I0120 13:19:23.952120   21226 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 13:19:23.952129   21226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 13:19:23.952144   21226 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0120 13:19:23.952251   21226 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 192.168.49.2:10249

I0120 13:19:23.952344   21226 kubeadm.go:862] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0120 13:19:23.952404   21226 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 13:19:23.957154   21226 binaries.go:44] Found k8s binaries, skipping transfer
I0120 13:19:23.957199   21226 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 13:19:23.961758   21226 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
I0120 13:19:23.970295   21226 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0120 13:19:23.980229   21226 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1843 bytes)
I0120 13:19:23.989570   21226 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0120 13:19:23.991419   21226 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 13:19:23.998029   21226 certs.go:52] Setting up /home/mrizzi/.minikube/profiles/minikube for IP: 192.168.49.2
I0120 13:19:23.998061   21226 certs.go:173] generating minikubeCA CA: /home/mrizzi/.minikube/ca.key
I0120 13:19:24.141635   21226 crypto.go:157] Writing cert to /home/mrizzi/.minikube/ca.crt ...
I0120 13:19:24.141658   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.crt: {Name:mke03e9a1920afba460c060be5f4b6769ef644b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.141819   21226 crypto.go:165] Writing key to /home/mrizzi/.minikube/ca.key ...
I0120 13:19:24.141830   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.key: {Name:mkb240f7f8e6f82e4d610aab52b47468a1329330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.141903   21226 certs.go:173] generating proxyClientCA CA: /home/mrizzi/.minikube/proxy-client-ca.key
I0120 13:19:24.199364   21226 crypto.go:157] Writing cert to /home/mrizzi/.minikube/proxy-client-ca.crt ...
I0120 13:19:24.199384   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.crt: {Name:mk4174df0f1b4beaf8e5a275fbdf42244be71f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.199505   21226 crypto.go:165] Writing key to /home/mrizzi/.minikube/proxy-client-ca.key ...
I0120 13:19:24.199515   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.key: {Name:mk5e6950da80fd9764adae2b6dd79810410ec3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.199619   21226 certs.go:277] generating minikube-user signed cert: /home/mrizzi/.minikube/profiles/minikube/client.key
I0120 13:19:24.199627   21226 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/client.crt with IP's: []
I0120 13:19:24.278859   21226 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/client.crt ...
I0120 13:19:24.278881   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.crt: {Name:mk2ff7788ac9d0de0cd174f0617feb2f1dd707c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.279033   21226 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/client.key ...
I0120 13:19:24.279043   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.key: {Name:mkedf501c0d6a07a0aa78a08660f8e8e7cc0c918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.279111   21226 certs.go:277] generating minikube signed cert: /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0120 13:19:24.279119   21226 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0120 13:19:24.509908   21226 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0120 13:19:24.509930   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk422858b15bd0eaea2b6fcba46c45cc115c0286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.510066   21226 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0120 13:19:24.510078   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk0658a97766b6658717586fb5056c92e38378bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.510138   21226 certs.go:288] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.crt
I0120 13:19:24.510204   21226 certs.go:292] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.key
I0120 13:19:24.510287   21226 certs.go:277] generating aggregator signed cert: /home/mrizzi/.minikube/profiles/minikube/proxy-client.key
I0120 13:19:24.510298   21226 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0120 13:19:24.619422   21226 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt ...
I0120 13:19:24.619444   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt: {Name:mka2338a78f50214ee1948cd9bf268c531eaa3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.619579   21226 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/proxy-client.key ...
I0120 13:19:24.619591   21226 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.key: {Name:mk969b8bdb9a7c95302616c350453daaad785fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 13:19:24.619734   21226 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca-key.pem (1675 bytes)
I0120 13:19:24.619770   21226 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca.pem (1078 bytes)
I0120 13:19:24.619793   21226 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/cert.pem (1123 bytes)
I0120 13:19:24.619820   21226 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/key.pem (1675 bytes)
I0120 13:19:24.620436   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0120 13:19:24.632742   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 13:19:24.644859   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 13:19:24.657455   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0120 13:19:24.669913   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 13:19:24.683073   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 13:19:24.695135   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 13:19:24.706806   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 13:19:24.719969   21226 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 13:19:24.731714   21226 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0120 13:19:24.740721   21226 ssh_runner.go:149] Run: openssl version
I0120 13:19:24.744415   21226 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 13:19:24.750063   21226 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 13:19:24.752319   21226 certs.go:393] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:19 /usr/share/ca-certificates/minikubeCA.pem
I0120 13:19:24.752362   21226 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 13:19:24.755792   21226 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 13:19:24.760615   21226 kubeadm.go:364] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 13:19:24.760668   21226 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0120 13:19:24.760701   21226 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 13:19:24.770818   21226 cri.go:76] found id: ""
I0120 13:19:24.770870   21226 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 13:19:24.776173   21226 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 13:19:24.781565   21226 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 13:19:24.781616   21226 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 13:19:24.786286   21226 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 13:19:24.786313   21226 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 13:19:24.967248   21226 out.go:140]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...
I0120 13:19:27.164039   21226 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...
W0120 13:21:22.184371   21226 out.go:181] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
	[
I0120 13:21:22.184498   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
I0120 13:21:23.071531   21226 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0120 13:21:23.079618   21226 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
I0120 13:21:23.079694   21226 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 13:21:23.090633   21226 cri.go:76] found id: ""
I0120 13:21:23.090694   21226 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 13:21:23.090755   21226 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 13:21:23.096443   21226 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 13:21:23.096483   21226 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 13:21:23.270751   21226 out.go:140]     ▪ Generating certificates and keys ...

    ▪ Generating certificates and keys ...
I0120 13:21:24.031822   21226 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...
I0120 13:23:19.049833   21226 kubeadm.go:366] StartCluster complete in 3m54.289218287s
I0120 13:23:19.049868   21226 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0120 13:23:19.049920   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 13:23:19.062321   21226 cri.go:76] found id: ""
I0120 13:23:19.062338   21226 logs.go:206] 0 containers: []
W0120 13:23:19.062345   21226 logs.go:208] No container was found matching "kube-apiserver"
I0120 13:23:19.062353   21226 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0120 13:23:19.062404   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0120 13:23:19.073805   21226 cri.go:76] found id: ""
I0120 13:23:19.073823   21226 logs.go:206] 0 containers: []
W0120 13:23:19.073833   21226 logs.go:208] No container was found matching "etcd"
I0120 13:23:19.073842   21226 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0120 13:23:19.073883   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0120 13:23:19.083945   21226 cri.go:76] found id: ""
I0120 13:23:19.083967   21226 logs.go:206] 0 containers: []
W0120 13:23:19.083979   21226 logs.go:208] No container was found matching "coredns"
I0120 13:23:19.083986   21226 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0120 13:23:19.084029   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 13:23:19.094601   21226 cri.go:76] found id: ""
I0120 13:23:19.094620   21226 logs.go:206] 0 containers: []
W0120 13:23:19.094629   21226 logs.go:208] No container was found matching "kube-scheduler"
I0120 13:23:19.094638   21226 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0120 13:23:19.094704   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 13:23:19.104758   21226 cri.go:76] found id: ""
I0120 13:23:19.104774   21226 logs.go:206] 0 containers: []
W0120 13:23:19.104785   21226 logs.go:208] No container was found matching "kube-proxy"
I0120 13:23:19.104795   21226 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 13:23:19.104840   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 13:23:19.114117   21226 cri.go:76] found id: ""
I0120 13:23:19.114132   21226 logs.go:206] 0 containers: []
W0120 13:23:19.114145   21226 logs.go:208] No container was found matching "kubernetes-dashboard"
I0120 13:23:19.114155   21226 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0120 13:23:19.114192   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 13:23:19.123961   21226 cri.go:76] found id: ""
I0120 13:23:19.123978   21226 logs.go:206] 0 containers: []
W0120 13:23:19.123988   21226 logs.go:208] No container was found matching "storage-provisioner"
I0120 13:23:19.123995   21226 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0120 13:23:19.124034   21226 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 13:23:19.133902   21226 cri.go:76] found id: ""
I0120 13:23:19.133920   21226 logs.go:206] 0 containers: []
W0120 13:23:19.133928   21226 logs.go:208] No container was found matching "kube-controller-manager"
I0120 13:23:19.133939   21226 logs.go:120] Gathering logs for kubelet ...
I0120 13:23:19.133949   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 13:23:19.173172   21226 logs.go:120] Gathering logs for dmesg ...
I0120 13:23:19.173197   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 13:23:19.182731   21226 logs.go:120] Gathering logs for describe nodes ...
I0120 13:23:19.182761   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0120 13:23:19.225562   21226 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0120 13:23:19.225586   21226 logs.go:120] Gathering logs for CRI-O ...
I0120 13:23:19.225598   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0120 13:23:19.260580   21226 logs.go:120] Gathering logs for container status ...
I0120 13:23:19.260604   21226 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0120 13:23:19.272131   21226 out.go:294] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
	[
W0120 13:23:19.272240   21226 out.go:181] 

W0120 13:23:19.272360   21226 out.go:181] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
	[
W0120 13:23:19.272493   21226 out.go:181] 

W0120 13:23:19.272521   21226 out.go:181] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W0120 13:23:19.272546   21226 out.go:181] 👉  https://github.com/kubernetes/minikube/issues/new/choose
I0120 13:23:19.273843   21226 out.go:119] 


W0120 13:23:19.273968   21226 out.go:181] ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
	[
W0120 13:23:19.275685   21226 out.go:181] 💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0120 13:23:19.275776   21226 out.go:181] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0120 13:23:19.275813   21226 out.go:119] 

Full output of minikube logs command:

==> CRI-O <==
-- Logs begin at Wed 2021-01-20 12:19:18 UTC, end at Wed 2021-01-20 12:24:30 UTC. --
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.239663265Z" level=info msg="Checking image status: k8s.gcr.io/kube-scheduler:v1.20.0" id=4776033d-fed8-4773-9fe6-fcbe61d47ff0 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.241475686Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899,RepoTags:[k8s.gcr.io/kube-scheduler:v1.20.0],RepoDigests:[k8s.gcr.io/kube-scheduler@sha256:47fd311588de93073af653698a65a616c798acffe901707339ce4fdc3aca5570 k8s.gcr.io/kube-scheduler@sha256:beaa710325047fa9c867eff4ab9af38d9c2acec05ac5b416c708c304f76bdbef],Size_:47633457,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{info: {\"labels\":{\"description\":\"go based runner for distroless scenarios\",\"maintainers\":\"Kubernetes Authors\"},\"imageSpec\":{\"created\":\"2020-12-08T18:11:03.652347948Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"User\":\"0\",\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\"SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt\"],\"Entrypoint\":[\"/go-runner\"],\"WorkingDir\":\"/\",\"Labels\":{\"description\":\"go based runner for distroless scenarios\",\"maintainers\":\"Kubernetes Authors\"}},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:e7ee84ae4d1363ccf59b14bf34a79c245705dfd55429918b63c754d84c85d904\",\"sha256:597f1090d8e9bd4f1847ea4b72a3c3ea1f0997011120816c9dae2fe858077b32\",\"sha256:aa679bed73e1392240a0d9c10ed3a14b9d434d678ed083454acfd05f9df04206\"]},\"history\":[{\"created\":\"1970-01-01T00:00:00Z\",\"created_by\":\"bazel build ...\",\"author\":\"Bazel\"},{\"created\":\"2020-11-24T01:39:04.46965334Z\",\"created_by\":\"LABEL maintainers=Kubernetes Authors\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-11-24T01:39:04.46965334Z\",\"created_by\":\"LABEL description=go based runner for distroless scenarios\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-11-24T01:39:04.46965334Z\",\"created_by\":\"WORKDIR /\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-11-24T01:39:04.46965334Z\",\"created_by\":\"COPY /workspace/go-runner . # buildkit\",\"comment\":\"buildkit.dockerfile.v0\"},{\"created\":\"2020-11-24T01:39:04.46965334Z\",\"created_by\":\"ENTRYPOINT [\\\"/go-runner\\\"]\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-12-08T18:11:03.652347948Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:7220349347288f7ddc05c4df53ec0efe02265264dc38b192af4a1d7698461e48 in /usr/local/bin/kube-scheduler \"}]}},},}" id=4776033d-fed8-4773-9fe6-fcbe61d47ff0 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.246199130Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.20.0" id=d3881b1c-86c2-44db-ba68-5b76938ac10a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.249534206Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc,RepoTags:[k8s.gcr.io/kube-proxy:v1.20.0],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:40423415eebbd598d1c2660a0a38606ad1d949ea9404c405eaf25929163b479d k8s.gcr.io/kube-proxy@sha256:f0c3f51c1216bcab9bfd5146eb2810f604a1c4ff2718bc3a1028cc089f8aeac7],Size_:120357007,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-12-08T18:11:12.011329419Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Cmd\":[\"/bin/sh\"]},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:f00bc8568f7bbf2863db216b90193b921672a923d0295e59d3311a6c9d2b41c8\",\"sha256:6ee930b14c6f167bf31c25365639c3646cb5dcec3511e208efc536a0ad1bca2f\",\"sha256:2b046f2c87084cd0b62a1d2cbc66f86d71d6fd29f8e35f2a40be526fb0395015\",\"sha256:f6be8a0f65afe84be8d59807fca7d9557420dca319363c1765d4bfd394af6f39\",\"sha256:3a90582021f956dee0ad5289a27aa29fe86cbf86e609478e43384a8526a10cc3\",\"sha256:94812b0f02cee020db86e32ff3f810c9f1503ee8b67585730e89eb0b0064cac5\",\"sha256:3a478f418c9c33bf0c886534ff30b335513be9a948d328f3ff000695c197968a\"]},\"history\":[{\"created\":\"2020-09-11T11:32:24.450068282Z\",\"created_by\":\"/bin/sh -c #(nop) ADD file:958919051423e2871000d6b40bcfa96d0443bea2a627b1f342c833ae7c9b2771 in / \"},{\"created\":\"2020-09-11T11:32:24.959299293Z\",\"created_by\":\"/bin/sh -c #(nop)  CMD [\\\"/bin/sh\\\"]\",\"empty_layer\":true},{\"created\":\"2020-09-11T15:25:52.925760651Z\",\"created_by\":\"/bin/sh -c #(nop)  ARG IPTABLES_VERSION\",\"empty_layer\":true},{\"created\":\"2020-09-11T15:25:58.836413045Z\",\"created_by\":\"|1 IPTABLES_VERSION=1.8.5 /bin/sh -c echo deb http://deb.debian.org/debian buster-backports main \\u003e\\u003e /etc/apt/sources.list     \\u0026\\u0026 apt-get update     \\u0026\\u0026 apt-get -t buster-backports -y --no-install-recommends install         iptables=${IPTABLES_VERSION}*         ebtables\"},{\"created\":\"2020-09-11T15:26:03.133878646Z\",\"created_by\":\"|1 IPTABLES_VERSION=1.8.5 /bin/sh -c clean-install     conntrack     ipset     kmod     netbase\"},{\"created\":\"2020-09-11T15:26:03.295860457Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:753d2b895bae0725b470e608b4745c5a21d9ebb4ce1e9a13ad3d26721e1e6dd8 in /usr/sbin/iptables-wrapper \"},{\"created\":\"2020-09-11T15:26:03.911855501Z\",\"created_by\":\"|1 IPTABLES_VERSION=1.8.5 /bin/sh -c update-alternatives \\t--install /usr/sbin/iptables iptables /usr/sbin/iptables-wrapper 100 \\t--slave /usr/sbin/iptables-restore iptables-restore /usr/sbin/iptables-wrapper \\t--slave /usr/sbin/iptables-save iptables-save /usr/sbin/iptables-wrapper\"},{\"created\":\"2020-09-11T15:26:04.583167193Z\",\"created_by\":\"|1 IPTABLES_VERSION=1.8.5 /bin/sh -c update-alternatives \\t--install /usr/sbin/ip6tables ip6tables /usr/sbin/iptables-wrapper 100 \\t--slave /usr/sbin/ip6tables-restore ip6tables-restore /usr/sbin/iptables-wrapper \\t--slave /usr/sbin/ip6tables-save ip6tables-save /usr/sbin/iptables-wrapper\"},{\"created\":\"2020-12-08T18:11:12.011329419Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:968a9e5c9465dbe89567a96edf5ae0dfe15c6870da3247f0886f2a7d01cf749d in /usr/local/bin/kube-proxy \"}]}},},}" id=d3881b1c-86c2-44db-ba68-5b76938ac10a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.254596835Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=b84af90f-5596-4117-b534-42ac1ede3eec name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.255700683Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/pause\"],\"WorkingDir\":\"/\"},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:ba0dae6243cc9fa2890df40a625721fdbea5c94ca6da897acdd814d710149770\"]},\"history\":[{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ARG ARCH\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true},{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ADD bin/pause-amd64 /pause # buildkit\",\"comment\":\"buildkit.dockerfile.v0\"},{\"created\":\"2020-02-14T10:51:50.60182885-08:00\",\"created_by\":\"ENTRYPOINT [\\\"/pause\\\"]\",\"comment\":\"buildkit.dockerfile.v0\",\"empty_layer\":true}]}},},}" id=b84af90f-5596-4117-b534-42ac1ede3eec name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.260856128Z" level=info msg="Checking image status: k8s.gcr.io/etcd:3.4.13-0" id=e2460dee-3507-4207-b17c-3938ebdfb362 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.263318616Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934,RepoTags:[k8s.gcr.io/etcd:3.4.13-0],RepoDigests:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd@sha256:bd4d2c9a19be8a492bc79df53eee199fd04b415e9993eb69f7718052602a147a],Size_:254662613,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-08-27T13:47:36.718716443Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"ExposedPorts\":{\"7001/tcp\":{},\"2379/tcp\":{},\"2380/tcp\":{},\"4001/tcp\":{}},\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\"SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt\"],\"WorkingDir\":\"/\"},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:d72a74c56330b347f7d18b64d2effd93edd695fde25dc301d52c37efbcf4844e\",\"sha256:d61c79b2929916dd31e6d4aa48d30587f63a3192ab0418db8e7fcbea1ad654b9\",\"sha256:1a4e46412eb09db65f559c3921e4b39ab2dfb059482ebe416bcb740c10769ab3\",\"sha256:bfa5849f3d098e8f222dacc4d682250340a9cab32590d052b6922f0956ccaa04\",\"sha256:bb63b9467928d4b064be1ccbb88d0f4ec868ce4aa4a7dd44338090528838b79e\"]},\"history\":[{\"created\":\"1970-01-01T00:00:00Z\",\"created_by\":\"bazel build ...\",\"author\":\"Bazel\"},{\"created\":\"2020-08-27T13:47:31.271664261Z\",\"created_by\":\"/bin/sh -c #(nop) WORKDIR /\",\"empty_layer\":true},{\"created\":\"2020-08-27T13:47:31.436965941Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:93201c93ac7e6e5b3976190c2d70671eb6576373537fda9ac1bd50d90e342ed1 in /bin/ \"},{\"created\":\"2020-08-27T13:47:31.550192267Z\",\"created_by\":\"/bin/sh -c #(nop)  EXPOSE 2379 2380 4001 7001\",\"empty_layer\":true},{\"created\":\"2020-08-27T13:47:34.464243112Z\",\"created_by\":\"/bin/sh -c #(nop) COPY multi:db2195e6dcec23938ed1dcaf030f0ec72e3ae97af5ef0c8a74c72a2a097ec8fd in /usr/local/bin/ \"},{\"created\":\"2020-08-27T13:47:36.357785715Z\",\"created_by\":\"/bin/sh -c #(nop) COPY file:cf93caea4c1e5a0eaaa9cf9147de2dd27a8545620caa35f0a592e42099d44ed0 in /bin/ \"},{\"created\":\"2020-08-27T13:47:36.718716443Z\",\"created_by\":\"/bin/sh -c #(nop) COPY multi:a1881dd50cdbd92225791143eb662674b0a4155ae2577453cd6fae7dab43f859 in /usr/local/bin/ \"}]}},},}" id=e2460dee-3507-4207-b17c-3938ebdfb362 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.268116967Z" level=info msg="Checking image status: k8s.gcr.io/coredns:1.7.0" id=163de9b6-52f5-42c3-a032-763f27029f6f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:23 minikube crio[348]: time="2021-01-20 12:21:23.269492940Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16,RepoTags:[k8s.gcr.io/coredns:1.7.0],RepoDigests:[k8s.gcr.io/coredns@sha256:242d440e3192ffbcecd40e9536891f4d9be46a650363f3a004497c2070f96f5a k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c],Size_:45358048,Uid:nil,Username:,Spec:nil,},Info:map[string]string{info: {\"imageSpec\":{\"created\":\"2020-06-18T00:55:59.462921357Z\",\"architecture\":\"amd64\",\"os\":\"linux\",\"config\":{\"ExposedPorts\":{\"53/tcp\":{},\"53/udp\":{}},\"Env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\"],\"Entrypoint\":[\"/coredns\"]},\"rootfs\":{\"type\":\"layers\",\"diff_ids\":[\"sha256:225df95e717ceb672de0e45aa49f352eace21512240205972aca0fccc9612722\",\"sha256:96d17b0b58a73f2d35707e37e5911f65cca8b4467dc54420b811d07784caee64\"]},\"history\":[{\"created\":\"2019-07-28T20:18:27.224802511Z\",\"created_by\":\"/bin/sh -c #(nop) COPY dir:0284c6efacdcf29cb632136811b7130fbe84998aefe3d1c36a0570424c7a2c92 in /etc/ssl/certs \"},{\"created\":\"2020-06-18T00:55:58.768320531Z\",\"created_by\":\"/bin/sh -c #(nop) ADD file:a39148838cdb612e6ae2cfd5672098607e86503673395922b6521249a1edbf6a in /coredns \"},{\"created\":\"2020-06-18T00:55:59.195850503Z\",\"created_by\":\"/bin/sh -c #(nop)  EXPOSE 53 53/udp\",\"empty_layer\":true},{\"created\":\"2020-06-18T00:55:59.462921357Z\",\"created_by\":\"/bin/sh -c #(nop)  ENTRYPOINT [\\\"/coredns\\\"]\",\"empty_layer\":true}]}},},}" id=163de9b6-52f5-42c3-a032-763f27029f6f name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:30 minikube crio[348]: time="2021-01-20 12:21:30.671109178Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2a08757c-fd00-4547-baa1-3592105854e9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:30 minikube crio[348]: time="2021-01-20 12:21:30.674233172Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2a08757c-fd00-4547-baa1-3592105854e9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:37 minikube crio[348]: time="2021-01-20 12:21:37.877237965Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=f87be7de-7ac5-4d97-b448-e3f104669824 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:37 minikube crio[348]: time="2021-01-20 12:21:37.879052963Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f87be7de-7ac5-4d97-b448-e3f104669824 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:45 minikube crio[348]: time="2021-01-20 12:21:45.129694608Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=a9332e7f-d0e3-46c8-aad2-f4de585f7f17 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:45 minikube crio[348]: time="2021-01-20 12:21:45.131176998Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a9332e7f-d0e3-46c8-aad2-f4de585f7f17 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:52 minikube crio[348]: time="2021-01-20 12:21:52.415419737Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=a93045e9-3f92-4a41-8002-b02f9ddce9ec name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:52 minikube crio[348]: time="2021-01-20 12:21:52.417820270Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=a93045e9-3f92-4a41-8002-b02f9ddce9ec name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:59 minikube crio[348]: time="2021-01-20 12:21:59.591031289Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=f4ebcbb7-9704-4edc-a446-7ef137a54800 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:21:59 minikube crio[348]: time="2021-01-20 12:21:59.593233751Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f4ebcbb7-9704-4edc-a446-7ef137a54800 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:06 minikube crio[348]: time="2021-01-20 12:22:06.916234923Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=60305a64-69bd-4e3c-85e8-3caead143c7e name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:06 minikube crio[348]: time="2021-01-20 12:22:06.917769671Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=60305a64-69bd-4e3c-85e8-3caead143c7e name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:14 minikube crio[348]: time="2021-01-20 12:22:14.181090793Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=06aa4a5e-a5bb-4941-afa8-99edfe037b2d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:14 minikube crio[348]: time="2021-01-20 12:22:14.183145411Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=06aa4a5e-a5bb-4941-afa8-99edfe037b2d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:21 minikube crio[348]: time="2021-01-20 12:22:21.354046952Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e583bbe1-ca13-4c92-9299-b4958438e879 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:21 minikube crio[348]: time="2021-01-20 12:22:21.355940956Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e583bbe1-ca13-4c92-9299-b4958438e879 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:28 minikube crio[348]: time="2021-01-20 12:22:28.635802073Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=267a4266-a21b-4e54-b952-c581d42bcf15 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:28 minikube crio[348]: time="2021-01-20 12:22:28.637521992Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=267a4266-a21b-4e54-b952-c581d42bcf15 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:35 minikube crio[348]: time="2021-01-20 12:22:35.855642602Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=19727f2f-9267-4b1b-8e6e-fcd76ef79417 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:35 minikube crio[348]: time="2021-01-20 12:22:35.857163131Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=19727f2f-9267-4b1b-8e6e-fcd76ef79417 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:43 minikube crio[348]: time="2021-01-20 12:22:43.128764012Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=fc9bb555-09bb-4fba-84a7-49923fbb1b9c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:43 minikube crio[348]: time="2021-01-20 12:22:43.130488118Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=fc9bb555-09bb-4fba-84a7-49923fbb1b9c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:50 minikube crio[348]: time="2021-01-20 12:22:50.400705617Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=b26d1119-bf4f-436a-a654-4379b30b17bc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:50 minikube crio[348]: time="2021-01-20 12:22:50.402515257Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b26d1119-bf4f-436a-a654-4379b30b17bc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:57 minikube crio[348]: time="2021-01-20 12:22:57.605467014Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=4bd095e5-f0fe-4af1-b5e4-29080a74a742 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:22:57 minikube crio[348]: time="2021-01-20 12:22:57.607206839Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4bd095e5-f0fe-4af1-b5e4-29080a74a742 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:04 minikube crio[348]: time="2021-01-20 12:23:04.884236154Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=7d4afab9-9ba0-4232-99ba-206e0a1b9b5a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:04 minikube crio[348]: time="2021-01-20 12:23:04.885878806Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7d4afab9-9ba0-4232-99ba-206e0a1b9b5a name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:12 minikube crio[348]: time="2021-01-20 12:23:12.128986632Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=097d0e34-3de4-417e-8e67-b66049e87a80 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:12 minikube crio[348]: time="2021-01-20 12:23:12.130899042Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=097d0e34-3de4-417e-8e67-b66049e87a80 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:19 minikube crio[348]: time="2021-01-20 12:23:19.410836288Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=521097a4-34ec-48fa-a8e6-5e4bc6b1f1a8 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:19 minikube crio[348]: time="2021-01-20 12:23:19.412727859Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=521097a4-34ec-48fa-a8e6-5e4bc6b1f1a8 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:26 minikube crio[348]: time="2021-01-20 12:23:26.686259661Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8b076a56-5588-4d1f-8d59-242548868f07 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:26 minikube crio[348]: time="2021-01-20 12:23:26.688348234Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8b076a56-5588-4d1f-8d59-242548868f07 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:33 minikube crio[348]: time="2021-01-20 12:23:33.850015839Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=f06be0dc-13e3-4d3b-ab32-9a0b8b5ec4da name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:33 minikube crio[348]: time="2021-01-20 12:23:33.852003899Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f06be0dc-13e3-4d3b-ab32-9a0b8b5ec4da name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:41 minikube crio[348]: time="2021-01-20 12:23:41.186079239Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=7f29058f-63c1-4bb2-99c2-2270955068b4 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:41 minikube crio[348]: time="2021-01-20 12:23:41.188454229Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7f29058f-63c1-4bb2-99c2-2270955068b4 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:48 minikube crio[348]: time="2021-01-20 12:23:48.366583879Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e2b73d93-8fc8-4ad3-810c-ebdacb2b3a10 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:48 minikube crio[348]: time="2021-01-20 12:23:48.369056923Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e2b73d93-8fc8-4ad3-810c-ebdacb2b3a10 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:55 minikube crio[348]: time="2021-01-20 12:23:55.622907810Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=ca3ba6d4-e968-416b-99f3-03edae93a110 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:23:55 minikube crio[348]: time="2021-01-20 12:23:55.625001445Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ca3ba6d4-e968-416b-99f3-03edae93a110 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:02 minikube crio[348]: time="2021-01-20 12:24:02.818522966Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=5fde8aff-7847-4fba-bb2c-6858fec78bfd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:02 minikube crio[348]: time="2021-01-20 12:24:02.819990497Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5fde8aff-7847-4fba-bb2c-6858fec78bfd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:10 minikube crio[348]: time="2021-01-20 12:24:10.130943825Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=b1446a00-3f39-41de-a2de-7db57d5a29dc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:10 minikube crio[348]: time="2021-01-20 12:24:10.132579447Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b1446a00-3f39-41de-a2de-7db57d5a29dc name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:17 minikube crio[348]: time="2021-01-20 12:24:17.394198265Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=40ae31cd-6710-4996-843c-0b774ce77ef0 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:17 minikube crio[348]: time="2021-01-20 12:24:17.395928704Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=40ae31cd-6710-4996-843c-0b774ce77ef0 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:24 minikube crio[348]: time="2021-01-20 12:24:24.611765426Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=1f757ca1-7fd2-4a36-bef7-7749b6f95670 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 12:24:24 minikube crio[348]: time="2021-01-20 12:24:24.614677958Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1f757ca1-7fd2-4a36-bef7-7749b6f95670 name=/runtime.v1alpha2.ImageService/ImageStatus

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

==> describe nodes <==
E0120 13:24:30.906916   31704 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
[Jan20 12:05] x86/cpu: VMX (outside TXT) disabled by BIOS
[  +0.023633] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.798387] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.210508] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000563] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.007364] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000059] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000044] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000042] acpi PNP0C14:07: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000062] acpi PNP0C14:08: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.003125] usb: port power management may be unreliable
[  +0.110831] nvme nvme0: missing or invalid SUBNQN field.
[ +13.361372] kauditd_printk_skb: 18 callbacks suppressed
[  +0.915126] systemd-sysv-generator[977]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000028] systemd-sysv-generator[977]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.060945] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.354672] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.071022] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
[  +0.078483] r8152 4-2.1.2:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr
[  +0.050122] resource sanity check: requesting [mem 0xfed10000-0xfed15fff], which spans more than pnp 00:07 [mem 0xfed10000-0xfed13fff]
[  +0.000008] caller snb_uncore_imc_init_box+0x6a/0xa0 [intel_uncore] mapping multiple BARs
[  +0.277200] thermal thermal_zone13: failed to read out thermal zone (-61)
[  +0.301940] sof-audio-pci 0000:00:1f.3: ASoC: Parent card not yet available, widget card binding deferred
[  +0.256035] snd_hda_codec_realtek ehdaudio0D0: ASoC: sink widget AIF1TX overwritten
[  +0.000006] snd_hda_codec_realtek ehdaudio0D0: ASoC: source widget AIF1RX overwritten
[  +0.000339] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi3 overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi2 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi1 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Codec Output Pin1 overwritten
[  +0.000002] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Codec Input Pin1 overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Analog Codec Playback overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Digital Codec Playback overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Alt Analog Codec Playback overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Analog Codec Capture overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Digital Codec Capture overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Alt Analog Codec Capture overwritten
[  +0.006852] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[  +0.005622] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[  +7.584854] usb 3-2.1.1.2: 1:1: cannot get freq at ep 0x81
[Jan20 12:06] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 0000000023227777 port 1: DPCD read on addr 0x4b0 for 1 bytes NAKed
[  +0.030359] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 0000000023227777 port 3: DPCD read on addr 0x4b0 for 1 bytes NAKed

==> kernel <==
 12:24:30 up 19 min,  0 users,  load average: 1.04, 0.82, 0.52
Linux minikube 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kubelet <==
-- Logs begin at Wed 2021-01-20 12:19:18 UTC, end at Wed 2021-01-20 12:24:30 UTC. --
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
Jan 20 12:24:24 minikube kubelet[6664]: goroutine 633 [select]:
Jan 20 12:24:24 minikube kubelet[6664]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start.func1(0xc000d41940, 0xc000d923c0)
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:91 +0x125
Jan 20 12:24:24 minikube kubelet[6664]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:89 +0x477
Jan 20 12:24:24 minikube kubelet[6664]: goroutine 634 [select]:
Jan 20 12:24:24 minikube kubelet[6664]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers.func1(0xc000b4e780, 0xc00066cbe0, 0xc000ee00c0)
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1164 +0xe5
Jan 20 12:24:24 minikube kubelet[6664]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1162 +0x21d
Jan 20 12:24:24 minikube kubelet[6664]: goroutine 635 [select]:
Jan 20 12:24:24 minikube kubelet[6664]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc000b4e780, 0xc000c163c0)
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:385 +0x145
Jan 20 12:24:24 minikube kubelet[6664]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x585
Jan 20 12:24:24 minikube kubelet[6664]: goroutine 636 [select]:
Jan 20 12:24:24 minikube kubelet[6664]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc000b4e780, 0xc000c16420)
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Jan 20 12:24:24 minikube kubelet[6664]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 12:24:24 minikube kubelet[6664]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
Jan 20 12:24:25 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25.
Jan 20 12:24:25 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 20 12:24:25 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 20 12:24:25 minikube kubelet[6802]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 12:24:25 minikube kubelet[6802]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.531087    6802 server.go:416] Version: v1.20.0
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.533397    6802 server.go:837] Client rotation is on, will bootstrap in background
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.538478    6802 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 20 12:24:25 minikube kubelet[6802]: W0120 12:24:25.539153    6802 manager.go:159] Cannot detect current cgroup on cgroup v2
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.539216    6802 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
Jan 20 12:24:25 minikube kubelet[6802]: W0120 12:24:25.588568    6802 fs.go:208] stat failed on /dev/mapper/luks-04d26ab7-d155-44f4-906f-c64d950aa812 with error: no such file or directory
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.604822    6802 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.604931    6802 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: []
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.604948    6802 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.604981    6802 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.604989    6802 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.604993    6802 container_manager_linux.go:315] Creating device plugin manager: true
Jan 20 12:24:25 minikube kubelet[6802]: W0120 12:24:25.605048    6802 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605074    6802 remote_runtime.go:62] parsed scheme: ""
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605079    6802 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605095    6802 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605101    6802 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jan 20 12:24:25 minikube kubelet[6802]: W0120 12:24:25.605123    6802 util_unix.go:103] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605131    6802 remote_image.go:50] parsed scheme: ""
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605135    6802 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605141    6802 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/crio/crio.sock  <nil> 0 <nil>}] <nil> <nil>}
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605144    6802 clientconn.go:948] ClientConn switching balancer to "pick_first"
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605168    6802 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.605181    6802 kubelet.go:273] Watching apiserver
Jan 20 12:24:25 minikube kubelet[6802]: E0120 12:24:25.605869    6802 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:25 minikube kubelet[6802]: E0120 12:24:25.605881    6802 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:25 minikube kubelet[6802]: E0120 12:24:25.605944    6802 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:25 minikube kubelet[6802]: I0120 12:24:25.610380    6802 kuberuntime_manager.go:216] Container runtime cri-o initialized, version: 1.19.0, apiVersion: v1alpha1
Jan 20 12:24:26 minikube kubelet[6802]: E0120 12:24:26.707004    6802 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:26 minikube kubelet[6802]: E0120 12:24:26.740022    6802 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:27 minikube kubelet[6802]: E0120 12:24:27.007532    6802 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:28 minikube kubelet[6802]: E0120 12:24:28.504628    6802 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:29 minikube kubelet[6802]: E0120 12:24:29.109324    6802 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Jan 20 12:24:29 minikube kubelet[6802]: E0120 12:24:29.751350    6802 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused

❗  unable to fetch logs for: describe nodes
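
For reference, retrying with the suggestion from the output above would look like this (just the suggested invocation quoted from the message, not a confirmed fix):

$ minikube start --driver=podman --container-runtime=cri-o --extra-config=kubelet.cgroup-driver=systemd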

@afbjorklund (Collaborator) commented Jan 20, 2021

Thanks for testing. You can upgrade cri-o if you want (just to see if it helps), but it was working with 1.19 here.

Update: No issues when running in a standard Fedora 33 vagrant box either (podman-2.2.1-1.fc33.x86_64)

😄 minikube v1.16.0 on Fedora 33 (vbox/amd64)
✨ Using the podman (experimental) driver based on user configuration
🎁 Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

(skipped some steps there; roughly filled in after the Vagrantfile below)

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/33-cloud-base"

  config.vm.network "private_network", ip: "192.168.50.4"

  config.vm.provider "virtualbox" do |vb|
     vb.cpus = 2
     vb.memory = "2048"
  end

  config.vm.provision "shell", inline: <<-SHELL
    yum install -y net-tools conntrack-tools

    # persistent equivalent of 'setenforce 0': switch SELinux to permissive mode
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  SHELL
end
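
For completeness, the skipped steps were roughly the following (a sketch, assuming the upstream minikube v1.16.0 release binary and podman from the Fedora repos, not the exact commands used):

vagrant up
vagrant ssh

# inside the box (assumption: package name and download URL as in the upstream docs)
sudo dnf install -y podman
curl -LO https://storage.googleapis.com/minikube/releases/v1.16.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=podman --container-runtime=cri-o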

@mrizzi (Author) commented Jan 20, 2021

Well, good to know it works with an F33 VM.
I installed this laptop only a few days ago, so it should be about as standard an install as the VM was, I believe.
I just checked, and I'm using the same podman package:

$ dnf list podman
Last metadata expiration check: 2:10:18 ago on Wed 20 Jan 2021 12:08:01 PM CET.
Installed Packages
podman.x86_64                                         2:2.2.1-1.fc33                                          @updates

@mrizzi (Author) commented Jan 20, 2021

Thanks for the Vagrant config: I was missing conntrack-tools, so I installed it, but that didn't solve the issue for me.

$ sudo setenforce 0
[sudo] password for mrizzi: 
$ getenforce
Permissive
$ dnf list net-tools conntrack-tools
Last metadata expiration check: 5:56:17 ago on Wed 20 Jan 2021 12:08:01 PM CET.
Installed Packages
conntrack-tools.x86_64                               1.4.5-6.fc33                                            @fedora  
net-tools.x86_64                                     2.0-0.58.20160912git.fc33                               @anaconda

but the output looks the same to me:

I0120 18:27:25.804612  169041 out.go:221] Setting OutFile to fd 1 ...
I0120 18:27:25.804825  169041 out.go:273] isatty.IsTerminal(1) = true
I0120 18:27:25.804834  169041 out.go:234] Setting ErrFile to fd 2...
I0120 18:27:25.804839  169041 out.go:273] isatty.IsTerminal(2) = true
I0120 18:27:25.804919  169041 root.go:280] Updating PATH: /home/mrizzi/.minikube/bin
W0120 18:27:25.805001  169041 root.go:255] Error reading config file at /home/mrizzi/.minikube/config/config.json: open /home/mrizzi/.minikube/config/config.json: no such file or directory
I0120 18:27:25.805424  169041 out.go:228] Setting JSON to false
I0120 18:27:25.819008  169041 start.go:104] hostinfo: {"hostname":"fedora-p1","uptime":11161,"bootTime":1611152484,"procs":493,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.7-200.fc33.x86_64","virtualizationSystem":"","virtualizationRole":"","hostid":"2a0ffbe8-79f8-479f-b627-66a4d7b9718b"}
I0120 18:27:25.819531  169041 start.go:114] virtualization:  
I0120 18:27:25.819817  169041 out.go:119] 😄  minikube v1.16.0 on Fedora 33
😄  minikube v1.16.0 on Fedora 33
I0120 18:27:25.819938  169041 driver.go:303] Setting default libvirt URI to qemu:///system
I0120 18:27:25.819976  169041 notify.go:126] Checking for updates...
I0120 18:27:25.900684  169041 podman.go:118] podman version: 2.2.1
I0120 18:27:25.900804  169041 out.go:119] ✨  Using the podman (experimental) driver based on user configuration
✨  Using the podman (experimental) driver based on user configuration
I0120 18:27:25.900828  169041 start.go:277] selected driver: podman
I0120 18:27:25.900834  169041 start.go:686] validating driver "podman" against <nil>
I0120 18:27:25.900847  169041 start.go:697] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0120 18:27:25.900956  169041 cli_runner.go:111] Run: sudo -n podman system info --format json
I0120 18:27:25.990673  169041 info.go:273] podman info: {Host:{BuildahVersion:1.18.0 CgroupVersion:v2 Conmon:{Package:conmon-2.0.21-3.fc33.x86_64 Path:/usr/bin/conmon Version:conmon version 2.0.21, commit: 0f53fb68333bdead5fe4dc5175703e22cf9882ab} Distribution:{Distribution:fedora Version:33} MemFree:14963048448 MemTotal:33410228224 OCIRuntime:{Name:crun Package:crun-0.16-3.fc33.x86_64 Path:/usr/bin/crun Version:crun version 0.16
commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:4294963200 SwapTotal:4294963200 Arch:amd64 Cpus:12 Eventlogger:journald Hostname:fedora-p1 Kernel:5.10.7-200.fc33.x86_64 Os:linux Rootless:false Uptime:3h 6m 1.41s (Approximately 0.12 days)} Registries:{Search:[registry.fedoraproject.org registry.access.redhat.com registry.centos.org docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:1} RunRoot:/var/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0120 18:27:25.990744  169041 start_flags.go:235] no existing cluster config was found, will generate one from the flags 
I0120 18:27:25.991390  169041 start_flags.go:253] Using suggested 7900MB memory alloc based on sys=31862MB, container=31862MB
I0120 18:27:25.991485  169041 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I0120 18:27:25.991503  169041 cni.go:74] Creating CNI manager for ""
I0120 18:27:25.991508  169041 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 18:27:25.991517  169041 start_flags.go:362] Found "CNI" CNI - setting NetworkPlugin=cni
I0120 18:27:25.991525  169041 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 18:27:25.991641  169041 out.go:119] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0120 18:27:25.991654  169041 cache.go:112] Driver isn't docker, skipping base image download
I0120 18:27:25.991663  169041 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 18:27:26.168258  169041 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 18:27:26.168329  169041 cache.go:54] Caching tarball of preloaded images
I0120 18:27:26.168417  169041 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 18:27:26.309493  169041 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 18:27:26.309855  169041 out.go:119] 💾  Downloading Kubernetes v1.20.0 preload ...
💾  Downloading Kubernetes v1.20.0 preload ...
I0120 18:27:26.310052  169041 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 -> /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
    > preloaded-images-k8s-v8-v1....: 555.86 MiB / 555.86 MiB  100.00% 8.22 MiB
I0120 18:28:34.676517  169041 preload.go:160] saving checksum for preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 18:28:34.982392  169041 preload.go:177] verifying checksumm of /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
I0120 18:28:36.002373  169041 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.0 on crio
I0120 18:28:36.002596  169041 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 18:28:36.002615  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/config.json: {Name:mk473a46e0a7385fc7b1c17eee8567719c4a2678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:36.002855  169041 cache.go:185] Successfully downloaded all kic artifacts
I0120 18:28:36.002870  169041 start.go:314] acquiring machines lock for minikube: {Name:mk6d494bfb92177bc8505684a7c42000ca387cb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 18:28:36.002904  169041 start.go:318] acquired machines lock for "minikube" in 26.151µs
I0120 18:28:36.002919  169041 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I0120 18:28:36.002957  169041 start.go:127] createHost starting for "" (driver="podman")
I0120 18:28:36.003050  169041 out.go:119] 🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
I0120 18:28:36.003162  169041 start.go:164] libmachine.API.Create for "minikube" (driver="podman")
I0120 18:28:36.003189  169041 client.go:165] LocalClient.Create starting
I0120 18:28:36.003226  169041 main.go:119] libmachine: Creating CA: /home/mrizzi/.minikube/certs/ca.pem
I0120 18:28:36.120722  169041 main.go:119] libmachine: Creating client certificate: /home/mrizzi/.minikube/certs/cert.pem
I0120 18:28:36.439379  169041 cli_runner.go:111] Run: sudo -n podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0120 18:28:36.513394  169041 network_create.go:59] Found existing network {name:minikube subnet:0xc001352060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:0}
I0120 18:28:36.513424  169041 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I0120 18:28:36.513500  169041 cli_runner.go:111] Run: sudo -n podman ps -a --format {{.Names}}
I0120 18:28:36.586469  169041 cli_runner.go:111] Run: sudo -n podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0120 18:28:36.677520  169041 oci.go:102] Successfully created a podman volume minikube
I0120 18:28:36.677605  169041 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -d /var/lib
I0120 18:28:37.213276  169041 oci.go:106] Successfully prepared a podman volume minikube
W0120 18:28:37.213318  169041 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0120 18:28:37.213324  169041 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
W0120 18:28:37.213331  169041 oci.go:201] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0120 18:28:37.213367  169041 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 18:28:37.213375  169041 kic.go:159] Starting extracting preloaded images to volume ...
I0120 18:28:37.213584  169041 cli_runner.go:111] Run: sudo -n podman info --format "'{{json .SecurityOptions}}'"
I0120 18:28:37.213588  169041 cli_runner.go:111] Run: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir
W0120 18:28:37.306577  169041 cli_runner.go:149] sudo -n podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0120 18:28:37.306739  169041 cli_runner.go:111] Run: sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4
I0120 18:28:37.855800  169041 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Running}}
I0120 18:28:37.951129  169041 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 18:28:38.049826  169041 cli_runner.go:111] Run: sudo -n podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0120 18:28:38.229946  169041 oci.go:246] the created container "minikube" has a running status.
I0120 18:28:38.229971  169041 kic.go:190] Creating ssh key for kic: /home/mrizzi/.minikube/machines/minikube/id_rsa...
I0120 18:28:38.337470  169041 kic_runner.go:187] podman (temp): /home/mrizzi/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0120 18:28:38.337862  169041 kic_runner.go:217] Run: /usr/bin/sudo -n podman cp /tmp/tmpf-memory-asset145623890 minikube:/home/docker/.ssh/authorized_keys
I0120 18:28:38.656027  169041 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 18:28:38.739487  169041 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0120 18:28:38.739518  169041 kic_runner.go:114] Args: [sudo -n podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0120 18:28:40.706380  169041 cli_runner.go:155] Completed: sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.492764618s)
I0120 18:28:40.706414  169041 kic.go:168] duration metric: took 3.493037 seconds to extract preloaded images to volume
I0120 18:28:40.706624  169041 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
I0120 18:28:40.786419  169041 machine.go:88] provisioning docker machine ...
I0120 18:28:40.786446  169041 ubuntu.go:169] provisioning hostname "minikube"
I0120 18:28:40.786571  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:40.858505  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:40.932588  169041 main.go:119] libmachine: Using SSH client type: native
I0120 18:28:40.932734  169041 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 35611 <nil> <nil>}
I0120 18:28:40.932750  169041 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0120 18:28:41.072095  169041 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0120 18:28:41.072266  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:41.146471  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:41.223439  169041 main.go:119] libmachine: Using SSH client type: native
I0120 18:28:41.223577  169041 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 35611 <nil> <nil>}
I0120 18:28:41.223596  169041 main.go:119] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0120 18:28:41.343631  169041 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0120 18:28:41.343700  169041 ubuntu.go:175] set auth options {CertDir:/home/mrizzi/.minikube CaCertPath:/home/mrizzi/.minikube/certs/ca.pem CaPrivateKeyPath:/home/mrizzi/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/mrizzi/.minikube/machines/server.pem ServerKeyPath:/home/mrizzi/.minikube/machines/server-key.pem ClientKeyPath:/home/mrizzi/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/mrizzi/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/mrizzi/.minikube}
I0120 18:28:41.343753  169041 ubuntu.go:177] setting up certificates
I0120 18:28:41.343779  169041 provision.go:83] configureAuth start
I0120 18:28:41.343944  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 18:28:41.426481  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 18:28:41.502433  169041 provision.go:137] copyHostCerts
I0120 18:28:41.502494  169041 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/ca.pem --> /home/mrizzi/.minikube/ca.pem (1078 bytes)
I0120 18:28:41.502643  169041 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/cert.pem --> /home/mrizzi/.minikube/cert.pem (1123 bytes)
I0120 18:28:41.502705  169041 exec_runner.go:152] cp: /home/mrizzi/.minikube/certs/key.pem --> /home/mrizzi/.minikube/key.pem (1679 bytes)
I0120 18:28:41.502749  169041 provision.go:111] generating server cert: /home/mrizzi/.minikube/machines/server.pem ca-key=/home/mrizzi/.minikube/certs/ca.pem private-key=/home/mrizzi/.minikube/certs/ca-key.pem org=mrizzi.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0120 18:28:41.587236  169041 provision.go:165] copyRemoteCerts
I0120 18:28:41.587371  169041 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 18:28:41.587412  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:41.658451  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:41.733456  169041 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:35611 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 18:28:41.826279  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0120 18:28:41.857829  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0120 18:28:41.871397  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0120 18:28:41.883595  169041 provision.go:86] duration metric: configureAuth took 539.795548ms
I0120 18:28:41.883617  169041 ubuntu.go:193] setting minikube options for container-runtime
I0120 18:28:41.883863  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:41.955543  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:42.032416  169041 main.go:119] libmachine: Using SSH client type: native
I0120 18:28:42.032552  169041 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x80b6c0] 0x80b680 <nil>  [] 0s} 127.0.0.1 35611 <nil> <nil>}
I0120 18:28:42.032571  169041 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0120 18:28:42.155727  169041 main.go:119] libmachine: SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0120 18:28:42.155748  169041 machine.go:91] provisioned docker machine in 1.369312511s
I0120 18:28:42.155760  169041 client.go:168] LocalClient.Create took 6.152566255s
I0120 18:28:42.155772  169041 start.go:172] duration metric: libmachine.API.Create for "minikube" took 6.152608836s
I0120 18:28:42.155782  169041 start.go:268] post-start starting for "minikube" (driver="podman")
I0120 18:28:42.155790  169041 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 18:28:42.155838  169041 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 18:28:42.155897  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:42.228474  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:42.304388  169041 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:35611 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 18:28:42.392095  169041 ssh_runner.go:149] Run: cat /etc/os-release
I0120 18:28:42.396378  169041 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0120 18:28:42.396436  169041 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0120 18:28:42.396470  169041 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0120 18:28:42.396490  169041 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I0120 18:28:42.396510  169041 filesync.go:118] Scanning /home/mrizzi/.minikube/addons for local assets ...
I0120 18:28:42.396611  169041 filesync.go:118] Scanning /home/mrizzi/.minikube/files for local assets ...
I0120 18:28:42.396681  169041 start.go:271] post-start completed in 240.887695ms
I0120 18:28:42.397203  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 18:28:42.477476  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 18:28:42.557466  169041 profile.go:147] Saving config to /home/mrizzi/.minikube/profiles/minikube/config.json ...
I0120 18:28:42.557776  169041 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0120 18:28:42.557829  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:42.632564  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:42.712407  169041 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:35611 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 18:28:42.798673  169041 start.go:130] duration metric: createHost completed in 6.79569811s
I0120 18:28:42.798726  169041 start.go:81] releasing machines lock for "minikube", held for 6.795806522s
I0120 18:28:42.798972  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I0120 18:28:42.921409  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0120 18:28:43.000677  169041 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0120 18:28:43.000729  169041 ssh_runner.go:149] Run: systemctl --version
I0120 18:28:43.000743  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:43.000788  169041 cli_runner.go:111] Run: sudo -n podman version --format {{.Version}}
I0120 18:28:43.078462  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:43.131473  169041 cli_runner.go:111] Run: sudo -n podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0120 18:28:43.159480  169041 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:35611 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 18:28:43.210399  169041 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:35611 SSHKeyPath:/home/mrizzi/.minikube/machines/minikube/id_rsa Username:docker}
I0120 18:28:43.239564  169041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 18:28:43.246217  169041 ssh_runner.go:149] Run: sudo systemctl stop -f containerd
I0120 18:28:43.486826  169041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0120 18:28:43.515856  169041 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
I0120 18:28:43.540023  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0120 18:28:43.567028  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
I0120 18:28:43.585182  169041 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 18:28:43.599799  169041 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0120 18:28:43.611388  169041 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0120 18:28:43.670531  169041 ssh_runner.go:149] Run: sudo systemctl start crio
I0120 18:28:43.818535  169041 ssh_runner.go:149] Run: crio --version
I0120 18:28:43.857266  169041 out.go:119] 🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
I0120 18:28:43.857357  169041 cli_runner.go:111] Run: sudo -n podman container inspect --format {{.NetworkSettings.Gateway}} minikube
I0120 18:28:43.933530  169041 ssh_runner.go:149] Run: grep <nil>	host.minikube.internal$ /etc/hosts
I0120 18:28:43.936018  169041 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "<nil>	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 18:28:43.942344  169041 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 18:28:43.942383  169041 preload.go:105] Found local preload: /home/mrizzi/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-cri-o-overlay-amd64.tar.lz4
I0120 18:28:43.942442  169041 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 18:28:43.974057  169041 crio.go:345] all images are preloaded for cri-o runtime.
I0120 18:28:43.974074  169041 crio.go:260] Images already preloaded, skipping extraction
I0120 18:28:43.974116  169041 ssh_runner.go:149] Run: sudo crictl images --output json
I0120 18:28:43.983631  169041 crio.go:345] all images are preloaded for cri-o runtime.
I0120 18:28:43.983653  169041 cache_images.go:74] Images are preloaded, skipping loading
I0120 18:28:43.983703  169041 ssh_runner.go:149] Run: crio config
I0120 18:28:44.024644  169041 cni.go:74] Creating CNI manager for ""
I0120 18:28:44.024658  169041 cni.go:120] "podman" driver + crio runtime found, recommending kindnet
I0120 18:28:44.024668  169041 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 18:28:44.024679  169041 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0120 18:28:44.024781  169041 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 192.168.49.2:10249

I0120 18:28:44.024870  169041 kubeadm.go:862] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0120 18:28:44.024915  169041 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0120 18:28:44.029782  169041 binaries.go:44] Found k8s binaries, skipping transfer
I0120 18:28:44.029832  169041 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 18:28:44.034851  169041 ssh_runner.go:310] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
I0120 18:28:44.044454  169041 ssh_runner.go:310] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0120 18:28:44.054594  169041 ssh_runner.go:310] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1843 bytes)
I0120 18:28:44.065282  169041 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
I0120 18:28:44.067358  169041 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0120 18:28:44.073936  169041 certs.go:52] Setting up /home/mrizzi/.minikube/profiles/minikube for IP: 192.168.49.2
I0120 18:28:44.073968  169041 certs.go:173] generating minikubeCA CA: /home/mrizzi/.minikube/ca.key
I0120 18:28:44.448133  169041 crypto.go:157] Writing cert to /home/mrizzi/.minikube/ca.crt ...
I0120 18:28:44.448153  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.crt: {Name:mke03e9a1920afba460c060be5f4b6769ef644b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.448392  169041 crypto.go:165] Writing key to /home/mrizzi/.minikube/ca.key ...
I0120 18:28:44.448403  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/ca.key: {Name:mkb240f7f8e6f82e4d610aab52b47468a1329330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.448482  169041 certs.go:173] generating proxyClientCA CA: /home/mrizzi/.minikube/proxy-client-ca.key
I0120 18:28:44.507594  169041 crypto.go:157] Writing cert to /home/mrizzi/.minikube/proxy-client-ca.crt ...
I0120 18:28:44.507614  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.crt: {Name:mk4174df0f1b4beaf8e5a275fbdf42244be71f15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.507778  169041 crypto.go:165] Writing key to /home/mrizzi/.minikube/proxy-client-ca.key ...
I0120 18:28:44.507787  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/proxy-client-ca.key: {Name:mk5e6950da80fd9764adae2b6dd79810410ec3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.507877  169041 certs.go:277] generating minikube-user signed cert: /home/mrizzi/.minikube/profiles/minikube/client.key
I0120 18:28:44.507884  169041 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/client.crt with IP's: []
I0120 18:28:44.787190  169041 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/client.crt ...
I0120 18:28:44.787210  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.crt: {Name:mk2ff7788ac9d0de0cd174f0617feb2f1dd707c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.787356  169041 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/client.key ...
I0120 18:28:44.787366  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/client.key: {Name:mkedf501c0d6a07a0aa78a08660f8e8e7cc0c918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.787452  169041 certs.go:277] generating minikube signed cert: /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0120 18:28:44.787459  169041 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0120 18:28:44.944614  169041 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0120 18:28:44.944634  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk422858b15bd0eaea2b6fcba46c45cc115c0286 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.944772  169041 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0120 18:28:44.944782  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk0658a97766b6658717586fb5056c92e38378bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:44.944844  169041 certs.go:288] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.crt
I0120 18:28:44.944928  169041 certs.go:292] copying /home/mrizzi/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/mrizzi/.minikube/profiles/minikube/apiserver.key
I0120 18:28:44.944992  169041 certs.go:277] generating aggregator signed cert: /home/mrizzi/.minikube/profiles/minikube/proxy-client.key
I0120 18:28:44.944999  169041 crypto.go:69] Generating cert /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0120 18:28:45.191599  169041 crypto.go:157] Writing cert to /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt ...
I0120 18:28:45.191626  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt: {Name:mka2338a78f50214ee1948cd9bf268c531eaa3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:45.191837  169041 crypto.go:165] Writing key to /home/mrizzi/.minikube/profiles/minikube/proxy-client.key ...
I0120 18:28:45.191851  169041 lock.go:36] WriteFile acquiring /home/mrizzi/.minikube/profiles/minikube/proxy-client.key: {Name:mk969b8bdb9a7c95302616c350453daaad785fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 18:28:45.192038  169041 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca-key.pem (1679 bytes)
I0120 18:28:45.192085  169041 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/ca.pem (1078 bytes)
I0120 18:28:45.192118  169041 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/cert.pem (1123 bytes)
I0120 18:28:45.192150  169041 certs.go:352] found cert: /home/mrizzi/.minikube/certs/home/mrizzi/.minikube/certs/key.pem (1679 bytes)
I0120 18:28:45.193086  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0120 18:28:45.207196  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0120 18:28:45.220691  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 18:28:45.233963  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0120 18:28:45.246576  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 18:28:45.259927  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0120 18:28:45.273239  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 18:28:45.287146  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0120 18:28:45.299478  169041 ssh_runner.go:310] scp /home/mrizzi/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 18:28:45.312137  169041 ssh_runner.go:310] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0120 18:28:45.322592  169041 ssh_runner.go:149] Run: openssl version
I0120 18:28:45.325919  169041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 18:28:45.331120  169041 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 18:28:45.333218  169041 certs.go:393] hashing: -rw-r--r--. 1 root root 1111 Jan 20 17:28 /usr/share/ca-certificates/minikubeCA.pem
I0120 18:28:45.333252  169041 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 18:28:45.336661  169041 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0120 18:28:45.341718  169041 kubeadm.go:364] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I0120 18:28:45.341766  169041 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0120 18:28:45.341802  169041 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 18:28:45.352493  169041 cri.go:76] found id: ""
I0120 18:28:45.352543  169041 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 18:28:45.357543  169041 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 18:28:45.362302  169041 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 18:28:45.362372  169041 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 18:28:45.367145  169041 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 18:28:45.367185  169041 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0120 18:28:45.542772  169041 out.go:140]     ▪ Generating certificates and keys ...
    ▪ Generating certificates and keys ...| I0120 18:28:48.012870  169041 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...\ W0120 18:30:43.034331  169041 out.go:181] 💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
	[WARNIN
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
	[WARNIN
I0120 18:30:43.034497  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
/ I0120 18:30:44.032001  169041 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0120 18:30:44.040600  169041 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
I0120 18:30:44.040675  169041 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 18:30:44.052588  169041 cri.go:76] found id: ""
I0120 18:30:44.052628  169041 kubeadm.go:213] ignoring SystemVerification for kubeadm because of podman driver
I0120 18:30:44.052680  169041 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 18:30:44.058324  169041 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 18:30:44.058367  169041 ssh_runner.go:236] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
\ I0120 18:30:44.232280  169041 out.go:140]     ▪ Generating certificates and keys ...

    ▪ Generating certificates and keys ...| I0120 18:30:45.146968  169041 out.go:140]     ▪ Booting up control plane ...

    ▪ Booting up control plane ...\ I0120 18:32:40.174766  169041 kubeadm.go:366] StartCluster complete in 3m54.833053477s
I0120 18:32:40.174798  169041 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0120 18:32:40.174863  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0120 18:32:40.186475  169041 cri.go:76] found id: ""
I0120 18:32:40.186502  169041 logs.go:206] 0 containers: []
W0120 18:32:40.186511  169041 logs.go:208] No container was found matching "kube-apiserver"
I0120 18:32:40.186520  169041 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0120 18:32:40.186568  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
I0120 18:32:40.197345  169041 cri.go:76] found id: ""
I0120 18:32:40.197366  169041 logs.go:206] 0 containers: []
W0120 18:32:40.197375  169041 logs.go:208] No container was found matching "etcd"
I0120 18:32:40.197385  169041 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0120 18:32:40.197437  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
I0120 18:32:40.208547  169041 cri.go:76] found id: ""
I0120 18:32:40.208566  169041 logs.go:206] 0 containers: []
W0120 18:32:40.208574  169041 logs.go:208] No container was found matching "coredns"
I0120 18:32:40.208602  169041 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0120 18:32:40.208645  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0120 18:32:40.218579  169041 cri.go:76] found id: ""
I0120 18:32:40.218598  169041 logs.go:206] 0 containers: []
W0120 18:32:40.218606  169041 logs.go:208] No container was found matching "kube-scheduler"
I0120 18:32:40.218615  169041 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0120 18:32:40.218659  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0120 18:32:40.228413  169041 cri.go:76] found id: ""
I0120 18:32:40.228435  169041 logs.go:206] 0 containers: []
W0120 18:32:40.228447  169041 logs.go:208] No container was found matching "kube-proxy"
I0120 18:32:40.228458  169041 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0120 18:32:40.228512  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0120 18:32:40.239013  169041 cri.go:76] found id: ""
I0120 18:32:40.239028  169041 logs.go:206] 0 containers: []
W0120 18:32:40.239035  169041 logs.go:208] No container was found matching "kubernetes-dashboard"
I0120 18:32:40.239043  169041 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0120 18:32:40.239087  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0120 18:32:40.248248  169041 cri.go:76] found id: ""
I0120 18:32:40.248263  169041 logs.go:206] 0 containers: []
W0120 18:32:40.248271  169041 logs.go:208] No container was found matching "storage-provisioner"
I0120 18:32:40.248279  169041 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0120 18:32:40.248329  169041 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0120 18:32:40.257763  169041 cri.go:76] found id: ""
I0120 18:32:40.257807  169041 logs.go:206] 0 containers: []
W0120 18:32:40.257822  169041 logs.go:208] No container was found matching "kube-controller-manager"
I0120 18:32:40.257836  169041 logs.go:120] Gathering logs for kubelet ...
I0120 18:32:40.257849  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0120 18:32:40.297503  169041 logs.go:120] Gathering logs for dmesg ...
I0120 18:32:40.297527  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0120 18:32:40.308553  169041 logs.go:120] Gathering logs for describe nodes ...
I0120 18:32:40.308575  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0120 18:32:40.353077  169041 logs.go:127] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: 
** stderr ** 
The connection to the server localhost:8443 was refused - did you specify the right host or port?

** /stderr **
I0120 18:32:40.353096  169041 logs.go:120] Gathering logs for CRI-O ...
I0120 18:32:40.353110  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0120 18:32:40.387340  169041 logs.go:120] Gathering logs for container status ...
I0120 18:32:40.387367  169041 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0120 18:32:40.398960  169041 out.go:294] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:
W0120 18:32:40.399058  169041 out.go:181] 

W0120 18:32:40.399193  169041 out.go:181] 💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

W0120 18:32:40.399306  169041 out.go:181] 

W0120 18:32:40.399328  169041 out.go:181] 😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
W0120 18:32:40.399368  169041 out.go:181] 👉  https://github.com/kubernetes/minikube/issues/new/choose
👉  https://github.com/kubernetes/minikube/issues/new/choose
I0120 18:32:40.400577  169041 out.go:119] 


W0120 18:32:40.400684  169041 out.go:181] ❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'


stderr:

W0120 18:32:40.400858  169041 out.go:181] 💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0120 18:32:40.400920  169041 out.go:181] 🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172
I0120 18:32:40.400945  169041 out.go:119] 
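For reference, the retry hinted at by the suggestion in the output above would look like the following. This is only a sketch of the `--extra-config=kubelet.cgroup-driver=systemd` flag that minikube itself proposes; I have not verified that it makes the cluster start, and deleting the old profile first is my assumption, not something the output asks for.

# untested: clear the failed profile, then retry with the kubelet cgroup driver suggested above
$ minikube delete
$ minikube start --driver=podman --container-runtime=cri-o --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr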
Full output of `minikube logs`:
==> CRI-O <==
-- Logs begin at Wed 2021-01-20 17:28:38 UTC, end at Wed 2021-01-20 17:39:56 UTC. --
Jan 20 17:36:25 minikube crio[348]: time="2021-01-20 17:36:25.582620894Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=feca7a5e-dd27-491f-b2e6-1f544e84a61c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:25 minikube crio[348]: time="2021-01-20 17:36:25.584586665Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=feca7a5e-dd27-491f-b2e6-1f544e84a61c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:32 minikube crio[348]: time="2021-01-20 17:36:32.815603122Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=84f3aa45-7bb3-4c25-bafb-ce6d9719fa49 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:32 minikube crio[348]: time="2021-01-20 17:36:32.817431443Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=84f3aa45-7bb3-4c25-bafb-ce6d9719fa49 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:40 minikube crio[348]: time="2021-01-20 17:36:40.065745921Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=09e08aeb-90d0-4416-bea6-47877f42c8cd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:40 minikube crio[348]: time="2021-01-20 17:36:40.067331700Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=09e08aeb-90d0-4416-bea6-47877f42c8cd name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:47 minikube crio[348]: time="2021-01-20 17:36:47.328946113Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=790d7783-5856-4f75-b7e4-f230d632698e name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:47 minikube crio[348]: time="2021-01-20 17:36:47.330591455Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=790d7783-5856-4f75-b7e4-f230d632698e name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:54 minikube crio[348]: time="2021-01-20 17:36:54.551482123Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=959ed0a7-2a2d-4c9e-be4c-858651299120 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:36:54 minikube crio[348]: time="2021-01-20 17:36:54.553251567Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=959ed0a7-2a2d-4c9e-be4c-858651299120 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:01 minikube crio[348]: time="2021-01-20 17:37:01.812608100Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=7530e163-5bbc-4c38-ab39-516096c459a0 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:01 minikube crio[348]: time="2021-01-20 17:37:01.814153904Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7530e163-5bbc-4c38-ab39-516096c459a0 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:09 minikube crio[348]: time="2021-01-20 17:37:09.043781809Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=2c36be60-3d08-4642-bafa-00d63917f8a9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:09 minikube crio[348]: time="2021-01-20 17:37:09.045446936Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2c36be60-3d08-4642-bafa-00d63917f8a9 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:16 minikube crio[348]: time="2021-01-20 17:37:16.295925703Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8b7fe305-c8f5-4f49-88f3-ccdc69cb4f04 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:16 minikube crio[348]: time="2021-01-20 17:37:16.297690452Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8b7fe305-c8f5-4f49-88f3-ccdc69cb4f04 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:23 minikube crio[348]: time="2021-01-20 17:37:23.516211609Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=fe9c285d-d24e-4696-9860-2e769e8c2893 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:23 minikube crio[348]: time="2021-01-20 17:37:23.517948472Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=fe9c285d-d24e-4696-9860-2e769e8c2893 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:30 minikube crio[348]: time="2021-01-20 17:37:30.790811721Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=046681bd-eb5e-4073-bd08-f97230269d91 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:30 minikube crio[348]: time="2021-01-20 17:37:30.794156546Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=046681bd-eb5e-4073-bd08-f97230269d91 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:38 minikube crio[348]: time="2021-01-20 17:37:38.070786817Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=d07446e0-16a2-415e-ad25-1c383657e7ac name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:38 minikube crio[348]: time="2021-01-20 17:37:38.072489398Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d07446e0-16a2-415e-ad25-1c383657e7ac name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:45 minikube crio[348]: time="2021-01-20 17:37:45.320659583Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=1c420cc4-7f3b-40cf-9c93-62cc23ec0354 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:45 minikube crio[348]: time="2021-01-20 17:37:45.322458735Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1c420cc4-7f3b-40cf-9c93-62cc23ec0354 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:52 minikube crio[348]: time="2021-01-20 17:37:52.531227973Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=f1445a51-28d5-462d-8e31-0d8fbbcae7f8 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:52 minikube crio[348]: time="2021-01-20 17:37:52.533057549Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f1445a51-28d5-462d-8e31-0d8fbbcae7f8 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:59 minikube crio[348]: time="2021-01-20 17:37:59.804895234Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=5bae2a73-c619-4d4d-8739-fead80d7ba82 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:37:59 minikube crio[348]: time="2021-01-20 17:37:59.807087658Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5bae2a73-c619-4d4d-8739-fead80d7ba82 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:07 minikube crio[348]: time="2021-01-20 17:38:07.039239532Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=3945a176-97cd-46d7-bff6-0a1ec64b0133 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:07 minikube crio[348]: time="2021-01-20 17:38:07.041205392Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3945a176-97cd-46d7-bff6-0a1ec64b0133 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:14 minikube crio[348]: time="2021-01-20 17:38:14.342032417Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=659d562d-70d7-4d07-aa97-b40dbb7eb2fb name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:14 minikube crio[348]: time="2021-01-20 17:38:14.343467858Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=659d562d-70d7-4d07-aa97-b40dbb7eb2fb name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:21 minikube crio[348]: time="2021-01-20 17:38:21.596008491Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=cb891d6a-d183-4635-992c-e9408dcad4e4 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:21 minikube crio[348]: time="2021-01-20 17:38:21.598086168Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cb891d6a-d183-4635-992c-e9408dcad4e4 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:28 minikube crio[348]: time="2021-01-20 17:38:28.754300655Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=61075610-2842-43cc-8ed5-8eeb534afd2d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:28 minikube crio[348]: time="2021-01-20 17:38:28.764629161Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=61075610-2842-43cc-8ed5-8eeb534afd2d name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:35 minikube crio[348]: time="2021-01-20 17:38:35.975254860Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=4e818ef7-2007-4951-8bb1-e90fa789bd55 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:35 minikube crio[348]: time="2021-01-20 17:38:35.977029103Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4e818ef7-2007-4951-8bb1-e90fa789bd55 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:43 minikube crio[348]: time="2021-01-20 17:38:43.273927923Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=b5491d94-c38c-4067-8340-056588aaa6a5 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:43 minikube crio[348]: time="2021-01-20 17:38:43.275501353Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b5491d94-c38c-4067-8340-056588aaa6a5 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:50 minikube crio[348]: time="2021-01-20 17:38:50.548517787Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=1e64cc53-0f34-40ac-80c1-8fa3e1cd4b06 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:50 minikube crio[348]: time="2021-01-20 17:38:50.550467586Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1e64cc53-0f34-40ac-80c1-8fa3e1cd4b06 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:57 minikube crio[348]: time="2021-01-20 17:38:57.852485099Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=061bc06e-8a1d-451d-af28-28f4e5e16da7 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:38:57 minikube crio[348]: time="2021-01-20 17:38:57.854483256Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=061bc06e-8a1d-451d-af28-28f4e5e16da7 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:05 minikube crio[348]: time="2021-01-20 17:39:05.025045587Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=3ec1e5b8-b65e-4c0d-93c9-dbb2c157e2c1 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:05 minikube crio[348]: time="2021-01-20 17:39:05.029959457Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3ec1e5b8-b65e-4c0d-93c9-dbb2c157e2c1 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:12 minikube crio[348]: time="2021-01-20 17:39:12.249709808Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=df9eb565-969c-44f7-a72b-02df96ce8405 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:12 minikube crio[348]: time="2021-01-20 17:39:12.251454864Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=df9eb565-969c-44f7-a72b-02df96ce8405 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:19 minikube crio[348]: time="2021-01-20 17:39:19.539208333Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=59802309-31a7-45f7-b53b-4e9e3598d64e name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:19 minikube crio[348]: time="2021-01-20 17:39:19.545241484Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=59802309-31a7-45f7-b53b-4e9e3598d64e name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:26 minikube crio[348]: time="2021-01-20 17:39:26.751474893Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=71076da8-6955-432b-b262-3f2f3f55474b name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:26 minikube crio[348]: time="2021-01-20 17:39:26.753533299Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=71076da8-6955-432b-b262-3f2f3f55474b name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:34 minikube crio[348]: time="2021-01-20 17:39:34.084359108Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=6bb8c869-2d16-459c-8833-ec9ac03b6316 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:34 minikube crio[348]: time="2021-01-20 17:39:34.086211372Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6bb8c869-2d16-459c-8833-ec9ac03b6316 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:41 minikube crio[348]: time="2021-01-20 17:39:41.303835421Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=62430090-e8a2-4b8b-b3f0-37346724890c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:41 minikube crio[348]: time="2021-01-20 17:39:41.305302346Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=62430090-e8a2-4b8b-b3f0-37346724890c name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:48 minikube crio[348]: time="2021-01-20 17:39:48.562487225Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=98ee70f5-84e7-46e2-8cfa-ee043eb6ce52 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:48 minikube crio[348]: time="2021-01-20 17:39:48.564278669Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=98ee70f5-84e7-46e2-8cfa-ee043eb6ce52 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:55 minikube crio[348]: time="2021-01-20 17:39:55.774770434Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=6410a8b2-e3ff-46ea-aaac-2535647cb208 name=/runtime.v1alpha2.ImageService/ImageStatus
Jan 20 17:39:55 minikube crio[348]: time="2021-01-20 17:39:55.776732875Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6410a8b2-e3ff-46ea-aaac-2535647cb208 name=/runtime.v1alpha2.ImageService/ImageStatus

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID

==> describe nodes <==
E0120 18:39:56.590688  186944 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

==> dmesg <==
[Jan20 14:21] x86/cpu: VMX (outside TXT) disabled by BIOS
[  +0.023623] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[  +0.799255] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.212468] acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000037] acpi PNP0C14:03: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000068] acpi PNP0C14:04: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000038] acpi PNP0C14:05: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000030] acpi PNP0C14:06: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000030] acpi PNP0C14:07: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.000053] acpi PNP0C14:08: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
[  +0.010563] usb: port power management may be unreliable
[  +0.103344] nvme nvme0: missing or invalid SUBNQN field.
[ +13.618959] kauditd_printk_skb: 18 callbacks suppressed
[  +0.960290] systemd-sysv-generator[988]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000029] systemd-sysv-generator[988]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.066404] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[  +0.400115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.031551] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
[  +0.152590] resource sanity check: requesting [mem 0xfed10000-0xfed15fff], which spans more than pnp 00:07 [mem 0xfed10000-0xfed13fff]
[  +0.000009] caller snb_uncore_imc_init_box+0x6a/0xa0 [intel_uncore] mapping multiple BARs
[  +0.003731] r8152 4-2.1.2:1.0 (unnamed net_device) (uninitialized): Invalid header when reading pass-thru MAC addr
[  +0.193406] thermal thermal_zone13: failed to read out thermal zone (-61)
[  +0.303173] sof-audio-pci 0000:00:1f.3: ASoC: Parent card not yet available, widget card binding deferred
[  +0.265801] snd_hda_codec_realtek ehdaudio0D0: ASoC: sink widget AIF1TX overwritten
[  +0.000006] snd_hda_codec_realtek ehdaudio0D0: ASoC: source widget AIF1RX overwritten
[  +0.000392] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi3 overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi2 overwritten
[  +0.000003] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget hifi1 overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Codec Output Pin1 overwritten
[  +0.000002] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Codec Input Pin1 overwritten
[  +0.000009] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Analog Codec Playback overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Digital Codec Playback overwritten
[  +0.000005] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: sink widget Alt Analog Codec Playback overwritten
[  +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Analog Codec Capture overwritten
[  +0.000006] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Digital Codec Capture overwritten
[  +0.000004] skl_hda_dsp_generic skl_hda_dsp_generic: ASoC: source widget Alt Analog Codec Capture overwritten
[  +0.006023] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[  +0.005303] snd_hda_codec_hdmi ehdaudio0D2: Monitor plugged-in, Failed to power up codec ret=[-13]
[  +9.148710] usb 3-2.1.1.2: 1:1: cannot get freq at ep 0x81
[  +9.396818] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 00000000bd17b342 port 1: DPCD read on addr 0x4b0 for 1 bytes NAKed
[  +0.030484] [drm:drm_dp_mst_dpcd_read [drm_kms_helper]] *ERROR* mstb 00000000bd17b342 port 3: DPCD read on addr 0x4b0 for 1 bytes NAKed
[Jan20 14:25] systemd-sysv-generator[5042]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000035] systemd-sysv-generator[5042]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[Jan20 16:57] systemd-sysv-generator[107484]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[  +0.000024] systemd-sysv-generator[107484]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.

==> kernel <==
 17:39:56 up  3:18,  0 users,  load average: 1.95, 1.52, 1.48
Linux minikube 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kubelet <==
-- Logs begin at Wed 2021-01-20 17:28:38 UTC, end at Wed 2021-01-20 17:39:56 UTC. --
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 256 [syscall]:
Jan 20 17:39:55 minikube kubelet[13820]: syscall.Syscall(0x0, 0x19, 0xc00148ff88, 0x10000, 0x0, 0x0, 0x0)
Jan 20 17:39:55 minikube kubelet[13820]:         /usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
Jan 20 17:39:55 minikube kubelet[13820]: syscall.read(0x19, 0xc00148ff88, 0x10000, 0x10000, 0x0, 0x0, 0x0)
Jan 20 17:39:55 minikube kubelet[13820]:         /usr/local/go/src/syscall/zsyscall_linux_amd64.go:686 +0x5a
Jan 20 17:39:55 minikube kubelet[13820]: syscall.Read(...)
Jan 20 17:39:55 minikube kubelet[13820]:         /usr/local/go/src/syscall/syscall_unix.go:187
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/k8s.io/utils/inotify.(*Watcher).readEvents(0xc0009e4ec0)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/utils/inotify/inotify_linux.go:139 +0x37e
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/k8s.io/utils/inotify.NewWatcher
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/utils/inotify/inotify_linux.go:55 +0x1de
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 257 [chan receive]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/utils/oomparser.(*OomParser).StreamOoms(0xc000bd6860, 0xc0010e1140)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/utils/oomparser/oomparser.go:121 +0xd3
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewOoms
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1209 +0xec
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 450 [chan receive]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewOoms.func1(0xc0010e1140, 0xc0009eec80)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1212 +0x59
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewOoms
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1211 +0x11b
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 451 [select]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0xc00081a6c0, 0xc0009f9a40, 0x5f5e100, 0xc0009fa000)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:536 +0x127
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc00081a6c0)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:494 +0x25a
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 295 [select]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0xc00037e900, 0xc001312000, 0x5f5e100, 0xc00028e000)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:536 +0x127
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc00037e900)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:494 +0x25a
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:114 +0x3f
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 631 [select]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start.func1(0xc000bd8a00, 0xc0009f9800)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:91 +0x125
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw.(*rawContainerWatcher).Start
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/raw/watcher.go:89 +0x477
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 632 [select]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers.func1(0xc0009eec80, 0xc0010589f0, 0xc0004781e0)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1164 +0xe5
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).watchForNewContainers
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:1162 +0x21d
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 633 [select]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).globalHousekeeping(0xc0009eec80, 0xc00020f380)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:385 +0x145
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:319 +0x585
Jan 20 17:39:55 minikube kubelet[13820]: goroutine 634 [select]:
Jan 20 17:39:55 minikube kubelet[13820]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).updateMachineInfo(0xc0009eec80, 0xc00020f3e0)
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Jan 20 17:39:55 minikube kubelet[13820]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Jan 20 17:39:55 minikube kubelet[13820]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
Jan 20 17:39:56 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 76.
Jan 20 17:39:56 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.

❗  unable to fetch logs for: describe nodes

TBH I'm not sure what else I could try, other than the brute-force approach of comparing your VM with my host.

@afbjorklund
Collaborator

afbjorklund commented Jan 20, 2021

I was missing conntrack-tools so I've added them but it didn't solve the issue for me.

It is just to avoid a warning from kubeadm:

                        InPathCheck{executable: "conntrack", mandatory: true, exec: execer},
                        InPathCheck{executable: "ip", mandatory: true, exec: execer},
                        InPathCheck{executable: "iptables", mandatory: true, exec: execer},
                        InPathCheck{executable: "mount", mandatory: true, exec: execer},
                        InPathCheck{executable: "nsenter", mandatory: true, exec: execer},
                        InPathCheck{executable: "ebtables", mandatory: false, exec: execer},
                        InPathCheck{executable: "ethtool", mandatory: false, exec: execer},
                        InPathCheck{executable: "socat", mandatory: false, exec: execer},
                        InPathCheck{executable: "tc", mandatory: false, exec: execer},
                        InPathCheck{executable: "touch", mandatory: false, exec: execer})

@mrizzi
Author

mrizzi commented Jan 21, 2021

So I did some further investigation into what makes my host different and figured out that, starting with F33, the default file system is btrfs (ref. BtrfsByDefault).
This led me to #7923 (comment), so I added --feature-gates="LocalStorageCapacityIsolation=false" and then it started.

So $ minikube start --driver=podman --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false" worked for me the first time.

Output:

😄  minikube v1.16.0 on Fedora 33
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.20.0 preload ...
    > preloaded-images-k8s-v8-v1....: 555.86 MiB / 555.86 MiB  100.00% 8.27 MiB
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  /usr/bin/kubectl is version 1.18.2, which may have incompatibilites with Kubernetes 1.20.0.
    ▪ Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Then I stopped it by executing:

$ minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
✋  Stopping node "minikube"  ...
🛑  1 nodes stopped.

and then started it again with the same command
$ minikube start --driver=podman --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false"
It worked, but with some kind of network issue:

😄  minikube v1.16.0 on Fedora 33
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing podman container for "minikube" ...
❗  Due to issues with CRI-O post v1.17.3, we need to restart your cluster.
❗  See details at https://github.com/kubernetes/minikube/issues/8861
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing podman container for "minikube" ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🎁  Preparing Kubernetes v1.20.0 on CRI-O 1.19.0 ...
    ▪ env NO_PROXY=192.168.49.2
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass, dashboard

❗  /usr/bin/kubectl is version 1.18.2, which may have incompatibilites with Kubernetes 1.20.0.
    ▪ Want kubectl v1.20.0? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

So I'm happy it started but I have two further questions:

  • is --feature-gates="LocalStorageCapacityIsolation=false" acceptable to you as the solution for fresh F33 installations with btrfs?
  • is This container is having trouble accessing https://k8s.gcr.io really an issue? (see the quick check below)
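
Regarding the second question, one quick way to see whether the registry warning is real is to pull a small image from inside the node; this sketch assumes crictl is available there, which the cri-o logs above suggest:

$ minikube ssh -- sudo crictl pull k8s.gcr.io/pause:3.2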

@mrizzi
Author

mrizzi commented Jan 21, 2021

Besides this, I moved on to trying to push images following Pushing directly to in-cluster CRI-O (podman-env).
It doesn't work:

$ minikube podman-env
export CONTAINER_HOST="ssh://docker@127.0.0.1:44957/run/podman/podman.sock"
export CONTAINER_SSHKEY="/home/mrizzi/.minikube/machines/minikube/id_rsa"
export MINIKUBE_ACTIVE_PODMAN="minikube"

# To point your shell to minikube's podman service, run:
# eval $(minikube -p minikube podman-env)
$ eval $(minikube -p minikube podman-env)
$ podman-remote version
Error: Get "http://d/v2.0.0/libpod/_ping": ssh: rejected: connect failed (open failed)

But if I try to use ssh from the terminal, it works:

~ $ ssh docker@127.0.0.1 -p 44957 -i /home/mrizzi/.minikube/machines/minikube/id_rsa
Last login: Thu Jan 21 08:48:24 2021 from 192.168.49.1
docker@minikube:~$ uname -a
Linux minikube 5.10.7-200.fc33.x86_64 #1 SMP Tue Jan 12 20:20:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
docker@minikube:~$ logout
Connection to 127.0.0.1 closed.
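
Since plain ssh works, the failure is more likely on the podman.sock side than in the ssh transport itself. A sketch for inspecting the socket and service from inside the node (unit names are assumed to match the kicbase defaults):

$ minikube ssh -- sudo systemctl status podman.socket --no-pager
$ minikube ssh -- sudo journalctl -u podman.service --no-pager -n 20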

@afbjorklund
Collaborator

So I did some further investigation into what makes my host different and figured out that, starting with F33, the default file system is btrfs (ref. BtrfsByDefault).

That would be the reason then, see #7975
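
For anyone who wants to confirm this on their own host, a quick sketch (stat is plain coreutils; the second path is just podman's default storage location):

$ stat -f -c %T /                        # prints "btrfs" on a default F33 install
$ stat -f -c %T /var/lib/containers      # where podman keeps its images and containers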

@afbjorklund
Collaborator

$ podman-remote version
Error: Get "http://d/v2.0.0/libpod/_ping": ssh: rejected: connect failed (open failed)

Maybe it got broken again. Try podman --remote --url "$CONTAINER_HOST" perhaps, and see if that works better?

Works here (Ubuntu 20.04):

$ eval $(minikube -p minikube podman-env)
$ podman-remote version
Client:
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Server:
Version:      2.2.1
API Version:  2.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

@mrizzi
Author

mrizzi commented Jan 21, 2021

I've just tried but no luck (also with --identity "$CONTAINER_SSHKEY" for the sake of testing):

$ podman --remote --url "$CONTAINER_HOST" version 
Error: Get "http://d/v2.0.0/libpod/_ping": ssh: rejected: connect failed (open failed)
$ podman --remote --url "$CONTAINER_HOST" --identity "$CONTAINER_SSHKEY" version 
Error: Get "http://d/v2.0.0/libpod/_ping": ssh: rejected: connect failed (open failed)

@afbjorklund
Collaborator

afbjorklund commented Jan 21, 2021

Yup, confirmed. podman-remote-2:2.2.1-1.fc33.x86_64 is completely broken.

Or, more likely, there is something wrong with the remote podman.service itself:

[vagrant@localhost ~]$ minikube ssh
docker@minikube:~$ sudo podman-remote version
Error: Get "http://d/v2.0.0/libpod/_ping": dial unix ///run/podman/podman.sock: connect: connection refused

@afbjorklund
Collaborator

afbjorklund commented Jan 21, 2021

Looks like the Ubuntu podman package is broken:

Jan 21 10:38:21 minikube systemd[1]: Starting Podman API Service...
Jan 21 10:38:21 minikube podman[3396]: time="2021-01-21T10:38:21Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 21 10:38:21 minikube podman[3396]: time="2021-01-21T10:38:21Z" level=info msg="[graphdriver] using prior storage driver: overlay"
Jan 21 10:38:21 minikube podman[3396]: time="2021-01-21T10:38:21Z" level=warning msg="Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument"
Jan 21 10:38:21 minikube podman[3396]: time="2021-01-21T10:38:21Z" level=warning msg="Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument"
Jan 21 10:38:21 minikube podman[3396]: Error: default OCI runtime "crun" not found: invalid argument
Jan 21 10:38:21 minikube systemd[1]: podman.service: Main process exited, code=exited, status=125/n/a
Jan 21 10:38:21 minikube systemd[1]: podman.service: Failed with result 'exit-code'.
Jan 21 10:38:21 minikube systemd[1]: Failed to start Podman API Service

It "forgot" to install crun, and only installed runc.

Apparently different defaults for different cgroup versions?

        if conf.Engine.OCIRuntime == "" {
                conf.Engine.OCIRuntime = "runc"
                // If we're running on cgroups v2, default to using crun.
                if onCgroupsv2, _ := cgroups.IsCgroup2UnifiedMode(); onCgroupsv2 {
                        conf.Engine.OCIRuntime = "crun"
                }
        }
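
A quick way to check which branch of that default applies, both on the host and inside the node (cgroups v2 reports cgroup2fs; just a sketch, not output captured from this thread):

$ stat -f -c %T /sys/fs/cgroup                      # on the Fedora 33 host
$ minikube ssh -- stat -f -c %T /sys/fs/cgroup      # inside the minikube node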

@afbjorklund
Collaborator

afbjorklund commented Jan 21, 2021

So basically: nobody has tested using Fedora 33.
For Fedora 32, I tested with cgroups v1 (and ext4).

$ minikube ssh -- sudo apt update
$ minikube ssh -- sudo apt install -y crun
$ minikube ssh -- sudo systemctl restart podman.socket
$ podman-remote version
Client:
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 14:37:50 2020
OS/Arch:      linux/amd64

Server:
Version:      2.2.1
API Version:  2.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

So now it is running correctly against the "fake node"
(i.e. the podman container running the control plane):

[vagrant@localhost ~]$ podman info | grep -A1 dist
  distribution:
    distribution: fedora
    version: "33"
[vagrant@localhost ~]$ podman-remote info | grep -A1 dist
  distribution:
    distribution: ubuntu
    version: "20.04"

@mrizzi
Author

mrizzi commented Jan 21, 2021

thanks once more @afbjorklund: installing crun made it work 👍

$ minikube ssh -- sudo apt update
[...]
$ minikube ssh -- sudo apt install -y crun
[...]
$ minikube ssh -- sudo systemctl restart podman.socket
$ podman-remote version
Client:
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec  8 15:37:50 2020
OS/Arch:      linux/amd64

Server:
Version:      2.2.1
API Version:  2.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

To summarize, running Minikube 1.16.0 on Fedora 33 with podman and cri-o (with cgroups v2 and SELinux enforcing) took the following (there is also a combined sketch at the end of the list):

  • starting with command:
    $ minikube start --driver=podman --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false"
    
  • once started, executing these commands just once:
    $ minikube ssh -- sudo apt update
    $ minikube ssh -- sudo apt install -y crun
    $ minikube ssh -- sudo systemctl restart podman.socket
    $ podman-remote version
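
For convenience, the whole workaround can be strung together in one small script; this is only a sketch that repeats the exact commands above, nothing new added:

#!/usr/bin/env bash
set -euo pipefail

# Workaround for minikube 1.16.0 on Fedora 33 (btrfs, cgroups v2, podman + cri-o)
minikube start --driver=podman --container-runtime=cri-o \
  --feature-gates="LocalStorageCapacityIsolation=false"

# One-time fix: the node image defaults to crun on cgroups v2 but does not ship it
minikube ssh -- sudo apt update
minikube ssh -- sudo apt install -y crun
minikube ssh -- sudo systemctl restart podman.socket

# Sanity check that the remote podman API answers
eval "$(minikube -p minikube podman-env)"
podman-remote version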
    

@afbjorklund
Collaborator

I think we will add "crun" explicitly to the list of packages to install in the kicbase.

clean-install containers-common catatonit conmon containernetworking-plugins cri-tools podman-plugins

As for btrfs support, we might as well use that other ticket to decide which flag to use.

https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage

@mrizzi
Author

mrizzi commented Jan 22, 2021

Thanks a lot once more @afbjorklund for your helpful support 👍

@mazzystr

mazzystr commented Jan 28, 2021

Both sudo minikube start --driver=none --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false" and
su - minikube && minikube start --driver=podman --container-runtime=cri-o --feature-gates="LocalStorageCapacityIsolation=false" seem to work for me. As soon as I add --api-name= or --api-names=, minikube fails to start.

v7 verbosity shows a lot of these errors...

❌  Problems detected in kubelet:
    Jan 27 16:30:36 blah kubelet[732071]: E0127 16:30:36.811536  732071 reflector.go:138] object-"kube-system"/"kube-proxy-token-dzgp7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-dzgp7" is forbidden: User "system:node:blah" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'blah' and this object
    Jan 27 16:30:36 blah kubelet[732071]: E0127 16:30:36.811602  732071 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:blah" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'blah' and this object

Update: If I move the DNS A record to my host's IP and manually set --api-name=api.yokel.local, minikube start succeeds. When I move the A record back to my HAProxy, I get SSL errors.

@spowelljr
Member

The initial issue seems to have been solved with #10426, so I'm going to close this issue.

@mazzystr I recommend you create your own issue to get more visibility; be sure to reference this issue in it.
