failed to start node: controlPlane never updated to v1.18.x (re-use of cluster) #8765

Closed
Grubhart opened this issue Jul 19, 2020 · 39 comments
Labels:
- help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
- kind/support: Categorizes issue or PR as a support question.
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
- long-term-support: Long-term support issues that can't be fixed in code.
- needs-solution-message: Issues where offering a solution for an error would be helpful.
- priority/backlog: Higher priority than priority/awaiting-more-evidence.
- top-10-issues: Top 10 support issues.

Comments

@Grubhart

Grubhart commented Jul 19, 2020

I'm trying to start minikube for the very first time and I get the error message: startup failed: wait for healthy API server: controlPlane never updated to v1.18.3

I also tried changing the Kubernetes version to 1.17 and 1.16, always with the same result.

My environment is macOS Catalina 10.15.5.

Here I include all the details.

Steps to reproduce the issue:

1. minikube start --driver=docker

Full output of failed command:

grubhart@grubharts-mbp minikube_env % minikube start --driver=docker --alsologtostderr
I0719 03:34:10.985955 4950 out.go:170] Setting JSON to false
I0719 03:34:11.043503 4950 start.go:101] hostinfo: {"hostname":"grubharts-mbp.lan","uptime":7083,"bootTime":1595140568,"procs":359,"os":"darwin","platform":"darwin","platformFamily":"","platformVersion":"10.15.5","kernelVersion":"19.5.0","virtualizationSystem":"","virtualizationRole":"","hostid":"54f1a78d-6f41-32bd-bfed-4381f9f6e2ef"}
W0719 03:34:11.043642 4950 start.go:109] gopshost.Virtualization returned error: not implemented yet
😄 minikube v1.12.1 on Darwin 10.15.5
I0719 03:34:11.056876 4950 notify.go:125] Checking for updates...
I0719 03:34:11.057070 4950 driver.go:257] Setting default libvirt URI to qemu:///system
I0719 03:34:11.121691 4950 docker.go:87] docker version: linux-19.03.8
✨ Using the docker driver based on existing profile
I0719 03:34:11.132693 4950 start.go:217] selected driver: docker
I0719 03:34:11.132704 4950 start.go:621] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0719 03:34:11.132832 4950 start.go:632] status for docker: {Installed:true Healthy:true NeedsImprovement:false Error: Fix: Doc:}
I0719 03:34:11.132909 4950 start_flags.go:340] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
👍 Starting control plane node minikube in cluster minikube
I0719 03:34:11.213468 4950 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0719 03:34:11.213529 4950 cache.go:113] gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 exists in daemon, skipping pull
I0719 03:34:11.213548 4950 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0719 03:34:11.213606 4950 preload.go:103] Found local preload: /Users/grubhart/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0719 03:34:11.213616 4950 cache.go:51] Caching tarball of preloaded images
I0719 03:34:11.213632 4950 preload.go:129] Found /Users/grubhart/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0719 03:34:11.213637 4950 cache.go:54] Finished verifying existence of preloaded tar for v1.18.3 on docker
I0719 03:34:11.213772 4950 profile.go:150] Saving config to /Users/grubhart/.minikube/profiles/minikube/config.json ...
I0719 03:34:11.214345 4950 cache.go:178] Successfully downloaded all kic artifacts
I0719 03:34:11.214390 4950 start.go:241] acquiring machines lock for minikube: {Name:mk5b4ee679337cd31765a79a7a7bfc625bdc9e5e Clock:{} Delay:500ms Timeout:15m0s Cancel:}
I0719 03:34:11.214516 4950 start.go:245] acquired machines lock for "minikube" in 98.605µs
I0719 03:34:11.214543 4950 start.go:89] Skipping create...Using existing machine configuration
I0719 03:34:11.214553 4950 fix.go:53] fixHost starting:
I0719 03:34:11.214964 4950 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0719 03:34:11.266347 4950 fix.go:105] recreateIfNeeded on minikube: state=Running err=
W0719 03:34:11.266391 4950 fix.go:131] unexpected machine state, will restart:
🏃 Updating the running docker "minikube" container ...
I0719 03:34:11.275601 4950 machine.go:88] provisioning docker machine ...
I0719 03:34:11.275634 4950 ubuntu.go:166] provisioning hostname "minikube"
I0719 03:34:11.275860 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:11.332786 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:11.333134 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:11.333162 4950 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0719 03:34:11.513273 4950 main.go:115] libmachine: SSH cmd err, output: : minikube

I0719 03:34:11.513493 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:11.562193 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:11.562423 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:11.562458 4950 main.go:115] libmachine: About to run SSH command:

	if ! grep -xq '.*\sminikube' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
		else 
			echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
		fi
	fi

I0719 03:34:11.703522 4950 main.go:115] libmachine: SSH cmd err, output: :
I0719 03:34:11.703602 4950 ubuntu.go:172] set auth options {CertDir:/Users/grubhart/.minikube CaCertPath:/Users/grubhart/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/grubhart/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/grubhart/.minikube/machines/server.pem ServerKeyPath:/Users/grubhart/.minikube/machines/server-key.pem ClientKeyPath:/Users/grubhart/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/grubhart/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/grubhart/.minikube}
I0719 03:34:11.703636 4950 ubuntu.go:174] setting up certificates
I0719 03:34:11.703646 4950 provision.go:82] configureAuth start
I0719 03:34:11.703877 4950 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0719 03:34:11.757497 4950 provision.go:131] copyHostCerts
I0719 03:34:11.757711 4950 exec_runner.go:91] found /Users/grubhart/.minikube/ca.pem, removing ...
I0719 03:34:11.758023 4950 exec_runner.go:98] cp: /Users/grubhart/.minikube/certs/ca.pem --> /Users/grubhart/.minikube/ca.pem (1042 bytes)
I0719 03:34:11.758492 4950 exec_runner.go:91] found /Users/grubhart/.minikube/cert.pem, removing ...
I0719 03:34:11.758671 4950 exec_runner.go:98] cp: /Users/grubhart/.minikube/certs/cert.pem --> /Users/grubhart/.minikube/cert.pem (1082 bytes)
I0719 03:34:11.759115 4950 exec_runner.go:91] found /Users/grubhart/.minikube/key.pem, removing ...
I0719 03:34:11.759293 4950 exec_runner.go:98] cp: /Users/grubhart/.minikube/certs/key.pem --> /Users/grubhart/.minikube/key.pem (1675 bytes)
I0719 03:34:11.759592 4950 provision.go:105] generating server cert: /Users/grubhart/.minikube/machines/server.pem ca-key=/Users/grubhart/.minikube/certs/ca.pem private-key=/Users/grubhart/.minikube/certs/ca-key.pem org=grubhart.minikube san=[172.17.0.3 localhost 127.0.0.1]
I0719 03:34:11.927781 4950 provision.go:159] copyRemoteCerts
I0719 03:34:11.928062 4950 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0719 03:34:11.928210 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:11.975614 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:12.077531 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1042 bytes)
I0719 03:34:12.111364 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/machines/server.pem --> /etc/docker/server.pem (1123 bytes)
I0719 03:34:12.147493 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0719 03:34:12.182202 4950 provision.go:85] duration metric: configureAuth took 478.535115ms
I0719 03:34:12.182224 4950 ubuntu.go:190] setting minikube options for container-runtime
I0719 03:34:12.182614 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.233836 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:12.234111 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:12.234127 4950 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0719 03:34:12.387187 4950 main.go:115] libmachine: SSH cmd err, output: : overlay

I0719 03:34:12.387219 4950 ubuntu.go:71] root file system type: overlay
I0719 03:34:12.387569 4950 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0719 03:34:12.387833 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.442319 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:12.442636 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:12.442728 4950 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0719 03:34:12.600360 4950 main.go:115] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.

TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

[Install]
WantedBy=multi-user.target

I0719 03:34:12.600671 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.652035 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:12.652320 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:12.652349 4950 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0719 03:34:12.801359 4950 main.go:115] libmachine: SSH cmd err, output: :
I0719 03:34:12.801399 4950 machine.go:91] provisioned docker machine in 1.525765298s
I0719 03:34:12.801410 4950 start.go:204] post-start starting for "minikube" (driver="docker")
I0719 03:34:12.801419 4950 start.go:214] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0719 03:34:12.801639 4950 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0719 03:34:12.801826 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.849051 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:12.960966 4950 ssh_runner.go:148] Run: cat /etc/os-release
I0719 03:34:12.968508 4950 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0719 03:34:12.968541 4950 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0719 03:34:12.968556 4950 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0719 03:34:12.968565 4950 info.go:96] Remote host: Ubuntu 19.10
I0719 03:34:12.968584 4950 filesync.go:118] Scanning /Users/grubhart/.minikube/addons for local assets ...
I0719 03:34:12.969093 4950 filesync.go:118] Scanning /Users/grubhart/.minikube/files for local assets ...
I0719 03:34:12.969198 4950 start.go:207] post-start completed in 167.775628ms
I0719 03:34:12.969210 4950 fix.go:55] fixHost completed within 1.754636532s
I0719 03:34:12.969218 4950 start.go:76] releasing machines lock for "minikube", held for 1.754669732s
I0719 03:34:12.969368 4950 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0719 03:34:13.019641 4950 ssh_runner.go:148] Run: systemctl --version
I0719 03:34:13.019776 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:13.021484 4950 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0719 03:34:13.021854 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:13.074489 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:13.077127 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:14.246545 4950 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.224969967s)
I0719 03:34:14.246558 4950 ssh_runner.go:188] Completed: systemctl --version: (1.226854711s)
I0719 03:34:14.246769 4950 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0719 03:34:14.264051 4950 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0719 03:34:14.283642 4950 cruntime.go:192] skipping containerd shutdown because we are bound to it
I0719 03:34:14.283807 4950 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0719 03:34:14.305569 4950 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0719 03:34:14.325490 4950 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0719 03:34:14.435127 4950 ssh_runner.go:148] Run: sudo systemctl start docker
I0719 03:34:14.453617 4950 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
I0719 03:34:14.561180 4950 cli_runner.go:109] Run: docker exec -t minikube dig +short host.docker.internal
I0719 03:34:14.738497 4950 network.go:57] got host ip for mount in container by digging dns: 192.168.65.2
I0719 03:34:14.739028 4950 ssh_runner.go:148] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0719 03:34:14.749600 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0719 03:34:14.802051 4950 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0719 03:34:14.802105 4950 preload.go:103] Found local preload: /Users/grubhart/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0719 03:34:14.802275 4950 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0719 03:34:14.887230 4950 docker.go:381] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0719 03:34:14.887257 4950 docker.go:319] Images already preloaded, skipping extraction
I0719 03:34:14.887397 4950 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0719 03:34:14.962213 4950 docker.go:381] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0719 03:34:14.962256 4950 cache_images.go:69] Images are preloaded, skipping loading
I0719 03:34:14.962473 4950 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0719 03:34:15.050171 4950 cni.go:74] Creating CNI manager for ""
I0719 03:34:15.050205 4950 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0719 03:34:15.050222 4950 kubeadm.go:84] Using pod CIDR:
I0719 03:34:15.050243 4950 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0719 03:34:15.050521 4950 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.3:10249

I0719 03:34:15.050739 4950 kubeadm.go:787] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3

[Install]
config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0719 03:34:15.050965 4950 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0719 03:34:15.066578 4950 binaries.go:43] Found k8s binaries, skipping transfer
I0719 03:34:15.066786 4950 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0719 03:34:15.081140 4950 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
I0719 03:34:15.121839 4950 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0719 03:34:15.157848 4950 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1730 bytes)
I0719 03:34:15.193253 4950 ssh_runner.go:148] Run: grep 172.17.0.3 control-plane.minikube.internal$ /etc/hosts
I0719 03:34:15.202321 4950 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0719 03:34:15.300507 4950 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0719 03:34:15.318284 4950 certs.go:52] Setting up /Users/grubhart/.minikube/profiles/minikube for IP: 172.17.0.3
I0719 03:34:15.318466 4950 certs.go:169] skipping minikubeCA CA generation: /Users/grubhart/.minikube/ca.key
I0719 03:34:15.318565 4950 certs.go:169] skipping proxyClientCA CA generation: /Users/grubhart/.minikube/proxy-client-ca.key
I0719 03:34:15.318775 4950 certs.go:269] skipping minikube-user signed cert generation: /Users/grubhart/.minikube/profiles/minikube/client.key
I0719 03:34:15.318853 4950 certs.go:269] skipping minikube signed cert generation: /Users/grubhart/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0719 03:34:15.318989 4950 certs.go:269] skipping aggregator signed cert generation: /Users/grubhart/.minikube/profiles/minikube/proxy-client.key
I0719 03:34:15.319382 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/ca-key.pem (1679 bytes)
I0719 03:34:15.319468 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/ca.pem (1042 bytes)
I0719 03:34:15.319546 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/cert.pem (1082 bytes)
I0719 03:34:15.319598 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/key.pem (1675 bytes)
I0719 03:34:15.320810 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0719 03:34:15.356182 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0719 03:34:15.390252 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0719 03:34:15.428766 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0719 03:34:15.470080 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0719 03:34:15.504531 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0719 03:34:15.542213 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0719 03:34:15.578338 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0719 03:34:15.610650 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0719 03:34:15.649183 4950 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0719 03:34:15.686491 4950 ssh_runner.go:148] Run: openssl version
I0719 03:34:15.697905 4950 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0719 03:34:15.714023 4950 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0719 03:34:15.724378 4950 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jan 25 2019 /usr/share/ca-certificates/minikubeCA.pem
I0719 03:34:15.724645 4950 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0719 03:34:15.736817 4950 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0719 03:34:15.754397 4950 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0719 03:34:15.754698 4950 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0719 03:34:15.822949 4950 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0719 03:34:15.839220 4950 kubeadm.go:338] found existing configuration files, will attempt cluster restart
I0719 03:34:15.839252 4950 kubeadm.go:512] restartCluster start
I0719 03:34:15.839536 4950 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0719 03:34:15.854814 4950 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0719 03:34:15.854976 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0719 03:34:15.911128 4950 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0719 03:34:15.924877 4950 api_server.go:146] Checking apiserver status ...
I0719 03:34:15.925053 4950 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0719 03:34:15.944938 4950 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/6638/cgroup
I0719 03:34:15.962821 4950 api_server.go:162] apiserver freezer: "7:freezer:/docker/5c02acd7c011b034fcffaa41411139fed3ebefb56d6ab7a03341443e993d4de8/kubepods/burstable/pod6ff2e3bf96dbdcdd33879625130d5ccc/9afb59caa064bfc821cbed4d4fd6a72814d3bf8d53e63adc8976563542f9cd46"
I0719 03:34:15.963004 4950 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c02acd7c011b034fcffaa41411139fed3ebefb56d6ab7a03341443e993d4de8/kubepods/burstable/pod6ff2e3bf96dbdcdd33879625130d5ccc/9afb59caa064bfc821cbed4d4fd6a72814d3bf8d53e63adc8976563542f9cd46/freezer.state
I0719 03:34:15.978374 4950 api_server.go:184] freezer state: "THAWED"
I0719 03:34:15.978423 4950 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32784/healthz ...
I0719 03:34:15.988276 4950 api_server.go:241] https://127.0.0.1:32784/healthz returned 200:
ok
I0719 03:34:16.000152 4950 kubeadm.go:496] needs reconfigure: Unauthorized
I0719 03:34:16.000354 4950 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0719 03:34:16.019884 4950 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5491 Jul 19 08:29 /etc/kubernetes/admin.conf
-rw------- 1 root root 5531 Jul 19 08:29 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1911 Jul 19 08:29 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5475 Jul 19 08:29 /etc/kubernetes/scheduler.conf

I0719 03:34:16.020115 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0719 03:34:16.035928 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0719 03:34:16.053024 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0719 03:34:16.069212 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0719 03:34:16.084940 4950 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0719 03:34:16.100244 4950 kubeadm.go:573] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0719 03:34:16.100270 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:16.201760 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:17.302607 4950 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100798063s)
I0719 03:34:17.302635 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:17.393583 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:17.482499 4950 api_server.go:48] waiting for apiserver process to appear ...
I0719 03:34:17.482693 4950 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0719 03:34:17.500885 4950 api_server.go:68] duration metric: took 18.388948ms to wait for apiserver process to appear ...
I0719 03:34:17.500915 4950 api_server.go:84] waiting for apiserver healthz status ...
I0719 03:34:17.500926 4950 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32784/healthz ...
I0719 03:34:17.511884 4950 api_server.go:241] https://127.0.0.1:32784/healthz returned 200:
ok
W0719 03:34:17.514119 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:35:58.019458 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:35:58.524614 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:15.025002 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:15.523204 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:16.023884 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:16.521941 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:17.020950 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
I0719 03:38:17.522886 4950 kubeadm.go:516] restartCluster took 4m1.680553504s
🤦 Unable to restart cluster, will reset it: apiserver health: controlPlane never updated to v1.18.3
I0719 03:38:17.523130 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0719 03:39:13.372517 4950 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (55.848659708s)
I0719 03:39:13.372990 4950 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0719 03:39:13.393428 4950 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0719 03:39:13.457088 4950 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0719 03:39:13.473221 4950 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0719 03:39:13.473385 4950 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0719 03:39:13.486241 4950 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0719 03:39:13.486289 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0719 03:39:27.520700 4950 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (14.034199513s)
I0719 03:39:27.520748 4950 cni.go:74] Creating CNI manager for ""
I0719 03:39:27.520773 4950 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0719 03:39:27.520814 4950 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0719 03:39:27.521003 4950 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0719 03:39:27.521041 4950 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl label nodes minikube.k8s.io/version=v1.12.1 minikube.k8s.io/commit=5664228288552de9f3a446ea4f51c6f29bbdd0e0 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_07_19T03_39_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0719 03:39:28.438115 4950 ops.go:35] apiserver oom_adj: -16
I0719 03:39:28.438285 4950 kubeadm.go:863] duration metric: took 917.443413ms to wait for elevateKubeSystemPrivileges.
I0719 03:39:28.438316 4950 kubeadm.go:329] StartCluster complete in 5m12.679975261s
I0719 03:39:28.438338 4950 settings.go:123] acquiring lock: {Name:mk47bf7647bc74b013a72fdf28fd00aa56bb404b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0719 03:39:28.438489 4950 settings.go:131] Updating kubeconfig: /Users/grubhart/.kube/config
I0719 03:39:28.440920 4950 lock.go:35] WriteFile acquiring /Users/grubhart/.kube/config: {Name:mk5194232d5641140a4c29facb1774dd79565358 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0719 03:39:28.442668 4950 start.go:195] Will wait wait-timeout for node ...
I0719 03:39:28.442735 4950 addons.go:347] enableAddons start: toEnable=map[], additional=[]
🔎 Verifying Kubernetes components...
I0719 03:39:28.442802 4950 addons.go:53] Setting storage-provisioner=true in profile "minikube"
I0719 03:39:28.442802 4950 addons.go:53] Setting default-storageclass=true in profile "minikube"
I0719 03:39:28.442914 4950 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0719 03:39:28.453401 4950 addons.go:129] Setting addon storage-provisioner=true in "minikube"
I0719 03:39:28.453412 4950 addons.go:269] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0719 03:39:28.453420 4950 addons.go:138] addon storage-provisioner should already be in state true
I0719 03:39:28.453436 4950 host.go:65] Checking if "minikube" exists ...
I0719 03:39:28.453546 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0719 03:39:28.456113 4950 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0719 03:39:28.456677 4950 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0719 03:39:28.528393 4950 addons.go:236] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0719 03:39:28.528436 4950 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0719 03:39:28.528768 4950 api_server.go:48] waiting for apiserver process to appear ...
I0719 03:39:28.528826 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:39:28.528984 4950 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0719 03:39:28.587275 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
I0719 03:39:28.907724 4950 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0719 03:39:28.949446 4950 start.go:548] successfully scaled coredns replicas to 1
I0719 03:39:28.949496 4950 api_server.go:68] duration metric: took 506.775568ms to wait for apiserver process to appear ...
I0719 03:39:28.949514 4950 api_server.go:84] waiting for apiserver healthz status ...
I0719 03:39:28.949530 4950 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32784/healthz ...
I0719 03:39:29.022409 4950 api_server.go:241] https://127.0.0.1:32784/healthz returned 200:
ok
W0719 03:39:29.027325 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
🌟 Enabled addons: default-storageclass, storage-provisioner
I0719 03:39:29.467109 4950 addons.go:349] enableAddons completed in 1.024394514s
W0719 03:39:29.532059 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:40.034540 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:40.532937 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:41.035460 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:41.530456 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:39:42.028373 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:39:42.528126 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:41:01.034343 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:27.535607 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:28.035370 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:28.533477 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:29.031295 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:29.031626 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
I0719 03:43:29.031822 4950 exit.go:58] WithError(failed to start node)=startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
/usr/local/Cellar/go/1.14.5/libexec/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x57c141b, 0x14, 0x5adcc80, 0xc0005dfba0)
/private/tmp/minikube-20200717-69613-180ctkg/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x6908020, 0xc0005f5440, 0x0, 0x2)
/private/tmp/minikube-20200717-69613-180ctkg/cmd/minikube/cmd/start.go:206 +0x4f8
github.com/spf13/cobra.(*Command).execute(0x6908020, 0xc0005f5420, 0x2, 0x2, 0x6908020, 0xc0005f5420)
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x6907060, 0x0, 0x1, 0xc0005f2b60)
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
/private/tmp/minikube-20200717-69613-180ctkg/cmd/minikube/cmd/root.go:106 +0x72c
main.main()
/private/tmp/minikube-20200717-69613-180ctkg/cmd/minikube/main.go:71 +0x11f
W0719 03:43:29.032038 4950 out.go:232] failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3

💣 failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

grubhart@grubharts-mbp minikube_env % minikube logs

💣 Unable to get machine status: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
grubhart@grubharts-mbp minikube_env %

@tstromberg
Contributor

This warning is interesting:

W0719 03:39:29.027325 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials

I've never seen that before. I also noticed:

I0719 03:34:11.214543 4950 start.go:89] Skipping create...Using existing machine configuration
W0719 03:34:11.266391 4950 fix.go:131] unexpected machine state, will restart:
🏃 Updating the running docker "minikube" container ...
I0719 03:34:11.275601 4950 machine.go:88] provisioning docker machine ...
I0719 03:34:11.275634 4950 ubuntu.go:166] provisioning hostname "minikube"

That suggests to me that running minikube delete would likely fix your issue.
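
In shell form, the suggested reset would look like this (a sketch reusing the reporter's --driver=docker flag; adjust the driver to your setup):

$ minikube delete
$ minikube start --driver=docker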

I'm also unsure why minikube logs failed, but it appears to have failed because Docker for Desktop wasn't running.
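
Since minikube logs talks to the Docker daemon, a quick sanity check before re-running it (plain Docker CLI; not a command from this thread) is:

$ docker info >/dev/null 2>&1 && echo "docker daemon is running" || echo "docker daemon is NOT running"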

@tstromberg added the kind/support label Jul 22, 2020
@fabiand
Contributor

fabiand commented Jul 24, 2020

I'm getting the same (?) problem, but minikube delete does not fix it:

[fabiand@node01 Downloads]$ minikube start -n 1 --driver=kvm2 --container-runtime=cri-o --memory 5G
😄  minikube v1.12.1 auf Fedora 31
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=5120MB, Disk=20000MB) ...
🎁  Vorbereiten von Kubernetes v1.18.3 auf CRI-O 1.17.1...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
🌟  Enabled addons: default-storageclass, storage-provisioner

💣  failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose

@fabiand
Contributor

fabiand commented Jul 24, 2020

This happens with -n 1 and -n 2, and also with --memory 10G.

@Grubhart
Author

Grubhart commented Jul 24, 2020

Hi, the same error here. I started Docker Desktop, and also tried with VirtualBox; same error.

@hbobenicio

Same error here, but with the docker driver. I don't know if this is related, but I recently updated minikube to 1.12.2 (I was using either 1.12.0 or 1.12.1).

@hbobenicio

This solved it for me: I deleted ~/.minikube and it worked. I also removed the minikube docker images, just in case (docker images | grep minikube).

@tstromberg changed the title from "failed to start node: startup failed: wait for healthy API server: controlPlane never updated to v1.18.3" to "failed to start node: controlPlane never updated to v1.18.x (re-use of cluster)" Aug 20, 2020
@tstromberg
Contributor

For most people, minikube delete will work around this issue. If it does not, please provide the output of minikube start --alsologtostderr -v=1 and minikube logs, as there is clearly an authentication issue that needs to be worked out.
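
For reference, those diagnostics are gathered with:

$ minikube start --alsologtostderr -v=1
$ minikube logs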

Related: #8981

@JordiCano

minikube delete does not fix the issue.

minikube_logs.txt
minikube_start.txt

@abrahamfathman

This worked for me:

  1. minikube delete
  2. rm -rf ~/.minikube/
  3. docker images | grep minikube
    Found one image...
  4. docker rmi e3ca409c7daf

Ran minikube start again, and it worked
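
Collected into a single shell session (the image ID e3ca409c7daf is the one found in this report; substitute whatever docker images lists on your machine):

$ minikube delete
$ rm -rf ~/.minikube/
$ docker images | grep minikube    # note any leftover minikube/kicbase image IDs
$ docker rmi e3ca409c7daf          # replace with your own image ID
$ minikube start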

@tstromberg
Contributor

All of the reports so far are for minikube v1.12.x, so it's unclear if we accidentally fixed this.

The cause of this seems to be that the data in $HOME/.kube/config is stale, but I've got no idea as to why this might be.

Can someone report back if minikube v1.13 runs into this issue?
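
If stale data in $HOME/.kube/config is indeed the cause, a less drastic experiment than wiping ~/.minikube would be to remove just the minikube entries from the kubeconfig (standard kubectl config subcommands; untested against this particular bug):

$ kubectl config delete-context minikube
$ kubectl config delete-cluster minikube
$ kubectl config unset users.minikube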

@tstromberg added the top-10-issues label Sep 23, 2020
@fabiand
Contributor

fabiand commented Oct 7, 2020

Yes, I'm still seeing this with v1.13.1:

[fabiand@node01 ~]$ minikube start --driver=kvm2 --memory 12G --container-runtime=cri-o --cpus=4
😄  minikube v1.13.1 auf Fedora 31
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=4, Memory=12288MB, Disk=20000MB) ...
🎁  Vorbereiten von Kubernetes v1.19.2 auf CRI-O 1.17.3...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
🌟  Enabled addons: default-storageclass, storage-provisioner

❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.19.2

😿  If the above advice does not help, please let us know: 
👉  https://github.com/kubernetes/minikube/issues/new/choose

This could be resolved by using Abraham's steps above:

$ minikube delete
$ rm -rf ~/.minikube
# NOTE: no Docker image removal was needed
$ minikube start --driver=kvm2 --memory 12G --container-runtime=cri-o --cpus=4
😄  minikube v1.13.1 on Fedora 31
✨  Using the kvm2 driver based on user configuration
💿  Downloading VM boot image ...
    > minikube-v1.13.1.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.13.1.iso: 173.91 MiB / 173.91 MiB  100.00% 1.28 MiB p/s 2m16s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.19.2 preload ...
    > preloaded-images-k8s-v6-v1.19.2-cri-o-overlay-amd64.tar.lz4: 551.15 MiB /
🔥  Creating kvm2 VM (CPUs=4, Memory=12288MB, Disk=20000MB) ...
🎁  Preparing Kubernetes v1.19.2 on CRI-O 1.17.3...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner

❗  /home/fabiand/bin/kubectl is version 1.12.0, which may have incompatibilites with Kubernetes 1.19.2.
💡  Want kubectl v1.19.2? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" by default

@kaustubhd93

kaustubhd93 commented Oct 18, 2020

I also faced the same issue on Kubuntu 20.04, but it happened after I applied a flannel pod to my minikube. The pod kept failing, and even after I deleted it, it would still run when minikube restarted. Removing the minikube Docker image did not work either.
Finally, @abrahamfathman's solution worked for me.

@tstromberg
Contributor

Interesting. I can't conceive of a reason why minikube delete shouldn't be sufficient, but the user experience here seems counter to that.

It may be possible that minikube v1.4.0 has improved this error situation; if someone runs into this error with v1.4.0, please follow up on this issue.

@Enrico204

I'm using minikube v1.14.2 (on Debian 10) and I had the same issue:

$ minikube start --kubernetes-version=v1.17.4 --driver=docker
😄  minikube v1.14.2 on Debian 10.6
    ▪ MINIKUBE_HOME=/mnt/
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.17.4 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
🌟  Enabled addons: storage-provisioner

❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.17.4

😿  If the above advice does not help, please let us know: 
👉  https://github.com/kubernetes/minikube/issues/new/choose

I tried with virtualbox, kvm2 and docker drivers, no changes.

Unfortunately I had already deleted ~/.minikube, and that fixed the issue, so there is nothing left to debug :-(

@asoltesz

The issue is present on Minikube 1.15.1 (Ubuntu 20.04)

Abraham's workaround did work.

@gwgorman

gwgorman commented Dec 8, 2020

Deleting the cluster and the .minikube directory did it for me.

@nothinux

nothinux commented Jan 5, 2021

This worked for me (--all deletes every profile, and --purge also removes the ~/.minikube directory, so it is equivalent to the manual cleanup above):

minikube delete --all --purge
minikube start

@korpx-z

korpx-z commented Jan 21, 2021

This worked for me:

1. minikube delete

2. rm -rf ~/.minikube/

3. docker images | grep minikube
   Found one image...

4. docker rmi e3ca409c7daf

Ran minikube start again, and it worked

This worked perfectly for me, thank you Abe! (v1.16.0)

$ minikube start
😄  minikube v1.16.0 on Darwin 11.1
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.20.0 preload ...
    > preloaded-images-k8s-v8-v1....: 491.00 MiB / 491.00 MiB  100.00% 20.20 Mi
🔥  Creating docker container (CPUs=2, Memory=1988MB) ...
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@robpacheco

I'm having a similar issue on macOS Big Sur with Minikube 1.17.

I ran:
$ minikube start -p sns --kubernetes-version v1.17.15 --vm-driver hyperkit --memory 4096 --cpus 2

Here is the output:
💿 Downloading VM boot image ...
> minikube-v1.17.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
> minikube-v1.17.0.iso: 212.69 MiB / 212.69 MiB [ 100.00% 13.84 MiB p/s 16s
👍 Starting control plane node sns in cluster sns
💾 Downloading Kubernetes v1.17.15 preload ...
> preloaded-images-k8s-v8-v1....: 508.92 MiB / 508.92 MiB 100.00% 13.66 Mi
🔥 Creating hyperkit VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.17.15 on Docker 20.10.2 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass

❌ Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.17.15

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Before this I ran minikube delete --all and also removed the ~/.minikube directory. I have no other docker daemon running.

What else would help to diagnose this?

@medyagh
Member

medyagh commented Feb 3, 2021

@robpacheco do you mind sharing the output of this:

minikube delete --all
minikube start -p sns --kubernetes-version v1.17.15 --vm-driver hyperkit --memory 4096 --cpus 2 --alsologtostderr

By the way, I'm curious: is there a reason you chose kubernetes-version v1.17.15?

@robpacheco

@medyagh there was a lot of output, so I'm attaching a file. I chose 1.17.x because a lot of the cloud providers and Kubernetes hosts are somewhere around that version, so I wanted to keep some parity there. I can try a newer version if these logs don't help and you'd like to narrow it down a bit.
minikube-output.txt

@sharifelgamal
Collaborator

sharifelgamal commented Mar 3, 2021

The relevant output here is:

I0203 15:16:05.361572   64825 api_server.go:137] control plane version: v1.17.16-rc.0
W0203 15:16:05.361643   64825 api_server.go:117] api server version match failed: controlPane = "v1.17.16-rc.0", expected: "v1.17.15"

But it's not totally clear why the API server version is wrong.
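
For anyone who wants to check the same mismatch on their own cluster, the version the API server actually reports can be queried directly (assuming kubectl is pointed at the minikube context):

# serverVersion.gitVersion here corresponds to the "control plane version" above
$ kubectl version -o json
# or hit the version endpoint directly
$ kubectl get --raw /version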

@douglascamata

I still see this problem today with minikube v1.19.0, FYI.

@spowelljr spowelljr added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Apr 21, 2021
@kenessajr

This worked for me:

minikube delete --all --purge
minikube start

This worked for me as well.

@sseide

sseide commented Jun 1, 2021

Same problem for me running minikube v1.20.0 on Debian 10.9 with VirtualBox.
minikube delete and minikube delete --all did not work; I had to do a full

minikube delete --all --purge

before the cluster would start.

@sharifelgamal sharifelgamal added priority/backlog Higher priority than priority/awaiting-more-evidence. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. needs-solution-message Issues where where offering a solution for an error would be helpful and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Jun 14, 2021
@xbnrxout

This worked for me:

  1. minikube delete
  2. rm -rf ~/.minikube/
  3. docker images | grep minikube
    Found one image...
  4. docker rmi e3ca409c7daf

Ran minikube start again, and it worked

This worked for me! Thank you

@tomkivlin

Adding that I had this same problem on minikube v1.23.2, and the steps provided worked for me:

minikube delete
rm -rf ~/.minikube/
minikube start --driver=virtualbox

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 27, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 26, 2022
@klaases
Contributor

klaases commented Jan 26, 2022

Based on the comments from @xbnrxout and @tomkivlin, I will close out this issue.

@Grubhart, please feel free to re-open the issue by commenting with /reopen.

Thank you for sharing your experience!

@klaases klaases closed this as completed Jan 26, 2022
@carleeto

carleeto commented Mar 1, 2022

In case it helps, I can reproduce the issue with the docker driver on Ubuntu 20.04 LTS.
However, using a different driver (like kvm2) works perfectly: minikube delete --all --purge && minikube start --driver=kvm2

Logs for minikube start after running minikube delete --all --purge:
logs.txt

@HWiese1980

/reopen

I'm running into the same issue right now. Nothing has helped so far: I purged multiple times, removed images, containers, and profiles, and changed Docker Desktop's IP address range... it's always the same error message.

Logs follow:
logs.txt

@k8s-ci-robot
Contributor

@HWiese1980: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

I'm running into the same issue right now. Nothing has helped so far: I purged multiple times, removed images, containers, and profiles, and changed Docker Desktop's IP address range... it's always the same error message.

Logs follow:
logs.txt

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mdsadiqueinam

/reopen

@k8s-ci-robot
Contributor

@sadiqueWiseboxs: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@HWiese1980

Somehow I got it running after several purges and reinstalls. Unfortunately, I have no idea what eventually solved the problem; maybe there was an update of some component somewhere in between my reinstalls that I overlooked. @sadiqueWiseboxs Do you have the same issue with the most recent version of minikube?

@mdsadiqueinam

mdsadiqueinam commented Aug 17, 2022

This is the output I am getting when running minikube start

✨  Automatically selected the docker driver. Other choices: virtualbox, ssh, none, qemu2 (experimental)
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.24.3 preload ...
    > preloaded-images-k8s-v18-v1...:  405.75 MiB / 405.75 MiB  100.00% 2.05 Mi
    > gcr.io/k8s-minikube/kicbase:  386.61 MiB / 386.61 MiB  100.00% 1.59 MiB p
    > gcr.io/k8s-minikube/kicbase:  0 B [______________________] ?% ? p/s 2m26s
🔥  Creating docker container (CPUs=2, Memory=2772MB) ...
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...| E0817 12:49:24.482368   17705 start.go:267] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: timed out waiting for the condition

🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
🌟  Enabled addons: storage-provisioner

❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

logs.txt
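
The dial tcp 192.168.49.2:8443: i/o timeout lines suggest the host cannot reach the API server inside the docker container at all, which looks different from the stale-kubeconfig cases earlier in the thread. A few connectivity checks that may help narrow it down (a sketch; 192.168.49.2 is only the default docker-driver address and yours may differ):

# is the control-plane container actually up?
$ docker ps --filter name=minikube
# can the host reach the apiserver port? even a 401/403 response proves connectivity
$ curl -k https://$(minikube ip):8443/healthz
# and the same check from inside the node
$ minikube ssh -- curl -k https://localhost:8443/healthz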

@mdsadiqueinam

/reopen

I'm running into the same issue right now. Nothing has helped so far: I purged multiple times, removed images, containers, and profiles, and changed Docker Desktop's IP address range... it's always the same error message.

Logs follow: logs.txt

Did you find any solution yet?

@HWiese1980

@sadiqueWiseboxs None that I could share. It worked again after several uninstalls and reinstalls, but I can't tell what eventually solved the problem.
