failed to start node: controlPlane never updated to v1.18.x (re-use of cluster) #8765
Comments
This warning is interesting:
I've never seen that before. I also noticed:
That suggests to me that running ... I'm also unsure about why ...
I'm getting the same (?) problem, but minikube delete does not fix it:
This happens with n=1 and n=2.
Hi there, same error here. I started Docker Desktop and also tried with VirtualBox; same error.
Same error here, but with the docker driver. I don't know if this is related, but I recently updated minikube to 1.12.2 (I was using either 1.12.0 or 1.12.1).
This solved it for me: deleting .minikube worked. I've also removed the minikube Docker images, just in case.
For most people, ... Related: #8981
minikube delete does not fix the issue.
This worked for me:
Ran minikube start again, and it worked.
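Spelled out, the reset the commenters above describe looks like the following sketch. The `MINIKUBE_HOME` handling is an assumption for illustration; by default minikube keeps its state in `~/.minikube`.

```shell
# Tear down the cluster if minikube is installed, then remove its cached
# state (certs, machine configs, image preloads) so nothing stale survives.
command -v minikube >/dev/null 2>&1 && minikube delete

# Drop the cached state directory.
rm -rf "${MINIKUBE_HOME:-$HOME/.minikube}"

# Recreate the cluster from scratch.
command -v minikube >/dev/null 2>&1 && minikube start

echo "minikube state cleared"
```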
All of the reports so far are for minikube v1.12.x, so it's unclear if we accidentally fixed this. The cause seems to be that the data in $HOME/.kube/config is stale, but I've got no idea as to why that might be. Can someone report back if minikube v1.13 runs into this issue?
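The staleness described above can be checked by hand. A minimal sketch, using a fabricated kubeconfig fragment written to /tmp for illustration (the real file is $HOME/.kube/config):

```shell
# A minikube cluster entry pins the apiserver endpoint that existed when the
# cluster was created; with the docker driver, recreating the container maps
# a new host port, leaving this entry pointing at the old one.
cat > /tmp/kubeconfig.sample <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:32784
  name: minikube
EOF

# If this address no longer matches what `docker port minikube 8443` reports,
# the entry is stale.
grep 'server:' /tmp/kubeconfig.sample
```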
Yes, I'm still seeing this with v1.13.1:
This could be resolved by using Abraham's steps above:
I also faced the same issue on ...
Interesting. I can't conceive of a reason why ... It may be possible that minikube v1.14.0 has improved this error situation; if someone runs into this error with v1.14.0, please follow up on this issue.
I'm using minikube v1.14.2 (on Debian 10) and I had the same issue:
I tried with ... Unfortunately, I already deleted ...
The issue is present on minikube 1.15.1 (Ubuntu 20.04).
Deleting the cluster and the .minikube directory did it for me.
This worked for me.
This worked perfectly for me, thank you Abe! (v1.16.0)
I'm having a similar issue on macOS Big Sur with minikube 1.17. I ran: ... Here is the output:
❌ Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.17.15
😿 If the above advice does not help, please let us know.
Before this I ran ... What else would help to diagnose this?
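On the "what else would help" question, maintainers in this thread generally ask for the full log bundle. A guarded sketch using standard minikube flags (the /tmp path is just an example; the guard makes this safe to run where minikube is not installed):

```shell
if command -v minikube >/dev/null 2>&1; then
  # Full log bundle, suitable for attaching to the issue:
  minikube logs --file=/tmp/minikube-logs.txt
  # A verbose re-run of the failing start also helps:
  #   minikube start --alsologtostderr -v=1
else
  echo "minikube not installed; skipping log collection" > /tmp/minikube-logs.txt
fi
echo "log bundle at /tmp/minikube-logs.txt"
```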
@robpacheco do you mind sharing this ...
By the way, I'm curious: is there a reason you chose kubernetes-version v1.17.15?
@medyagh there was a lot of output, so I'm attaching a file. The reason I chose 1.17.x is that a lot of the cloud providers and kube hosts are somewhere around that version, so I wanted to keep some parity there. I can try a newer version if these logs don't help and you'd like to narrow it down a bit.
The relevant output here is:
But it's not totally clear why the API server version is wrong.
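For context on what the error means: minikube's start-up wait polls the apiserver and compares the version it reports against the version requested; "controlPlane never updated to vX.Y.Z" is that comparison timing out. A rough shell rendering of the check (both version strings below are made up):

```shell
want="v1.17.15"
got="v1.17.0"   # in reality parsed from the apiserver's /version endpoint

# minikube keeps retrying this comparison until a timeout, then reports
# the failure with the requested version.
if [ "$want" != "$got" ]; then
  echo "controlPlane never updated to $want (apiserver reports $got)"
fi
```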
I still see this problem today with minikube v1.19.0, FYI.
This worked for me.
Same problem for me running minikube v1.20.0 on Debian 10.9 with VirtualBox for cluster start.
This worked for me! Thank you.
Adding that on minikube v1.23.2 I had this same problem, and the steps provided worked for me:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Based on @xbnrxout's and @tomkivlin's comments, I will close out this issue. @Grubhart, please feel free to re-open the issue by commenting with ... Thank you for sharing your experience!
In case it helps, I can reproduce the issue with the docker driver on Ubuntu 20.04 LTS. Logs for ...
/reopen I'm running into the same issue right now. Nothing has helped so far: I purged multiple times, removed images, containers, and profiles, changed Docker Desktop's IP address range... it's always the same error message. Logs follow:
@HWiese1980: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@sadiqueWiseboxs: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Somehow I got it running after several purges and re-installs. I'm sorry, but unfortunately I have no idea what eventually solved the problem... Maybe there was an update of some component somewhere between my reinstalls that I overlooked. @sadiqueWiseboxs Do you have the same issue with the most recent version of minikube?
This is the output I am getting when running ...
Did you find any solution yet?
@sadiqueWiseboxs None that I could share. It works again after several uninstalls and reinstalls. I can't tell what eventually solved the problem.
I'm trying to start minikube for the very first time and I get the error message: startup failed: wait for healthy API server: controlPlane never updated to v1.18.3
I also tried changing the Kubernetes version to 1.17 and 1.16, always with the same result.
My environment is macOS Catalina 10.15.5.
Here I include all the ...
Steps to reproduce the issue:
1. minikube start --driver=docker
2.
3.
Full output of failed command:
I0719 03:34:11.513493 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:11.562193 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:11.562423 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:11.562458 4950 main.go:115] libmachine: About to run SSH command:
I0719 03:34:11.703522 4950 main.go:115] libmachine: SSH cmd err, output: :
I0719 03:34:11.703602 4950 ubuntu.go:172] set auth options {CertDir:/Users/grubhart/.minikube CaCertPath:/Users/grubhart/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/grubhart/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/grubhart/.minikube/machines/server.pem ServerKeyPath:/Users/grubhart/.minikube/machines/server-key.pem ClientKeyPath:/Users/grubhart/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/grubhart/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/grubhart/.minikube}
I0719 03:34:11.703636 4950 ubuntu.go:174] setting up certificates
I0719 03:34:11.703646 4950 provision.go:82] configureAuth start
I0719 03:34:11.703877 4950 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0719 03:34:11.757497 4950 provision.go:131] copyHostCerts
I0719 03:34:11.757711 4950 exec_runner.go:91] found /Users/grubhart/.minikube/ca.pem, removing ...
I0719 03:34:11.758023 4950 exec_runner.go:98] cp: /Users/grubhart/.minikube/certs/ca.pem --> /Users/grubhart/.minikube/ca.pem (1042 bytes)
I0719 03:34:11.758492 4950 exec_runner.go:91] found /Users/grubhart/.minikube/cert.pem, removing ...
I0719 03:34:11.758671 4950 exec_runner.go:98] cp: /Users/grubhart/.minikube/certs/cert.pem --> /Users/grubhart/.minikube/cert.pem (1082 bytes)
I0719 03:34:11.759115 4950 exec_runner.go:91] found /Users/grubhart/.minikube/key.pem, removing ...
I0719 03:34:11.759293 4950 exec_runner.go:98] cp: /Users/grubhart/.minikube/certs/key.pem --> /Users/grubhart/.minikube/key.pem (1675 bytes)
I0719 03:34:11.759592 4950 provision.go:105] generating server cert: /Users/grubhart/.minikube/machines/server.pem ca-key=/Users/grubhart/.minikube/certs/ca.pem private-key=/Users/grubhart/.minikube/certs/ca-key.pem org=grubhart.minikube san=[172.17.0.3 localhost 127.0.0.1]
I0719 03:34:11.927781 4950 provision.go:159] copyRemoteCerts
I0719 03:34:11.928062 4950 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0719 03:34:11.928210 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:11.975614 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:12.077531 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1042 bytes)
I0719 03:34:12.111364 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/machines/server.pem --> /etc/docker/server.pem (1123 bytes)
I0719 03:34:12.147493 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0719 03:34:12.182202 4950 provision.go:85] duration metric: configureAuth took 478.535115ms
I0719 03:34:12.182224 4950 ubuntu.go:190] setting minikube options for container-runtime
I0719 03:34:12.182614 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.233836 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:12.234111 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:12.234127 4950 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0719 03:34:12.387187 4950 main.go:115] libmachine: SSH cmd err, output: : overlay
I0719 03:34:12.387219 4950 ubuntu.go:71] root file system type: overlay
I0719 03:34:12.387569 4950 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0719 03:34:12.387833 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.442319 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:12.442636 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:12.442728 4950 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0719 03:34:12.600360 4950 main.go:115] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0719 03:34:12.600671 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.652035 4950 main.go:115] libmachine: Using SSH client type: native
I0719 03:34:12.652320 4950 main.go:115] libmachine: &{{{ 0 [] [] []} docker [0x43b89f0] 0x43b89c0 [] 0s} 127.0.0.1 32787 }
I0719 03:34:12.652349 4950 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0719 03:34:12.801359 4950 main.go:115] libmachine: SSH cmd err, output: :
I0719 03:34:12.801399 4950 machine.go:91] provisioned docker machine in 1.525765298s
I0719 03:34:12.801410 4950 start.go:204] post-start starting for "minikube" (driver="docker")
I0719 03:34:12.801419 4950 start.go:214] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0719 03:34:12.801639 4950 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0719 03:34:12.801826 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:12.849051 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:12.960966 4950 ssh_runner.go:148] Run: cat /etc/os-release
I0719 03:34:12.968508 4950 main.go:115] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0719 03:34:12.968541 4950 main.go:115] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0719 03:34:12.968556 4950 main.go:115] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0719 03:34:12.968565 4950 info.go:96] Remote host: Ubuntu 19.10
I0719 03:34:12.968584 4950 filesync.go:118] Scanning /Users/grubhart/.minikube/addons for local assets ...
I0719 03:34:12.969093 4950 filesync.go:118] Scanning /Users/grubhart/.minikube/files for local assets ...
I0719 03:34:12.969198 4950 start.go:207] post-start completed in 167.775628ms
I0719 03:34:12.969210 4950 fix.go:55] fixHost completed within 1.754636532s
I0719 03:34:12.969218 4950 start.go:76] releasing machines lock for "minikube", held for 1.754669732s
I0719 03:34:12.969368 4950 cli_runner.go:109] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0719 03:34:13.019641 4950 ssh_runner.go:148] Run: systemctl --version
I0719 03:34:13.019776 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:13.021484 4950 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0719 03:34:13.021854 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:34:13.074489 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:13.077127 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
I0719 03:34:14.246545 4950 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.224969967s)
I0719 03:34:14.246558 4950 ssh_runner.go:188] Completed: systemctl --version: (1.226854711s)
I0719 03:34:14.246769 4950 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0719 03:34:14.264051 4950 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0719 03:34:14.283642 4950 cruntime.go:192] skipping containerd shutdown because we are bound to it
I0719 03:34:14.283807 4950 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0719 03:34:14.305569 4950 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0719 03:34:14.325490 4950 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0719 03:34:14.435127 4950 ssh_runner.go:148] Run: sudo systemctl start docker
I0719 03:34:14.453617 4950 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
I0719 03:34:14.561180 4950 cli_runner.go:109] Run: docker exec -t minikube dig +short host.docker.internal
I0719 03:34:14.738497 4950 network.go:57] got host ip for mount in container by digging dns: 192.168.65.2
I0719 03:34:14.739028 4950 ssh_runner.go:148] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0719 03:34:14.749600 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0719 03:34:14.802051 4950 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0719 03:34:14.802105 4950 preload.go:103] Found local preload: /Users/grubhart/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v4-v1.18.3-docker-overlay2-amd64.tar.lz4
I0719 03:34:14.802275 4950 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0719 03:34:14.887230 4950 docker.go:381] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0719 03:34:14.887257 4950 docker.go:319] Images already preloaded, skipping extraction
I0719 03:34:14.887397 4950 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0719 03:34:14.962213 4950 docker.go:381] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.1
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-apiserver:v1.18.3
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0719 03:34:14.962256 4950 cache_images.go:69] Images are preloaded, skipping loading
I0719 03:34:14.962473 4950 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0719 03:34:15.050171 4950 cni.go:74] Creating CNI manager for ""
I0719 03:34:15.050205 4950 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0719 03:34:15.050222 4950 kubeadm.go:84] Using pod CIDR:
I0719 03:34:15.050243 4950 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:172.17.0.3 APIServerPort:8443 KubernetesVersion:v1.18.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.3"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0719 03:34:15.050521 4950 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 8443
bootstrapTokens:
  - ttl: 24h0m0s
    usages:
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.3
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.3"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
controllerManager:
  extraArgs:
    "leader-elect": "false"
scheduler:
  extraArgs:
    "leader-elect": "false"
kubernetesVersion: v1.18.3
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 172.17.0.3:10249
I0719 03:34:15.050739 4950 kubeadm.go:787] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.3
[Install]
config:
{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0719 03:34:15.050965 4950 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.3
I0719 03:34:15.066578 4950 binaries.go:43] Found k8s binaries, skipping transfer
I0719 03:34:15.066786 4950 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0719 03:34:15.081140 4950 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
I0719 03:34:15.121839 4950 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0719 03:34:15.157848 4950 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1730 bytes)
I0719 03:34:15.193253 4950 ssh_runner.go:148] Run: grep 172.17.0.3 control-plane.minikube.internal$ /etc/hosts
I0719 03:34:15.202321 4950 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0719 03:34:15.300507 4950 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0719 03:34:15.318284 4950 certs.go:52] Setting up /Users/grubhart/.minikube/profiles/minikube for IP: 172.17.0.3
I0719 03:34:15.318466 4950 certs.go:169] skipping minikubeCA CA generation: /Users/grubhart/.minikube/ca.key
I0719 03:34:15.318565 4950 certs.go:169] skipping proxyClientCA CA generation: /Users/grubhart/.minikube/proxy-client-ca.key
I0719 03:34:15.318775 4950 certs.go:269] skipping minikube-user signed cert generation: /Users/grubhart/.minikube/profiles/minikube/client.key
I0719 03:34:15.318853 4950 certs.go:269] skipping minikube signed cert generation: /Users/grubhart/.minikube/profiles/minikube/apiserver.key.0f3e66d0
I0719 03:34:15.318989 4950 certs.go:269] skipping aggregator signed cert generation: /Users/grubhart/.minikube/profiles/minikube/proxy-client.key
I0719 03:34:15.319382 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/ca-key.pem (1679 bytes)
I0719 03:34:15.319468 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/ca.pem (1042 bytes)
I0719 03:34:15.319546 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/cert.pem (1082 bytes)
I0719 03:34:15.319598 4950 certs.go:348] found cert: /Users/grubhart/.minikube/certs/Users/grubhart/.minikube/certs/key.pem (1675 bytes)
I0719 03:34:15.320810 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0719 03:34:15.356182 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0719 03:34:15.390252 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0719 03:34:15.428766 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0719 03:34:15.470080 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0719 03:34:15.504531 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0719 03:34:15.542213 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0719 03:34:15.578338 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0719 03:34:15.610650 4950 ssh_runner.go:215] scp /Users/grubhart/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0719 03:34:15.649183 4950 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0719 03:34:15.686491 4950 ssh_runner.go:148] Run: openssl version
I0719 03:34:15.697905 4950 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0719 03:34:15.714023 4950 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0719 03:34:15.724378 4950 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Jan 25 2019 /usr/share/ca-certificates/minikubeCA.pem
I0719 03:34:15.724645 4950 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0719 03:34:15.736817 4950 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0719 03:34:15.754397 4950 kubeadm.go:327] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0719 03:34:15.754698 4950 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0719 03:34:15.822949 4950 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0719 03:34:15.839220 4950 kubeadm.go:338] found existing configuration files, will attempt cluster restart
I0719 03:34:15.839252 4950 kubeadm.go:512] restartCluster start
I0719 03:34:15.839536 4950 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0719 03:34:15.854814 4950 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0719 03:34:15.854976 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0719 03:34:15.911128 4950 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0719 03:34:15.924877 4950 api_server.go:146] Checking apiserver status ...
I0719 03:34:15.925053 4950 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0719 03:34:15.944938 4950 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/6638/cgroup
I0719 03:34:15.962821 4950 api_server.go:162] apiserver freezer: "7:freezer:/docker/5c02acd7c011b034fcffaa41411139fed3ebefb56d6ab7a03341443e993d4de8/kubepods/burstable/pod6ff2e3bf96dbdcdd33879625130d5ccc/9afb59caa064bfc821cbed4d4fd6a72814d3bf8d53e63adc8976563542f9cd46"
I0719 03:34:15.963004 4950 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c02acd7c011b034fcffaa41411139fed3ebefb56d6ab7a03341443e993d4de8/kubepods/burstable/pod6ff2e3bf96dbdcdd33879625130d5ccc/9afb59caa064bfc821cbed4d4fd6a72814d3bf8d53e63adc8976563542f9cd46/freezer.state
I0719 03:34:15.978374 4950 api_server.go:184] freezer state: "THAWED"
I0719 03:34:15.978423 4950 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32784/healthz ...
I0719 03:34:15.988276 4950 api_server.go:241] https://127.0.0.1:32784/healthz returned 200:
ok
I0719 03:34:16.000152 4950 kubeadm.go:496] needs reconfigure: Unauthorized
I0719 03:34:16.000354 4950 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0719 03:34:16.019884 4950 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5491 Jul 19 08:29 /etc/kubernetes/admin.conf
-rw------- 1 root root 5531 Jul 19 08:29 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1911 Jul 19 08:29 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5475 Jul 19 08:29 /etc/kubernetes/scheduler.conf
I0719 03:34:16.020115 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0719 03:34:16.035928 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0719 03:34:16.053024 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0719 03:34:16.069212 4950 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0719 03:34:16.084940 4950 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0719 03:34:16.100244 4950 kubeadm.go:573] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0719 03:34:16.100270 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:16.201760 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:17.302607 4950 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.100798063s)
I0719 03:34:17.302635 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:17.393583 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0719 03:34:17.482499 4950 api_server.go:48] waiting for apiserver process to appear ...
I0719 03:34:17.482693 4950 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0719 03:34:17.500885 4950 api_server.go:68] duration metric: took 18.388948ms to wait for apiserver process to appear ...
I0719 03:34:17.500915 4950 api_server.go:84] waiting for apiserver healthz status ...
I0719 03:34:17.500926 4950 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32784/healthz ...
I0719 03:34:17.511884 4950 api_server.go:241] https://127.0.0.1:32784/healthz returned 200:
ok
W0719 03:34:17.514119 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:35:58.019458 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:35:58.524614 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:15.025002 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:15.523204 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:16.023884 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:16.521941 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:38:17.020950 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
I0719 03:38:17.522886 4950 kubeadm.go:516] restartCluster took 4m1.680553504s
🤦 Unable to restart cluster, will reset it: apiserver health: controlPlane never updated to v1.18.3
I0719 03:38:17.523130 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0719 03:39:13.372517 4950 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (55.848659708s)
I0719 03:39:13.372990 4950 ssh_runner.go:148] Run: sudo systemctl stop -f kubelet
I0719 03:39:13.393428 4950 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0719 03:39:13.457088 4950 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0719 03:39:13.473221 4950 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0719 03:39:13.473385 4950 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0719 03:39:13.486241 4950 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0719 03:39:13.486289 4950 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0719 03:39:27.520700 4950 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (14.034199513s)
I0719 03:39:27.520748 4950 cni.go:74] Creating CNI manager for ""
I0719 03:39:27.520773 4950 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0719 03:39:27.520814 4950 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0719 03:39:27.521003 4950 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0719 03:39:27.521041 4950 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.3/kubectl label nodes minikube.k8s.io/version=v1.12.1 minikube.k8s.io/commit=5664228288552de9f3a446ea4f51c6f29bbdd0e0 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_07_19T03_39_27_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0719 03:39:28.438115 4950 ops.go:35] apiserver oom_adj: -16
I0719 03:39:28.438285 4950 kubeadm.go:863] duration metric: took 917.443413ms to wait for elevateKubeSystemPrivileges.
I0719 03:39:28.438316 4950 kubeadm.go:329] StartCluster complete in 5m12.679975261s
I0719 03:39:28.438338 4950 settings.go:123] acquiring lock: {Name:mk47bf7647bc74b013a72fdf28fd00aa56bb404b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0719 03:39:28.438489 4950 settings.go:131] Updating kubeconfig: /Users/grubhart/.kube/config
I0719 03:39:28.440920 4950 lock.go:35] WriteFile acquiring /Users/grubhart/.kube/config: {Name:mk5194232d5641140a4c29facb1774dd79565358 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0719 03:39:28.442668 4950 start.go:195] Will wait wait-timeout for node ...
I0719 03:39:28.442735 4950 addons.go:347] enableAddons start: toEnable=map[], additional=[]
🔎 Verifying Kubernetes components...
I0719 03:39:28.442802 4950 addons.go:53] Setting storage-provisioner=true in profile "minikube"
I0719 03:39:28.442802 4950 addons.go:53] Setting default-storageclass=true in profile "minikube"
I0719 03:39:28.442914 4950 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0719 03:39:28.453401 4950 addons.go:129] Setting addon storage-provisioner=true in "minikube"
I0719 03:39:28.453412 4950 addons.go:269] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0719 03:39:28.453420 4950 addons.go:138] addon storage-provisioner should already be in state true
I0719 03:39:28.453436 4950 host.go:65] Checking if "minikube" exists ...
I0719 03:39:28.453546 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0719 03:39:28.456113 4950 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0719 03:39:28.456677 4950 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0719 03:39:28.528393 4950 addons.go:236] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0719 03:39:28.528436 4950 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0719 03:39:28.528768 4950 api_server.go:48] waiting for apiserver process to appear ...
I0719 03:39:28.528826 4950 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0719 03:39:28.528984 4950 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0719 03:39:28.587275 4950 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/Users/grubhart/.minikube/machines/minikube/id_rsa Username:docker}
❗ Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Unauthorized]
I0719 03:39:28.907724 4950 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0719 03:39:28.949446 4950 start.go:548] successfully scaled coredns replicas to 1
I0719 03:39:28.949496 4950 api_server.go:68] duration metric: took 506.775568ms to wait for apiserver process to appear ...
I0719 03:39:28.949514 4950 api_server.go:84] waiting for apiserver healthz status ...
I0719 03:39:28.949530 4950 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32784/healthz ...
I0719 03:39:29.022409 4950 api_server.go:241] https://127.0.0.1:32784/healthz returned 200:
ok
W0719 03:39:29.027325 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
🌟 Enabled addons: default-storageclass, storage-provisioner
I0719 03:39:29.467109 4950 addons.go:349] enableAddons completed in 1.024394514s
W0719 03:39:29.532059 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:40.034540 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:40.532937 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:41.035460 4950 api_server.go:117] api server version match failed: server version: the server has asked for the client to provide credentials
W0719 03:39:41.530456 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:39:42.028373 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:39:42.528126 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:41:01.034343 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:27.535607 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:28.035370 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:28.533477 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:29.031295 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
W0719 03:43:29.031626 4950 api_server.go:117] api server version match failed: server version: Get "https://127.0.0.1:32784/version?timeout=32s": dial tcp 127.0.0.1:32784: connect: connection refused
I0719 03:43:29.031822 4950 exit.go:58] WithError(failed to start node)=startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3 called from:
goroutine 1 [running]:
runtime/debug.Stack(0x0, 0x0, 0x0)
/usr/local/Cellar/go/1.14.5/libexec/src/runtime/debug/stack.go:24 +0x9d
k8s.io/minikube/pkg/minikube/exit.WithError(0x57c141b, 0x14, 0x5adcc80, 0xc0005dfba0)
/private/tmp/minikube-20200717-69613-180ctkg/pkg/minikube/exit/exit.go:58 +0x34
k8s.io/minikube/cmd/minikube/cmd.runStart(0x6908020, 0xc0005f5440, 0x0, 0x2)
/private/tmp/minikube-20200717-69613-180ctkg/cmd/minikube/cmd/start.go:206 +0x4f8
github.com/spf13/cobra.(*Command).execute(0x6908020, 0xc0005f5420, 0x2, 0x2, 0x6908020, 0xc0005f5420)
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/[email protected]/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x6907060, 0x0, 0x1, 0xc0005f2b60)
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/Users/brew/Library/Caches/Homebrew/go_mod_cache/pkg/mod/github.com/spf13/[email protected]/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
/private/tmp/minikube-20200717-69613-180ctkg/cmd/minikube/cmd/root.go:106 +0x72c
main.main()
/private/tmp/minikube-20200717-69613-180ctkg/cmd/minikube/main.go:71 +0x11f
W0719 03:43:29.032038 4950 out.go:232] failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3
💣 failed to start node: startup failed: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.18.3
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
Full output of `minikube start` command used, if not already included:

Optional: Full output of `minikube logs` command:

💣 Unable to get machine status: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
grubhart@grubharts-mbp minikube_env %
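For anyone else landing here: the `needs reconfigure: Unauthorized` and repeated `the server has asked for the client to provide credentials` lines suggest stale client credentials left over from the previous cluster. Per the workarounds reported above, clearing the minikube state directory and kubeconfig before restarting resolves it. A minimal cleanup sketch (hedged — it assumes the default `~/.minikube` and `~/.kube/config` locations; adjust if you set `MINIKUBE_HOME` or `KUBECONFIG`; it only prints the commands unless you run it with `APPLY=1`):

```shell
#!/usr/bin/env sh
# Workaround sketch for "controlPlane never updated" on a re-used cluster.
# Default paths assumed; destructive steps are dry-run unless APPLY=1.
set -eu

run() {
  if [ "${APPLY:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"   # dry-run: show the command instead of running it
  fi
}

run minikube delete              # drop the stale node/container
run rm -rf "$HOME/.minikube"     # stale certs and machine state live here
run rm -f "$HOME/.kube/config"   # stale client credentials
run minikube start               # bring up a fresh cluster
```

Note this wipes all minikube profiles and your kubeconfig, so back up `~/.kube/config` first if it also holds contexts for other clusters.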