I0511 00:50:40.448964 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:40.448964 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:40.449909 22940 main.go:110] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
  else
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
  fi
fi
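(Aside for anyone triaging this: the inlined script above is what minikube runs over SSH to pin the `minikube` hostname in `/etc/hosts`. A simplified sketch of the same rewrite, run against a scratch file rather than the real `/etc/hosts` — the file name and sample contents here are made up for illustration, and the grep/sed patterns are simplified from the original:)

```shell
# Simplified version of the hosts rewrite minikube performs:
# replace any existing 127.0.1.1 entry, otherwise append one.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"

if ! grep -q 'minikube' "$hosts"; then
  if grep -q '^127\.0\.1\.1' "$hosts"; then
    # A 127.0.1.1 line already exists: rewrite it in place.
    sed -i 's/^127\.0\.1\.1.*/127.0.1.1 minikube/' "$hosts"
  else
    # No 127.0.1.1 line yet: append one.
    echo '127.0.1.1 minikube' >> "$hosts"
  fi
fi
grep minikube "$hosts"   # prints: 127.0.1.1 minikube
```

Running it a second time changes nothing, which is the idempotence the real script relies on across restarts.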
Relaunching Kubernetes using kubeadm ...
I0511 00:51:00.802669 22940 kubeadm.go:436] RestartCluster start
I0511 00:51:00.829998 22940 ssh_runner.go:96] (SSHRunner) Run: sudo test -d /data/minikube
I0511 00:51:00.835856 22940 ssh_runner.go:139] (SSHRunner) Non-zero exit: sudo test -d /data/minikube: Process exited with status 1 (977.6µs)
I0511 00:51:00.836831 22940 kubeadm.go:229] /data/minikube skipping compat symlinks: Process exited with status 1
I0511 00:51:00.836831 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:00.989084 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:02.274479 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:02.440396 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:02.620956 22940 kubeadm.go:496] Waiting for apiserver process ...
I0511 00:51:02.650236 22940 ssh_runner.go:96] (SSHRunner) Run: sudo pgrep kube-apiserver
I0511 00:51:02.670734 22940 kubeadm.go:511] Waiting for apiserver to port healthy status ...
I0511 00:51:23.236947 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: read tcp 192.168.99.1:52662->192.168.99.100:8443: wsarecv: An existing connection was forcibly closed by the remote host.
I0511 00:51:23.237860 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:25.741425 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:25.741425 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:28.241067 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:28.241067 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:30.744483 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:30.744483 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:33.245179 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:33.246287 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:35.742985 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:35.743926 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:38.246488 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:38.246488 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:40.743066 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
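(The healthz polling that keeps failing above can be reproduced by hand while the cluster is in this state. A rough helper — the function name, attempt counts, and timeouts are mine, not minikube's; minikube itself does this polling in Go:)

```shell
# Poll an apiserver-style /healthz endpoint until it answers, or give up.
# Usage: wait_healthz <url> [attempts] [delay-seconds]
wait_healthz() {
  url=$1; attempts=${2:-30}; delay=${3:-2}
  for _ in $(seq "$attempts"); do
    # -k because the apiserver certificate inside the VM is self-signed;
    # -f makes curl fail on HTTP errors; --max-time bounds each probe.
    if curl -fsk --max-time 2 "$url" > /dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# e.g. wait_healthz https://192.168.99.100:8443/healthz 30 2
```

Against this VM it keeps failing with connection refused/reset, consistent with the repeated kube-apiserver exits in the container status table further down.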
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command:
* ==> Docker <==
* -- Logs begin at Sun 2020-05-10 22:36:48 UTC, end at Sun 2020-05-10 22:56:31 UTC. --
* May 10 22:45:06 minikube dockerd[2401]: time="2020-05-10T22:45:06.612297348Z" level=info msg="shim reaped" id=2fcaf0101342594f2d6ebbc1b184d2606574bb30b14f2947845d6ee8e1058a47
* May 10 22:45:06 minikube dockerd[2401]: time="2020-05-10T22:45:06.622986375Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:45:06 minikube dockerd[2401]: time="2020-05-10T22:45:06.623135916Z" level=warning msg="2fcaf0101342594f2d6ebbc1b184d2606574bb30b14f2947845d6ee8e1058a47 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2fcaf0101342594f2d6ebbc1b184d2606574bb30b14f2947845d6ee8e1058a47/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:46:21 minikube dockerd[2401]: time="2020-05-10T22:46:21.365763973Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/058cdffbb11012432e9c3e51b1a78a76ed5366ef004537c029a74ffadd32a89f/shim.sock" debug=false pid=10344
* May 10 22:46:42 minikube dockerd[2401]: time="2020-05-10T22:46:42.181642036Z" level=info msg="shim reaped" id=058cdffbb11012432e9c3e51b1a78a76ed5366ef004537c029a74ffadd32a89f
* May 10 22:46:42 minikube dockerd[2401]: time="2020-05-10T22:46:42.193267636Z" level=warning msg="058cdffbb11012432e9c3e51b1a78a76ed5366ef004537c029a74ffadd32a89f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/058cdffbb11012432e9c3e51b1a78a76ed5366ef004537c029a74ffadd32a89f/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:46:42 minikube dockerd[2401]: time="2020-05-10T22:46:42.197837010Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:47:49 minikube dockerd[2401]: time="2020-05-10T22:47:49.388537489Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78a24b4024678c70882593223568530d699788c3c04f938b3ad841e1703a6546/shim.sock" debug=false pid=11348
* May 10 22:47:49 minikube dockerd[2401]: time="2020-05-10T22:47:49.587175231Z" level=info msg="shim reaped" id=78a24b4024678c70882593223568530d699788c3c04f938b3ad841e1703a6546
* May 10 22:47:49 minikube dockerd[2401]: time="2020-05-10T22:47:49.597230269Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:47:49 minikube dockerd[2401]: time="2020-05-10T22:47:49.597537481Z" level=warning msg="78a24b4024678c70882593223568530d699788c3c04f938b3ad841e1703a6546 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/78a24b4024678c70882593223568530d699788c3c04f938b3ad841e1703a6546/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:49:26 minikube dockerd[2401]: time="2020-05-10T22:49:26.368963175Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d9e8e63564fdf0a60f9cde6cf8ecce73ae07bddcd9397873ee2443cfb00df564/shim.sock" debug=false pid=12815
* May 10 22:49:47 minikube dockerd[2401]: time="2020-05-10T22:49:47.056537600Z" level=info msg="shim reaped" id=d9e8e63564fdf0a60f9cde6cf8ecce73ae07bddcd9397873ee2443cfb00df564
* May 10 22:49:47 minikube dockerd[2401]: time="2020-05-10T22:49:47.067084290Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:49:47 minikube dockerd[2401]: time="2020-05-10T22:49:47.067157701Z" level=warning msg="d9e8e63564fdf0a60f9cde6cf8ecce73ae07bddcd9397873ee2443cfb00df564 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d9e8e63564fdf0a60f9cde6cf8ecce73ae07bddcd9397873ee2443cfb00df564/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:51:00 minikube dockerd[2401]: time="2020-05-10T22:51:00.651317071Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
* May 10 22:51:00 minikube dockerd[2401]: time="2020-05-10T22:51:00.667870756Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
* May 10 22:51:00 minikube dockerd[2401]: time="2020-05-10T22:51:00.706317458Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
* May 10 22:51:00 minikube dockerd[2401]: time="2020-05-10T22:51:00.722622424Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
* May 10 22:51:00 minikube dockerd[2401]: time="2020-05-10T22:51:00.792990214Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version commit: 425e105d5a03fabd737a126ad93d62a9eeede87f\nspec: 1.0.1-dev\n"
* May 10 22:51:02 minikube dockerd[2401]: time="2020-05-10T22:51:02.322881945Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c9f42ce30454783c005e9ab4b104cc1ea4defbef5d5fdfa31db0ac08024d1744/shim.sock" debug=false pid=14459
* May 10 22:51:02 minikube dockerd[2401]: time="2020-05-10T22:51:02.328763288Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/504bf5702dcb9881b192c76ebc34372ab4400499084b05709722264be3a7ece9/shim.sock" debug=false pid=14463
* May 10 22:51:02 minikube dockerd[2401]: time="2020-05-10T22:51:02.702195746Z" level=info msg="shim reaped" id=504bf5702dcb9881b192c76ebc34372ab4400499084b05709722264be3a7ece9
* May 10 22:51:02 minikube dockerd[2401]: time="2020-05-10T22:51:02.712930931Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:51:02 minikube dockerd[2401]: time="2020-05-10T22:51:02.712931740Z" level=warning msg="504bf5702dcb9881b192c76ebc34372ab4400499084b05709722264be3a7ece9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/504bf5702dcb9881b192c76ebc34372ab4400499084b05709722264be3a7ece9/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:51:18 minikube dockerd[2401]: time="2020-05-10T22:51:18.780803995Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/eed6f6ec524f1348f9aae1aa09a0795c70a57f818f7c7287e23ee96617a9a298/shim.sock" debug=false pid=14590
* May 10 22:51:18 minikube dockerd[2401]: time="2020-05-10T22:51:18.996661903Z" level=info msg="shim reaped" id=eed6f6ec524f1348f9aae1aa09a0795c70a57f818f7c7287e23ee96617a9a298
* May 10 22:51:19 minikube dockerd[2401]: time="2020-05-10T22:51:19.007991606Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:51:19 minikube dockerd[2401]: time="2020-05-10T22:51:19.008232859Z" level=warning msg="eed6f6ec524f1348f9aae1aa09a0795c70a57f818f7c7287e23ee96617a9a298 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/eed6f6ec524f1348f9aae1aa09a0795c70a57f818f7c7287e23ee96617a9a298/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:51:23 minikube dockerd[2401]: time="2020-05-10T22:51:23.293808444Z" level=info msg="shim reaped" id=c9f42ce30454783c005e9ab4b104cc1ea4defbef5d5fdfa31db0ac08024d1744
* May 10 22:51:23 minikube dockerd[2401]: time="2020-05-10T22:51:23.304433646Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:51:23 minikube dockerd[2401]: time="2020-05-10T22:51:23.304745675Z" level=warning msg="c9f42ce30454783c005e9ab4b104cc1ea4defbef5d5fdfa31db0ac08024d1744 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c9f42ce30454783c005e9ab4b104cc1ea4defbef5d5fdfa31db0ac08024d1744/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:51:39 minikube dockerd[2401]: time="2020-05-10T22:51:39.795748485Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8000cb67f85f7baf3f5595254b80115014a7bd312354fd0aac673adec7e3ac7a/shim.sock" debug=false pid=14887
* May 10 22:51:40 minikube dockerd[2401]: time="2020-05-10T22:51:40.012275768Z" level=info msg="shim reaped" id=8000cb67f85f7baf3f5595254b80115014a7bd312354fd0aac673adec7e3ac7a
* May 10 22:51:40 minikube dockerd[2401]: time="2020-05-10T22:51:40.022941799Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:51:40 minikube dockerd[2401]: time="2020-05-10T22:51:40.023309621Z" level=warning msg="8000cb67f85f7baf3f5595254b80115014a7bd312354fd0aac673adec7e3ac7a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8000cb67f85f7baf3f5595254b80115014a7bd312354fd0aac673adec7e3ac7a/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:51:40 minikube dockerd[2401]: time="2020-05-10T22:51:40.774114008Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d24ab9914fc35fc8135a3eebce31e3df462aae18d14be9488f3177020e93f484/shim.sock" debug=false pid=14976
* May 10 22:52:01 minikube dockerd[2401]: time="2020-05-10T22:52:01.553849843Z" level=info msg="shim reaped" id=d24ab9914fc35fc8135a3eebce31e3df462aae18d14be9488f3177020e93f484
* May 10 22:52:01 minikube dockerd[2401]: time="2020-05-10T22:52:01.564446931Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:52:01 minikube dockerd[2401]: time="2020-05-10T22:52:01.564627920Z" level=warning msg="d24ab9914fc35fc8135a3eebce31e3df462aae18d14be9488f3177020e93f484 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d24ab9914fc35fc8135a3eebce31e3df462aae18d14be9488f3177020e93f484/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:52:21 minikube dockerd[2401]: time="2020-05-10T22:52:21.793721824Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0afb72a251544be9982498dfb3b6612fef4384a9d72b4cf2a4b195b85ce62d9/shim.sock" debug=false pid=15335
* May 10 22:52:23 minikube dockerd[2401]: time="2020-05-10T22:52:23.794459817Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/66bbe7a0a0e49d23e528f589e1a25c36ecd6d9c64f1da4f1327e2742f4660afa/shim.sock" debug=false pid=15396
* May 10 22:52:24 minikube dockerd[2401]: time="2020-05-10T22:52:24.013825870Z" level=info msg="shim reaped" id=66bbe7a0a0e49d23e528f589e1a25c36ecd6d9c64f1da4f1327e2742f4660afa
* May 10 22:52:24 minikube dockerd[2401]: time="2020-05-10T22:52:24.024632617Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:52:24 minikube dockerd[2401]: time="2020-05-10T22:52:24.024798899Z" level=warning msg="66bbe7a0a0e49d23e528f589e1a25c36ecd6d9c64f1da4f1327e2742f4660afa cleanup: failed to unmount IPC: umount /var/lib/docker/containers/66bbe7a0a0e49d23e528f589e1a25c36ecd6d9c64f1da4f1327e2742f4660afa/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:52:42 minikube dockerd[2401]: time="2020-05-10T22:52:42.562521803Z" level=info msg="shim reaped" id=f0afb72a251544be9982498dfb3b6612fef4384a9d72b4cf2a4b195b85ce62d9
* May 10 22:52:42 minikube dockerd[2401]: time="2020-05-10T22:52:42.573220655Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:52:42 minikube dockerd[2401]: time="2020-05-10T22:52:42.573621075Z" level=warning msg="f0afb72a251544be9982498dfb3b6612fef4384a9d72b4cf2a4b195b85ce62d9 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/f0afb72a251544be9982498dfb3b6612fef4384a9d72b4cf2a4b195b85ce62d9/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:53:25 minikube dockerd[2401]: time="2020-05-10T22:53:25.796975516Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/31a3befce81061ff2b11da44b2076a7038e247c26435ee13afc2fc50b58ddfd6/shim.sock" debug=false pid=16244
* May 10 22:53:46 minikube dockerd[2401]: time="2020-05-10T22:53:46.793428392Z" level=info msg="shim reaped" id=31a3befce81061ff2b11da44b2076a7038e247c26435ee13afc2fc50b58ddfd6
* May 10 22:53:46 minikube dockerd[2401]: time="2020-05-10T22:53:46.803736256Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:53:46 minikube dockerd[2401]: time="2020-05-10T22:53:46.804018040Z" level=warning msg="31a3befce81061ff2b11da44b2076a7038e247c26435ee13afc2fc50b58ddfd6 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/31a3befce81061ff2b11da44b2076a7038e247c26435ee13afc2fc50b58ddfd6/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:53:56 minikube dockerd[2401]: time="2020-05-10T22:53:56.801066121Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0f8489f4d38f86d4f6a39dc1d0faf42fc65537881722da3743fe0a1c497ba315/shim.sock" debug=false pid=16450
* May 10 22:53:56 minikube dockerd[2401]: time="2020-05-10T22:53:56.991931310Z" level=info msg="shim reaped" id=0f8489f4d38f86d4f6a39dc1d0faf42fc65537881722da3743fe0a1c497ba315
* May 10 22:53:57 minikube dockerd[2401]: time="2020-05-10T22:53:57.001988989Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:53:57 minikube dockerd[2401]: time="2020-05-10T22:53:57.002294996Z" level=warning msg="0f8489f4d38f86d4f6a39dc1d0faf42fc65537881722da3743fe0a1c497ba315 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0f8489f4d38f86d4f6a39dc1d0faf42fc65537881722da3743fe0a1c497ba315/mounts/shm, flags: 0x2: no such file or directory"
* May 10 22:55:11 minikube dockerd[2401]: time="2020-05-10T22:55:11.777442072Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/79b238c935b65b834cbb77d52c602eb4eea6afb4360210926b209360da048535/shim.sock" debug=false pid=17737
* May 10 22:55:32 minikube dockerd[2401]: time="2020-05-10T22:55:32.718749910Z" level=info msg="shim reaped" id=79b238c935b65b834cbb77d52c602eb4eea6afb4360210926b209360da048535
* May 10 22:55:32 minikube dockerd[2401]: time="2020-05-10T22:55:32.729938306Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 10 22:55:32 minikube dockerd[2401]: time="2020-05-10T22:55:32.730039616Z" level=warning msg="79b238c935b65b834cbb77d52c602eb4eea6afb4360210926b209360da048535 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/79b238c935b65b834cbb77d52c602eb4eea6afb4360210926b209360da048535/mounts/shm, flags: 0x2: no such file or directory"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 79b238c935b65 c2c9a0406787c About a minute ago Exited kube-apiserver 56 823850b8967e8
* 0f8489f4d38f8 b2756210eeabf 2 minutes ago Exited etcd 58 0fc4ed979b2ca
* 129f4f899fcc8 ebac1ae204a2c 19 minutes ago Running kube-scheduler 19 70fe07353bf99
* 8dd92a877ef12 6e4bffa46d70b 19 minutes ago Running kube-controller-manager 7 f71417232cfde
* 756775c148133 bd12a212f9dcb 19 minutes ago Running kube-addon-manager 19 bd3fe0853667a
* 5084e6d3c907d bd12a212f9dcb About an hour ago Exited kube-addon-manager 18 ad04f31d505b2
* bed488cb1f791 6e4bffa46d70b About an hour ago Exited kube-controller-manager 6 f47c4d61d7b86
* 7ace7400d1f53 ebac1ae204a2c About an hour ago Exited kube-scheduler 18 976a19ca3eb6f
* 7e3e6f1adcb2f 4689081edb103 About an hour ago Exited storage-provisioner 25 47dc144349fa6
* c1e550c2fa2ac 4a9db3cd3220c About an hour ago Exited mailchimp 162 7f1acd44b2be2
* 4f961c8f27c16 8454cbe08dc9f About an hour ago Exited kube-proxy 22 7877ce456a3cc
* 4dfa6e1cf94c0 79f72f5cf82be About an hour ago Exited adwords 35 5b14b55d37836
* 5d5235b042144 f7d9c4b5d258d About an hour ago Exited jobs 35 778c62404da85
* 4b2766a0a94e9 6802d83967b99 About an hour ago Exited kubernetes-dashboard 28 2cf4d66a550de
* 75dde7210329d fbe49eca3d6bc About an hour ago Exited chat 164 6b705b01b0228
* b9a0a344ddd97 e05c09ee1094c About an hour ago Exited google-search-console 36 d41f0e73220f8
* 7ee0ca606c052 18d8661209155 About an hour ago Exited pixel 162 4d502fbdf4935
* 1101ca456c8ac 2e0c235347464 About an hour ago Exited kafka 157 c19ef98a8ec76
* 1060624394c8e 95ea639989fe6 About an hour ago Exited analytics 158 1afe0f3a9399c
* 02c1f09eb9f7a ab31c36360479 About an hour ago Exited linkedin 32 7327318185dab
* 9b9eac3252bb1 fb1fe407af1a3 About an hour ago Exited twitter-ads 28 eac2272cb09d5
* 2e4cf0ebba795 92539e3bba6b0 About an hour ago Exited scripts 25 5bb0096afe862
* 1fe64903698ac a9477f61c257e About an hour ago Exited quora 38 16071c7cad74b
* 071e2a8f149dc 6fb488f2f9258 About an hour ago Exited website-editor 165 a5ae4b34b82c8
* 10ac1c2062bc4 aa2ac461061bf About an hour ago Exited capterra 38 c9b03bc911fde
* 9643c24b0b986 6b575d3754db3 About an hour ago Exited slack 164 a8e9ad5ddec53
* 8073a0d16a7ec bb518b1693817 About an hour ago Exited hotjar 165 d7b3cc1f38281
* e6cb77e7a7524 cb104b21a1676 About an hour ago Exited ambassador 16 fdd3536f0b39a
* 9899c5cca1c84 57b633345227a About an hour ago Exited user 5 657b57365e24a
* fc36c39ef6ae3 1ea5b0defe1d9 About an hour ago Exited core 4 be7712cc1490f
* b63230cacdffb 76334586bb1c2 About an hour ago Exited mail 129 2a915aaadcaa8
* 8f2f19d45f6c2 1ec79e53fc2a2 About an hour ago Exited live-dashboard 7 0679e97491473
* 4cba047b2076f 15f535944b31d About an hour ago Exited scripts 26 ad6f549f80bbf
* 4b7f6e0f53a20 dc785fb43e763 About an hour ago Exited google-api 16 391c76e95ce74
* c9154a9287ea1 d1c99246e7492 About an hour ago Exited scserver 6 bde9cd52824d7
* 033b127c6ed6c bf261d1579144 About an hour ago Exited coredns 17 26176b7c63149
* 7811f10a253f4 606f3f77eae68 About an hour ago Exited verifier 16 136220af5898a
* 5d56d76ad75b6 bf261d1579144 About an hour ago Exited coredns 5 486b3c43fd90d
* 314f5314e8d4e eb3f3929b99b5 About an hour ago Exited dashboard 3 69b3c4aa42ea1
* 973ecd558a954 a4d3716dbb724 About an hour ago Exited redis 12 1afe0f3a9399c
* 3691d7a0bde4f a4d3716dbb724 About an hour ago Exited redis 12 d41f0e73220f8
* 070370aca4d8b a4d3716dbb724 About an hour ago Exited redis 6 5bb0096afe862
* 5e80eb51befb2 a4d3716dbb724 About an hour ago Exited redis 8 0679e97491473
* 7b58b949fd595 37d42f497378d About an hour ago Exited landing 16 bbb4a7e0f93d2
* 84295e6c28ba8 a4d3716dbb724 About an hour ago Exited redis 12 5b14b55d37836
* 0cae5633d763f 8cb3de219af7b About an hour ago Exited grafana 17 dd42b7a4dbadc
* 1288cac157117 a4d3716dbb724 About an hour ago Exited redis 12 ad6f549f80bbf
* 8955430dd7d8b a4d3716dbb724 About an hour ago Exited redis 8 eac2272cb09d5
* e2a5ce5e1b564 a4d3716dbb724 About an hour ago Exited redis 12 391c76e95ce74
* 31ea4d5d3cbd1 beba4a7f470c3 About an hour ago Exited zookeeper 8 c732978815009
* 6d0a3eca05eee a4d3716dbb724 About an hour ago Exited redis 3 657b57365e24a
* 8763836b65c38 c9e358ad489f3 About an hour ago Exited payments 26 756ed7d97f647
* 4eb5e266b2cc4 a4d3716dbb724 About an hour ago Exited redis 3 be7712cc1490f
* b11236dad8c1d 709901356c115 About an hour ago Exited dashboard-metrics-scraper 16 61c10e02093a4
* 5dc2608de593a 577260d221dbb About an hour ago Exited influxdb 17 dd42b7a4dbadc
* e5588b96ddc7b d97bb0f05e899 About an hour ago Exited memcached 16 05abe207c1a9e
* 06750649a3e55 f57c75cd7b0aa About an hour ago Exited heapster 17 a0052f46ac7a0
*
* ==> coredns ["033b127c6ed6"] <==
* E0510 22:04:14.835329 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:14.835329 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:04:14.835745 1 trace.go:82] Trace[17988804]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:03:44.834195117 +0000 UTC m=+439.272636185) (total time: 30.001534988s):
* Trace[17988804]: [30.001534988s] [30.001534988s] END
* E0510 22:04:14.835820 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:14.835820       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:14.835820       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* 2020-05-10T22:04:20.653Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:04:30.654Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:04:40.655Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0510 22:04:45.835894 1 trace.go:82] Trace[1168878201]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:15.835040778 +0000 UTC m=+470.273481855) (total time: 30.00082865s):
* Trace[1168878201]: [30.00082865s] [30.00082865s] END
* E0510 22:04:45.835921 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:45.835921 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:45.835921 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:04:45.837142 1 trace.go:82] Trace[1789766981]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:15.836879297 +0000 UTC m=+470.275320362) (total time: 30.000245911s):
* Trace[1789766981]: [30.000245911s] [30.000245911s] END
* E0510 22:04:45.837166 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:45.837166 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:45.837166 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:04:45.839308 1 trace.go:82] Trace[1176076696]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:15.837349876 +0000 UTC m=+470.275790917) (total time: 30.001944034s):
* Trace[1176076696]: [30.001944034s] [30.001944034s] END
* E0510 22:04:45.839330 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:45.839330 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:04:45.839330 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* 2020-05-10T22:04:50.653Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:00.652Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:10.654Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0510 22:05:16.838781       1 trace.go:82] Trace[38725044]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:46.837035815 +0000 UTC m=+501.275826283) (total time: 30.001374677s):
* Trace[38725044]: [30.001374677s] [30.001374677s] END
* E0510 22:05:16.839042       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:05:16.839916       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:05:16.839042 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:05:16.839295       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:05:16.839042       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* E0510 22:05:16.839916       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:05:16.838781       1 trace.go:82] Trace[731100853]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:46.836920297 +0000 UTC m=+501.275361332) (total time: 30.001840684s):
* Trace[731100853]: [30.001840684s] [30.001840684s] END
* E0510 22:05:16.839295 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:05:16.839890 1 trace.go:82] Trace[1147270009]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:46.83957118 +0000 UTC m=+501.278012214) (total time: 30.000299087s):
* Trace[1147270009]: [30.000299087s] [30.000299087s] END
* E0510 22:05:16.839916 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* 2020-05-10T22:05:20.653Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:30.653Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:40.656Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] SIGTERM: Shutting down servers then terminating
* I0510 22:05:42.151831 1 trace.go:82] Trace[442440964]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:05:17.843270618 +0000 UTC m=+532.281711665) (total time: 24.308489907s):
* Trace[442440964]: [24.308489907s] [24.308489907s] END
* I0510 22:05:42.151860 1 trace.go:82] Trace[642854310]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:05:17.840719426 +0000 UTC m=+532.279160470) (total time: 24.311128397s):
* Trace[642854310]: [24.311128397s] [24.311128397s] END
* I0510 22:05:42.151876 1 trace.go:82] Trace[1106222082]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:05:17.839935438 +0000 UTC m=+532.278376511) (total time: 24.311931233s):
* Trace[1106222082]: [24.311931233s] [24.311931233s] END
*
* ==> coredns ["5d56d76ad75b"] <==
* E0510 22:04:14.834994 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:04:14.835348 1 trace.go:82] Trace[2001536299]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:03:44.834466997 +0000 UTC m=+439.860812074) (total time: 30.000868311s):
* Trace[2001536299]: [30.000868311s] [30.000868311s] END
* E0510 22:04:14.835417       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* 2020-05-10T22:04:16.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:04:26.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:04:36.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0510 22:04:45.734212       1 trace.go:82] Trace[495058737]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:15.73363399 +0000 UTC m=+470.760078706) (total time: 30.000326402s):
* Trace[495058737]: [30.000326402s] [30.000326402s] END
* E0510 22:04:45.734283       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:04:45.835890       1 trace.go:82] Trace[1507111663]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:15.83523919 +0000 UTC m=+470.864649058) (total time: 30.000586606s):
* Trace[1507111663]: [30.000586606s] [30.000586606s] END
* E0510 22:04:45.836237       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:04:45.838802 1 trace.go:82] Trace[491435042]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:15.837436441 +0000 UTC m=+470.863781499) (total time: 30.001346255s):
* Trace[491435042]: [30.001346255s] [30.001346255s] END
* E0510 22:04:45.838828 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* 2020-05-10T22:04:46.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:04:56.232Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:06.232Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:16.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* I0510 22:05:16.794943 1 trace.go:82] Trace[2060622233]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:46.794069902 +0000 UTC m=+501.820414964) (total time: 30.000838977s):
* Trace[2060622233]: [30.000838977s] [30.000838977s] END
* E0510 22:05:16.796204 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:05:16.839277 1 trace.go:82] Trace[1226266773]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:46.837607888 +0000 UTC m=+501.863952945) (total time: 30.001405633s):
* Trace[1226266773]: [30.001405633s] [30.001405633s] END
* E0510 22:05:16.839304 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* I0510 22:05:16.841843 1 trace.go:82] Trace[1430612256]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:04:46.839196017 +0000 UTC m=+501.865541075) (total time: 30.002617722s):
* Trace[1430612256]: [30.002617722s] [30.002617722s] END
* E0510 22:05:16.841884 1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
* 2020-05-10T22:05:26.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* 2020-05-10T22:05:36.231Z [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] SIGTERM: Shutting down servers then terminating
* I0510 22:05:41.823094 1 trace.go:82] Trace[1269209171]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:05:17.842391003 +0000 UTC m=+532.868736072) (total time: 23.980669055s):
* Trace[1269209171]: [23.980669055s] [23.980669055s] END
* I0510 22:05:41.823153 1 trace.go:82] Trace[1347983067]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:05:17.84034942 +0000 UTC m=+532.866694531) (total time: 23.982784287s):
* Trace[1347983067]: [23.982784287s] [23.982784287s] END
* I0510 22:05:41.823183 1 trace.go:82] Trace[51570121]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-05-10 22:05:17.798025227 +0000 UTC m=+532.824370297) (total time: 24.025138034s):
* Trace[51570121]: [24.025138034s] [24.025138034s] END
*
* ==> dmesg <==
* [ +5.001624] hpet1: lost 318 rtc interrupts
* [ +5.001102] hpet1: lost 318 rtc interrupts
* [ +5.002194] hpet1: lost 318 rtc interrupts
* [ +5.006410] hpet1: lost 318 rtc interrupts
* [ +5.004137] hpet1: lost 319 rtc interrupts
* [ +5.004742] hpet1: lost 318 rtc interrupts
* [May10 22:52] hpet1: lost 318 rtc interrupts
* [ +5.004591] hpet1: lost 318 rtc interrupts
* [ +5.003461] hpet1: lost 319 rtc interrupts
* [ +5.004646] hpet1: lost 318 rtc interrupts
* [ +5.000940] hpet1: lost 318 rtc interrupts
* [ +5.001422] hpet1: lost 318 rtc interrupts
* [ +5.000999] hpet1: lost 318 rtc interrupts
* [ +5.000921] hpet1: lost 318 rtc interrupts
* [ +5.003177] hpet1: lost 318 rtc interrupts
* [ +5.000941] hpet1: lost 318 rtc interrupts
* [ +5.001877] hpet1: lost 319 rtc interrupts
* [ +5.000785] hpet1: lost 318 rtc interrupts
* [May10 22:53] hpet1: lost 318 rtc interrupts
* [ +4.995576] hpet1: lost 318 rtc interrupts
* [ +5.001855] hpet1: lost 318 rtc interrupts
* [ +5.001020] hpet1: lost 318 rtc interrupts
* [ +5.001177] hpet1: lost 318 rtc interrupts
* [ +5.002547] hpet1: lost 318 rtc interrupts
* [ +5.004581] hpet1: lost 319 rtc interrupts
* [ +5.005013] hpet1: lost 318 rtc interrupts
* [ +5.006620] hpet1: lost 319 rtc interrupts
* [ +5.003663] hpet1: lost 319 rtc interrupts
* [ +5.032292] hpet1: lost 320 rtc interrupts
* [ +5.005171] hpet1: lost 318 rtc interrupts
* [May10 22:54] hpet1: lost 319 rtc interrupts
* [ +5.004242] hpet1: lost 319 rtc interrupts
* [ +5.011894] hpet1: lost 319 rtc interrupts
* [ +5.003967] hpet1: lost 319 rtc interrupts
* [ +5.004186] hpet1: lost 319 rtc interrupts
* [ +5.001179] hpet1: lost 318 rtc interrupts
* [ +5.002570] hpet1: lost 318 rtc interrupts
* [ +5.001540] hpet1: lost 319 rtc interrupts
* [ +5.000801] hpet1: lost 318 rtc interrupts
* [ +5.001082] hpet1: lost 318 rtc interrupts
* [ +5.001566] hpet1: lost 318 rtc interrupts
* [ +5.002705] hpet1: lost 318 rtc interrupts
* [May10 22:55] hpet1: lost 318 rtc interrupts
* [ +5.002383] hpet1: lost 318 rtc interrupts
* [ +5.000585] hpet1: lost 318 rtc interrupts
* [ +5.001986] hpet1: lost 318 rtc interrupts
* [ +5.001411] hpet1: lost 318 rtc interrupts
* [ +5.002090] hpet1: lost 319 rtc interrupts
* [ +5.001906] hpet1: lost 318 rtc interrupts
* [ +5.002234] hpet1: lost 318 rtc interrupts
* [ +5.002044] hpet1: lost 318 rtc interrupts
* [ +5.001651] hpet1: lost 318 rtc interrupts
* [ +5.003728] hpet1: lost 318 rtc interrupts
* [ +5.002726] hpet1: lost 319 rtc interrupts
* [May10 22:56] hpet1: lost 318 rtc interrupts
* [ +5.003518] hpet1: lost 318 rtc interrupts
* [ +5.001091] hpet1: lost 318 rtc interrupts
* [ +5.002254] hpet1: lost 318 rtc interrupts
* [ +5.000924] hpet1: lost 318 rtc interrupts
* [ +5.001495] hpet1: lost 318 rtc interrupts
*
* ==> kernel <==
* 22:56:31 up 20 min, 0 users, load average: 0.06, 0.26, 0.28
* Linux minikube 4.19.76 #1 SMP Tue Oct 29 14:56:42 PDT 2019 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.6"
*
* ==> kube-addon-manager ["5084e6d3c907"] <==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
*
* ==> kube-addon-manager ["756775c14813"] <==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* WRN: == Error getting default service account, retry in 0.5 second ==
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
* The connection to the server localhost:8443 was refused - did you specify the right host or port?
*
* ==> kube-apiserver ["79b238c935b6"] <==
* Flag --insecure-port has been deprecated, This flag will be removed in a future version.
* I0510 22:55:12.014274 1 server.go:623] external host was not specified, using 192.168.99.100
* I0510 22:55:12.014639 1 server.go:149] Version: v1.16.2
* I0510 22:55:12.620165 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
* I0510 22:55:12.620194 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
* I0510 22:55:12.621334 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
* I0510 22:55:12.621359 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
* I0510 22:55:12.623654 1 client.go:361] parsed scheme: "endpoint"
* I0510 22:55:12.623795 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* W0510 22:55:12.624208 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* I0510 22:55:13.621368 1 client.go:361] parsed scheme: "endpoint"
* I0510 22:55:13.621523 1 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
* W0510 22:55:13.622113 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:13.625042 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:14.623126 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:15.331792 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:16.370341 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:17.651172 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:18.895461 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:22.058511 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:23.373259 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:28.108894 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* W0510 22:55:30.256283 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
* panic: context deadline exceeded
*
* goroutine 1 [running]:
* k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition.NewREST(0xc00087c850, 0x7ab8880, 0xc0002a4240, 0xc0002a4588)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/registry/customresourcedefinition/etcd.go:56 +0x41c
* k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver.completedConfig.New(0xc0006483a0, 0xc0007457c8, 0x7b70ac0, 0xaadab78, 0x10, 0x0, 0x0)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go:147 +0x15a2
* k8s.io/kubernetes/cmd/kube-apiserver/app.createAPIExtensionsServer(0xc0007457c0, 0x7b70ac0, 0xaadab78, 0x0, 0x7ab8540, 0xc0004f9140)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/apiextensions.go:95 +0x59
* k8s.io/kubernetes/cmd/kube-apiserver/app.CreateServerChain(0xc000104dc0, 0xc0002c4e40, 0x44c8a6c, 0xc, 0xc00091dca8)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:182 +0x2bb
* k8s.io/kubernetes/cmd/kube-apiserver/app.Run(0xc000104dc0, 0xc0002c4e40, 0x0, 0x0)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:151 +0x102
* k8s.io/kubernetes/cmd/kube-apiserver/app.NewAPIServerCommand.func1(0xc00028ef00, 0xc00085c4e0, 0x0, 0x1a, 0x0, 0x0)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:118 +0x104
* k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc00028ef00, 0xc0000ba010, 0x1a, 0x1b, 0xc00028ef00, 0xc0000ba010)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:826 +0x465
* k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00028ef00, 0x465b2d0, 0xaabc800, 0xc00091df88)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914 +0x2fc
* k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
* /workspace/anago-v1.16.2-beta.0.19+c97fe5036ef3df/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
* main.main()
* _output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/apiserver.go:43 +0xc9
*
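The panic above is the kube-apiserver giving up ("context deadline exceeded") after the repeated failures to dial etcd on 127.0.0.1:2379. A minimal sketch for pulling the failing endpoint out of one of those gRPC warnings, using a sample line copied from the log above (this is plain text processing, not a minikube command):

```shell
# Sample gRPC warning copied verbatim from the apiserver log above.
line='W0510 22:55:22.058511 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...'

# Extract the host:port the dial failed against.
endpoint=$(printf '%s\n' "$line" | grep -o 'dial tcp [0-9.]*:[0-9]*' | awk '{print $3}')
echo "$endpoint"   # 127.0.0.1:2379
```

Since 2379 is etcd's client port, a refused connection there means the etcd container is not up, which in turn explains the apiserver panic and everything downstream of it.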
* ==> kube-controller-manager ["8dd92a877ef1"] <==
* E0510 22:52:52.352183 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
*   [... the same "connection refused" error repeated 10 more times, 22:52:56–22:53:24 ...]
* E0510 22:53:37.036785 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
*   [... the same "connection refused" error repeated 26 more times, 22:53:47–22:55:07 ...]
* E0510 22:55:22.300526 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded
*   [... the same "connection refused" error repeated 21 more times, 22:55:33–22:56:30 ...]
*
* ==> kube-controller-manager ["bed488cb1f79"] <==
* E0510 22:28:31.514045 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:28:34.982562 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:28:48.514363 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
*   [... the same "connection refused" error repeated 5 times, 22:28:59–22:29:11 ...]
* E0510 22:29:26.192935 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: net/http: TLS handshake timeout
*   [... the same "connection refused" error repeated 7 times, 22:29:34–22:29:54 ...]
* E0510 22:30:08.093336 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: net/http: TLS handshake timeout
*   [... the same "connection refused" error repeated 15 times, 22:30:17–22:31:05 ...]
* E0510 22:31:19.520508 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded
*   [... the same "connection refused" error repeated 26 times, 22:31:30–22:32:50 ...]
* E0510 22:33:04.918332 1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
*
* ==> kube-proxy ["4f961c8f27c1"] <==
* W0510 22:04:46.083742 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
* E0510 22:04:51.398245 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/minikube: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:04:52.520344 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/minikube: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:04:54.629001 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/minikube: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:04:59.339060 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/minikube: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:05:07.497231 1 node.go:124] Failed to retrieve node info: Get https://localhost:8443/api/v1/nodes/minikube: dial tcp 127.0.0.1:8443: connect: connection refused
* F0510 22:05:07.497254 1 server.go:443] unable to get node IP for hostname minikube
*
* ==> kube-scheduler ["129f4f899fcc"] <==
* E0510 22:56:25.756558 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:25.756873 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:25.758517 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:25.761359 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:25.764519 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.740037 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.743607 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.743899 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.747504 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.750163 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.755278 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.758558 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.759481 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.761182 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.761713 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:26.765960 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.740868 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.744447 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.746606 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.748281 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.750817 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.756380 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.759245 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.760751 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.761615 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.764204 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:27.766734 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.743475 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.745340 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.748413 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.750099 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.753131 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.758025 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.760901 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.761463 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.762798 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.765849 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:28.767671 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.745112 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.746099 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.749700 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.751363 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.754452 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.760341 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.762169 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.762978 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.766264 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.766422 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:29.768104 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.745859 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.746671 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.751360 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.751987 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.755330 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.761733 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.763648 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.763656 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.767465 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.768263 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:56:30.770222 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kube-scheduler ["7ace7400d1f5"] <==
* E0510 22:32:51.430683 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:51.434032 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:51.434606 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:51.435977 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:51.438117 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.417487 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.418872 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.424614 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.424680 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.425127 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.430644 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.432448 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.435275 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.436437 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.439181 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:52.442497 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.418051 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.419755 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.425133 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.427581 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.428160 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.431446 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.433119 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.436326 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.436761 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.440256 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* E0510 22:32:53.443179 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* I0510 22:33:04.419997 1 trace.go:116] Trace[1966909062]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.418285614 +0000 UTC m=+1528.103476112) (total time: 10.001687665s):
* Trace[1966909062]: [10.001687665s] [10.001687665s] END
* E0510 22:33:04.420013 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.423057 1 trace.go:116] Trace[616653135]: "Reflector ListAndWatch" name:k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236 (started: 2020-05-10 22:32:54.420940328 +0000 UTC m=+1528.106130838) (total time: 10.002080173s):
* Trace[616653135]: [10.002080173s] [10.002080173s] END
* E0510 22:33:04.423071 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.426795 1 trace.go:116] Trace[1264265870]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.425852527 +0000 UTC m=+1528.111043024) (total time: 10.000923209s):
* Trace[1264265870]: [10.000923209s] [10.000923209s] END
* E0510 22:33:04.426810 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.429936 1 trace.go:116] Trace[1349893628]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.428647008 +0000 UTC m=+1528.113837505) (total time: 10.001263216s):
* Trace[1349893628]: [10.001263216s] [10.001263216s] END
* E0510 22:33:04.429954 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.430283 1 trace.go:116] Trace[55877917]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.429724289 +0000 UTC m=+1528.114914805) (total time: 10.00054718s):
* Trace[55877917]: [10.00054718s] [10.00054718s] END
* E0510 22:33:04.430311 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.433758 1 trace.go:116] Trace[81927506]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.431777056 +0000 UTC m=+1528.116967567) (total time: 10.001948642s):
* Trace[81927506]: [10.001948642s] [10.001948642s] END
* E0510 22:33:04.433775 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.434219 1 trace.go:116] Trace[2000731807]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.433623137 +0000 UTC m=+1528.118813634) (total time: 10.000575109s):
* Trace[2000731807]: [10.000575109s] [10.000575109s] END
* E0510 22:33:04.434231 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.439564 1 trace.go:116] Trace[1812385766]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.438502917 +0000 UTC m=+1528.123693430) (total time: 10.001030336s):
* Trace[1812385766]: [10.001030336s] [10.001030336s] END
* E0510 22:33:04.439579 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.439564 1 trace.go:116] Trace[88890143]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.43749728 +0000 UTC m=+1528.122687778) (total time: 10.002035522s):
* Trace[88890143]: [10.002035522s] [10.002035522s] END
* E0510 22:33:04.439588 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.441502 1 trace.go:116] Trace[352394753]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.440455139 +0000 UTC m=+1528.125645653) (total time: 10.001031381s):
* Trace[352394753]: [10.001031381s] [10.001031381s] END
* E0510 22:33:04.441511 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
* I0510 22:33:04.445652 1 trace.go:116] Trace[1539506718]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (started: 2020-05-10 22:32:54.444373566 +0000 UTC m=+1528.129564085) (total time: 10.001259558s):
* Trace[1539506718]: [10.001259558s] [10.001259558s] END
* E0510 22:33:04.445681 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: net/http: TLS handshake timeout
*
* ==> kubelet <==
* -- Logs begin at Sun 2020-05-10 22:36:48 UTC, end at Sun 2020-05-10 22:56:31 UTC. --
* May 10 22:56:27 minikube kubelet[14325]: E0510 22:56:27.951465 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.051942 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.053219 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.153444 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.252480 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.254871 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.356878 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.457876 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.464187 14325 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.558540 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.650976 14325 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.658954 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: I0510 22:56:28.671644 14325 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
* May 10 22:56:28 minikube kubelet[14325]: I0510 22:56:28.674225 14325 kubelet_node_status.go:72] Attempting to register node minikube
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.761453 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.842707 14325 kubelet_node_status.go:94] Unable to register node "minikube" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.862981 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:28 minikube kubelet[14325]: E0510 22:56:28.963442 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.042869 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.064988 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.165534 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.244187 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.267002 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.367543 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.444221 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.468980 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.571064 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.642052 14325 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.652952 14325 controller.go:135] failed to ensure node lease exists, will retry in 7s, error: Get https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.672172 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.773493 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.843313 14325 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.875114 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:29 minikube kubelet[14325]: E0510 22:56:29.975476 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.043592 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.078105 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.179754 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.245516 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.281994 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.382564 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.447640 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.484112 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.584638 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.632834 14325 event.go:246] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.644095 14325 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.684753 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.784941 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.844702 14325 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.885146 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:30 minikube kubelet[14325]: E0510 22:56:30.985409 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.044200 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.086157 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.128515 14325 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.186363 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.246876 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.287467 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.387703 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.448599 14325 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.488049 14325 kubelet.go:2267] node "minikube" not found
* May 10 22:56:31 minikube kubelet[14325]: E0510 22:56:31.588783 14325 kubelet.go:2267] node "minikube" not found
*
* ==> kubernetes-dashboard ["4b2766a0a94e"] <==
* 2020/05/10 22:04:34 Using namespace: kubernetes-dashboard
* 2020/05/10 22:04:34 Using in-cluster config to connect to apiserver
* 2020/05/10 22:04:34 Starting overwatch
* 2020/05/10 22:04:34 Using secret token for csrf signing
* 2020/05/10 22:04:34 Initializing csrf token from kubernetes-dashboard-csrf secret
* panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
*
* goroutine 1 [running]:
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00046c000)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00034b900)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:479 +0xc7
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00034b900)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:447 +0x47
* github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:528
* main.main()
* /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
*
* ==> storage-provisioner ["7e3e6f1adcb2"] <==
* F0510 22:05:24.154909 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Steps to reproduce the issue:
Full output of failed command:
minikube start --alsologtostderr
I0511 00:50:39.919968 22940 start.go:251] hostinfo: {"hostname":"ALYS-Laptop","uptime":3695,"bootTime":1589147344,"procs":281,"os":"windows","platform":"Microsoft Windows 10 Home Single Language","platformFamily":"Standalone Workstation","platformVersion":"10.0.18362 Build 18362","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"0cf1da4b-ec20-4078-bfcd-bb319db46813"}
W0511 00:50:39.919968 22940 start.go:259] gopshost.Virtualization returned error: not implemented yet
I0511 00:50:39.923846 22940 start.go:547] selectDriver: flag="", old=&{{false false https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso 2000 2 20000 virtualbox docker [] [] [] [] 192.168.99.1/24 default qemu:///system false false [] false [] /nfsshares false false true} {v1.16.2 192.168.99.100 8443 minikube minikubeCA [] [] cluster.local docker 10.96.0.0/12 [] true false}}
I0511 00:50:39.923846 22940 start.go:293] selected: virtualbox
I0511 00:50:39.923846 22940 downloader.go:60] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.5.1.iso
I0511 00:50:39.924821 22940 profile.go:82] Saving config to C:\Users\Sherif Ali\.minikube\profiles\minikube\config.json ...
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v1.8.1 -> C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
I0511 00:50:39.924821 22940 cache_images.go:296] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2
I0511 00:50:39.924821 22940 cache_images.go:296] CacheImage: k8s.gcr.io/kube-proxy:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:39.925805 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2 exists
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns:1.6.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2
I0511 00:50:39.925805 22940 cache_images.go:296] CacheImage: k8s.gcr.io/coredns:1.6.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64:v1.10.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:39.925805 22940 cache_images.go:296] CacheImage: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2
I0511 00:50:39.926775 22940 cache_images.go:296] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd:3.3.15-0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause:3.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager:v9.0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:39.924821 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:39.924821 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 exists
I0511 00:50:39.924821 22940 cache_images.go:296] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2
I0511 00:50:39.924821 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\profiles\minikube\config.json.tmp242301383" with filemode -rw-------
I0511 00:50:39.925805 22940 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:39.925805 22940 cache_images.go:298] CacheImage: k8s.gcr.io/kube-proxy:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2 completed in 984µs
I0511 00:50:39.925805 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2 exists
I0511 00:50:39.926775 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 exists
I0511 00:50:39.926775 22940 cache_images.go:296] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0
I0511 00:50:39.926775 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2 exists
I0511 00:50:39.926775 22940 cache_images.go:296] CacheImage: k8s.gcr.io/pause:3.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1
I0511 00:50:39.926775 22940 cache_images.go:296] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2
I0511 00:50:39.927750 22940 cache_images.go:296] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
I0511 00:50:39.928726 22940 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:39.929700 22940 cache_images.go:296] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:39.932649 22940 cache_images.go:298] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 completed in 7.8283ms
I0511 00:50:39.937510 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2 exists
I0511 00:50:39.938485 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 exists
I0511 00:50:39.938485 22940 cache_images.go:83] CacheImage k8s.gcr.io/kube-proxy:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2 succeeded
I0511 00:50:39.939462 22940 cache_images.go:298] CacheImage: k8s.gcr.io/coredns:1.6.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2 completed in 13.657ms
I0511 00:50:39.940437 22940 cache_images.go:298] CacheImage: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 completed in 14.6322ms
I0511 00:50:39.940437 22940 cluster.go:101] Skipping create...Using existing machine configuration
I0511 00:50:39.941413 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0 exists
I0511 00:50:39.943391 22940 cache_images.go:298] CacheImage: k8s.gcr.io/kube-controller-manager:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2 completed in 16.616ms
I0511 00:50:39.943391 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1 exists
I0511 00:50:39.947273 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2 exists
I0511 00:50:39.947273 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 exists
I0511 00:50:39.948262 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 exists
I0511 00:50:39.949223 22940 cache_images.go:302] C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 exists
I0511 00:50:39.949223 22940 cache_images.go:83] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 succeeded
I0511 00:50:39.949223 22940 cache_images.go:298] CacheImage: k8s.gcr.io/kube-scheduler:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2 completed in 24.4022ms
I0511 00:50:39.950197 22940 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 completed in 24.3928ms
I0511 00:50:39.951175 22940 cache_images.go:83] CacheImage k8s.gcr.io/coredns:1.6.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2 succeeded
I0511 00:50:39.953126 22940 cache_images.go:83] CacheImage k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 succeeded
I0511 00:50:39.958012 22940 cache_images.go:298] CacheImage: k8s.gcr.io/etcd:3.3.15-0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0 completed in 31.2364ms
I0511 00:50:39.958012 22940 cache_images.go:83] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2 succeeded
I0511 00:50:39.958981 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:39.958981 22940 cache_images.go:298] CacheImage: k8s.gcr.io/pause:3.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1 completed in 32.2056ms
I0511 00:50:39.959957 22940 cache_images.go:298] CacheImage: k8s.gcr.io/kube-apiserver:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2 completed in 33.1817ms
I0511 00:50:39.959957 22940 cache_images.go:298] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 completed in 32.2072ms
I0511 00:50:39.960933 22940 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 completed in 32.2074ms
I0511 00:50:39.960933 22940 cache_images.go:298] CacheImage: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 completed in 31.2327ms
I0511 00:50:39.961909 22940 cache_images.go:83] CacheImage k8s.gcr.io/kube-scheduler:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2 succeeded
I0511 00:50:39.962886 22940 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 succeeded
I0511 00:50:39.966790 22940 cache_images.go:83] CacheImage k8s.gcr.io/etcd:3.3.15-0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0 succeeded
I0511 00:50:39.968745 22940 cache_images.go:83] CacheImage k8s.gcr.io/pause:3.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1 succeeded
I0511 00:50:39.969719 22940 cache_images.go:83] CacheImage k8s.gcr.io/kube-apiserver:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2 succeeded
I0511 00:50:39.969719 22940 cache_images.go:83] CacheImage k8s.gcr.io/kube-addon-manager:v9.0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 succeeded
I0511 00:50:39.970696 22940 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 succeeded
I0511 00:50:39.970696 22940 cache_images.go:83] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 succeeded
I0511 00:50:39.978501 22940 cache_images.go:90] Successfully cached all images.
I0511 00:50:40.039991 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:40.040965 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:40.044870 22940 cluster.go:113] Machine state: Running
I0511 00:50:40.047798 22940 cluster.go:131] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
I0511 00:50:40.048774 22940 cluster.go:144] configureHost: &{BaseDriver:0xc0002a4100 VBoxManager:0xc0000bc848 HostInterfaces:0x27d6128 b2dUpdater:0x27d6128 sshKeyGenerator:0x27d6128 diskCreator:0x27d6128 logsReader:0x27d6128 ipWaiter:0x27d6128 randomInter:0xc0000bc850 sleeper:0x27d6128 CPU:2 Memory:8192 DiskSize:20000 NatNicType:virtio Boot2DockerURL:file://C:/Users/Sherif Ali/.minikube/cache/iso/minikube-v1.5.1.iso Boot2DockerImportVM: HostDNSResolver:true HostOnlyCIDR:192.168.99.1/24 HostOnlyNicType:virtio HostOnlyPromiscMode:deny UIType:headless HostOnlyNoDHCP:false NoShare:false DNSProxy:false NoVTXCheck:false ShareFolder:}
I0511 00:50:40.049752 22940 cluster.go:166] Configuring auth for driver virtualbox ...
I0511 00:50:40.049752 22940 main.go:110] libmachine: Waiting for SSH to be available...
I0511 00:50:40.050726 22940 main.go:110] libmachine: Getting to WaitForSSH function...
I0511 00:50:40.066341 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:40.067318 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:40.073174 22940 main.go:110] libmachine: About to run SSH command:
exit 0
I0511 00:50:40.185413 22940 main.go:110] libmachine: SSH cmd err, output: :
I0511 00:50:40.185413 22940 main.go:110] libmachine: Detecting the provisioner...
I0511 00:50:40.201031 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:40.201031 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:40.203958 22940 main.go:110] libmachine: About to run SSH command:
cat /etc/os-release
I0511 00:50:40.305462 22940 main.go:110] libmachine: SSH cmd err, output: : NAME=Buildroot
VERSION=2019.02.6
ID=buildroot
VERSION_ID=2019.02.6
PRETTY_NAME="Buildroot 2019.02.6"
I0511 00:50:40.305462 22940 main.go:110] libmachine: found compatible host: buildroot
I0511 00:50:40.307413 22940 main.go:110] libmachine: setting hostname "minikube"
I0511 00:50:40.325957 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:40.325957 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:40.326954 22940 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0511 00:50:40.433317 22940 main.go:110] libmachine: SSH cmd err, output: : minikube
I0511 00:50:40.448964 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:40.448964 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:40.449909 22940 main.go:110] libmachine: About to run SSH command:
I0511 00:50:40.557277 22940 main.go:110] libmachine: SSH cmd err, output: :
I0511 00:50:40.557277 22940 main.go:110] libmachine: set auth options {CertDir:C:\Users\Sherif Ali\.minikube CaCertPath:C:\Users\Sherif Ali\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\Sherif Ali\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\Sherif Ali\.minikube\machines\server.pem ServerKeyPath:C:\Users\Sherif Ali\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\Sherif Ali\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\Sherif Ali\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\Sherif Ali\.minikube}
I0511 00:50:40.559222 22940 main.go:110] libmachine: setting up certificates
I0511 00:50:40.562152 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:40.631473 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:40.632449 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:40.636324 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:40.706596 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:40.707573 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:40.711477 22940 main.go:110] libmachine: Host-only MAC: 0800271ef814
I0511 00:50:40.728068 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:40.728068 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:40.730022 22940 main.go:110] libmachine: About to run SSH command:
ip addr show
I0511 00:50:40.839333 22940 main.go:110] libmachine: SSH cmd err, output: : 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:44:bf brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85568sec preferred_lft 85568sec
inet6 fe80::a00:27ff:fe48:44bf/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1e:f8:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
valid_lft 366sec preferred_lft 366sec
inet6 fe80::a00:27ff:fe1e:f814/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:4a:75:21 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
I0511 00:50:40.839333 22940 main.go:110] libmachine: SSH returned: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:44:bf brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85568sec preferred_lft 85568sec
inet6 fe80::a00:27ff:fe48:44bf/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1e:f8:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
valid_lft 366sec preferred_lft 366sec
inet6 fe80::a00:27ff:fe1e:f814/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:4a:75:21 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
END SSH
I0511 00:50:40.846165 22940 main.go:110] libmachine: generating server cert: C:\Users\Sherif Ali\.minikube\machines\server.pem ca-key=C:\Users\Sherif Ali\.minikube\certs\ca.pem private-key=C:\Users\Sherif Ali\.minikube\certs\ca-key.pem org=Sherif Ali.minikube san=[192.168.99.100 localhost]
I0511 00:50:40.970119 22940 ssh_runner.go:160] Transferring 1675 bytes to /etc/docker/server-key.pem
I0511 00:50:40.971095 22940 ssh_runner.go:179] server-key.pem: copied 1675 bytes
I0511 00:50:40.980855 22940 ssh_runner.go:160] Transferring 1046 bytes to /etc/docker/ca.pem
I0511 00:50:40.981830 22940 ssh_runner.go:179] ca.pem: copied 1046 bytes
I0511 00:50:40.991590 22940 ssh_runner.go:160] Transferring 1119 bytes to /etc/docker/server.pem
I0511 00:50:40.994550 22940 ssh_runner.go:179] server.pem: copied 1119 bytes
I0511 00:50:41.021844 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:41.022821 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:41.023797 22940 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0511 00:50:41.131157 22940 main.go:110] libmachine: SSH cmd err, output: : tmpfs
I0511 00:50:41.132151 22940 main.go:110] libmachine: root file system type: tmpfs
I0511 00:50:41.136037 22940 main.go:110] libmachine: Setting Docker configuration on the remote daemon...
I0511 00:50:41.151654 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:41.152631 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:41.153627 22940 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service
I0511 00:50:41.274629 22940 main.go:110] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
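The comments in the unit file above explain why the empty `ExecStart=` line is needed before the real one. A minimal, self-contained sketch of that rule (the temp path and the grep checks are ours for illustration, not part of minikube):

```shell
# Write a minimal drop-in mirroring the pattern above: one empty ExecStart=
# (the reset) followed by exactly one real ExecStart= command.
cat > /tmp/docker.service.check <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF

# systemd rejects a non-oneshot unit with two effective ExecStart= settings;
# the empty assignment zeroes the inherited one, so this layout stays valid.
echo "resets=$(grep -c '^ExecStart=$' /tmp/docker.service.check)"
echo "cmds=$(grep -c '^ExecStart=.' /tmp/docker.service.check)"
```

The empty assignment is the general systemd convention for clearing a list-valued directive inherited from an earlier unit file.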
I0511 00:50:41.275619 22940 main.go:110] libmachine: setting minikube options for container-runtime
I0511 00:50:41.293172 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:41.294149 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:41.294149 22940 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0511 00:50:41.408341 22940 main.go:110] libmachine: SSH cmd err, output: :
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I0511 00:50:41.423958 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:41.423958 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:41.423958 22940 main.go:110] libmachine: About to run SSH command:
sudo systemctl daemon-reload
I0511 00:50:41.712929 22940 main.go:110] libmachine: SSH cmd err, output: :
I0511 00:50:41.728546 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:41.730497 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:41.734401 22940 main.go:110] libmachine: About to run SSH command:
sudo systemctl -f restart crio
I0511 00:50:41.944275 22940 main.go:110] libmachine: SSH cmd err, output: :
I0511 00:50:41.961809 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:41.962787 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:41.962787 22940 main.go:110] libmachine: About to run SSH command:
date +%s.%N
I0511 00:50:42.061360 22940 main.go:110] libmachine: SSH cmd err, output: : 1589151042.059425755
I0511 00:50:42.061360 22940 cluster.go:197] guest clock: 1589151042.059425755
I0511 00:50:42.063314 22940 cluster.go:210] Guest: 2020-05-11 00:50:42.059425755 +0200 EET Remote: 2020-05-11 00:50:41.9442758 +0200 EET m=+2.453775001 (delta=115.149955ms)
I0511 00:50:42.066242 22940 cluster.go:181] guest clock delta is within tolerance: 115.149955ms
I0511 00:50:42.067218 22940 cluster.go:146] configureHost completed within 2.0184437s
I0511 00:50:42.128705 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:42.200013 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:42.200968 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:42.204835 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:42.276135 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali\.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:42.277094 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:42.280962 22940 main.go:110] libmachine: Host-only MAC: 0800271ef814
I0511 00:50:42.298529 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:42.298529 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:42.305399 22940 main.go:110] libmachine: About to run SSH command:
ip addr show
I0511 00:50:42.413697 22940 main.go:110] libmachine: SSH cmd err, output: : 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:44:bf brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85567sec preferred_lft 85567sec
inet6 fe80::a00:27ff:fe48:44bf/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1e:f8:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
valid_lft 365sec preferred_lft 365sec
inet6 fe80::a00:27ff:fe1e:f814/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:4a:75:21 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
I0511 00:50:42.414676 22940 main.go:110] libmachine: SSH returned: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:44:bf brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85567sec preferred_lft 85567sec
inet6 fe80::a00:27ff:fe48:44bf/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1e:f8:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
valid_lft 365sec preferred_lft 365sec
inet6 fe80::a00:27ff:fe1e:f814/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:4a:75:21 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
END SSH
I0511 00:50:42.423458 22940 ssh_runner.go:96] (SSHRunner) Run: nslookup kubernetes.io
I0511 00:50:42.463473 22940 ssh_runner.go:96] (SSHRunner) Run: curl -sS https://k8s.gcr.io/
I0511 00:50:49.639193 22940 profile.go:82] Saving config to C:\Users\Sherif Ali\.minikube\profiles\minikube\config.json ...
I0511 00:50:49.640157 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\profiles\minikube\config.json.tmp862546554" with filemode -rw-------
I0511 00:50:49.671388 22940 ssh_runner.go:96] (SSHRunner) Run: systemctl is-active --quiet service containerd
I0511 00:50:49.680180 22940 ssh_runner.go:139] (SSHRunner) Non-zero exit: systemctl is-active --quiet service containerd: Process exited with status 3 (982.5µs)
I0511 00:50:49.708476 22940 ssh_runner.go:96] (SSHRunner) Run: systemctl is-active --quiet service crio
I0511 00:50:49.753372 22940 ssh_runner.go:96] (SSHRunner) Run: sudo systemctl stop crio
I0511 00:50:49.811932 22940 ssh_runner.go:96] (SSHRunner) Run: systemctl is-active --quiet service crio
I0511 00:50:49.824620 22940 ssh_runner.go:139] (SSHRunner) Non-zero exit: systemctl is-active --quiet service crio: Process exited with status 3 (3.9051ms)
I0511 00:50:49.868540 22940 ssh_runner.go:96] (SSHRunner) Run: sudo systemctl start docker
I0511 00:50:49.904653 22940 ssh_runner.go:96] (SSHRunner) Run: docker version --format '{{.Server.Version}}'
I0511 00:50:49.937836 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:50.009085 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:50.011037 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:50.014964 22940 main.go:110] libmachine: COMMAND: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe showvminfo minikube --machinereadable
I0511 00:50:50.088141 22940 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
CfgFile="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.vbox"
SnapFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Snapshots"
LogFldr="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\Logs"
hardwareuuid="4aad24ba-b0dd-4c3c-a868-5e7b162c49e4"
memory=8192
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
cpu-profile="host"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
nested-hw-virt="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-05-10T22:36:08.215000000"
graphicscontroller="vboxvga"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
vmprocpriority="default"
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="C:\Users\Sherif Ali.minikube\machines\minikube\boot2docker.iso"
"SATA-ImageUUID-0-0"="e3a8d3ba-0497-49d8-8415-f29fb495caa2"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="C:\Users\Sherif Ali.minikube\machines\minikube\disk.vmdk"
"SATA-ImageUUID-1-0"="787791e0-f256-4c6b-84e0-312236d4575e"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="0800274844BF"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,51145,,22"
hostonlyadapter2="VirtualBox Host-Only Ethernet Adapter #3"
macaddress2="0800271EF814"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="dsound"
audio_out="on"
audio_in="on"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="c/Users"
SharedFolderPathMachineMapping1="\\?\c:\Users"
VRDEActiveConnection="off"
VRDEClients==0
videocap="off"
videocapaudio="off"
capturescreens="0"
capturefilename="C:\Users\Sherif Ali\.minikube\machines\minikube\minikube\minikube.webm"
captureres="1024x768"
capturevideorate=512
capturevideofps=25
captureopts=""
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1589150207974
GuestAdditionsFacility_VirtualBox System Service=50,1589150208295
GuestAdditionsFacility_Seamless Mode=0,1589150208856
GuestAdditionsFacility_Graphics Mode=0,1589150207974
}
I0511 00:50:50.091069 22940 main.go:110] libmachine: STDERR:
{
}
I0511 00:50:50.094975 22940 main.go:110] libmachine: Host-only MAC: 0800271ef814
I0511 00:50:50.109614 22940 main.go:110] libmachine: Using SSH client type: native
I0511 00:50:50.110592 22940 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x7b5c20] 0x7b5bf0 [] 0s} 127.0.0.1 51145 }
I0511 00:50:50.111566 22940 main.go:110] libmachine: About to run SSH command:
ip addr show
I0511 00:50:50.210140 22940 main.go:110] libmachine: SSH cmd err, output: : 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:44:bf brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85559sec preferred_lft 85559sec
inet6 fe80::a00:27ff:fe48:44bf/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1e:f8:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
valid_lft 357sec preferred_lft 357sec
inet6 fe80::a00:27ff:fe1e:f814/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:4a:75:21 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
I0511 00:50:50.211138 22940 main.go:110] libmachine: SSH returned: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:48:44:bf brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
valid_lft 85559sec preferred_lft 85559sec
inet6 fe80::a00:27ff:fe48:44bf/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:1e:f8:14 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
valid_lft 357sec preferred_lft 357sec
inet6 fe80::a00:27ff:fe1e:f814/64 scope link
valid_lft forever preferred_lft forever
4: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:4a:75:21 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
END SSH
I0511 00:50:50.215997 22940 settings.go:124] acquiring lock: {Name:kubeconfigUpdate Clock:{} Delay:10s Timeout:0s Cancel:}
I0511 00:50:50.216973 22940 settings.go:132] Updating kubeconfig: C:\Users\Sherif Ali/.kube/config
I0511 00:50:50.219900 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali/.kube/config" with filemode -rw-------
I0511 00:50:50.272605 22940 cache_images.go:96] LoadImages start: [k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/pause:3.1 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kube-addon-manager:v9.0 gcr.io/k8s-minikube/storage-provisioner:v1.8.1]
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner:v1.8.1 -> C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2
I0511 00:50:50.280413 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd:3.3.15-0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver:v1.16.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64:1.14.13 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64:v1.10.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns:1.6.2 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager:v9.0 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
I0511 00:50:50.273581 22940 cache_images.go:151] windows sanitize: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause:3.1 -> C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1
I0511 00:50:50.278473 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1
I0511 00:50:50.279436 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:50.282365 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2
I0511 00:50:50.282365 22940 ssh_runner.go:160] Transferring 30892544 bytes to /var/lib/minikube/images/kube-proxy_v1.16.2
I0511 00:50:50.283355 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2
I0511 00:50:50.285296 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0
I0511 00:50:50.286270 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2
I0511 00:50:50.289197 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:50.290172 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:50.290172 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:50.291148 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2
I0511 00:50:50.291148 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0
I0511 00:50:50.292124 22940 cache_images.go:211] Loading image from cache: C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1
I0511 00:50:50.294086 22940 ssh_runner.go:160] Transferring 20683776 bytes to /var/lib/minikube/images/storage-provisioner_v1.8.1
I0511 00:50:50.294086 22940 ssh_runner.go:160] Transferring 31410176 bytes to /var/lib/minikube/images/kube-scheduler_v1.16.2
I0511 00:50:50.294086 22940 ssh_runner.go:160] Transferring 14267904 bytes to /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:50.303837 22940 ssh_runner.go:160] Transferring 48863744 bytes to /var/lib/minikube/images/kube-controller-manager_v1.16.2
I0511 00:50:50.303837 22940 ssh_runner.go:160] Transferring 85501440 bytes to /var/lib/minikube/images/etcd_3.3.15-0
I0511 00:50:50.303837 22940 ssh_runner.go:160] Transferring 50502656 bytes to /var/lib/minikube/images/kube-apiserver_v1.16.2
I0511 00:50:50.304813 22940 ssh_runner.go:160] Transferring 12207616 bytes to /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:50.308719 22940 ssh_runner.go:160] Transferring 11769344 bytes to /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:50.314573 22940 ssh_runner.go:160] Transferring 14125568 bytes to /var/lib/minikube/images/coredns_1.6.2
I0511 00:50:50.314573 22940 ssh_runner.go:160] Transferring 318976 bytes to /var/lib/minikube/images/pause_3.1
I0511 00:50:50.314573 22940 ssh_runner.go:160] Transferring 30522368 bytes to /var/lib/minikube/images/kube-addon-manager_v9.0
I0511 00:50:50.314573 22940 ssh_runner.go:160] Transferring 44910592 bytes to /var/lib/minikube/images/kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:50.470733 22940 ssh_runner.go:179] pause_3.1: copied 318976 bytes
I0511 00:50:50.500013 22940 docker.go:107] Loading image: /var/lib/minikube/images/pause_3.1
I0511 00:50:50.527341 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/pause_3.1
I0511 00:50:50.731325 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\pause_3.1 from cache
I0511 00:50:52.499843 22940 ssh_runner.go:179] k8s-dns-dnsmasq-nanny-amd64_1.14.13: copied 11769344 bytes
I0511 00:50:52.532045 22940 docker.go:107] Loading image: /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:52.560348 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/k8s-dns-dnsmasq-nanny-amd64_1.14.13
I0511 00:50:52.573038 22940 ssh_runner.go:179] k8s-dns-sidecar-amd64_1.14.13: copied 12207616 bytes
I0511 00:50:52.808254 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 from cache
I0511 00:50:52.808254 22940 docker.go:107] Loading image: /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:52.837532 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/k8s-dns-sidecar-amd64_1.14.13
I0511 00:50:52.942943 22940 ssh_runner.go:179] k8s-dns-kube-dns-amd64_1.14.13: copied 14267904 bytes
I0511 00:50:52.948798 22940 ssh_runner.go:179] coredns_1.6.2: copied 14125568 bytes
I0511 00:50:53.033710 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 from cache
I0511 00:50:53.033710 22940 docker.go:107] Loading image: /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:53.067873 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/k8s-dns-kube-dns-amd64_1.14.13
I0511 00:50:53.253310 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 from cache
I0511 00:50:53.253310 22940 docker.go:107] Loading image: /var/lib/minikube/images/coredns_1.6.2
I0511 00:50:53.283565 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/coredns_1.6.2
I0511 00:50:53.569535 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2 from cache
I0511 00:50:53.705196 22940 ssh_runner.go:179] storage-provisioner_v1.8.1: copied 20683776 bytes
I0511 00:50:53.718861 22940 docker.go:107] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1
I0511 00:50:53.748140 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/storage-provisioner_v1.8.1
I0511 00:50:54.119997 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 from cache
I0511 00:50:54.708526 22940 ssh_runner.go:179] kube-proxy_v1.16.2: copied 30892544 bytes
I0511 00:50:54.733902 22940 docker.go:107] Loading image: /var/lib/minikube/images/kube-proxy_v1.16.2
I0511 00:50:54.766109 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.16.2
I0511 00:50:54.817836 22940 ssh_runner.go:179] kube-addon-manager_v9.0: copied 30522368 bytes
I0511 00:50:54.881276 22940 ssh_runner.go:179] kube-scheduler_v1.16.2: copied 31410176 bytes
I0511 00:50:55.063791 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2 from cache
I0511 00:50:55.063791 22940 docker.go:107] Loading image: /var/lib/minikube/images/kube-addon-manager_v9.0
I0511 00:50:55.094044 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/kube-addon-manager_v9.0
I0511 00:50:55.440525 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 from cache
I0511 00:50:55.440525 22940 docker.go:107] Loading image: /var/lib/minikube/images/kube-scheduler_v1.16.2
I0511 00:50:55.468829 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.16.2
I0511 00:50:55.769437 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2 from cache
I0511 00:50:55.811404 22940 ssh_runner.go:179] kubernetes-dashboard-amd64_v1.10.1: copied 44910592 bytes
I0511 00:50:55.822141 22940 docker.go:107] Loading image: /var/lib/minikube/images/kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:55.856301 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/kubernetes-dashboard-amd64_v1.10.1
I0511 00:50:55.996845 22940 ssh_runner.go:179] kube-controller-manager_v1.16.2: copied 48863744 bytes
I0511 00:50:56.080780 22940 ssh_runner.go:179] kube-apiserver_v1.16.2: copied 50502656 bytes
I0511 00:50:56.297452 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 from cache
I0511 00:50:56.298428 22940 docker.go:107] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.16.2
I0511 00:50:56.324781 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.16.2
I0511 00:50:56.630270 22940 ssh_runner.go:179] etcd_3.3.15-0: copied 85501440 bytes
I0511 00:50:56.738606 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2 from cache
I0511 00:50:56.740583 22940 docker.go:107] Loading image: /var/lib/minikube/images/kube-apiserver_v1.16.2
I0511 00:50:56.767883 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.16.2
I0511 00:50:56.964062 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2 from cache
I0511 00:50:56.964062 22940 docker.go:107] Loading image: /var/lib/minikube/images/etcd_3.3.15-0
I0511 00:50:56.987484 22940 ssh_runner.go:96] (SSHRunner) Run: docker load -i /var/lib/minikube/images/etcd_3.3.15-0
I0511 00:50:57.317372 22940 cache_images.go:237] Successfully loaded image C:\Users\Sherif Ali\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0 from cache
I0511 00:50:57.317372 22940 cache_images.go:120] Successfully loaded all cached images.
I0511 00:50:57.322252 22940 cache_images.go:121] LoadImages end
I0511 00:50:57.322252 22940 kubeadm.go:665] kubelet v1.16.2 config:
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.99.100 --pod-manifest-path=/etc/kubernetes/manifests
[Install]
I0511 00:50:57.323228 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet"
I0511 00:50:57.363244 22940 cache_binaries.go:74] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/linux/amd64/kubeadm
I0511 00:50:57.363244 22940 cache_binaries.go:74] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/linux/amd64/kubelet
I0511 00:50:57.364222 22940 ssh_runner.go:160] Transferring 44252992 bytes to /var/lib/minikube/binaries/v1.16.2/kubeadm
I0511 00:50:57.366174 22940 ssh_runner.go:160] Transferring 123129136 bytes to /var/lib/minikube/binaries/v1.16.2/kubelet
I0511 00:50:58.764781 22940 ssh_runner.go:179] kubeadm: copied 44252992 bytes
I0511 00:51:00.039437 22940 ssh_runner.go:179] kubelet: copied 123129136 bytes
I0511 00:51:00.048221 22940 ssh_runner.go:160] Transferring 1146 bytes to /var/tmp/minikube/kubeadm.yaml
I0511 00:51:00.049197 22940 ssh_runner.go:179] kubeadm.yaml: copied 1146 bytes
I0511 00:51:00.060956 22940 ssh_runner.go:160] Transferring 561 bytes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0511 00:51:00.061886 22940 ssh_runner.go:179] 10-kubeadm.conf: copied 561 bytes
I0511 00:51:00.071645 22940 ssh_runner.go:160] Transferring 349 bytes to /lib/systemd/system/kubelet.service
I0511 00:51:00.071645 22940 ssh_runner.go:179] kubelet.service: copied 349 bytes
I0511 00:51:00.081405 22940 ssh_runner.go:160] Transferring 1532 bytes to /etc/kubernetes/manifests/addon-manager.yaml.tmpl
I0511 00:51:00.082381 22940 ssh_runner.go:179] addon-manager.yaml.tmpl: copied 1532 bytes
I0511 00:51:00.091164 22940 ssh_runner.go:160] Transferring 1001 bytes to /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0511 00:51:00.092140 22940 ssh_runner.go:179] dashboard-clusterrole.yaml: copied 1001 bytes
I0511 00:51:00.100925 22940 ssh_runner.go:160] Transferring 1018 bytes to /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0511 00:51:00.101908 22940 ssh_runner.go:179] dashboard-clusterrolebinding.yaml: copied 1018 bytes
I0511 00:51:00.113639 22940 ssh_runner.go:160] Transferring 837 bytes to /etc/kubernetes/addons/dashboard-configmap.yaml
I0511 00:51:00.114589 22940 ssh_runner.go:179] dashboard-configmap.yaml: copied 837 bytes
I0511 00:51:00.124348 22940 ssh_runner.go:160] Transferring 4027 bytes to /etc/kubernetes/addons/dashboard-dp.yaml
I0511 00:51:00.125325 22940 ssh_runner.go:179] dashboard-dp.yaml: copied 4027 bytes
I0511 00:51:00.134111 22940 ssh_runner.go:160] Transferring 759 bytes to /etc/kubernetes/addons/dashboard-ns.yaml
I0511 00:51:00.135085 22940 ssh_runner.go:179] dashboard-ns.yaml: copied 759 bytes
I0511 00:51:00.143870 22940 ssh_runner.go:160] Transferring 1724 bytes to /etc/kubernetes/addons/dashboard-role.yaml
I0511 00:51:00.144844 22940 ssh_runner.go:179] dashboard-role.yaml: copied 1724 bytes
I0511 00:51:00.156556 22940 ssh_runner.go:160] Transferring 1046 bytes to /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0511 00:51:00.158508 22940 ssh_runner.go:179] dashboard-rolebinding.yaml: copied 1046 bytes
I0511 00:51:00.167296 22940 ssh_runner.go:160] Transferring 837 bytes to /etc/kubernetes/addons/dashboard-sa.yaml
I0511 00:51:00.170247 22940 ssh_runner.go:179] dashboard-sa.yaml: copied 837 bytes
I0511 00:51:00.179004 22940 ssh_runner.go:160] Transferring 1401 bytes to /etc/kubernetes/addons/dashboard-secret.yaml
I0511 00:51:00.179980 22940 ssh_runner.go:179] dashboard-secret.yaml: copied 1401 bytes
I0511 00:51:00.188764 22940 ssh_runner.go:160] Transferring 1294 bytes to /etc/kubernetes/addons/dashboard-svc.yaml
I0511 00:51:00.189742 22940 ssh_runner.go:179] dashboard-svc.yaml: copied 1294 bytes
I0511 00:51:00.200478 22940 ssh_runner.go:160] Transferring 271 bytes to /etc/kubernetes/addons/storageclass.yaml
I0511 00:51:00.201453 22940 ssh_runner.go:179] storageclass.yaml: copied 271 bytes
I0511 00:51:00.210239 22940 ssh_runner.go:160] Transferring 1709 bytes to /etc/kubernetes/addons/storage-provisioner.yaml
I0511 00:51:00.211212 22940 ssh_runner.go:179] storage-provisioner.yaml: copied 1709 bytes
I0511 00:51:00.224925 22940 ssh_runner.go:160] Transferring 2470 bytes to /etc/kubernetes/addons/influxGrafana-rc.yaml
I0511 00:51:00.225852 22940 ssh_runner.go:179] influxGrafana-rc.yaml: copied 2470 bytes
I0511 00:51:00.235614 22940 ssh_runner.go:160] Transferring 1085 bytes to /etc/kubernetes/addons/grafana-svc.yaml
I0511 00:51:00.236590 22940 ssh_runner.go:179] grafana-svc.yaml: copied 1085 bytes
I0511 00:51:00.245372 22940 ssh_runner.go:160] Transferring 1048 bytes to /etc/kubernetes/addons/influxdb-svc.yaml
I0511 00:51:00.246347 22940 ssh_runner.go:179] influxdb-svc.yaml: copied 1048 bytes
I0511 00:51:00.255131 22940 ssh_runner.go:160] Transferring 1616 bytes to /etc/kubernetes/addons/heapster-rc.yaml
I0511 00:51:00.256108 22940 ssh_runner.go:179] heapster-rc.yaml: copied 1616 bytes
I0511 00:51:00.264892 22940 ssh_runner.go:160] Transferring 1006 bytes to /etc/kubernetes/addons/heapster-svc.yaml
I0511 00:51:00.266847 22940 ssh_runner.go:179] heapster-svc.yaml: copied 1006 bytes
I0511 00:51:00.275627 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl start kubelet"
I0511 00:51:00.454237 22940 certs.go:75] acquiring lock: {Name:setupCerts Clock:{} Delay:15s Timeout:0s Cancel:}
I0511 00:51:00.454237 22940 certs.go:83] Setting up C:\Users\Sherif Ali\.minikube for IP: 192.168.99.100
I0511 00:51:00.456188 22940 crypto.go:69] Generating cert C:\Users\Sherif Ali\.minikube\client.crt with IP's: []
I0511 00:51:00.462043 22940 crypto.go:157] Writing cert to C:\Users\Sherif Ali\.minikube\client.crt ...
I0511 00:51:00.462043 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\client.crt" with filemode -rw-r--r--
I0511 00:51:00.463996 22940 crypto.go:165] Writing key to C:\Users\Sherif Ali\.minikube\client.key ...
I0511 00:51:00.463996 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\client.key" with filemode -rw-------
I0511 00:51:00.465948 22940 crypto.go:69] Generating cert C:\Users\Sherif Ali\.minikube\apiserver.crt with IP's: [192.168.99.100 10.96.0.1 10.0.0.1]
I0511 00:51:00.471804 22940 crypto.go:157] Writing cert to C:\Users\Sherif Ali\.minikube\apiserver.crt ...
I0511 00:51:00.471804 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\apiserver.crt" with filemode -rw-r--r--
I0511 00:51:00.473756 22940 crypto.go:165] Writing key to C:\Users\Sherif Ali\.minikube\apiserver.key ...
I0511 00:51:00.474732 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\apiserver.key" with filemode -rw-------
I0511 00:51:00.475711 22940 crypto.go:69] Generating cert C:\Users\Sherif Ali\.minikube\proxy-client.crt with IP's: []
I0511 00:51:00.481594 22940 crypto.go:157] Writing cert to C:\Users\Sherif Ali\.minikube\proxy-client.crt ...
I0511 00:51:00.482563 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\proxy-client.crt" with filemode -rw-r--r--
I0511 00:51:00.485468 22940 crypto.go:165] Writing key to C:\Users\Sherif Ali\.minikube\proxy-client.key ...
I0511 00:51:00.485468 22940 lock.go:41] attempting to write to file "C:\Users\Sherif Ali\.minikube\proxy-client.key" with filemode -rw-------
I0511 00:51:00.493276 22940 ssh_runner.go:160] Transferring 1066 bytes to /var/lib/minikube/certs/ca.crt
I0511 00:51:00.494253 22940 ssh_runner.go:179] ca.crt: copied 1066 bytes
I0511 00:51:00.514750 22940 ssh_runner.go:160] Transferring 1679 bytes to /var/lib/minikube/certs/ca.key
I0511 00:51:00.516730 22940 ssh_runner.go:179] ca.key: copied 1679 bytes
I0511 00:51:00.527436 22940 ssh_runner.go:160] Transferring 1298 bytes to /var/lib/minikube/certs/apiserver.crt
I0511 00:51:00.528413 22940 ssh_runner.go:179] apiserver.crt: copied 1298 bytes
I0511 00:51:00.549885 22940 ssh_runner.go:160] Transferring 1675 bytes to /var/lib/minikube/certs/apiserver.key
I0511 00:51:00.551838 22940 ssh_runner.go:179] apiserver.key: copied 1675 bytes
I0511 00:51:00.566477 22940 ssh_runner.go:160] Transferring 1074 bytes to /var/lib/minikube/certs/proxy-client-ca.crt
I0511 00:51:00.568429 22940 ssh_runner.go:179] proxy-client-ca.crt: copied 1074 bytes
I0511 00:51:00.584045 22940 ssh_runner.go:160] Transferring 1675 bytes to /var/lib/minikube/certs/proxy-client-ca.key
I0511 00:51:00.585997 22940 ssh_runner.go:179] proxy-client-ca.key: copied 1675 bytes
I0511 00:51:00.596733 22940 ssh_runner.go:160] Transferring 1103 bytes to /var/lib/minikube/certs/proxy-client.crt
I0511 00:51:00.598690 22940 ssh_runner.go:179] proxy-client.crt: copied 1103 bytes
I0511 00:51:00.609421 22940 ssh_runner.go:160] Transferring 1679 bytes to /var/lib/minikube/certs/proxy-client.key
I0511 00:51:00.610397 22940 ssh_runner.go:179] proxy-client.key: copied 1679 bytes
I0511 00:51:00.622109 22940 ssh_runner.go:160] Transferring 1066 bytes to /usr/share/ca-certificates/minikubeCA.pem
I0511 00:51:00.624062 22940 ssh_runner.go:179] minikubeCA.pem: copied 1066 bytes
I0511 00:51:00.635786 22940 ssh_runner.go:160] Transferring 428 bytes to /var/lib/minikube/kubeconfig
I0511 00:51:00.636750 22940 ssh_runner.go:179] kubeconfig: copied 428 bytes
I0511 00:51:00.677742 22940 ssh_runner.go:96] (SSHRunner) Run: openssl version
I0511 00:51:00.710924 22940 ssh_runner.go:96] (SSHRunner) Run: sudo test -f /etc/ssl/certs/minikubeCA.pem
I0511 00:51:00.744108 22940 ssh_runner.go:96] (SSHRunner) Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0511 00:51:00.795838 22940 ssh_runner.go:96] (SSHRunner) Run: sudo test -f /etc/ssl/certs/b5213941.0
I0511 00:51:00.802669 22940 kubeadm.go:436] RestartCluster start
I0511 00:51:00.829998 22940 ssh_runner.go:96] (SSHRunner) Run: sudo test -d /data/minikube
I0511 00:51:00.835856 22940 ssh_runner.go:139] (SSHRunner) Non-zero exit: sudo test -d /data/minikube: Process exited with status 1 (977.6µs)
I0511 00:51:00.836831 22940 kubeadm.go:229] /data/minikube skipping compat symlinks: Process exited with status 1
I0511 00:51:00.836831 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:00.989084 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:02.274479 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:02.440396 22940 ssh_runner.go:96] (SSHRunner) Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0511 00:51:02.620956 22940 kubeadm.go:496] Waiting for apiserver process ...
I0511 00:51:02.650236 22940 ssh_runner.go:96] (SSHRunner) Run: sudo pgrep kube-apiserver
I0511 00:51:02.670734 22940 kubeadm.go:511] Waiting for apiserver to port healthy status ...
I0511 00:51:23.236947 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: read tcp 192.168.99.1:52662->192.168.99.100:8443: wsarecv: An existing connection was forcibly closed by the remote host.
I0511 00:51:23.237860 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:25.741425 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:25.741425 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:28.241067 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:28.241067 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:30.744483 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:30.744483 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:33.245179 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:33.246287 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:35.742985 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:35.743926 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:38.246488 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
I0511 00:51:38.246488 22940 kubeadm.go:514] apiserver status: Stopped, err:
I0511 00:51:40.743066 22940 kubeadm.go:168] https://192.168.99.100:8443/healthz response: Get https://192.168.99.100:8443/healthz: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
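
The repeated "connection refused" responses above mean the apiserver never started listening on 192.168.99.100:8443 during the poll window. As a rough sketch of the health poll minikube is performing (the endpoint URL is taken from this log; `wait_healthz` is a hypothetical helper, not part of minikube), the loop can be reproduced manually to keep probing after minikube gives up:

```shell
#!/bin/sh
# wait_healthz URL [RETRIES] [PAUSE] -- poll an HTTPS healthz endpoint
# until it answers "ok", roughly matching the ~2.5 s cadence in the log.
wait_healthz() {
  url=$1
  retries=${2:-10}
  pause=${3:-3}
  i=1
  while [ "$i" -le "$retries" ]; do
    # -k: the apiserver presents a cert signed by the self-generated minikube CA
    body=$(curl -ks --max-time 5 "$url") && [ "$body" = "ok" ] && return 0
    echo "attempt $i: $url not healthy yet" >&2
    i=$((i + 1))
    sleep "$pause"
  done
  return 1
}
```

Usage against the endpoint from this log: `wait_healthz "https://192.168.99.100:8443/healthz" 20 3 && echo "apiserver healthy"`. If it never succeeds, inspecting the kube-apiserver container inside the VM (e.g. `minikube ssh` followed by `docker ps -a` and `docker logs` on the apiserver container) usually shows why it is crashing.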
Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command: