Minikube Crashed #4402

Closed · ravishgithub opened this issue Jun 1, 2019 · 3 comments
@ravishgithub

The exact command to reproduce the issue: sudo minikube start --memory=5120 --cpus=4 --kubernetes-version=v1.14.1

The full output of the command that failed:
sudo minikube start --memory=5120 --cpus=4 --kubernetes-version=v1.14.1
[sudo] password for ravish:
😄 minikube v1.1.0 on linux (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Restarting existing kvm2 VM for "minikube" ...
⌛ Waiting for SSH access ...
🐳 Configuring environment for Kubernetes v1.14.1 on Docker 18.09.6
🔄 Relaunching Kubernetes v1.14.1 using kubeadm ...

💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

The full output of the "minikube logs" command:
==> coredns <==
.:53
2019-06-01T06:45:06.052Z [INFO] CoreDNS-1.3.1
2019-06-01T06:45:06.052Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-01T06:45:06.052Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> dmesg <==
[Jun 1 06:42] core: CPUID marked event: 'bus cycles' unavailable
[ +0.000676] #2
[ +0.001019] #3
[ +0.023584] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.127121] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +19.546646] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10
[ +0.023680] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11
[ +0.023801] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +0.120381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.140751] systemd-fstab-generator[1109]: Ignoring "noauto" for root device
[ +0.005313] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +0.674326] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.343786] vboxguest: loading out-of-tree module taints kernel.
[ +0.004687] vboxguest: PCI device not found, probably running on physical hardware.
[ +8.112908] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
[Jun 1 06:43] systemd-fstab-generator[2834]: Ignoring "noauto" for root device
[Jun 1 06:44] kauditd_printk_skb: 104 callbacks suppressed
[ +34.395612] kauditd_printk_skb: 20 callbacks suppressed
[ +12.814733] NFSD: Unable to end grace period: -110
[Jun 1 06:45] kauditd_printk_skb: 35 callbacks suppressed
[ +33.562566] kauditd_printk_skb: 2 callbacks suppressed
[ +20.955630] kauditd_printk_skb: 116 callbacks suppressed
[Jun 1 06:46] kauditd_printk_skb: 302 callbacks suppressed
[ +48.525674] kauditd_printk_skb: 32 callbacks suppressed
[Jun 1 06:47] kauditd_printk_skb: 2 callbacks suppressed
[ +8.894684] kauditd_printk_skb: 8 callbacks suppressed
[ +8.436728] kauditd_printk_skb: 2 callbacks suppressed
[Jun 1 06:48] kauditd_printk_skb: 8 callbacks suppressed

==> kernel <==
07:21:54 up 39 min, 0 users, load average: 1.40, 1.41, 1.31
Linux minikube 4.15.0 #1 SMP Tue May 21 00:14:40 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: no objects passed to apply
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-01T07:16:38+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-01T07:17:40+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-01T07:17:42+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-01T07:18:36+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-01T07:18:38+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-01T07:19:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-01T07:19:38+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-01T07:20:35+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-01T07:20:37+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-01T07:21:36+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-01T07:21:38+00:00 ==

==> kube-apiserver <==
I0601 06:46:52.255338 1 trace.go:81] Trace[1992436708]: "List /apis/apps/v1/namespaces/kube-system/deployments" (started: 2019-06-01 06:46:51.393400265 +0000 UTC m=+148.965916413) (total time: 861.893032ms):
Trace[1992436708]: [861.682415ms] [861.621309ms] Listing from storage done
I0601 06:46:52.256048 1 trace.go:81] Trace[270027804]: "Update /apis/policy/v1beta1/namespaces/istio-system/poddisruptionbudgets/istio-pilot/status" (started: 2019-06-01 06:46:51.37438829 +0000 UTC m=+148.946904443) (total time: 881.640323ms):
Trace[270027804]: [879.894494ms] [879.833245ms] Object stored in database
I0601 06:46:52.346101 1 trace.go:81] Trace[1423535433]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:46:51.520357313 +0000 UTC m=+149.092873480) (total time: 825.718155ms):
Trace[1423535433]: [333.643578ms] [333.643578ms] initial value restored
Trace[1423535433]: [825.699789ms] [491.824674ms] Transaction committed
I0601 06:46:52.346377 1 trace.go:81] Trace[678633740]: "Patch /api/v1/namespaces/istio-system/events/istio-policy-78c7d8cffb-jh7j5.15a4002724acbc59" (started: 2019-06-01 06:46:51.520280484 +0000 UTC m=+149.092796640) (total time: 826.007964ms):
Trace[678633740]: [333.722073ms] [333.697306ms] About to apply patch
Trace[678633740]: [825.854438ms] [491.987758ms] Object stored in database
I0601 06:46:55.308042 1 trace.go:81] Trace[973297618]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:46:54.754910896 +0000 UTC m=+152.327427065) (total time: 553.095893ms):
Trace[973297618]: [499.582111ms] [499.582111ms] initial value restored
I0601 06:46:55.308277 1 trace.go:81] Trace[1059483186]: "Patch /api/v1/namespaces/default/events/details-v1-65b966b497-x2k4x.15a400330e70eacb" (started: 2019-06-01 06:46:54.754824243 +0000 UTC m=+152.327340404) (total time: 553.436073ms):
Trace[1059483186]: [499.671732ms] [499.646258ms] About to apply patch
I0601 06:46:55.411480 1 trace.go:81] Trace[975801333]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-06-01 06:46:54.806205561 +0000 UTC m=+152.378721718) (total time: 605.236409ms):
Trace[975801333]: [605.200794ms] [604.949922ms] Transaction committed
I0601 06:46:55.411661 1 trace.go:81] Trace[1605189436]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-06-01 06:46:54.806074042 +0000 UTC m=+152.378590198) (total time: 605.571467ms):
Trace[1605189436]: [605.448984ms] [605.359606ms] Object stored in database
I0601 06:46:58.100698 1 trace.go:81] Trace[1133818302]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:46:57.288445678 +0000 UTC m=+154.860961843) (total time: 812.184712ms):
Trace[1133818302]: [417.485103ms] [417.485103ms] initial value restored
Trace[1133818302]: [812.141204ms] [394.308441ms] Transaction committed
I0601 06:46:58.101256 1 trace.go:81] Trace[900022639]: "Patch /api/v1/namespaces/default/events/details-v1-65b966b497-x2k4x.15a400330e70eacb" (started: 2019-06-01 06:46:57.288358457 +0000 UTC m=+154.860874611) (total time: 812.87648ms):
Trace[900022639]: [417.574737ms] [417.532783ms] About to apply patch
Trace[900022639]: [812.701224ms] [394.950105ms] Object stored in database
I0601 06:46:58.841412 1 trace.go:81] Trace[666173241]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:46:58.103204644 +0000 UTC m=+155.675720807) (total time: 738.168314ms):
Trace[666173241]: [729.458886ms] [729.458886ms] initial value restored
I0601 06:46:58.841659 1 trace.go:81] Trace[299599909]: "Patch /api/v1/namespaces/istio-system/events/istio-egressgateway-6b4cd4d9f-vgtt8.15a4002ca01d1841" (started: 2019-06-01 06:46:58.103134371 +0000 UTC m=+155.675650537) (total time: 738.507666ms):
Trace[299599909]: [729.532112ms] [729.514565ms] About to apply patch
I0601 06:47:07.355076 1 trace.go:81] Trace[1461808627]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:47:06.74793107 +0000 UTC m=+164.320447242) (total time: 607.097723ms):
Trace[1461808627]: [177.053118ms] [177.053118ms] initial value restored
Trace[1461808627]: [607.075808ms] [429.706419ms] Transaction committed
I0601 06:47:07.355304 1 trace.go:81] Trace[1405949854]: "Patch /api/v1/namespaces/default/events/reviews-v2-7dc5785684-55qt2.15a40032f7fbdfa3" (started: 2019-06-01 06:47:06.747829833 +0000 UTC m=+164.320346005) (total time: 607.456987ms):
Trace[1405949854]: [177.156803ms] [177.135264ms] About to apply patch
Trace[1405949854]: [607.288526ms] [429.972111ms] Object stored in database
I0601 06:47:07.983495 1 trace.go:81] Trace[848126431]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-06-01 06:47:07.30965267 +0000 UTC m=+164.882168827) (total time: 673.812274ms):
Trace[848126431]: [673.673536ms] [673.604594ms] About to write a response
I0601 06:47:08.026848 1 trace.go:81] Trace[363579291]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:47:07.357400474 +0000 UTC m=+164.929916650) (total time: 669.412459ms):
Trace[363579291]: [633.859881ms] [633.859881ms] initial value restored
I0601 06:47:08.027074 1 trace.go:81] Trace[467250577]: "Patch /api/v1/namespaces/default/events/details-v1-65b966b497-x2k4x.15a400330e70eacb" (started: 2019-06-01 06:47:07.357295514 +0000 UTC m=+164.929811671) (total time: 669.761418ms):
Trace[467250577]: [633.967079ms] [633.921066ms] About to apply patch
I0601 06:47:09.199862 1 trace.go:81] Trace[542024574]: "GuaranteedUpdate etcd3: *core.Event" (started: 2019-06-01 06:47:08.666845196 +0000 UTC m=+166.239361374) (total time: 532.968908ms):
Trace[542024574]: [394.327209ms] [394.327209ms] initial value restored
Trace[542024574]: [532.915297ms] [138.235102ms] Transaction committed
I0601 06:47:09.200163 1 trace.go:81] Trace[1691363252]: "Patch /api/v1/namespaces/default/events/details-v1-65b966b497-x2k4x.15a400330e70eacb" (started: 2019-06-01 06:47:08.666758305 +0000 UTC m=+166.239274469) (total time: 533.385156ms):
Trace[1691363252]: [394.416535ms] [394.389933ms] About to apply patch
Trace[1691363252]: [533.148331ms] [138.572976ms] Object stored in database
I0601 06:47:19.626273 1 trace.go:81] Trace[729910900]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-06-01 06:47:19.045997108 +0000 UTC m=+176.618513260) (total time: 580.244511ms):
Trace[729910900]: [580.118241ms] [580.07803ms] About to write a response
I0601 06:47:19.626885 1 trace.go:81] Trace[1098033861]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-06-01 06:47:18.299969204 +0000 UTC m=+175.872485366) (total time: 1.326892219s):
Trace[1098033861]: [1.326819572s] [1.326775732s] About to write a response

==> kube-proxy <==
W0601 06:44:58.253885 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0601 06:44:59.856777 1 server_others.go:147] Using iptables Proxier.
W0601 06:44:59.905238 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0601 06:44:59.983934 1 server.go:555] Version: v1.14.1
I0601 06:44:59.994465 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0601 06:44:59.994538 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0601 06:44:59.994821 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0601 06:44:59.995047 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0601 06:44:59.995264 1 config.go:202] Starting service config controller
I0601 06:44:59.995313 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0601 06:44:59.995294 1 config.go:102] Starting endpoints config controller
I0601 06:44:59.995588 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0601 06:45:00.132402 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0601 06:45:00.195546 1 controller_utils.go:1034] Caches are synced for service config controller

==> kube-scheduler <==
E0601 06:44:24.826678 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.828051 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.834102 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.835238 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.836486 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.837543 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.838903 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:24.888133 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.826648 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.827779 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.828490 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.829966 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.834825 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.835917 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.837060 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.838112 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.839422 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:25.889114 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.829510 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.829735 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.829763 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.830499 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.835302 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.836356 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.837585 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.838748 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.839973 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:26.890343 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.830323 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.831200 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.832448 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.833596 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.836017 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.836978 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.838046 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.839200 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.840533 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:27.891005 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0601 06:44:34.153577 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0601 06:44:34.153593 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0601 06:44:34.153966 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0601 06:44:34.154010 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0601 06:44:34.154071 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0601 06:44:34.154234 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0601 06:44:34.154129 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0601 06:44:34.154287 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0601 06:44:36.014612 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0601 06:44:36.114968 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0601 06:44:36.115053 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0601 06:44:57.393041 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2019-06-01 06:42:49 UTC, end at Sat 2019-06-01 07:21:54 UTC. --
Jun 01 06:44:59 minikube kubelet[2908]: I0601 06:44:59.276134 2908 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "istio-envoy" (UniqueName: "kubernetes.io/empty-dir/9b59f6dc-7fa3-11e9-9783-a0898098ac66-istio-envoy") pod "httpbin-5446f4d9b4-hsd2b" (UID: "9b59f6dc-7fa3-11e9-9783-a0898098ac66")
Jun 01 06:45:01 minikube kubelet[2908]: W0601 06:45:01.246627 2908 kuberuntime_container.go:691] No ref for container {"docker" "5d282790f32f53071a9715ead552b5eea4e3d752a0d4d1dcb85cda7936678d91"}
Jun 01 06:45:01 minikube kubelet[2908]: W0601 06:45:01.654342 2908 pod_container_deletor.go:75] Container "b66f6ccf7bbcdedff26bcec0e0858d6f1eb6e75cac025e8e807eb33b88081605" not found in pod's containers
Jun 01 06:45:01 minikube kubelet[2908]: W0601 06:45:01.654441 2908 pod_container_deletor.go:75] Container "19ae1db65fdcc3817b09d6207d7c4c56b152f751cbf158538ab3142759536a5b" not found in pod's containers
Jun 01 06:45:01 minikube kubelet[2908]: W0601 06:45:01.654490 2908 pod_container_deletor.go:75] Container "420372e36b6b3ddca74e86bb3023fad1ceddc89d369001c6e8e7c86c706dc030" not found in pod's containers
Jun 01 06:45:01 minikube kubelet[2908]: W0601 06:45:01.654527 2908 pod_container_deletor.go:75] Container "6ceb723c366192138c503d196b18896c9dab1a154025955cf2016d0fc280f1e3" not found in pod's containers
Jun 01 06:45:01 minikube kubelet[2908]: W0601 06:45:01.654587 2908 pod_container_deletor.go:75] Container "e3787f94b9e50690f80d0e5c633094ee5b4b7462163f081e53e7f11fd1945982" not found in pod's containers
Jun 01 06:45:01 minikube kubelet[2908]: I0601 06:45:01.689417 2908 reconciler.go:154] Reconciler: start to sync state
Jun 01 06:45:05 minikube kubelet[2908]: W0601 06:45:05.137556 2908 pod_container_deletor.go:75] Container "4cf6c9f83c847bdbe80d6b138deb7b226c100891fe6c365f91dd944dba0e9954" not found in pod's containers
Jun 01 06:45:06 minikube kubelet[2908]: W0601 06:45:06.189544 2908 kuberuntime_container.go:691] No ref for container {"docker" "475683fa2547b60129271e406196f6f1c71b9dd277643e815ea17c2412066378"}
Jun 01 06:45:12 minikube kubelet[2908]: W0601 06:45:12.276490 2908 pod_container_deletor.go:75] Container "6b492dbd7d5fda4cfa6a4c25808133faf7220de847b85e9e24d2a02a8e367935" not found in pod's containers
Jun 01 06:45:27 minikube kubelet[2908]: W0601 06:45:27.551814 2908 pod_container_deletor.go:75] Container "aa8f37f1ea24e5ca40f48182791bb7a4a8cc93061961453fd9e03bd0e16a6671" not found in pod's containers
Jun 01 06:45:28 minikube kubelet[2908]: W0601 06:45:28.214842 2908 pod_container_deletor.go:75] Container "f2eecb1fba3d785dda59c80651a4820fffac3f3cf0481260ed23274df621f1e7" not found in pod's containers
Jun 01 06:45:31 minikube kubelet[2908]: W0601 06:45:31.769081 2908 pod_container_deletor.go:75] Container "3f591c1264bff1af243b6633ded340a7d2ccb79111793ffffd06dfd821452859" not found in pod's containers
Jun 01 06:45:38 minikube kubelet[2908]: W0601 06:45:38.881640 2908 pod_container_deletor.go:75] Container "f69414254f0ce61ddca1345cb47e6e11ef18a2f65ce5d211a5692a1b632cbe1f" not found in pod's containers
Jun 01 06:45:38 minikube kubelet[2908]: W0601 06:45:38.925656 2908 pod_container_deletor.go:75] Container "cc830e15e111e50a086e185733aff1fa402ea56bf32a662f0c838ccecaf88192" not found in pod's containers
Jun 01 06:45:38 minikube kubelet[2908]: W0601 06:45:38.942985 2908 pod_container_deletor.go:75] Container "27a27ccac90f8841c2378d914bcca7a11936b0ab7d27b8782a99cf4ec6250e95" not found in pod's containers
Jun 01 06:45:38 minikube kubelet[2908]: W0601 06:45:38.965064 2908 pod_container_deletor.go:75] Container "4de36437fe2a7294613bb2947cf0e80abfd030d87b76859b85899f6aeb8ed244" not found in pod's containers
Jun 01 06:45:38 minikube kubelet[2908]: W0601 06:45:38.974133 2908 pod_container_deletor.go:75] Container "86d26c3e5cb8a7d10fee8a24ef8addae505cbc52344682af2d148fcab10daeb0" not found in pod's containers
Jun 01 06:45:38 minikube kubelet[2908]: E0601 06:45:38.982814 2908 remote_runtime.go:321] ContainerStatus "3e4d5d5e6a6e7d14c20dc986a92a83b74ff3daa6c9b4cfc520bbf03ad0b733ea" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 3e4d5d5e6a6e7d14c20dc986a92a83b74ff3daa6c9b4cfc520bbf03ad0b733ea
Jun 01 06:45:38 minikube kubelet[2908]: E0601 06:45:38.982873 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "prometheus-d8d46c5b5-nfftp_istio-system(c15bd9f4-7f9d-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: 3e4d5d5e6a6e7d14c20dc986a92a83b74ff3daa6c9b4cfc520bbf03ad0b733ea
Jun 01 06:45:38 minikube kubelet[2908]: W0601 06:45:38.995593 2908 pod_container_deletor.go:75] Container "d868214d7b3a33748a71f69ba6a448b0d0e87531691da36d49e68e7d1e61bfcd" not found in pod's containers
Jun 01 06:45:39 minikube kubelet[2908]: W0601 06:45:39.054439 2908 pod_container_deletor.go:75] Container "feae2d5be6d056c8a4f9ec11335369e16cd20814ff7e715c2b37cbc75c38be11" not found in pod's containers
Jun 01 06:45:51 minikube kubelet[2908]: W0601 06:45:51.290248 2908 pod_container_deletor.go:75] Container "ab96068b228172583ab9d391031396f79ed20a839b5dbea2ea5269fce2eef159" not found in pod's containers
Jun 01 06:45:58 minikube kubelet[2908]: W0601 06:45:58.000453 2908 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod62b38689-7f9f-11e9-9783-a0898098ac66/24fbdcfac9b6362d5429d48ef728df8396bdcaa971963c7121ab697268816809": none of the resources are being tracked.
Jun 01 06:45:58 minikube kubelet[2908]: W0601 06:45:58.661136 2908 pod_container_deletor.go:75] Container "02ca782937b45162f491dcbff63780112e7733b90b2d08dae9f4c8b6ee56645f" not found in pod's containers
Jun 01 06:45:59 minikube kubelet[2908]: W0601 06:45:59.955238 2908 pod_container_deletor.go:75] Container "e8f6e312ca4ce243392bb3aae43bd28874bcfc9d0165e8694a84ef715e837595" not found in pod's containers
Jun 01 06:46:00 minikube kubelet[2908]: W0601 06:46:00.497153 2908 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod63395fb9-7f9f-11e9-9783-a0898098ac66/7538b91cd1c432b21882d38f861ec16e43761c35dc6ca928061df2e973b73fd0": none of the resources are being tracked.
Jun 01 06:46:00 minikube kubelet[2908]: W0601 06:46:00.506802 2908 pod_container_deletor.go:75] Container "a35a265b2df3d8f58e7b1d880b1e931e0f461676a0de4b65ed9948b125c4874d" not found in pod's containers
Jun 01 06:46:00 minikube kubelet[2908]: W0601 06:46:00.536994 2908 pod_container_deletor.go:75] Container "842cc197244a971535a9862f1c4a2947c20e047e44fb120339f3883abca6dbe7" not found in pod's containers
Jun 01 06:46:00 minikube kubelet[2908]: W0601 06:46:00.826634 2908 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod64235cc2-7f9f-11e9-9783-a0898098ac66/feaabc6c75d1f20c41ba59daa8790c94c33bb55f05c9922881eafb3791394b77": none of the resources are being tracked.
Jun 01 06:46:01 minikube kubelet[2908]: W0601 06:46:01.263742 2908 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod656f2666-7f9f-11e9-9783-a0898098ac66/ac7fa0286be9b735d5aa05c1e323d9a6e96f9d1cd786c3c94c9e4830c10cf6f9": none of the resources are being tracked.
Jun 01 06:46:01 minikube kubelet[2908]: E0601 06:46:01.315954 2908 cadvisor_stats_provider.go:403] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/burstable/pod656f2666-7f9f-11e9-9783-a0898098ac66/ac7fa0286be9b735d5aa05c1e323d9a6e96f9d1cd786c3c94c9e4830c10cf6f9": RecentStats: unable to find data in memory cache], ["/kubepods/burstable/pod64235cc2-7f9f-11e9-9783-a0898098ac66/feaabc6c75d1f20c41ba59daa8790c94c33bb55f05c9922881eafb3791394b77": RecentStats: unable to find data in memory cache], ["/kubepods/burstable/podc0d4de2c-7f9d-11e9-9783-a0898098ac66/a3f4896a4d53c74b19baaf794fad0f59b6388beefe1e7fd65dbf8694dc10962d": RecentStats: unable to find data in memory cache], ["/kubepods/burstable/pod63395fb9-7f9f-11e9-9783-a0898098ac66/7538b91cd1c432b21882d38f861ec16e43761c35dc6ca928061df2e973b73fd0": RecentStats: unable to find data in memory cache]
Jun 01 06:46:01 minikube kubelet[2908]: E0601 06:46:01.804829 2908 remote_runtime.go:321] ContainerStatus "044b23dc058a292616de50130410598d2965a9bbcd68a409fa59192e86b1c9c9" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 044b23dc058a292616de50130410598d2965a9bbcd68a409fa59192e86b1c9c9
Jun 01 06:46:01 minikube kubelet[2908]: E0601 06:46:01.804979 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "ratings-v1-5b7cd6c58f-k48pb_default(62b38689-7f9f-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: 044b23dc058a292616de50130410598d2965a9bbcd68a409fa59192e86b1c9c9
Jun 01 06:46:10 minikube kubelet[2908]: E0601 06:46:10.753315 2908 remote_runtime.go:321] ContainerStatus "e7ca318da047ac172dc3fcfbe7639291074076b7bab4bcd4c82b4280a0e07a2b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: e7ca318da047ac172dc3fcfbe7639291074076b7bab4bcd4c82b4280a0e07a2b
Jun 01 06:46:10 minikube kubelet[2908]: E0601 06:46:10.753370 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "ratings-v1-5b7cd6c58f-k48pb_default(62b38689-7f9f-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: e7ca318da047ac172dc3fcfbe7639291074076b7bab4bcd4c82b4280a0e07a2b
Jun 01 06:46:15 minikube kubelet[2908]: E0601 06:46:15.237894 2908 cadvisor_stats_provider.go:403] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/burstable/podc0d4de2c-7f9d-11e9-9783-a0898098ac66/774f3da0dd38f7e63a9ce22efd3df60490a4d8fb0ea3296ab228439b1957dc20": RecentStats: unable to find data in memory cache]
Jun 01 06:46:19 minikube kubelet[2908]: E0601 06:46:19.796105 2908 remote_runtime.go:321] ContainerStatus "7bc755843172b61540b26988462be7c05825cbd897af13234e157ad11cca146d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 7bc755843172b61540b26988462be7c05825cbd897af13234e157ad11cca146d
Jun 01 06:46:19 minikube kubelet[2908]: E0601 06:46:19.796222 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "httpbin-5446f4d9b4-hsd2b_default(9b59f6dc-7fa3-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: 7bc755843172b61540b26988462be7c05825cbd897af13234e157ad11cca146d
Jun 01 06:46:23 minikube kubelet[2908]: W0601 06:46:23.614263 2908 pod_container_deletor.go:75] Container "97e8f77a774f759b468a9e98cac5e01719bdc223dbd409b0952cdf65062fdb8c" not found in pod's containers
Jun 01 06:46:23 minikube kubelet[2908]: E0601 06:46:23.623102 2908 remote_runtime.go:321] ContainerStatus "2c68dd86feb706ec2076dce8bca3985acc2b6fb7d9e04387778fa1f0d6d6bfea" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 2c68dd86feb706ec2076dce8bca3985acc2b6fb7d9e04387778fa1f0d6d6bfea
Jun 01 06:46:23 minikube kubelet[2908]: E0601 06:46:23.623151 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "productpage-v1-79458795bc-bpnwf_default(656f2666-7f9f-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: 2c68dd86feb706ec2076dce8bca3985acc2b6fb7d9e04387778fa1f0d6d6bfea
Jun 01 06:46:29 minikube kubelet[2908]: E0601 06:46:29.735148 2908 cadvisor_stats_provider.go:403] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/burstable/pod9b59f6dc-7fa3-11e9-9783-a0898098ac66/ee55b59a09558b37b84676d70f6b1746613c3c9a86be8736833bbb809fdac60d": RecentStats: unable to find data in memory cache]
Jun 01 06:46:35 minikube kubelet[2908]: E0601 06:46:35.291980 2908 remote_runtime.go:321] ContainerStatus "5b0dbb65dae7dce9c1fbe7c6eb2bb5cf7973f87f6ad945999d28fd3e0dd91ed0" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 5b0dbb65dae7dce9c1fbe7c6eb2bb5cf7973f87f6ad945999d28fd3e0dd91ed0
Jun 01 06:46:35 minikube kubelet[2908]: E0601 06:46:35.292064 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "istio-telemetry-5c9cb76c56-rxlg4_istio-system(c0e822ec-7f9d-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: 5b0dbb65dae7dce9c1fbe7c6eb2bb5cf7973f87f6ad945999d28fd3e0dd91ed0
Jun 01 06:46:38 minikube kubelet[2908]: E0601 06:46:38.529356 2908 remote_runtime.go:321] ContainerStatus "d6841006b37489d1cb77d7ca67faa57c23287f0501285727a8ff4a9fb2c5c0a6" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: d6841006b37489d1cb77d7ca67faa57c23287f0501285727a8ff4a9fb2c5c0a6
Jun 01 06:46:38 minikube kubelet[2908]: E0601 06:46:38.529415 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "istio-policy-78c7d8cffb-jh7j5_istio-system(c0d4de2c-7f9d-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: d6841006b37489d1cb77d7ca67faa57c23287f0501285727a8ff4a9fb2c5c0a6
Jun 01 06:46:47 minikube kubelet[2908]: E0601 06:46:47.769191 2908 remote_runtime.go:321] ContainerStatus "1d1a0e33434df1912ff707fdca97802e7eb2600b7d86546041222a46cb8b4f44" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 1d1a0e33434df1912ff707fdca97802e7eb2600b7d86546041222a46cb8b4f44
Jun 01 06:46:47 minikube kubelet[2908]: E0601 06:46:47.769736 2908 kuberuntime_manager.go:917] getPodContainerStatuses for pod "istio-telemetry-5c9cb76c56-rxlg4_istio-system(c0e822ec-7f9d-11e9-9783-a0898098ac66)" failed: rpc error: code = Unknown desc = Error: No such container: 1d1a0e33434df1912ff707fdca97802e7eb2600b7d86546041222a46cb8b4f44

==> kubernetes-dashboard <==
2019/06/01 06:57:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 06:57:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 06:58:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 06:58:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 06:59:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 06:59:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:00:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:00:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:01:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:01:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:02:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:02:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:03:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:03:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:04:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:04:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:05:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:05:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:06:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:06:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:07:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:07:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:08:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:08:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:09:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:09:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:10:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:10:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:11:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:11:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:12:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:12:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:13:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:13:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:14:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:14:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:15:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:15:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:16:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:16:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:17:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:17:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:18:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:18:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:19:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:19:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:20:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:20:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:21:09 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/01 07:21:39 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

The operating system version: Ubuntu 18.04.2 LTS

@medyagh (Member) commented Jun 4, 2019

Thank you for sharing your experience! If you don't mind, could you please try it without sudo?
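For reference, a rough sketch of running the same start command without sudo. This assumes the kvm2 driver and that your user has permission to talk to libvirt; the group name ("libvirt" on Ubuntu 18.04, "libvirtd" on some other distributions) is an assumption, adjust for your system:

# Assumption: add your user to the libvirt group so kvm2 works without root
sudo usermod -aG libvirt $USER
# Log out and back in (or use newgrp) so the group change takes effect
newgrp libvirt

# Then start minikube as a regular user, with the same flags as in the report above
minikube start --memory=5120 --cpus=4 --kubernetes-version=v1.14.1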

@ravishgithub (Author)

Without sudo it never works; it stops too early complaining about some permission. Anyway, this issue was due to having multiple VM drivers configured: kvm, no driver (using Docker), and VirtualBox. I expected switching between them to work, but in the end I had to clean everything up. Now, with just kvm2, it starts. You can close this issue. I was disappointed, though.
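For anyone who ends up in the same mixed-driver state, a minimal sketch of the cleanup described above (wiping ~/.minikube is an assumption about where the cached state lives; back it up first if you have other clusters):

# Remove the existing cluster and its cached state
minikube delete

# Assumption: removing the local minikube state directory forces a clean slate
rm -rf ~/.minikube

# Recreate the cluster with a single, explicit driver (kvm2 in this case)
minikube start --vm-driver=kvm2 --memory=5120 --cpus=4 --kubernetes-version=v1.14.1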

@medyagh (Member) commented Jun 8, 2019

@ravishgithub is your issue resolved? Please feel free to reopen this issue if you still have the problem.
