
Kubernetes v1.16.13 is broken #8840

Closed
MOZGIII opened this issue Jul 25, 2020 · 3 comments
Labels

area/kubernetes-versions: Improving support for versions of Kubernetes
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments


MOZGIII commented Jul 25, 2020

Steps to reproduce the issue:

  1. minikube start --container-runtime=crio --kubernetes-version v1.16.13

Full output of failed command:

Not exactly a failed command, but:

$ kubectl get -n kube-system po                        
NAME                               READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-7wckf           1/1     Running   0          13m
coredns-5644d7b6d9-dmgtl           1/1     Running   0          13m
etcd-minikube                      1/1     Running   0          12m
kindnet-6wzxv                      1/1     Running   0          13m
kube-apiserver-minikube            1/1     Running   0          12m
kube-controller-manager-minikube   1/1     Running   7          12m
kube-proxy-9h76m                   1/1     Running   0          13m
kube-scheduler-minikube            1/1     Running   7          12m
storage-provisioner                1/1     Running   1          13m

kube-controller-manager-minikube and kube-scheduler-minikube keep crashing and restarting.
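For CI, a quick way to detect this state is to flag pods with an unusually high RESTARTS count. This is a hypothetical helper, not part of our pipeline; the `pods` sample below is abridged from the output above:

```shell
# Flag kube-system pods that have restarted suspiciously often.
# Sample `kubectl get -n kube-system po` output, abridged:
pods='NAME                               READY   STATUS    RESTARTS   AGE
kube-controller-manager-minikube   1/1     Running   7          12m
kube-scheduler-minikube            1/1     Running   7          12m
storage-provisioner                1/1     Running   1          13m'

# Skip the header row (NR > 1) and print pods whose RESTARTS ($4) is >= 5.
echo "$pods" | awk 'NR > 1 && $4 >= 5 { print $1 }'
```

Against a live cluster, the same awk filter can be fed directly from `kubectl get -n kube-system po`.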

The same behavior reproduces in our CI, making our pipelines fail. It happens with v1.16.13 under any runtime (docker, crio, containerd); v1.16.12 worked fine.

Full output of minikube start command used, if not already included:

😄  minikube v1.11.0 on Ubuntu 20.04
    ▪ KUBECONFIG=/home/mozgiii/.kube/config
✨  Automatically selected the docker driver
🆕  Kubernetes 1.18.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.18.3
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=2, Memory=16000MB) ...
🎁  Preparing Kubernetes v1.16.13 on CRI-O 1.17.3 ...
    ▪ kubeadm.pod-network-cidr=10.244.0.0/16
    > kubelet.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    > kubeadm.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    > kubectl.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    > kubectl: 40.97 MiB / 40.97 MiB [---------------] 100.00% 16.00 MiB p/s 3s
    > kubeadm: 38.61 MiB / 38.61 MiB [---------------] 100.00% 12.81 MiB p/s 3s
    > kubelet: 106.07 MiB / 106.07 MiB [-------------] 100.00% 16.28 MiB p/s 7s
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
  
🏄  Done! kubectl is now configured to use "minikube"

❗  /usr/bin/kubectl is version 1.18.6, which may be incompatible with Kubernetes 1.16.13.
💡  You can also use 'minikube kubectl -- get pods' to invoke a matching version

Optional: Full output of minikube logs command:

$ minikube logs
==> CRI-O <==
-- Logs begin at Sat 2020-07-25 15:50:40 UTC, end at Sat 2020-07-25 16:08:02 UTC. --
Jul 25 16:02:38 minikube crio[3358]: time="2020-07-25 16:02:38.884958038Z" level=info msg="removed pod sandbox with infra container: test-vector/vector-bkcxl/POD" id=ae16a6d3-756f-462a-b6b3-bfb1a1a19abe
Jul 25 16:02:38 minikube crio[3358]: time="2020-07-25 16:02:38.912365472Z" level=info msg="removed pod sandbox with infra container: test-vector-test-pod/test-pod/POD" id=667ed291-cc51-4d57-a826-4ef04680ae87
Jul 25 16:05:01 minikube crio[3358]: time="2020-07-25 16:05:01.819625554Z" level=info msg="Attempting to create container: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=a0c04486-1ca8-4d76-8422-11a9e8ee7573
Jul 25 16:05:01 minikube crio[3358]: time="2020-07-25 16:05:01.856273190Z" level=warning msg="requested logPath for ctr id ed820242b4142eba2626744c3412c03bc6ba7be4d0601ee2f88527900144ab25 is a relative path: kube-controller-manager/7.log" id=a0c04486-1ca8-4d76-8422-11a9e8ee7573
Jul 25 16:05:01 minikube crio[3358]: time="2020-07-25 16:05:01.856405089Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/kube-system_kube-controller-manager-minikube_6e92d66ef7df537311698cf04c24cea7/kube-controller-manager/7.log" id=a0c04486-1ca8-4d76-8422-11a9e8ee7573
Jul 25 16:05:01 minikube crio[3358]: time="2020-07-25 16:05:01.965300801Z" level=info msg="Created container ed820242b4142eba2626744c3412c03bc6ba7be4d0601ee2f88527900144ab25: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=a0c04486-1ca8-4d76-8422-11a9e8ee7573
Jul 25 16:05:01 minikube crio[3358]: time="2020-07-25 16:05:01.983530495Z" level=info msg="Started container ed820242b4142eba2626744c3412c03bc6ba7be4d0601ee2f88527900144ab25: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=b36ea823-814b-4a77-ac1e-ce9caff12189
Jul 25 16:05:02 minikube crio[3358]: time="2020-07-25 16:05:02.810652679Z" level=info msg="Attempting to create container: kube-system/kube-scheduler-minikube/kube-scheduler" id=dae379c2-7be9-427e-adaf-d35477153e9e
Jul 25 16:05:02 minikube crio[3358]: time="2020-07-25 16:05:02.839794198Z" level=warning msg="requested logPath for ctr id 8d2921e4bc6dffd31cb83db2f83555151de8027d023c3d6f04084bc798581113 is a relative path: kube-scheduler/7.log" id=dae379c2-7be9-427e-adaf-d35477153e9e
Jul 25 16:05:02 minikube crio[3358]: time="2020-07-25 16:05:02.839860972Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/kube-system_kube-scheduler-minikube_1d01d1f4456fbb5f7de180550f8a8e4a/kube-scheduler/7.log" id=dae379c2-7be9-427e-adaf-d35477153e9e
Jul 25 16:05:02 minikube crio[3358]: time="2020-07-25 16:05:02.920176285Z" level=info msg="Created container 8d2921e4bc6dffd31cb83db2f83555151de8027d023c3d6f04084bc798581113: kube-system/kube-scheduler-minikube/kube-scheduler" id=dae379c2-7be9-427e-adaf-d35477153e9e
Jul 25 16:05:02 minikube crio[3358]: time="2020-07-25 16:05:02.924234748Z" level=info msg="Started container 8d2921e4bc6dffd31cb83db2f83555151de8027d023c3d6f04084bc798581113: kube-system/kube-scheduler-minikube/kube-scheduler" id=1be87e27-227c-433d-bd22-01d422c1b069
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.390649340Z" level=info msg="attempting to run pod sandbox with infra container: test-vector/vector-xgvhq/POD" id=30cee370-03bb-4f07-bc1d-a18ab1fb8b57
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.659853579Z" level=info msg="About to add CNI network lo (type=loopback)"
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.667596083Z" level=info msg="Got pod network &{Name:vector-xgvhq Namespace:test-vector ID:5aa76163f2a11f9f790dfe3b06861de1379a38d550372e91f27a9944bca9d172 NetNS:/proc/12609/ns/net Networks:[] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.667675047Z" level=info msg="About to add CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.780376577Z" level=info msg="Got pod network &{Name:vector-xgvhq Namespace:test-vector ID:5aa76163f2a11f9f790dfe3b06861de1379a38d550372e91f27a9944bca9d172 NetNS:/proc/12609/ns/net Networks:[] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.780766040Z" level=info msg="About to check CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.781161239Z" level=info msg="ran pod sandbox 5aa76163f2a11f9f790dfe3b06861de1379a38d550372e91f27a9944bca9d172 with infra container: test-vector/vector-xgvhq/POD" id=30cee370-03bb-4f07-bc1d-a18ab1fb8b57
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.789941201Z" level=info msg="Attempting to create container: test-vector/vector-xgvhq/vector" id=d9f60c64-f6d2-481a-a842-4fb8bb7e38ed
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.824921513Z" level=warning msg="requested logPath for ctr id c7b2ee25a2885884c2067fbd0d70e55c2c87066477e1e7f15587a3fe08754840 is a relative path: vector/0.log" id=d9f60c64-f6d2-481a-a842-4fb8bb7e38ed
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.824990819Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/test-vector_vector-xgvhq_16d7cc1d-ea6f-47cb-8f3b-d08a10205273/vector/0.log" id=d9f60c64-f6d2-481a-a842-4fb8bb7e38ed
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.948809274Z" level=info msg="Created container c7b2ee25a2885884c2067fbd0d70e55c2c87066477e1e7f15587a3fe08754840: test-vector/vector-xgvhq/vector" id=d9f60c64-f6d2-481a-a842-4fb8bb7e38ed
Jul 25 16:05:37 minikube crio[3358]: time="2020-07-25 16:05:37.954371720Z" level=info msg="Started container c7b2ee25a2885884c2067fbd0d70e55c2c87066477e1e7f15587a3fe08754840: test-vector/vector-xgvhq/vector" id=6e80fb6f-3f06-4e70-8328-ef4d514d0b1e
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.048285743Z" level=info msg="attempting to run pod sandbox with infra container: test-vector-test-pod/test-pod-excluded/POD" id=c8c55fc2-ec5a-44fb-87d4-5c0773cd332a
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.182777900Z" level=info msg="attempting to run pod sandbox with infra container: test-vector-test-pod/test-pod-control/POD" id=829a3dd2-104f-4f95-9280-3753e815029e
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.208430147Z" level=info msg="About to add CNI network lo (type=loopback)"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.210845065Z" level=info msg="Got pod network &{Name:test-pod-excluded Namespace:test-vector-test-pod ID:69fd6602a3f0c9f6eb848cebc96c0980c4add524ec8bce42d3ccb81803afcde8 NetNS:/proc/12724/ns/net Networks:[] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.210870599Z" level=info msg="About to add CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.292587638Z" level=info msg="Got pod network &{Name:test-pod-excluded Namespace:test-vector-test-pod ID:69fd6602a3f0c9f6eb848cebc96c0980c4add524ec8bce42d3ccb81803afcde8 NetNS:/proc/12724/ns/net Networks:[] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.292732786Z" level=info msg="About to check CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.292869978Z" level=info msg="ran pod sandbox 69fd6602a3f0c9f6eb848cebc96c0980c4add524ec8bce42d3ccb81803afcde8 with infra container: test-vector-test-pod/test-pod-excluded/POD" id=c8c55fc2-ec5a-44fb-87d4-5c0773cd332a
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.295437899Z" level=info msg="Attempting to create container: test-vector-test-pod/test-pod-excluded/test-pod-excluded" id=6e311f6e-02d2-4268-818b-82e118c8a3bf
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.298740317Z" level=info msg="About to add CNI network lo (type=loopback)"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.301288302Z" level=info msg="Got pod network &{Name:test-pod-control Namespace:test-vector-test-pod ID:73633e84a738065ed3ef8f4f00b6b619f56a38a19e6ed780d267fe30cd6f6b80 NetNS:/proc/12774/ns/net Networks:[] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.301312868Z" level=info msg="About to add CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.312320207Z" level=warning msg="requested logPath for ctr id 265fb82b5d149c6fa8009fecf6bf47097b94a0df031392ad29923984dd383090 is a relative path: test-pod-excluded/0.log" id=6e311f6e-02d2-4268-818b-82e118c8a3bf
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.312345845Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/test-vector-test-pod_test-pod-excluded_b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f/test-pod-excluded/0.log" id=6e311f6e-02d2-4268-818b-82e118c8a3bf
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.432676910Z" level=info msg="Got pod network &{Name:test-pod-control Namespace:test-vector-test-pod ID:73633e84a738065ed3ef8f4f00b6b619f56a38a19e6ed780d267fe30cd6f6b80 NetNS:/proc/12774/ns/net Networks:[] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.433472265Z" level=info msg="About to check CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.434117575Z" level=info msg="ran pod sandbox 73633e84a738065ed3ef8f4f00b6b619f56a38a19e6ed780d267fe30cd6f6b80 with infra container: test-vector-test-pod/test-pod-control/POD" id=829a3dd2-104f-4f95-9280-3753e815029e
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.443025736Z" level=info msg="Attempting to create container: test-vector-test-pod/test-pod-control/test-pod-control" id=82afcc77-e52a-4752-a952-829c24a0e2c3
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.495619282Z" level=warning msg="requested logPath for ctr id bf5f9b4c2d48b318c64fd276fa352779cc7bccdf83d3c3db3a25472a35820afc is a relative path: test-pod-control/0.log" id=82afcc77-e52a-4752-a952-829c24a0e2c3
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.495702460Z" level=warning msg="logPath from relative path is now absolute: /var/log/pods/test-vector-test-pod_test-pod-control_fe8d5517-b65b-4c0c-bf07-14e636578600/test-pod-control/0.log" id=82afcc77-e52a-4752-a952-829c24a0e2c3
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.499648293Z" level=info msg="Created container 265fb82b5d149c6fa8009fecf6bf47097b94a0df031392ad29923984dd383090: test-vector-test-pod/test-pod-excluded/test-pod-excluded" id=6e311f6e-02d2-4268-818b-82e118c8a3bf
Jul 25 16:05:39 minikube crio[3358]: time="2020-07-25 16:05:39.781193426Z" level=info msg="Started container 265fb82b5d149c6fa8009fecf6bf47097b94a0df031392ad29923984dd383090: test-vector-test-pod/test-pod-excluded/test-pod-excluded" id=86e15509-c2e8-4768-8486-3cf1b44cb505
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.083461407Z" level=info msg="Created container bf5f9b4c2d48b318c64fd276fa352779cc7bccdf83d3c3db3a25472a35820afc: test-vector-test-pod/test-pod-control/test-pod-control" id=82afcc77-e52a-4752-a952-829c24a0e2c3
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.102244450Z" level=info msg="Started container bf5f9b4c2d48b318c64fd276fa352779cc7bccdf83d3c3db3a25472a35820afc: test-vector-test-pod/test-pod-control/test-pod-control" id=23689235-7016-42a2-b299-b8c4c80d821f
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.808824403Z" level=info msg="About to del CNI network lo (type=loopback)"
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.812219060Z" level=info msg="About to del CNI network lo (type=loopback)"
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.816650510Z" level=info msg="Got pod network &{Name:test-pod-control Namespace:test-vector-test-pod ID:73633e84a738065ed3ef8f4f00b6b619f56a38a19e6ed780d267fe30cd6f6b80 NetNS:/proc/12774/ns/net Networks:[{Name:rkt.kubernetes.io Ifname:eth0}] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.817158513Z" level=info msg="About to del CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.823071030Z" level=info msg="Got pod network &{Name:test-pod-excluded Namespace:test-vector-test-pod ID:69fd6602a3f0c9f6eb848cebc96c0980c4add524ec8bce42d3ccb81803afcde8 NetNS:/proc/12724/ns/net Networks:[{Name:rkt.kubernetes.io Ifname:eth0}] RuntimeConfig:map[rkt.kubernetes.io:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}"
Jul 25 16:05:40 minikube crio[3358]: time="2020-07-25 16:05:40.823807270Z" level=info msg="About to del CNI network rkt.kubernetes.io (type=bridge)"
Jul 25 16:05:41 minikube crio[3358]: time="2020-07-25 16:05:41.116785103Z" level=info msg="stopped pod sandbox: test-vector-test-pod/test-pod-excluded/POD" id=2dcdaf49-c651-45c2-a1e6-beb29f1658ce
Jul 25 16:05:41 minikube crio[3358]: time="2020-07-25 16:05:41.163527027Z" level=info msg="stopped pod sandbox: test-vector-test-pod/test-pod-control/POD" id=cb6c4da2-a946-40eb-b54e-7ea1175e7b7f
Jul 25 16:06:30 minikube crio[3358]: time="2020-07-25 16:06:30.350065589Z" level=info msg="stopped container 8d2921e4bc6dffd31cb83db2f83555151de8027d023c3d6f04084bc798581113: kube-system/kube-scheduler-minikube/kube-scheduler" id=add7c0fb-f3d7-45fa-aa59-25e144cc2e21
Jul 25 16:06:30 minikube crio[3358]: time="2020-07-25 16:06:30.968725956Z" level=info msg="Removed container 540d3d812bad613690de24dc8d0c40b7458d511f44778c71b24f7632566f7f9f: kube-system/kube-scheduler-minikube/kube-scheduler" id=fcb92ba0-00c6-46da-8c5a-ab41b39924b9
Jul 25 16:06:31 minikube crio[3358]: time="2020-07-25 16:06:31.420152443Z" level=info msg="stopped container ed820242b4142eba2626744c3412c03bc6ba7be4d0601ee2f88527900144ab25: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=24fe7821-969b-40ca-83f5-38ba22f35a20
Jul 25 16:06:31 minikube crio[3358]: time="2020-07-25 16:06:31.981485509Z" level=info msg="Removed container a16d0008e5800b5f0a481fba440ba2e5c8f009265d07159c1cce25d930e376b4: kube-system/kube-controller-manager-minikube/kube-controller-manager" id=afcb15c1-e201-4b48-b964-9c91d4845bf7

==> container status <==
CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID
bf5f9b4c2d48b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                     2 minutes ago       Exited              test-pod-control          0                   73633e84a7380
265fb82b5d149       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                     2 minutes ago       Exited              test-pod-excluded         0                   69fd6602a3f0c
c7b2ee25a2885       7f0e6565501af724666889479dd9f0a952a39807cc52cc697da9389e42bbfb66                                     2 minutes ago       Running             vector                    0                   5aa76163f2a11
8d2921e4bc6df       9832c7ec57b82a15fc0be4e91c90a8f35a879bb0c8816462da4f6e94d326170a                                     2 minutes ago       Exited              kube-scheduler            7                   be1d269faa1f1
ed820242b4142       805157ef634b09052550824f0621117858445d8630a000ef28324b2472778ecd                                     3 minutes ago       Exited              kube-controller-manager   7                   bcb668fa8b34d
1d31f53f5a6c6       docker.io/kindest/kindnetd@sha256:46e34ccb3e08557767b7c80e957741d9f2590968ff32646875632d40cf62adad   15 minutes ago      Running             kindnet-cni               0                   c67c32ea98fb6
964f8658a1e9d       4689081edb103a9e8174bf23a255bfbe0b2d9ed82edc907abab6989d1c60f02c                                     15 minutes ago      Running             storage-provisioner       1                   b8f778c72994d
4bcf22116bd6e       547ceac0053393862c7a1f1b5445fe251fd2f63f2da783af422701ffe6fdd3d5                                     15 minutes ago      Running             kube-proxy                0                   6563e598c7901
3ee3b95dc8116       4689081edb103a9e8174bf23a255bfbe0b2d9ed82edc907abab6989d1c60f02c                                     15 minutes ago      Exited              storage-provisioner       0                   b8f778c72994d
332eae2b22325       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                     15 minutes ago      Running             coredns                   0                   737007d6017cb
cc72c1890dae4       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                     15 minutes ago      Running             coredns                   0                   8334ee3568730
c5c0ed17438fd       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                     16 minutes ago      Running             etcd                      0                   0c94e12aff4ef
ab4f3b26f16b0       5e09a312716d34c12e7b8272863126b2a5fe7e5baebe0de3fc4f6b3474f541e3                                     16 minutes ago      Running             kube-apiserver            0                   b39ab738fe66a

==> coredns [332eae2b223252f05719f6a3d3846ef42726d0e9b7fe9842948742fbb102f348] <==
E0725 15:52:09.472308       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.472685       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.473007       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
.:53
2020-07-25T15:52:14.473Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2020-07-25T15:52:14.473Z [INFO] CoreDNS-1.6.2
2020-07-25T15:52:14.473Z [INFO] linux/amd64, go1.12.8, 795a3eb
2020-07-25T15:52:16.664Z [INFO] plugin/ready: Still waiting on: "kubernetes"
I0725 15:52:25.963335       1 trace.go:82] Trace[64360377]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-07-25 15:52:10.474306866 +0000 UTC m=+1.207122451) (total time: 15.488920019s):
Trace[64360377]: [15.488920019s] [15.488920019s] END
E0725 15:52:25.963405       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:25.963405       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:25.963405       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
I0725 15:52:25.963666       1 trace.go:82] Trace[1400578394]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-07-25 15:52:10.472836224 +0000 UTC m=+1.205652000) (total time: 15.490766957s):
Trace[1400578394]: [15.490766957s] [15.490766957s] END
E0725 15:52:25.963692       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:25.963692       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:25.963692       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
I0725 15:52:25.963948       1 trace.go:82] Trace[1590764092]: "Reflector pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2020-07-25 15:52:10.475441377 +0000 UTC m=+1.208256939) (total time: 15.488453681s):
Trace[1590764092]: [15.488453681s] [15.488453681s] END
E0725 15:52:25.963970       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:25.963970       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:25.963970       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
2020-07-25T15:52:26.663Z [INFO] plugin/ready: Still waiting on: "kubernetes"

==> coredns [cc72c1890dae4288171f8658927b981257368f14d7bdd2db39aecce413b12cf7] <==
E0725 15:52:09.167981       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.167981       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.168266       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:10.171833       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.167981       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.167981       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.167981       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.167981       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.168266       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:09.168266       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:10.171833       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:17.264457       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:11.173637       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:12.175502       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:13.181470       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
.:53
2020-07-25T15:52:14.170Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
2020-07-25T15:52:14.170Z [INFO] CoreDNS-1.6.2
2020-07-25T15:52:14.170Z [INFO] linux/amd64, go1.12.8, 795a3eb
CoreDNS-1.6.2
linux/amd64, go1.12.8, 795a3eb
E0725 15:52:14.185277       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
2020-07-25T15:52:16.782Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E0725 15:52:17.264457       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable
E0725 15:52:17.264494       1 reflector.go:126] pkg/mod/k8s.io/[email protected]+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: network is unreachable

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_07_25T18_51_47_0700
                    minikube.k8s.io/version=v1.11.0
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 25 Jul 2020 15:51:44 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 25 Jul 2020 16:07:41 +0000   Sat, 25 Jul 2020 15:51:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 25 Jul 2020 16:07:41 +0000   Sat, 25 Jul 2020 15:51:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 25 Jul 2020 16:07:41 +0000   Sat, 25 Jul 2020 15:51:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 25 Jul 2020 16:07:41 +0000   Sat, 25 Jul 2020 15:51:41 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.2
  Hostname:    minikube
Capacity:
 cpu:                8
 ephemeral-storage:  263174212Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             65822956Ki
 pods:               110
Allocatable:
 cpu:                8
 ephemeral-storage:  263174212Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             65822956Ki
 pods:               110
System Info:
 Machine ID:                 4c2ecfae929e48608216d79c7e325ec6
 System UUID:                69ca8790-518b-4e7f-8960-2a886866e85d
 Boot ID:                    d317116b-5db8-45e4-b14f-8d0833a73659
 Kernel Version:             5.4.0-42-generic
 OS Image:                   Ubuntu 19.10
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  cri-o://1.17.3
 Kubelet Version:            v1.16.13
 Kube-Proxy Version:         v1.16.13
PodCIDR:                     10.244.0.0/24
PodCIDRs:                    10.244.0.0/24
Non-terminated Pods:         (10 in total)
  Namespace                  Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-5644d7b6d9-7wckf            100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
  kube-system                coredns-5644d7b6d9-dmgtl            100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
  kube-system                etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                kindnet-6wzxv                       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
  kube-system                kube-apiserver-minikube             250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
  kube-system                kube-controller-manager-minikube    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                kube-proxy-9h76m                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  kube-system                kube-scheduler-minikube             100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
  kube-system                storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
  test-vector                vector-xgvhq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  100m (1%)
  memory             190Mi (0%)  390Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                  Message
  ----    ------                   ----               ----                  -------
  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 15m                kube-proxy, minikube  Starting kube-proxy.

==> dmesg <==
[Jul25 12:50] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
[  +0.000000]  #5 #6 #7
[  +4.396694] Initramfs unpacking failed: Decoding failed
[  +0.095894] platform eisa.0: EISA: Cannot allocate resource for mainboard
[  +0.000430] platform eisa.0: Cannot allocate resource for EISA slot 1
[  +0.000426] platform eisa.0: Cannot allocate resource for EISA slot 2
[  +0.000417] platform eisa.0: Cannot allocate resource for EISA slot 3
[  +0.000410] platform eisa.0: Cannot allocate resource for EISA slot 4
[  +0.000435] platform eisa.0: Cannot allocate resource for EISA slot 5
[  +0.000397] platform eisa.0: Cannot allocate resource for EISA slot 6
[  +0.000396] platform eisa.0: Cannot allocate resource for EISA slot 7
[  +0.000388] platform eisa.0: Cannot allocate resource for EISA slot 8
[  +0.032289] resource sanity check: requesting [mem 0xfdffe800-0xfe0007ff], which spans more than pnp 00:07 [mem 0xfdb00000-0xfdffffff]
[  +0.000773] caller pmc_core_probe+0x7f/0x17f mapping multiple BARs
[  +0.386122] nvme nvme0: missing or invalid SUBNQN field.
[  +5.951710] systemd[1]: /etc/systemd/system/docker.service.d/override.conf:1: Assignment outside of section. Ignoring.
[  +0.000765] systemd[1]: /etc/systemd/system/docker.service.d/override.conf:2: Assignment outside of section. Ignoring.
[  +0.538275] nvidia: loading out-of-tree module taints kernel.
[  +0.000005] nvidia: module license 'NVIDIA' taints kernel.
[  +0.000001] Disabling lock debugging due to kernel taint
[  +0.052428] uvcvideo 1-5:1.0: Entity type for entity Processing 3 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Extension 6 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Extension 12 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Camera 1 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Extension 8 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Extension 9 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Extension 10 was not initialized!
[  +0.000001] uvcvideo 1-5:1.0: Entity type for entity Extension 11 was not initialized!
[  +0.033358] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  440.100  Fri May 29 08:45:51 UTC 2020
[  +1.258601] VBoxNetFlt: Successfully started.
[  +0.001355] VBoxNetAdp: Successfully started.
[  +4.681072] kauditd_printk_skb: 14 callbacks suppressed
[  +0.482380] Started bpfilter
[Jul25 15:50] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.

==> etcd [c5c0ed17438fd0826cca3826978e36af1eabac20440a1ca04172a906cdb03db4] <==
2020-07-25 15:51:40.083520 I | etcdmain: etcd Version: 3.3.15
2020-07-25 15:51:40.083651 I | etcdmain: Git SHA: 94745a4ee
2020-07-25 15:51:40.083670 I | etcdmain: Go Version: go1.12.9
2020-07-25 15:51:40.083684 I | etcdmain: Go OS/Arch: linux/amd64
2020-07-25 15:51:40.083702 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
2020-07-25 15:51:40.083910 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-25 15:51:40.085827 I | embed: listening for peers on https://172.17.0.2:2380
2020-07-25 15:51:40.086006 I | embed: listening for client requests on 127.0.0.1:2379
2020-07-25 15:51:40.086114 I | embed: listening for client requests on 172.17.0.2:2379
2020-07-25 15:51:40.165190 I | etcdserver: name = minikube
2020-07-25 15:51:40.165248 I | etcdserver: data dir = /var/lib/minikube/etcd
2020-07-25 15:51:40.165271 I | etcdserver: member dir = /var/lib/minikube/etcd/member
2020-07-25 15:51:40.165287 I | etcdserver: heartbeat = 100ms
2020-07-25 15:51:40.165301 I | etcdserver: election = 1000ms
2020-07-25 15:51:40.165315 I | etcdserver: snapshot count = 10000
2020-07-25 15:51:40.165345 I | etcdserver: advertise client URLs = https://172.17.0.2:2379
2020-07-25 15:51:40.165362 I | etcdserver: initial advertise peer URLs = https://172.17.0.2:2380
2020-07-25 15:51:40.165390 I | etcdserver: initial cluster = minikube=https://172.17.0.2:2380
2020-07-25 15:51:40.182333 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
2020-07-25 15:51:40.182421 I | raft: b8e14bda2255bc24 became follower at term 0
2020-07-25 15:51:40.182453 I | raft: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-07-25 15:51:40.182497 I | raft: b8e14bda2255bc24 became follower at term 1
2020-07-25 15:51:40.204605 W | auth: simple token is not cryptographically signed
2020-07-25 15:51:40.212306 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2020-07-25 15:51:40.212542 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-07-25 15:51:40.263370 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2020-07-25 15:51:40.264887 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-07-25 15:51:40.265249 I | embed: listening for metrics on http://172.17.0.2:2381
2020-07-25 15:51:40.265589 I | embed: listening for metrics on http://127.0.0.1:2381
2020-07-25 15:51:41.083547 I | raft: b8e14bda2255bc24 is starting a new election at term 1
2020-07-25 15:51:41.083625 I | raft: b8e14bda2255bc24 became candidate at term 2
2020-07-25 15:51:41.083663 I | raft: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
2020-07-25 15:51:41.083699 I | raft: b8e14bda2255bc24 became leader at term 2
2020-07-25 15:51:41.083720 I | raft: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-07-25 15:51:41.084518 I | etcdserver: setting up the initial cluster version to 3.3
2020-07-25 15:51:41.084874 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-07-25 15:51:41.086279 I | embed: ready to serve client requests
2020-07-25 15:51:41.087599 I | embed: ready to serve client requests
2020-07-25 15:51:41.089877 I | embed: serving client requests on 172.17.0.2:2379
2020-07-25 15:51:41.091442 I | embed: serving client requests on 127.0.0.1:2379
2020-07-25 15:51:41.170450 N | etcdserver/membership: set the initial cluster version to 3.3
2020-07-25 15:51:41.171351 I | etcdserver/api: enabled capabilities for version 3.3
2020-07-25 15:52:03.477564 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3109" took too long (104.92661ms) to execute
2020-07-25 15:52:03.482145 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:1 size:1662" took too long (105.960959ms) to execute
2020-07-25 15:52:03.689826 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-7wckf\" " with result "range_response_count:1 size:1435" took too long (107.73125ms) to execute
2020-07-25 15:52:03.873177 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-public/default\" " with result "range_response_count:1 size:181" took too long (280.633811ms) to execute
2020-07-25 15:52:03.981359 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-node-lease/default\" " with result "range_response_count:1 size:189" took too long (117.226066ms) to execute
2020-07-25 15:52:03.981662 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-7wckf\" " with result "range_response_count:1 size:1435" took too long (198.705136ms) to execute
2020-07-25 15:52:04.082825 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:1152" took too long (110.957472ms) to execute
2020-07-25 15:52:04.082900 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:5 size:7662" took too long (205.247296ms) to execute
2020-07-25 15:52:04.089630 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (106.973712ms) to execute
2020-07-25 15:55:09.919137 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:448" took too long (207.235434ms) to execute
2020-07-25 15:55:09.920408 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (207.941196ms) to execute
2020-07-25 15:57:05.151287 W | etcdserver: read-only range request "key:\"/registry/persistentvolumes\" range_end:\"/registry/persistentvolumet\" count_only:true " with result "range_response_count:0 size:5" took too long (103.44308ms) to execute
2020-07-25 16:01:42.086639 I | mvcc: store.index: compact 759
2020-07-25 16:01:42.089306 I | mvcc: finished scheduled compaction at 759 (took 2.09443ms)
2020-07-25 16:06:42.107904 I | mvcc: store.index: compact 1459
2020-07-25 16:06:42.111215 I | mvcc: finished scheduled compaction at 1459 (took 2.546204ms)

==> kernel <==
 16:08:03 up  3:17,  0 users,  load average: 0.55, 1.18, 1.93
Linux minikube 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [ab4f3b26f16b018122fe2348b9fb8e1cb7c005c3b7d0c6ee685595271a5bd06c] <==
W0725 15:51:42.997824       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0725 15:51:42.999681       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0725 15:51:43.005634       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0725 15:51:43.015617       1 client.go:357] parsed scheme: "endpoint"
I0725 15:51:43.015639       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
W0725 15:51:43.017874       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0725 15:51:43.017886       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0725 15:51:43.024130       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0725 15:51:43.024143       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0725 15:51:43.025183       1 client.go:357] parsed scheme: "endpoint"
I0725 15:51:43.025196       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0725 15:51:43.029723       1 client.go:357] parsed scheme: "endpoint"
I0725 15:51:43.029740       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0725 15:51:44.288365       1 secure_serving.go:123] Serving securely on [::]:8443
I0725 15:51:44.288515       1 available_controller.go:383] Starting AvailableConditionController
I0725 15:51:44.288555       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0725 15:51:44.288578       1 crd_finalizer.go:274] Starting CRDFinalizer
I0725 15:51:44.288630       1 controller.go:85] Starting OpenAPI controller
I0725 15:51:44.288669       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0725 15:51:44.288702       1 naming_controller.go:288] Starting NamingConditionController
I0725 15:51:44.288586       1 controller.go:81] Starting OpenAPI AggregationController
I0725 15:51:44.288755       1 establishing_controller.go:73] Starting EstablishingController
I0725 15:51:44.289177       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0725 15:51:44.289197       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0725 15:51:44.289279       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0725 15:51:44.289288       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0725 15:51:44.289319       1 autoregister_controller.go:140] Starting autoregister controller
I0725 15:51:44.289355       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0725 15:51:44.289397       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0725 15:51:44.289433       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
E0725 15:51:44.290280       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I0725 15:51:44.388678       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0725 15:51:44.389478       1 cache.go:39] Caches are synced for autoregister controller
I0725 15:51:44.389562       1 shared_informer.go:204] Caches are synced for crd-autoregister 
I0725 15:51:44.389598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0725 15:51:44.463517       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0725 15:51:45.288666       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0725 15:51:45.288716       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0725 15:51:45.288754       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0725 15:51:45.296983       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0725 15:51:45.319810       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0725 15:51:45.319851       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0725 15:51:45.565566       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0725 15:51:45.585551       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0725 15:51:45.718170       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0725 15:51:45.719748       1 controller.go:606] quota admission added evaluator for: endpoints
I0725 15:51:46.866059       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0725 15:51:46.920473       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0725 15:51:47.209434       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0725 15:52:03.464423       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0725 15:52:03.467703       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0725 15:58:37.568027       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I0725 15:59:01.143118       1 trace.go:116] Trace[1662752234]: "Get" url:/api/v1/namespaces/test-vector/pods/vector-r6ncw/log (started: 2020-07-25 15:58:40.79442484 +0000 UTC m=+420.807658187) (total time: 20.348667033s):
Trace[1662752234]: [20.348665778s] [20.347732899s] Transformed response object
I0725 15:59:44.048005       1 trace.go:116] Trace[1977696253]: "Get" url:/api/v1/namespaces/test-vector/pods/vector-hksgm/log (started: 2020-07-25 15:59:36.956456049 +0000 UTC m=+476.969689397) (total time: 7.091525302s):
Trace[1977696253]: [7.091524306s] [7.090671915s] Transformed response object
I0725 16:01:39.234547       1 trace.go:116] Trace[1269984085]: "Get" url:/api/v1/namespaces/test-vector/pods/vector-bkcxl/log (started: 2020-07-25 16:01:29.244481739 +0000 UTC m=+589.257715089) (total time: 9.990047856s):
Trace[1269984085]: [9.99004684s] [9.989184318s] Transformed response object
I0725 16:07:48.383196       1 trace.go:116] Trace[968729030]: "Get" url:/api/v1/namespaces/test-vector/pods/vector-xgvhq/log (started: 2020-07-25 16:05:38.94446336 +0000 UTC m=+838.957696707) (total time: 2m9.438655661s):
Trace[968729030]: [2m9.438652384s] [2m9.437614647s] Transformed response object

==> kube-controller-manager [ed820242b4142eba2626744c3412c03bc6ba7be4d0601ee2f88527900144ab25] <==
W0725 16:05:31.446136       1 controllermanager.go:526] Skipping "route"
I0725 16:05:31.446206       1 node_lifecycle_controller.go:570] Starting node controller
I0725 16:05:31.446238       1 shared_informer.go:197] Waiting for caches to sync for taint
W0725 16:05:31.459061       1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0725 16:05:31.460342       1 controllermanager.go:534] Started "attachdetach"
I0725 16:05:31.460439       1 attach_detach_controller.go:334] Starting attach detach controller
I0725 16:05:31.460495       1 shared_informer.go:197] Waiting for caches to sync for attach detach
E0725 16:05:31.475882       1 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0725 16:05:31.475902       1 controllermanager.go:526] Skipping "service"
I0725 16:05:31.481221       1 controllermanager.go:534] Started "replicationcontroller"
I0725 16:05:31.481329       1 replica_set.go:182] Starting replicationcontroller controller
I0725 16:05:31.481341       1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
I0725 16:05:31.486108       1 controllermanager.go:534] Started "deployment"
I0725 16:05:31.486201       1 deployment_controller.go:152] Starting deployment controller
I0725 16:05:31.486213       1 shared_informer.go:197] Waiting for caches to sync for deployment
I0725 16:05:31.490662       1 controllermanager.go:534] Started "ttl"
I0725 16:05:31.491235       1 ttl_controller.go:116] Starting TTL controller
I0725 16:05:31.491254       1 shared_informer.go:197] Waiting for caches to sync for TTL
I0725 16:05:31.491393       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0725 16:05:31.492691       1 shared_informer.go:197] Waiting for caches to sync for resource quota
W0725 16:05:31.500550       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0725 16:05:31.509902       1 shared_informer.go:204] Caches are synced for namespace 
I0725 16:05:31.521203       1 shared_informer.go:204] Caches are synced for node 
I0725 16:05:31.521220       1 range_allocator.go:172] Starting range CIDR allocator
I0725 16:05:31.521223       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0725 16:05:31.521227       1 shared_informer.go:204] Caches are synced for cidrallocator 
I0725 16:05:31.526032       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0725 16:05:31.531124       1 shared_informer.go:204] Caches are synced for PV protection 
I0725 16:05:31.537597       1 shared_informer.go:204] Caches are synced for service account 
I0725 16:05:31.537670       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0725 16:05:31.546500       1 shared_informer.go:204] Caches are synced for taint 
I0725 16:05:31.546642       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0725 16:05:31.546739       1 node_lifecycle_controller.go:1464] Initializing eviction metric for zone: 
W0725 16:05:31.546922       1 node_lifecycle_controller.go:1076] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0725 16:05:31.547043       1 node_lifecycle_controller.go:1280] Controller detected that zone  is now in state Normal.
I0725 16:05:31.547208       1 event.go:274] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"7c063598-581a-4285-b7c6-50088c59be8a", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0725 16:05:31.552313       1 shared_informer.go:204] Caches are synced for GC 
I0725 16:05:31.563516       1 shared_informer.go:204] Caches are synced for job 
I0725 16:05:31.564227       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0725 16:05:31.566562       1 shared_informer.go:204] Caches are synced for HPA 
I0725 16:05:31.591390       1 shared_informer.go:204] Caches are synced for TTL 
I0725 16:05:31.637256       1 shared_informer.go:204] Caches are synced for certificate 
I0725 16:05:31.681570       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0725 16:05:31.687351       1 shared_informer.go:204] Caches are synced for certificate 
I0725 16:05:31.860735       1 shared_informer.go:204] Caches are synced for attach detach 
I0725 16:05:31.887829       1 shared_informer.go:204] Caches are synced for expand 
I0725 16:05:31.913952       1 shared_informer.go:204] Caches are synced for persistent volume 
I0725 16:05:31.920841       1 shared_informer.go:204] Caches are synced for PVC protection 
I0725 16:05:31.985333       1 shared_informer.go:204] Caches are synced for endpoint 
I0725 16:05:32.035000       1 shared_informer.go:204] Caches are synced for stateful set 
I0725 16:05:32.035861       1 shared_informer.go:204] Caches are synced for daemon sets 
I0725 16:05:32.036247       1 shared_informer.go:204] Caches are synced for disruption 
I0725 16:05:32.036293       1 disruption.go:338] Sending events to api server.
I0725 16:05:32.047732       1 shared_informer.go:204] Caches are synced for garbage collector 
I0725 16:05:32.047772       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0725 16:05:32.086492       1 shared_informer.go:204] Caches are synced for deployment 
I0725 16:05:32.091770       1 shared_informer.go:204] Caches are synced for garbage collector 
I0725 16:05:32.093036       1 shared_informer.go:204] Caches are synced for resource quota 
I0725 16:05:32.096192       1 shared_informer.go:204] Caches are synced for resource quota 
I0725 16:05:37.033721       1 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"test-vector", Name:"vector", UID:"098ba72d-1513-4079-bc6b-eeba555b2c3d", APIVersion:"apps/v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: vector-xgvhq

==> kube-proxy [4bcf22116bd6ec290b0481c6ae33307f083bb1205b6d254f5729112dfea8e296] <==
W0725 15:52:13.112237       1 server_others.go:330] Flag proxy-mode="" unknown, assuming iptables proxy
I0725 15:52:13.195677       1 node.go:135] Successfully retrieved node IP: 172.17.0.2
I0725 15:52:13.195743       1 server_others.go:150] Using iptables Proxier.
I0725 15:52:13.196914       1 server.go:529] Version: v1.16.13
I0725 15:52:13.198500       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0725 15:52:13.199250       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0725 15:52:13.199428       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0725 15:52:13.200502       1 config.go:313] Starting service config controller
I0725 15:52:13.200520       1 shared_informer.go:197] Waiting for caches to sync for service config
I0725 15:52:13.200551       1 config.go:131] Starting endpoints config controller
I0725 15:52:13.200575       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0725 15:52:13.307451       1 shared_informer.go:204] Caches are synced for service config 
I0725 15:52:13.307805       1 shared_informer.go:204] Caches are synced for endpoints config 

==> kube-scheduler [8d2921e4bc6dffd31cb83db2f83555151de8027d023c3d6f04084bc798581113] <==
I0725 16:05:03.332533       1 serving.go:319] Generated self-signed cert in-memory
I0725 16:05:03.629241       1 server.go:148] Version: v1.16.13
I0725 16:05:03.629279       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
I0725 16:05:03.634485       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
I0725 16:05:03.734975       1 leaderelection.go:241] attempting to acquire leader lease  kube-system/kube-scheduler...
I0725 16:05:22.160136       1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2020-07-25 15:50:40 UTC, end at Sat 2020-07-25 16:08:03 UTC. --
Jul 25 16:04:07 minikube kubelet[2247]: E0725 16:04:07.804606    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:04:13 minikube kubelet[2247]: E0725 16:04:13.804191    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:04:21 minikube kubelet[2247]: E0725 16:04:21.804669    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:04:24 minikube kubelet[2247]: E0725 16:04:24.804471    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:04:36 minikube kubelet[2247]: E0725 16:04:36.803842    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:04:36 minikube kubelet[2247]: E0725 16:04:36.804190    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:04:37 minikube kubelet[2247]: E0725 16:04:37.915899    2247 manager.go:1084] Failed to create existing container: /kubepods/besteffort/pod6edbcc6c66e4b5af53005f91bf0bc1fd/crio-0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Error finding container 0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Status 404 returned error &{%!s(*http.body=&{0xc001816b20 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:04:37 minikube kubelet[2247]: E0725 16:04:37.916856    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod8a880815d2d927e0323ade0f562a4273/crio-b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Error finding container b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Status 404 returned error &{%!s(*http.body=&{0xc00181b080 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:04:37 minikube kubelet[2247]: E0725 16:04:37.920854    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod1d01d1f4456fbb5f7de180550f8a8e4a/crio-be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Error finding container be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Status 404 returned error &{%!s(*http.body=&{0xc001a02640 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:04:37 minikube kubelet[2247]: E0725 16:04:37.925389    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod6e92d66ef7df537311698cf04c24cea7/crio-bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Error finding container bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Status 404 returned error &{%!s(*http.body=&{0xc001a11600 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:04:49 minikube kubelet[2247]: E0725 16:04:49.803854    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:04:50 minikube kubelet[2247]: E0725 16:04:50.804329    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:05:37 minikube kubelet[2247]: I0725 16:05:37.134057    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib" (UniqueName: "kubernetes.io/host-path/16d7cc1d-ea6f-47cb-8f3b-d08a10205273-var-lib") pod "vector-xgvhq" (UID: "16d7cc1d-ea6f-47cb-8f3b-d08a10205273")
Jul 25 16:05:37 minikube kubelet[2247]: I0725 16:05:37.134187    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "var-log" (UniqueName: "kubernetes.io/host-path/16d7cc1d-ea6f-47cb-8f3b-d08a10205273-var-log") pod "vector-xgvhq" (UID: "16d7cc1d-ea6f-47cb-8f3b-d08a10205273")
Jul 25 16:05:37 minikube kubelet[2247]: I0725 16:05:37.134268    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "data-dir" (UniqueName: "kubernetes.io/host-path/16d7cc1d-ea6f-47cb-8f3b-d08a10205273-data-dir") pod "vector-xgvhq" (UID: "16d7cc1d-ea6f-47cb-8f3b-d08a10205273")
Jul 25 16:05:37 minikube kubelet[2247]: I0725 16:05:37.134454    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "config-dir" (UniqueName: "kubernetes.io/projected/16d7cc1d-ea6f-47cb-8f3b-d08a10205273-config-dir") pod "vector-xgvhq" (UID: "16d7cc1d-ea6f-47cb-8f3b-d08a10205273")
Jul 25 16:05:37 minikube kubelet[2247]: I0725 16:05:37.134602    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-74gsb" (UniqueName: "kubernetes.io/secret/16d7cc1d-ea6f-47cb-8f3b-d08a10205273-default-token-74gsb") pod "vector-xgvhq" (UID: "16d7cc1d-ea6f-47cb-8f3b-d08a10205273")
Jul 25 16:05:37 minikube kubelet[2247]: E0725 16:05:37.926339    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod6e92d66ef7df537311698cf04c24cea7/crio-bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Error finding container bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Status 404 returned error &{%!s(*http.body=&{0xc000e5a440 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:05:37 minikube kubelet[2247]: E0725 16:05:37.928166    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod8a880815d2d927e0323ade0f562a4273/crio-b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Error finding container b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Status 404 returned error &{%!s(*http.body=&{0xc0012e0860 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:05:37 minikube kubelet[2247]: E0725 16:05:37.938460    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod1d01d1f4456fbb5f7de180550f8a8e4a/crio-be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Error finding container be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Status 404 returned error &{%!s(*http.body=&{0xc0014027e0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:05:37 minikube kubelet[2247]: E0725 16:05:37.942174    2247 manager.go:1084] Failed to create existing container: /kubepods/besteffort/pod6edbcc6c66e4b5af53005f91bf0bc1fd/crio-0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Error finding container 0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Status 404 returned error &{%!s(*http.body=&{0xc00113be60 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:05:38 minikube kubelet[2247]: I0725 16:05:38.840756    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-stm22" (UniqueName: "kubernetes.io/secret/b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f-default-token-stm22") pod "test-pod-excluded" (UID: "b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f")
Jul 25 16:05:38 minikube kubelet[2247]: I0725 16:05:38.941000    2247 reconciler.go:208] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-stm22" (UniqueName: "kubernetes.io/secret/fe8d5517-b65b-4c0c-bf07-14e636578600-default-token-stm22") pod "test-pod-control" (UID: "fe8d5517-b65b-4c0c-bf07-14e636578600")
Jul 25 16:05:40 minikube kubelet[2247]: I0725 16:05:40.980889    2247 reconciler.go:182] operationExecutor.UnmountVolume started for volume "default-token-stm22" (UniqueName: "kubernetes.io/secret/fe8d5517-b65b-4c0c-bf07-14e636578600-default-token-stm22") pod "fe8d5517-b65b-4c0c-bf07-14e636578600" (UID: "fe8d5517-b65b-4c0c-bf07-14e636578600")
Jul 25 16:05:40 minikube kubelet[2247]: I0725 16:05:40.981103    2247 reconciler.go:182] operationExecutor.UnmountVolume started for volume "default-token-stm22" (UniqueName: "kubernetes.io/secret/b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f-default-token-stm22") pod "b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f" (UID: "b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f")
Jul 25 16:05:40 minikube kubelet[2247]: I0725 16:05:40.986439    2247 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f-default-token-stm22" (OuterVolumeSpecName: "default-token-stm22") pod "b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f" (UID: "b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f"). InnerVolumeSpecName "default-token-stm22". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 25 16:05:40 minikube kubelet[2247]: I0725 16:05:40.986744    2247 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8d5517-b65b-4c0c-bf07-14e636578600-default-token-stm22" (OuterVolumeSpecName: "default-token-stm22") pod "fe8d5517-b65b-4c0c-bf07-14e636578600" (UID: "fe8d5517-b65b-4c0c-bf07-14e636578600"). InnerVolumeSpecName "default-token-stm22". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 25 16:05:41 minikube kubelet[2247]: I0725 16:05:41.081601    2247 reconciler.go:302] Volume detached for volume "default-token-stm22" (UniqueName: "kubernetes.io/secret/b8cfa1a1-73b7-4bd3-8889-e0d5b5128a0f-default-token-stm22") on node "minikube" DevicePath ""
Jul 25 16:05:41 minikube kubelet[2247]: I0725 16:05:41.081698    2247 reconciler.go:302] Volume detached for volume "default-token-stm22" (UniqueName: "kubernetes.io/secret/fe8d5517-b65b-4c0c-bf07-14e636578600-default-token-stm22") on node "minikube" DevicePath ""
Jul 25 16:05:41 minikube kubelet[2247]: W0725 16:05:41.817285    2247 pod_container_deletor.go:75] Container "69fd6602a3f0c9f6eb848cebc96c0980c4add524ec8bce42d3ccb81803afcde8" not found in pod's containers
Jul 25 16:05:41 minikube kubelet[2247]: W0725 16:05:41.820399    2247 pod_container_deletor.go:75] Container "73633e84a738065ed3ef8f4f00b6b619f56a38a19e6ed780d267fe30cd6f6b80" not found in pod's containers
Jul 25 16:06:30 minikube kubelet[2247]: E0725 16:06:30.351747    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:06:30 minikube kubelet[2247]: E0725 16:06:30.938274    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:06:31 minikube kubelet[2247]: E0725 16:06:31.421999    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:06:31 minikube kubelet[2247]: E0725 16:06:31.947021    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:06:37 minikube kubelet[2247]: E0725 16:06:37.910791    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod1d01d1f4456fbb5f7de180550f8a8e4a/crio-be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Error finding container be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Status 404 returned error &{%!s(*http.body=&{0xc001b2e9a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:06:37 minikube kubelet[2247]: E0725 16:06:37.911319    2247 manager.go:1084] Failed to create existing container: /kubepods/besteffort/pod6edbcc6c66e4b5af53005f91bf0bc1fd/crio-0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Error finding container 0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Status 404 returned error &{%!s(*http.body=&{0xc00090d0a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:06:37 minikube kubelet[2247]: E0725 16:06:37.916984    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod8a880815d2d927e0323ade0f562a4273/crio-b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Error finding container b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Status 404 returned error &{%!s(*http.body=&{0xc001c7c500 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:06:37 minikube kubelet[2247]: E0725 16:06:37.920621    2247 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8cc93afd4cf90b48238a54fe3479b8b44909b67b61f2ada3c3e49a74aa06580a" to get inode usage: stat /var/lib/containers/storage/overlay/8cc93afd4cf90b48238a54fe3479b8b44909b67b61f2ada3c3e49a74aa06580a: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-minikube_6e92d66ef7df537311698cf04c24cea7/kube-controller-manager/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-minikube_6e92d66ef7df537311698cf04c24cea7/kube-controller-manager/6.log: no such file or directory
Jul 25 16:06:37 minikube kubelet[2247]: E0725 16:06:37.921189    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod6e92d66ef7df537311698cf04c24cea7/crio-bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Error finding container bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Status 404 returned error &{%!s(*http.body=&{0xc001cae780 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:06:42 minikube kubelet[2247]: E0725 16:06:42.802841    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:06:45 minikube kubelet[2247]: E0725 16:06:45.802809    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:06:56 minikube kubelet[2247]: E0725 16:06:56.803648    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:06:57 minikube kubelet[2247]: E0725 16:06:57.803992    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:07:08 minikube kubelet[2247]: E0725 16:07:08.803986    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:07:08 minikube kubelet[2247]: E0725 16:07:08.804519    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:07:19 minikube kubelet[2247]: E0725 16:07:19.804910    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:07:19 minikube kubelet[2247]: E0725 16:07:19.805235    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:07:30 minikube kubelet[2247]: E0725 16:07:30.804178    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:07:31 minikube kubelet[2247]: E0725 16:07:31.804567    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:07:37 minikube kubelet[2247]: E0725 16:07:37.891316    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod8a880815d2d927e0323ade0f562a4273/crio-b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Error finding container b39ab738fe66a84685e27d486fea5fab88d608ba60888dc317a823b96f905098: Status 404 returned error &{%!s(*http.body=&{0xc001848e40 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:07:37 minikube kubelet[2247]: E0725 16:07:37.895296    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod1d01d1f4456fbb5f7de180550f8a8e4a/crio-be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Error finding container be1d269faa1f1c1a1ec2946c1d80e731bc8da9ce700280fa89872e7f8abe13b6: Status 404 returned error &{%!s(*http.body=&{0xc0018b2260 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:07:37 minikube kubelet[2247]: E0725 16:07:37.895645    2247 manager.go:1084] Failed to create existing container: /kubepods/besteffort/pod6edbcc6c66e4b5af53005f91bf0bc1fd/crio-0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Error finding container 0c94e12aff4efbcef8afff31eafb87481351963649572678292fd581cf473736: Status 404 returned error &{%!s(*http.body=&{0xc00184fe80 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:07:37 minikube kubelet[2247]: E0725 16:07:37.899077    2247 manager.go:1084] Failed to create existing container: /kubepods/burstable/pod6e92d66ef7df537311698cf04c24cea7/crio-bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Error finding container bcb668fa8b34d502657101986695aec1096f72cf7f2ca234671e88a03374da66: Status 404 returned error &{%!s(*http.body=&{0xc0018c3160 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f810) %!s(func() error=0x74f7a0)}
Jul 25 16:07:41 minikube kubelet[2247]: E0725 16:07:41.804335    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:07:45 minikube kubelet[2247]: E0725 16:07:45.807705    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"
Jul 25 16:07:48 minikube kubelet[2247]: I0725 16:07:48.390805    2247 log.go:172] http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader (httplog.go:197)
Jul 25 16:07:48 minikube kubelet[2247]: W0725 16:07:48.774736    2247 status_manager.go:545] Failed to update status for pod "test-pod-control_test-vector-test-pod(fe8d5517-b65b-4c0c-bf07-14e636578600)": failed to patch status "{\"metadata\":{\"uid\":\"fe8d5517-b65b-4c0c-bf07-14e636578600\"},\"status\":{\"containerStatuses\":[{\"containerID\":\"cri-o://bf5f9b4c2d48b318c64fd276fa352779cc7bccdf83d3c3db3a25472a35820afc\",\"image\":\"docker.io/library/busybox:1.28\",\"imageID\":\"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47\",\"lastState\":{},\"name\":\"test-pod-control\",\"ready\":false,\"restartCount\":0,\"started\":false,\"state\":{\"terminated\":{\"exitCode\":0,\"finishedAt\":null,\"startedAt\":null}}}]}}" for pod "test-vector-test-pod"/"test-pod-control": pods "test-pod-control" not found
Jul 25 16:07:55 minikube kubelet[2247]: E0725 16:07:55.804781    2247 pod_workers.go:191] Error syncing pod 6e92d66ef7df537311698cf04c24cea7 ("kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(6e92d66ef7df537311698cf04c24cea7)"
Jul 25 16:07:58 minikube kubelet[2247]: E0725 16:07:58.805393    2247 pod_workers.go:191] Error syncing pod 1d01d1f4456fbb5f7de180550f8a8e4a ("kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(1d01d1f4456fbb5f7de180550f8a8e4a)"

==> storage-provisioner [3ee3b95dc81160a4473092eb40ed86151ef95889294d8bd609a3725678e69d39] <==
F0725 15:52:16.364289       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: getsockopt: network is unreachable

==> storage-provisioner [964f8658a1e9d37ab67ca4d186d910baef32f6388e8025a28e0a6f8ea9fadb5e] <==

@sharifelgamal
Collaborator

So I was able to reproduce this fairly easily on macOS with the docker driver and docker container runtime on minikube 1.12.2, so this is pretty clearly an issue with this version of Kubernetes. Inspecting the crashing pods gives:

 Warning  Unhealthy  2m5s (x16 over 4m45s)  kubelet, 1.16.13  Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
  Normal   Killing    2m5s (x2 over 3m35s)   kubelet, 1.16.13  Container kube-scheduler failed liveness probe, will be restarted

I have no idea why the pods would fail a health check, but it's worth investigating whether other k8s versions have this issue.
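For anyone digging further, this is a sketch of the inspection steps described above (it assumes a running minikube cluster with `kubectl` and `minikube` on PATH, and uses the pod name and health endpoint from the events in this thread; it exits cleanly if the tools are missing):

```shell
# Sketch only: assumes a running minikube cluster; no-ops otherwise.
if command -v kubectl >/dev/null 2>&1 && command -v minikube >/dev/null 2>&1; then
  # Show recent events for the scheduler pod (liveness-probe failures show up here).
  kubectl -n kube-system describe pod kube-scheduler-minikube | tail -n 20
  # Query the scheduler's health endpoint from inside the node, as the kubelet does.
  minikube ssh -- "curl -s http://127.0.0.1:10251/healthz"
else
  echo "kubectl or minikube not found; skipping cluster inspection"
fi
```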

@sharifelgamal sharifelgamal added kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. area/kubernetes-versions Improving support for versions of Kubernetes labels Aug 12, 2020
@medyagh
Member

medyagh commented Aug 12, 2020

This seems to be a Kubernetes issue, and I recommend people use v1.16.12 instead.
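With that pin, the reproduction command from the top of the issue becomes (a sketch: same flags as the original report, only the version changed; guarded so it no-ops on machines without minikube installed):

```shell
# Workaround sketch: pin the Kubernetes version one patch release back.
if command -v minikube >/dev/null 2>&1; then
  minikube start --container-runtime=crio --kubernetes-version v1.16.12
else
  echo "minikube not installed; run the pinned command on a machine that has it"
fi
```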

@medyagh medyagh closed this as completed Aug 12, 2020
@MOZGIII
Author

MOZGIII commented Aug 12, 2020

The relevant issue at k8s: kubernetes/kubernetes#93194
