vbox 1.3.1: waiting for k8s-app=kube-proxy: timed out waiting for the condition #5238

Closed
jbinoos opened this issue Aug 31, 2019 · 3 comments

Labels
co/kube-proxy: issues relating to kube-proxy in some way
co/virtualbox
ev/kube-proxy-pod-timeout: timeout waiting for kube-proxy to schedule
help wanted: denotes an issue that needs help from a contributor; must meet "help wanted" guidelines
kind/bug: categorizes issue or PR as related to a bug
priority/awaiting-more-evidence: lowest priority; possibly useful, but not yet enough support to actually get it done

Comments


jbinoos commented Aug 31, 2019

The exact command to reproduce the issue:

minikube start --cpus 20 --memory 32768

The full output of the command that failed:

😄 minikube v1.3.1 on Ubuntu 18.04
🔥 Creating virtualbox VM (CPUs=20, Memory=32768MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🚜 Pulling images ...
🚀 Launching Kubernetes ...
⌛ Waiting for: apiserver proxy
💣 Wait failed: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new/choose

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Sat 2019-08-31 17:48:52 UTC, end at Sat 2019-08-31 18:05:11 UTC. --
Aug 31 17:54:27 minikube dockerd[2800]: time="2019-08-31T17:54:27.501772882Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f06b2eddfac3e4420ded1e978d2a50d798ed6334682c9cb852f143b8e723ff5/shim.sock" debug=false pid=4758
Aug 31 17:54:27 minikube dockerd[2800]: time="2019-08-31T17:54:27.577048333Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0162be4853f4f3e66ab170af12d986e1c5d588cbbf66eefbf90fc96c37eeab4a/shim.sock" debug=false pid=4774
Aug 31 17:55:50 minikube dockerd[2800]: time="2019-08-31T17:55:50.729445918Z" level=info msg="shim reaped" id=db0b4dd55cffa5370201f2e90ca6211fb0f8598f7303a75fcfea6fb9018c673a
Aug 31 17:55:50 minikube dockerd[2800]: time="2019-08-31T17:55:50.768020871Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 31 17:55:50 minikube dockerd[2800]: time="2019-08-31T17:55:50.768183921Z" level=warning msg="db0b4dd55cffa5370201f2e90ca6211fb0f8598f7303a75fcfea6fb9018c673a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/db0b4dd55cffa5370201f2e90ca6211fb0f8598f7303a75fcfea6fb9018c673a/mounts/shm, flags: 0x2: no such file or directory"
Aug 31 17:56:01 minikube dockerd[2800]: time="2019-08-31T17:56:01.555023323Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0d5e423f7dabe8cadf31656c74f71507032279fda4659c6f61d594d22d4b5f03/shim.sock" debug=false pid=5762
Aug 31 17:58:13 minikube dockerd[2800]: time="2019-08-31T17:58:13.102728791Z" level=info msg="shim reaped" id=0d5e423f7dabe8cadf31656c74f71507032279fda4659c6f61d594d22d4b5f03
Aug 31 17:58:13 minikube dockerd[2800]: time="2019-08-31T17:58:13.205841423Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 31 17:58:13 minikube dockerd[2800]: time="2019-08-31T17:58:13.206536705Z" level=warning msg="0d5e423f7dabe8cadf31656c74f71507032279fda4659c6f61d594d22d4b5f03 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0d5e423f7dabe8cadf31656c74f71507032279fda4659c6f61d594d22d4b5f03/mounts/shm, flags: 0x2: no such file or directory"
Aug 31 17:58:27 minikube dockerd[2800]: time="2019-08-31T17:58:27.179404026Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e094536185226356a05c4e2f3e2457fe6259608b55a90ac1c4c050bba3d00cd/shim.sock" debug=false pid=7421
Aug 31 17:59:56 minikube dockerd[2800]: time="2019-08-31T17:59:56.018897323Z" level=info msg="shim reaped" id=0e094536185226356a05c4e2f3e2457fe6259608b55a90ac1c4c050bba3d00cd
Aug 31 17:59:56 minikube dockerd[2800]: time="2019-08-31T17:59:56.194948879Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 31 17:59:56 minikube dockerd[2800]: time="2019-08-31T17:59:56.231594586Z" level=warning msg="0e094536185226356a05c4e2f3e2457fe6259608b55a90ac1c4c050bba3d00cd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/0e094536185226356a05c4e2f3e2457fe6259608b55a90ac1c4c050bba3d00cd/mounts/shm, flags: 0x2: no such file or directory"
Aug 31 18:00:24 minikube dockerd[2800]: time="2019-08-31T18:00:24.692290794Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f25a78c3b3e3b40989b4f44e1006df49561aa3143debc732e0ed9642a01a9f58/shim.sock" debug=false pid=8914
Aug 31 18:02:08 minikube dockerd[2800]: time="2019-08-31T18:02:08.299744428Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357/shim.sock" debug=false pid=10094
Aug 31 18:02:17 minikube dockerd[2800]: time="2019-08-31T18:02:17.230489068Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e6929bdf507644fb23f9fc47efe1bb5a29bd61e495e113a14a6ae91dd6319542/shim.sock" debug=false pid=10188
Aug 31 18:02:18 minikube dockerd[2800]: time="2019-08-31T18:02:18.005911469Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/463f247f868a20a7911acdf9ee4969fea4c05269cde03a47b43d790f6964cc97/shim.sock" debug=false pid=10200
Aug 31 18:02:35 minikube dockerd[2800]: time="2019-08-31T18:02:35.626796134Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/78c5e441f8ececab410b2e0532deae910ec953bd27bf221d48e9460ede6b2955/shim.sock" debug=false pid=10380
Aug 31 18:03:02 minikube dockerd[2800]: time="2019-08-31T18:03:02.911669427Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1/shim.sock" debug=false pid=10614
Aug 31 18:03:03 minikube dockerd[2800]: time="2019-08-31T18:03:03.936106455Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c1f1ffb20ce45f952cd9d15375205c726eea3a2d45e190d37927d26319615dfd/shim.sock" debug=false pid=10636
Aug 31 18:03:48 minikube dockerd[2800]: time="2019-08-31T18:03:48.076455944Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/307f0bb16457de4c57eb8de5ae5688532743a1d67f8ea72655aa8c0954651e56/shim.sock" debug=false pid=11022
Aug 31 18:03:55 minikube dockerd[2800]: time="2019-08-31T18:03:55.966384628Z" level=info msg="shim reaped" id=c1f1ffb20ce45f952cd9d15375205c726eea3a2d45e190d37927d26319615dfd
Aug 31 18:03:56 minikube dockerd[2800]: time="2019-08-31T18:03:56.084093978Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 31 18:03:56 minikube dockerd[2800]: time="2019-08-31T18:03:56.084314353Z" level=warning msg="c1f1ffb20ce45f952cd9d15375205c726eea3a2d45e190d37927d26319615dfd cleanup: failed to unmount IPC: umount /var/lib/docker/containers/c1f1ffb20ce45f952cd9d15375205c726eea3a2d45e190d37927d26319615dfd/mounts/shm, flags: 0x2: no such file or directory"
Aug 31 18:03:58 minikube dockerd[2800]: time="2019-08-31T18:03:58.353621742Z" level=info msg="shim reaped" id=d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1
Aug 31 18:03:59 minikube dockerd[2800]: time="2019-08-31T18:03:59.140267887Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 31 18:03:59 minikube dockerd[2800]: time="2019-08-31T18:03:59.191159839Z" level=warning msg="d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1/mounts/shm, flags: 0x2: no such file or directory"
Aug 31 18:04:13 minikube dockerd[2800]: time="2019-08-31T18:04:13.967437769Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8dfac0b91705b2b060cb97096e6b0cc1adb6593f7da50bed53aa3189bba66de8/shim.sock" debug=false pid=11305
Aug 31 18:04:19 minikube dockerd[2800]: time="2019-08-31T18:04:19.574111941Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd72a0a512971d21e471ac0018623fede3073c0017bd0c4d0838375d062da0d8/shim.sock" debug=false pid=11360
Aug 31 18:04:19 minikube dockerd[2800]: time="2019-08-31T18:04:19.880856996Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/716c97777db5684cdde169240335ed7d2e56dd1a50d556dcad55934b3e2d36f5/shim.sock" debug=false pid=11364

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
716c97777db56 eb516548c180f About a minute ago Running coredns 1 463f247f868a2
dd72a0a512971 eb516548c180f About a minute ago Running coredns 1 e6929bdf50764
8dfac0b91705b 4689081edb103 About a minute ago Running storage-provisioner 0 307f0bb16457d
78c5e441f8ece 167bbf6c93388 2 minutes ago Running kube-proxy 0 9db6d8c8374f5
f25a78c3b3e3b 9f5df470155d4 4 minutes ago Running kube-controller-manager 3 ca06af7ed5a88
0e09453618522 9f5df470155d4 6 minutes ago Exited kube-controller-manager 2 ca06af7ed5a88
0162be4853f4f 119701e77cbc4 10 minutes ago Running kube-addon-manager 0 7b2d233da72ea
6ff3a5df09b63 88fa9cb27bd2d 10 minutes ago Running kube-scheduler 0 a69230069cce5
5f06b2eddfac3 2c4adeb21b4ff 10 minutes ago Running etcd 0 ff0f4836bd2eb
08a4015c209eb 34a53be6c9a7e 10 minutes ago Running kube-apiserver 0 527cb220693de

==> coredns <==
.:53
2019-08-31T18:04:35.471Z [INFO] CoreDNS-1.3.1
2019-08-31T18:04:35.471Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-08-31T18:04:35.471Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> dmesg <==
[ +0.052620] hpet1: lost 2 rtc interrupts
[ +0.044801] hpet1: lost 2 rtc interrupts
[ +0.486545] hpet1: lost 30 rtc interrupts
[ +0.028421] hpet1: lost 1 rtc interrupts
[ +0.031056] hpet1: lost 1 rtc interrupts
[ +0.031560] hpet1: lost 1 rtc interrupts
[ +0.050064] hpet1: lost 2 rtc interrupts
[ +0.046210] hpet1: lost 2 rtc interrupts
[ +5.981697] hpet_rtc_timer_reinit: 26 callbacks suppressed
[ +0.000001] hpet1: lost 121 rtc interrupts
[ +0.057461] hpet1: lost 3 rtc interrupts
[ +0.053038] hpet1: lost 2 rtc interrupts
[ +0.576911] hpet1: lost 36 rtc interrupts
[ +0.062940] hpet1: lost 3 rtc interrupts
[ +0.074448] hpet1: lost 4 rtc interrupts
[ +0.052770] hpet1: lost 2 rtc interrupts
[ +0.082890] hpet1: lost 5 rtc interrupts
[ +0.432623] hpet1: lost 26 rtc interrupts
[ +0.063720] hpet1: lost 3 rtc interrupts
[ +3.554533] hpet_rtc_timer_reinit: 31 callbacks suppressed
[ +0.000001] hpet1: lost 4 rtc interrupts
[ +0.053821] hpet1: lost 2 rtc interrupts
[ +0.041106] hpet1: lost 2 rtc interrupts
[ +0.047471] hpet1: lost 2 rtc interrupts
[ +0.062202] hpet1: lost 3 rtc interrupts
[ +0.627975] hpet1: lost 39 rtc interrupts
[ +0.074348] hpet1: lost 4 rtc interrupts
[ +0.086416] hpet1: lost 4 rtc interrupts
[ +0.122601] hpet1: lost 7 rtc interrupts
[ +0.436864] hpet1: lost 27 rtc interrupts

==> kernel <==
18:05:18 up 16 min, 0 users, load average: 21.83, 18.96, 10.63
Linux minikube 4.15.0 #1 SMP Fri Aug 2 16:17:56 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"

==> kube-addon-manager <==
WRN: == Error getting default service account, retry in 0.5 second ==
Error from server (NotFound): serviceaccounts "default" not found
error: error executing template "{{with index .secrets 0}}{{.name}}{{end}}": template: output:1:7: executing "output" at <index .secrets 0>: error calling index: index of untyped nil
WRN: == Error getting default service account, retry in 0.5 second ==
error: '/etc/kubernetes/admission-controls': No such file or directory
WRN: == Error getting default service account, retry in 0.5 second ==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
INFO: == Default service account in the kube-system namespace has token default-token-6jz75 ==
INFO: == Entering periodical apply loop at 2019-08-31T18:01:59+00:00 ==
INFO: Leader is minikube
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
storageclass.storage.k8s.io/standard created
INFO: == Kubernetes addon ensure completed at 2019-08-31T18:03:02+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner created
pod/storage-provisioner created
INFO: == Kubernetes addon reconcile completed at 2019-08-31T18:03:43+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-08-31T18:04:06+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-08-31T18:04:43+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-08-31T18:05:05+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==

==> kube-apiserver <==
Trace[706901113]: [878.803008ms] [878.774897ms] About to write a response
I0831 18:04:52.988170 1 trace.go:81] Trace[1307853031]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2019-08-31 18:04:52.481346226 +0000 UTC m=+621.101389844) (total time: 506.784078ms):
Trace[1307853031]: [506.736074ms] [462.281618ms] Transaction committed
I0831 18:04:53.023847 1 trace.go:81] Trace[1149535806]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-08-31 18:04:52.481184316 +0000 UTC m=+621.101227924) (total time: 542.633719ms):
Trace[1149535806]: [542.569165ms] [542.461544ms] Object stored in database
I0831 18:04:53.024972 1 trace.go:81] Trace[706807645]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-08-31 18:04:52.318663533 +0000 UTC m=+620.938707141) (total time: 706.286382ms):
Trace[706807645]: [158.01052ms] [158.01052ms] initial value restored
Trace[706807645]: [706.268936ms] [485.24627ms] Transaction committed
I0831 18:04:53.818358 1 trace.go:81] Trace[990398939]: "Get /api/v1/namespaces/default/endpoints/kubernetes" (started: 2019-08-31 18:04:53.055958506 +0000 UTC m=+621.676002111) (total time: 758.800339ms):
Trace[990398939]: [751.444906ms] [751.429796ms] About to write a response
I0831 18:04:55.001100 1 trace.go:81] Trace[230317308]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-08-31 18:04:54.05305133 +0000 UTC m=+622.673094939) (total time: 948.002292ms):
Trace[230317308]: [947.961697ms] [947.831847ms] Transaction committed
I0831 18:04:55.027372 1 trace.go:81] Trace[1827850440]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-08-31 18:04:54.05297368 +0000 UTC m=+622.673017291) (total time: 974.378692ms):
Trace[1827850440]: [974.330007ms] [974.27941ms] Object stored in database
I0831 18:04:57.964831 1 trace.go:81] Trace[2054731162]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-08-31 18:04:57.327152406 +0000 UTC m=+625.947196010) (total time: 637.641687ms):
Trace[2054731162]: [637.585922ms] [637.53064ms] About to write a response
I0831 18:05:02.150184 1 trace.go:81] Trace[1989702036]: "Get /api/v1/namespaces/default/endpoints/kubernetes" (started: 2019-08-31 18:05:01.639391373 +0000 UTC m=+630.259434972) (total time: 510.759544ms):
Trace[1989702036]: [510.712831ms] [510.704356ms] About to write a response
I0831 18:05:02.786218 1 trace.go:81] Trace[289204577]: "List etcd3: key=/masterleases/, resourceVersion=0, limit: 0, continue: " (started: 2019-08-31 18:05:02.203102975 +0000 UTC m=+630.823146588) (total time: 583.086246ms):
Trace[289204577]: [583.086246ms] [583.086246ms] END
I0831 18:05:03.727447 1 trace.go:81] Trace[461055888]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-08-31 18:05:03.158902794 +0000 UTC m=+631.778946483) (total time: 568.511035ms):
Trace[461055888]: [568.464272ms] [568.428432ms] About to write a response
I0831 18:05:22.356909 1 trace.go:81] Trace[1540684204]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-08-31 18:05:21.4672051 +0000 UTC m=+650.087248727) (total time: 889.659148ms):
Trace[1540684204]: [346.095878ms] [346.095878ms] initial value restored
Trace[1540684204]: [486.642197ms] [140.546319ms] Transaction prepared
Trace[1540684204]: [889.641585ms] [402.999388ms] Transaction committed
I0831 18:05:24.582584 1 trace.go:81] Trace[1991028419]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-08-31 18:05:24.026662991 +0000 UTC m=+652.646706610) (total time: 555.881058ms):
Trace[1991028419]: [555.859697ms] [550.069168ms] Transaction committed
I0831 18:05:24.582645 1 trace.go:81] Trace[1795466329]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-08-31 18:05:24.026496834 +0000 UTC m=+652.646540490) (total time: 556.135681ms):
Trace[1795466329]: [556.104137ms] [555.993113ms] Object stored in database

==> kube-proxy <==
W0831 18:03:23.471157 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0831 18:03:25.404506 1 server_others.go:143] Using iptables Proxier.
W0831 18:03:25.413894 1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0831 18:03:25.435091 1 server.go:534] Version: v1.15.2
I0831 18:03:25.692574 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 655360
I0831 18:03:25.692613 1 conntrack.go:52] Setting nf_conntrack_max to 655360
I0831 18:03:25.693040 1 conntrack.go:83] Setting conntrack hashsize to 163840
I0831 18:03:25.825111 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0831 18:03:25.825169 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0831 18:03:25.843871 1 config.go:96] Starting endpoints config controller
I0831 18:03:25.843898 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0831 18:03:25.843929 1 config.go:187] Starting service config controller
I0831 18:03:25.843938 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0831 18:03:26.069613 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0831 18:03:26.166125 1 controller_utils.go:1036] Caches are synced for service config controller
I0831 18:03:35.211939 1 trace.go:81] Trace[611254350]: "iptables restore" (started: 2019-08-31 18:03:32.619529001 +0000 UTC m=+48.511660086) (total time: 2.59234496s):
Trace[611254350]: [2.59234496s] [2.592293794s] END
I0831 18:04:57.777634 1 trace.go:81] Trace[384034109]: "iptables restore" (started: 2019-08-31 18:04:52.794023654 +0000 UTC m=+128.686154734) (total time: 4.983520126s):
Trace[384034109]: [4.983520126s] [4.983469151s] END

==> kube-scheduler <==
E0831 17:55:29.232853 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0831 17:55:29.232904 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0831 17:55:29.312011 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0831 17:55:29.356700 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0831 17:55:29.369489 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0831 17:55:29.508180 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0831 17:55:29.666572 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0831 17:55:29.832442 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0831 17:55:29.832522 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0831 17:55:29.832583 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0831 17:55:30.285679 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0831 17:55:30.357611 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0831 17:55:30.403566 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0831 17:55:30.446777 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0831 17:55:30.534618 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0831 17:55:30.644963 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0831 17:55:30.736004 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0831 17:55:30.929290 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0831 17:55:30.957899 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0831 17:55:31.032156 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0831 17:55:31.391380 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0831 17:55:31.458845 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0831 17:55:31.488024 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0831 17:55:31.488095 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0831 17:55:31.611292 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0831 17:55:31.740204 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0831 17:55:31.888344 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0831 17:55:32.057224 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0831 17:55:34.087350 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
I0831 17:55:35.281819 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2019-08-31 17:48:52 UTC, end at Sat 2019-08-31 18:05:36 UTC. --
Aug 31 18:01:46 minikube kubelet[4385]: I0831 18:01:45.473527 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-b6z99" (UniqueName: "kubernetes.io/secret/02de5ab4-10e6-42df-a04f-30b6e19383e2-kube-proxy-token-b6z99") pod "kube-proxy-dzt2n" (UID: "02de5ab4-10e6-42df-a04f-30b6e19383e2")
Aug 31 18:01:46 minikube kubelet[4385]: I0831 18:01:45.473557 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/02de5ab4-10e6-42df-a04f-30b6e19383e2-lib-modules") pod "kube-proxy-dzt2n" (UID: "02de5ab4-10e6-42df-a04f-30b6e19383e2")
Aug 31 18:01:46 minikube kubelet[4385]: E0831 18:01:46.653293 4385 configmap.go:203] Couldn't get configMap kube-system/kube-proxy: couldn't propagate object cache: timed out waiting for the condition
Aug 31 18:01:47 minikube kubelet[4385]: E0831 18:01:46.653411 4385 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/02de5ab4-10e6-42df-a04f-30b6e19383e2-kube-proxy" ("02de5ab4-10e6-42df-a04f-30b6e19383e2")" failed. No retries permitted until 2019-08-31 18:01:47.153367517 +0000 UTC m=+478.723512373 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/02de5ab4-10e6-42df-a04f-30b6e19383e2-kube-proxy") pod "kube-proxy-dzt2n" (UID: "02de5ab4-10e6-42df-a04f-30b6e19383e2") : couldn't propagate object cache: timed out waiting for the condition"
Aug 31 18:01:51 minikube kubelet[4385]: I0831 18:01:51.247968 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sz7n5" (UniqueName: "kubernetes.io/secret/ed2e25db-86aa-4a5b-a9fb-b651342f0e6e-coredns-token-sz7n5") pod "coredns-5c98db65d4-rrpz4" (UID: "ed2e25db-86aa-4a5b-a9fb-b651342f0e6e")
Aug 31 18:01:51 minikube kubelet[4385]: I0831 18:01:51.248023 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed2e25db-86aa-4a5b-a9fb-b651342f0e6e-config-volume") pod "coredns-5c98db65d4-rrpz4" (UID: "ed2e25db-86aa-4a5b-a9fb-b651342f0e6e")
Aug 31 18:01:52 minikube kubelet[4385]: E0831 18:01:52.469072 4385 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
Aug 31 18:01:52 minikube kubelet[4385]: E0831 18:01:52.525712 4385 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/ed2e25db-86aa-4a5b-a9fb-b651342f0e6e-config-volume" ("ed2e25db-86aa-4a5b-a9fb-b651342f0e6e")" failed. No retries permitted until 2019-08-31 18:01:53.025667825 +0000 UTC m=+484.595812683 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed2e25db-86aa-4a5b-a9fb-b651342f0e6e-config-volume") pod "coredns-5c98db65d4-rrpz4" (UID: "ed2e25db-86aa-4a5b-a9fb-b651342f0e6e") : couldn't propagate object cache: timed out waiting for the condition"
Aug 31 18:01:54 minikube kubelet[4385]: E0831 18:01:54.233710 4385 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
Aug 31 18:01:54 minikube kubelet[4385]: E0831 18:01:54.484985 4385 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/ed2e25db-86aa-4a5b-a9fb-b651342f0e6e-config-volume" ("ed2e25db-86aa-4a5b-a9fb-b651342f0e6e")" failed. No retries permitted until 2019-08-31 18:01:55.484949685 +0000 UTC m=+487.055094553 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed2e25db-86aa-4a5b-a9fb-b651342f0e6e-config-volume") pod "coredns-5c98db65d4-rrpz4" (UID: "ed2e25db-86aa-4a5b-a9fb-b651342f0e6e") : couldn't propagate object cache: timed out waiting for the condition"
Aug 31 18:01:55 minikube kubelet[4385]: I0831 18:01:55.124889 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d749427-5cc5-4756-8a93-5779c8587e28-config-volume") pod "coredns-5c98db65d4-vwhl2" (UID: "7d749427-5cc5-4756-8a93-5779c8587e28")
Aug 31 18:01:55 minikube kubelet[4385]: I0831 18:01:55.235725 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-sz7n5" (UniqueName: "kubernetes.io/secret/7d749427-5cc5-4756-8a93-5779c8587e28-coredns-token-sz7n5") pod "coredns-5c98db65d4-vwhl2" (UID: "7d749427-5cc5-4756-8a93-5779c8587e28")
Aug 31 18:01:55 minikube kubelet[4385]: E0831 18:01:55.975826 4385 kuberuntime_manager.go:887] PodSandboxStatus of sandbox "9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357" for pod "kube-proxy-dzt2n_kube-system(02de5ab4-10e6-42df-a04f-30b6e19383e2)" error: rpc error: code = Unknown desc = Error: No such container: 9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357
Aug 31 18:01:57 minikube kubelet[4385]: E0831 18:01:57.811903 4385 kuberuntime_manager.go:887] PodSandboxStatus of sandbox "9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357" for pod "kube-proxy-dzt2n_kube-system(02de5ab4-10e6-42df-a04f-30b6e19383e2)" error: rpc error: code = Unknown desc = Error: No such container: 9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357
Aug 31 18:01:58 minikube kubelet[4385]: E0831 18:01:58.013443 4385 kuberuntime_manager.go:887] PodSandboxStatus of sandbox "9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357" for pod "kube-proxy-dzt2n_kube-system(02de5ab4-10e6-42df-a04f-30b6e19383e2)" error: rpc error: code = Unknown desc = Error: No such container: 9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357
Aug 31 18:02:00 minikube kubelet[4385]: W0831 18:02:00.296375 4385 pod_container_deletor.go:75] Container "9db6d8c8374f55e34b6b27287e59ab8c881114806156585408feeae001e79357" not found in pod's containers
Aug 31 18:02:01 minikube kubelet[4385]: E0831 18:02:01.858397 4385 kuberuntime_manager.go:887] PodSandboxStatus of sandbox "e6929bdf507644fb23f9fc47efe1bb5a29bd61e495e113a14a6ae91dd6319542" for pod "coredns-5c98db65d4-vwhl2_kube-system(7d749427-5cc5-4756-8a93-5779c8587e28)" error: rpc error: code = Unknown desc = Error: No such container: e6929bdf507644fb23f9fc47efe1bb5a29bd61e495e113a14a6ae91dd6319542
Aug 31 18:02:01 minikube kubelet[4385]: E0831 18:02:01.956338 4385 kuberuntime_manager.go:887] PodSandboxStatus of sandbox "463f247f868a20a7911acdf9ee4969fea4c05269cde03a47b43d790f6964cc97" for pod "coredns-5c98db65d4-rrpz4_kube-system(ed2e25db-86aa-4a5b-a9fb-b651342f0e6e)" error: rpc error: code = Unknown desc = Error: No such container: 463f247f868a20a7911acdf9ee4969fea4c05269cde03a47b43d790f6964cc97
Aug 31 18:02:04 minikube kubelet[4385]: W0831 18:02:04.530456 4385 docker_sandbox.go:384] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5c98db65d4-vwhl2 through plugin: invalid network status for
Aug 31 18:02:04 minikube kubelet[4385]: W0831 18:02:04.856185 4385 pod_container_deletor.go:75] Container "e6929bdf507644fb23f9fc47efe1bb5a29bd61e495e113a14a6ae91dd6319542" not found in pod's containers
Aug 31 18:02:48 minikube kubelet[4385]: W0831 18:02:48.189649 4385 pod_container_deletor.go:75] Container "463f247f868a20a7911acdf9ee4969fea4c05269cde03a47b43d790f6964cc97" not found in pod's containers
Aug 31 18:02:55 minikube kubelet[4385]: E0831 18:02:55.503371 4385 remote_runtime.go:295] ContainerStatus "d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1
Aug 31 18:02:55 minikube kubelet[4385]: E0831 18:02:55.533807 4385 kuberuntime_manager.go:902] getPodContainerStatuses for pod "coredns-5c98db65d4-vwhl2_kube-system(7d749427-5cc5-4756-8a93-5779c8587e28)" failed: rpc error: code = Unknown desc = Error: No such container: d79f2e8a9c9083e7e78b4f5a2667880403d80e12b0ed6021663f00ee479324d1
Aug 31 18:03:34 minikube kubelet[4385]: I0831 18:03:34.734913 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-gghk9" (UniqueName: "kubernetes.io/secret/98914b25-9f7f-48f1-a45e-c3b732fedb21-storage-provisioner-token-gghk9") pod "storage-provisioner" (UID: "98914b25-9f7f-48f1-a45e-c3b732fedb21")
Aug 31 18:03:34 minikube kubelet[4385]: I0831 18:03:34.769642 4385 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/98914b25-9f7f-48f1-a45e-c3b732fedb21-tmp") pod "storage-provisioner" (UID: "98914b25-9f7f-48f1-a45e-c3b732fedb21")
Aug 31 18:03:40 minikube kubelet[4385]: E0831 18:03:40.072915 4385 kuberuntime_manager.go:887] PodSandboxStatus of sandbox "307f0bb16457de4c57eb8de5ae5688532743a1d67f8ea72655aa8c0954651e56" for pod "storage-provisioner_kube-system(98914b25-9f7f-48f1-a45e-c3b732fedb21)" error: rpc error: code = Unknown desc = Error: No such container: 307f0bb16457de4c57eb8de5ae5688532743a1d67f8ea72655aa8c0954651e56
Aug 31 18:04:04 minikube kubelet[4385]: W0831 18:04:04.878144 4385 pod_container_deletor.go:75] Container "307f0bb16457de4c57eb8de5ae5688532743a1d67f8ea72655aa8c0954651e56" not found in pod's containers
Aug 31 18:04:10 minikube kubelet[4385]: E0831 18:04:10.040836 4385 remote_runtime.go:295] ContainerStatus "8dfac0b91705b2b060cb97096e6b0cc1adb6593f7da50bed53aa3189bba66de8" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8dfac0b91705b2b060cb97096e6b0cc1adb6593f7da50bed53aa3189bba66de8
Aug 31 18:04:10 minikube kubelet[4385]: E0831 18:04:10.040887 4385 kuberuntime_manager.go:902] getPodContainerStatuses for pod "storage-provisioner_kube-system(98914b25-9f7f-48f1-a45e-c3b732fedb21)" failed: rpc error: code = Unknown desc = Error: No such container: 8dfac0b91705b2b060cb97096e6b0cc1adb6593f7da50bed53aa3189bba66de8
Aug 31 18:04:33 minikube kubelet[4385]: E0831 18:04:33.289715 4385 cadvisor_stats_provider.go:403] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/burstable/poded2e25db-86aa-4a5b-a9fb-b651342f0e6e/716c97777db5684cdde169240335ed7d2e56dd1a50d556dcad55934b3e2d36f5": RecentStats: unable to find data in memory cache]

==> storage-provisioner <==

The operating system version:
Linux brocker 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
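
For anyone reproducing this, here is a minimal sketch of commands that could capture more evidence while the start is wedged (assumes kubectl is already pointed at the minikube context; k8s-app=kube-proxy is the label minikube waits on):

# Is the kube-proxy pod Pending, crash-looping, or just slow?
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide

# Recent cluster events often name the blocking condition directly:
kubectl -n kube-system get events --sort-by=.lastTimestamp

# The minikube-side view of the same window:
minikube logs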

tstromberg changed the title from "minikube start crash" to "vb: Wait failed: waiting for k8s-app=kube-proxy: timed out waiting for the condition" Sep 3, 2019
tstromberg changed the title from "vb: Wait failed: waiting for k8s-app=kube-proxy: timed out waiting for the condition" to "vb: waiting for k8s-app=kube-proxy: timed out waiting for the condition" Sep 3, 2019
tstromberg changed the title from "vb: waiting for k8s-app=kube-proxy: timed out waiting for the condition" to "vbox 1.3.1: waiting for k8s-app=kube-proxy: timed out waiting for the condition" Sep 3, 2019
tstromberg (Contributor) commented:

kube-proxy eventually came up, but the deployment appears to have been unstable. I've never seen it come up this long after etcd, for instance:

CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
716c97777db56 eb516548c180f About a minute ago Running coredns 1 463f247f868a2
dd72a0a512971 eb516548c180f About a minute ago Running coredns 1 e6929bdf50764
8dfac0b91705b 4689081edb103 About a minute ago Running storage-provisioner 0 307f0bb16457d
78c5e441f8ece 167bbf6c93388 2 minutes ago Running kube-proxy 0 9db6d8c8374f5
f25a78c3b3e3b 9f5df470155d4 4 minutes ago Running kube-controller-manager 3 ca06af7ed5a88
0e09453618522 9f5df470155d4 6 minutes ago Exited kube-controller-manager 2 ca06af7ed5a88
0162be4853f4f 119701e77cbc4 10 minutes ago Running kube-addon-manager 0 7b2d233da72ea
6ff3a5df09b63 88fa9cb27bd2d 10 minutes ago Running kube-scheduler 0 a69230069cce5
5f06b2eddfac3 2c4adeb21b4ff 10 minutes ago Running etcd 0 ff0f4836bd2eb
08a4015c209eb 34a53be6c9a7e 10 minutes ago Running kube-apiserver 0 527cb220693de

This message from kubelet is interesting:

Aug 31 18:01:46 minikube kubelet[4385]: E0831 18:01:46.653293 4385 configmap.go:203] Couldn't get configMap kube-system/kube-proxy: couldn't propagate object cache: timed out waiting for the condition

I think we're going to need to extend the logs command to display logs for failed pods.
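
Until that lands, something like the following could pull a failed pod's state by hand (a sketch, not an existing minikube feature; assumes kubectl targets the minikube context):

# Show mount and scheduling events for the kube-proxy pod:
kubectl -n kube-system describe pod -l k8s-app=kube-proxy

# Tail its logs once the container exists:
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50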

tstromberg added the co/kube-proxy, co/virtualbox, ev/kube-proxy-pod-timeout, help wanted, and priority/awaiting-more-evidence labels Sep 3, 2019
tstromberg added the kind/bug label Sep 19, 2019
tstromberg (Contributor) commented:

Could you please check whether minikube v1.4 addresses this issue? We've made some changes to how this is handled, and improved the minikube logs output to help us debug tricky cases like this.
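
A rough sketch of that retest, assuming it is acceptable to recreate the VM from scratch (minikube delete wipes the existing cluster):

# Confirm the installed and latest released versions:
minikube update-check

# Recreate the cluster with the same resources as the original report:
minikube delete
minikube start --cpus 20 --memory 32768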

tstromberg (Contributor) commented:

This issue appears to be a duplicate of #4540; do you mind if we move the conversation there?

That way we can centralize the content relating to the issue. If you feel that this issue is not in fact a duplicate, please re-open it using /reopen. If you have additional information to share, please add it to the new issue.

Thank you for reporting this!
