$ kubectl get pods --namespace=kube-system
NAME                          READY     STATUS    RESTARTS   AGE
kube-addon-manager-minikube   1/1       Running   11         3d
kube-dns-v20-12vcw            2/3       Running   154        3d
kubernetes-dashboard-8rh3b    1/1       Running   76         3d
$ kubectl describe pod kube-dns-v20-12vcw -n kube-system
Name:           kube-dns-v20-12vcw
Namespace:      kube-system
Node:           minikube/192.168.64.2
Start Time:     Mon, 19 Dec 2016 15:40:00 -0700
Labels:         k8s-app=kube-dns
                version=v20
Status:         Running
IP:             172.17.0.2
Controllers:    ReplicationController/kube-dns-v20
Containers:
  kubedns:
    Container ID:   docker://9438c708fcb82a2c6fd7592a73c31e5eeb2051be0c42c47b3df66841339eca3a
    Image:          gcr.io/google_containers/kubedns-amd64:1.8
    Image ID:       docker://sha256:597a45ef55ec52401fdcd2e1d6ee53c74b04afb264490d7fa67b6d98ad330dfe
    Ports:          10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
    Limits:
      memory:   170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 22 Dec 2016 16:07:55 -0700
      Finished:     Thu, 22 Dec 2016 16:09:03 -0700
    Ready:          False
    Restart Count:  74
    Liveness:       http-get http://:8080/healthz-kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:      http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9lns9 (ro)
    Environment Variables:  <none>
  dnsmasq:
    Container ID:   docker://f88502ee8468735530d5a9257f3a514f0533278ee469686c83f16f6a201e5815
    Image:          gcr.io/google_containers/kube-dnsmasq-amd64:1.4
    Image ID:       docker://sha256:3ec65756a89b70b4095e43a340a6e2d5696cac7a93a29619ff5c4b6be9af2773
    Ports:          53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
      --log-facility=-
    State:          Running
      Started:      Thu, 22 Dec 2016 16:10:40 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Thu, 22 Dec 2016 16:06:18 -0700
      Finished:     Thu, 22 Dec 2016 16:07:53 -0700
    Ready:          True
    Restart Count:  71
    Liveness:       http-get http://:8080/healthz-dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9lns9 (ro)
    Environment Variables:  <none>
  healthz:
    Container ID:   docker://f83814f0d347d728570b1e4431c4a453cee74a61056d2978b1a4ed2689012334
    Image:          gcr.io/google_containers/exechealthz-amd64:1.2
    Image ID:       docker://sha256:93a43bfb39bfe9795e76ccd75d7a0e6d40e2ae8563456a2a77c1b4cfc3bbd967
    Port:           8080/TCP
    Args:
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
      --url=/healthz-dnsmasq
      --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      --url=/healthz-kubedns
      --port=8080
      --quiet
    Limits:
      memory:   50Mi
    Requests:
      cpu:      10m
      memory:   50Mi
    State:          Running
      Started:      Thu, 22 Dec 2016 15:45:00 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 22 Dec 2016 15:39:27 -0700
      Finished:     Thu, 22 Dec 2016 15:43:37 -0700
    Ready:          True
    Restart Count:  11
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9lns9 (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-9lns9:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-9lns9
QoS Class:      Burstable
Tolerations:    CriticalAddonsOnly=:Exists
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5h 59m 11 {kubelet minikube} spec.containers{kubedns} Normal Pulled Container image "gcr.io/google_containers/kubedns-amd64:1.8" already present on machine
1h 59m 33 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Readiness probe failed: Get http://172.17.0.3:8081/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
1h 59m 29 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
1h 58m 60 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Readiness probe failed: Get http://172.17.0.3:8081/readiness: dial tcp 172.17.0.3:8081: getsockopt: connection refused
1h 58m 20 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Liveness probe failed: Get http://172.17.0.3:8080/healthz-kubedns: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5h 57m 11 {kubelet minikube} spec.containers{dnsmasq} Normal Pulled Container image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4" already present on machine
1h 57m 11 {kubelet minikube} spec.containers{kubedns} Normal Created (events with common reason combined)
1h 57m 11 {kubelet minikube} spec.containers{kubedns} Normal Started (events with common reason combined)
1h 56m 32 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
1h 56m 14 {kubelet minikube} spec.containers{dnsmasq} Warning Unhealthy Liveness probe failed: Get http://172.17.0.3:8080/healthz-dnsmasq: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
1h 55m 13 {kubelet minikube} spec.containers{dnsmasq} Normal Killing (events with common reason combined)
1h 55m 6 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: [failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
, failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
]
1h 54m 27 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: [failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
, failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
]
1h 54m 154 {kubelet minikube} spec.containers{kubedns} Warning BackOff Back-off restarting failed docker container
54m 54m 1 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImageInspectError: "Failed to inspect image \"gcr.io/google_containers/pause-amd64:3.0\": Cannot connect to the Docker daemon. Is the docker daemon running on this host?"
53m 53m 1 {kubelet minikube} spec.containers{kubedns} Normal Pulled Container image "gcr.io/google_containers/kubedns-amd64:1.8" already present on machine
53m 53m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id f377e01484d0; Security:[seccomp=unconfined]
53m 53m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id f377e01484d0
53m 53m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Pulled Container image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4" already present on machine
53m 53m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id bf5c9f9a093e; Security:[seccomp=unconfined]
53m 53m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id bf5c9f9a093e
53m 53m 1 {kubelet minikube} spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz-amd64:1.2" already present on machine
53m 53m 1 {kubelet minikube} spec.containers{healthz} Normal Created Created container with docker id ceb03132c329; Security:[seccomp=unconfined]
53m 53m 1 {kubelet minikube} spec.containers{healthz} Normal Started Started container with docker id ceb03132c329
34m 34m 1 {kubelet minikube} spec.containers{dnsmasq} Warning Unhealthy Liveness probe failed: Get http://172.17.0.4:8080/healthz-dnsmasq: dial tcp 172.17.0.4:8080: getsockopt: connection refused
34m 34m 1 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Liveness probe failed: Get http://172.17.0.4:8080/healthz-kubedns: dial tcp 172.17.0.4:8080: getsockopt: connection refused
32m 32m 1 {kubelet minikube} spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz-amd64:1.2" already present on machine
32m 32m 1 {kubelet minikube} spec.containers{healthz} Normal Created Created container with docker id 7e921f54a3d4; Security:[seccomp=unconfined]
32m 32m 1 {kubelet minikube} spec.containers{healthz} Normal Started Started container with docker id 7e921f54a3d4
32m 32m 1 {kubelet minikube} spec.containers{kubedns} Normal Pulled Container image "gcr.io/google_containers/kubedns-amd64:1.8" already present on machine
32m 32m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 220b02f83a86; Security:[seccomp=unconfined]
32m 32m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 220b02f83a86
32m 32m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Pulled Container image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4" already present on machine
32m 32m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id 829e48ed741f; Security:[seccomp=unconfined]
32m 32m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id 829e48ed741f
27m 27m 1 {kubelet minikube} spec.containers{dnsmasq} Warning Unhealthy Liveness probe failed: Get http://172.17.0.4:8080/healthz-dnsmasq: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
27m 27m 1 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Readiness probe failed: Get http://172.17.0.4:8081/readiness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
27m 27m 1 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Liveness probe failed: Get http://172.17.0.4:8080/healthz-kubedns: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
26m 26m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id 68b8e30f5c5b; Security:[seccomp=unconfined]
26m 26m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id 68b8e30f5c5b
26m 26m 1 {kubelet minikube} spec.containers{healthz} Normal Pulled Container image "gcr.io/google_containers/exechealthz-amd64:1.2" already present on machine
26m 26m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 9c24df251517
26m 26m 1 {kubelet minikube} spec.containers{healthz} Normal Created Created container with docker id f83814f0d347; Security:[seccomp=unconfined]
26m 26m 1 {kubelet minikube} spec.containers{healthz} Normal Started Started container with docker id f83814f0d347
26m 26m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 9c24df251517; Security:[seccomp=unconfined]
13m 13m 1 {kubelet minikube} spec.containers{kubedns} Normal Killing Killing container with docker id 9c24df251517: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "kubedns" is unhealthy, it will be killed and re-created.
12m 12m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Killing Killing container with docker id 68b8e30f5c5b: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "dnsmasq" is unhealthy, it will be killed and re-created.
12m 12m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 1accbd7f15ac
12m 12m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 1accbd7f15ac; Security:[seccomp=unconfined]
12m 12m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id 56d5f499beb4; Security:[seccomp=unconfined]
12m 12m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id 56d5f499beb4
11m 11m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Killing Killing container with docker id 56d5f499beb4: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "dnsmasq" is unhealthy, it will be killed and re-created.
15m 10m 25 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Readiness probe failed: Get http://172.17.0.2:8081/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
10m 10m 1 {kubelet minikube} spec.containers{kubedns} Normal Killing Killing container with docker id 1accbd7f15ac: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "kubedns" is unhealthy, it will be killed and re-created.
10m 10m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 059e947de588; Security:[seccomp=unconfined]
10m 10m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 059e947de588
10m 10m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id fa4d04b5b9ab; Security:[seccomp=unconfined]
10m 10m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id fa4d04b5b9ab
9m 9m 1 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 503
9m 9m 1 {kubelet minikube} spec.containers{kubedns} Normal Killing Killing container with docker id 059e947de588: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "kubedns" is unhealthy, it will be killed and re-created.
9m 9m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 0a1c624cd8f1; Security:[seccomp=unconfined]
9m 9m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 0a1c624cd8f1
8m 8m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Killing Killing container with docker id fa4d04b5b9ab: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "dnsmasq" is unhealthy, it will be killed and re-created.
8m 8m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id a65eeb3248f6; Security:[seccomp=unconfined]
8m 8m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id a65eeb3248f6
8m 8m 1 {kubelet minikube} spec.containers{kubedns} Normal Killing Killing container with docker id 0a1c624cd8f1: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "kubedns" is unhealthy, it will be killed and re-created.
8m 8m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 297f52d1550a; Security:[seccomp=unconfined]
8m 8m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 297f52d1550a
7m 7m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Killing Killing container with docker id a65eeb3248f6: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "dnsmasq" is unhealthy, it will be killed and re-created.
7m 7m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Created Created container with docker id cc17cc010ba9; Security:[seccomp=unconfined]
7m 7m 1 {kubelet minikube} spec.containers{dnsmasq} Normal Started Started container with docker id cc17cc010ba9
6m 6m 1 {kubelet minikube} spec.containers{kubedns} Normal Killing Killing container with docker id 297f52d1550a: pod "kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)" container "kubedns" is unhealthy, it will be killed and re-created.
6m 6m 8 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
5m 5m 1 {kubelet minikube} spec.containers{kubedns} Normal Created Created container with docker id 63443adcc35c; Security:[seccomp=unconfined]
5m 5m 1 {kubelet minikube} spec.containers{kubedns} Normal Started Started container with docker id 63443adcc35c
15m 4m 15 {kubelet minikube} spec.containers{dnsmasq} Warning Unhealthy Liveness probe failed: Get http://172.17.0.2:8080/healthz-dnsmasq: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
26m 3m 7 {kubelet minikube} spec.containers{kubedns} Normal Pulled Container image "gcr.io/google_containers/kubedns-amd64:1.8" already present on machine
3m 2m 8 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dnsmasq pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
12m 2m 38 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused
15m 2m 15 {kubelet minikube} spec.containers{kubedns} Warning Unhealthy Liveness probe failed: Get http://172.17.0.2:8080/healthz-kubedns: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5m 2m 4 {kubelet minikube} spec.containers{dnsmasq} Normal Killing (events with common reason combined)
2m 1m 5 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: [failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
, failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dnsmasq pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
]
2m 1m 4 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: [failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=dnsmasq pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
, failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
]
26m 1m 7 {kubelet minikube} spec.containers{dnsmasq} Normal Pulled Container image "gcr.io/google_containers/kube-dnsmasq-amd64:1.4" already present on machine
5m 1m 3 {kubelet minikube} spec.containers{dnsmasq} Normal Created (events with common reason combined)
5m 58s 3 {kubelet minikube} spec.containers{dnsmasq} Normal Started (events with common reason combined)
6m 1s 40 {kubelet minikube} spec.containers{kubedns} Warning BackOff Back-off restarting failed docker container
58s 1s 6 {kubelet minikube} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubedns pod=kube-dns-v20-12vcw_kube-system(133f0581-c63c-11e6-83a1-ce231054886b)"
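The events above show both kubedns and dnsmasq in CrashLoopBackOff, with liveness and readiness probes failing against ports 8080/8081 (connection refused, timeouts, and one 503). One way to narrow this down is to hit those endpoints directly from inside the minikube VM; this is a sketch, assuming the VM can reach the pod network on the docker bridge (it can with the default minikube setup), and note the pod IP changes each time the pod is recreated, so substitute the current IP from the describe output:

```shell
# Probe kube-dns health endpoints from inside the minikube VM.
# 172.17.0.2 is the pod IP from the describe output above; it will
# be different after the pod is recreated, so re-check it first.
minikube ssh
curl -v http://172.17.0.2:8080/healthz-kubedns   # liveness endpoint served by the healthz container
curl -v http://172.17.0.2:8081/readiness         # readiness endpoint served by kubedns
```

If these time out even when the containers show as Running, the problem is inside the pod rather than in the kubelet's probing.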
~/code/rj (master) $ kubectl logs kube-dns-v20-12vcw -n kube-system -c kubedns
~/code/rj (master) $ kubectl logs kube-dns-v20-12vcw -n kube-system -c dnsmasq
dnsmasq[1]: started, version 2.76 cachesize 1000
dnsmasq[1]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
dnsmasq[1]: using nameserver 127.0.0.1#10053
dnsmasq[1]: read /etc/hosts - 7 addresses
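The dnsmasq log itself looks healthy: it started, cached 1000 entries, and is forwarding to kubedns on 127.0.0.1#10053. A useful cross-check is whether cluster DNS works from a client pod at all; a common sketch is a throwaway busybox pod (the pod name `dns-test` is arbitrary, any image with `nslookup` works):

```shell
# Run a one-off pod and try to resolve the kubernetes service.
# With kube-dns healthy this prints the cluster IP of
# kubernetes.default; given the state above it will likely fail.
kubectl run -it --rm dns-test --image=busybox --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
```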
~/code/rj (master) $ kubectl logs kube-dns-v20-12vcw -n kube-system -c healthz
2016/12/22 22:56:55 Latest result too old to be useful: Result of last exec: , at 2016-12-22 22:56:21.706891488 +0000 UTC, error None.
2016/12/22 22:58:23 Latest result too old to be useful: Result of last exec: , at 2016-12-22 22:57:03.000902982 +0000 UTC, error None.
2016/12/22 22:58:23 Latest result too old to be useful: Result of last exec: , at 2016-12-22 22:57:31.304386027 +0000 UTC, error None.
2016/12/22 23:00:09 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:00:01.088061926 +0000 UTC, error exit status 1
2016/12/22 23:00:09 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:00:02.652894452 +0000 UTC, error exit status 1
2016/12/22 23:02:08 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:02:06.928783487 +0000 UTC, error exit status 1
2016/12/22 23:02:42 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:02:04.557499748 +0000 UTC, error exit status 1
2016/12/22 23:03:19 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:03:18.960432027 +0000 UTC, error exit status 1
2016/12/22 23:06:00 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:03:49.774130823 +0000 UTC, error exit status 1
2016/12/22 23:06:01 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:05:59.406481194 +0000 UTC, error exit status 1
2016/12/22 23:06:03 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:03:49.774130823 +0000 UTC, error exit status 1
2016/12/22 23:10:03 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:07:46.728427536 +0000 UTC, error exit status 1
2016/12/22 23:10:18 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:08:00.858504849 +0000 UTC, error exit status 1
2016/12/22 23:11:18 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:09:34.718138037 +0000 UTC, error exit status 1
2016/12/22 23:11:56 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:10:50.598608832 +0000 UTC, error exit status 1
2016/12/22 23:17:46 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:15:36.258315381 +0000 UTC, error exit status 1
2016/12/22 23:18:01 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:17:27.538748923 +0000 UTC, error exit status 1
2016/12/22 23:19:42 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:18:25.238372493 +0000 UTC, error exit status 1
2016/12/22 23:20:34 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:20:28.096994817 +0000 UTC, error exit status 1
2016/12/22 23:25:15 Latest result too old to be useful: Result of last exec: , at 2016-12-22 23:23:52.411261964 +0000 UTC, error None.
2016/12/22 23:31:30 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:31:30.031569257 +0000 UTC, error exit status 1
2016/12/22 23:31:30 Latest result too old to be useful: Result of last exec: , at 2016-12-22 23:29:55.396372647 +0000 UTC, error None.
2016/12/22 23:31:30 Latest result too old to be useful: Result of last exec: , at 2016-12-22 23:29:54.687293921 +0000 UTC, error None.
2016/12/22 23:35:18 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:35:06.5822531 +0000 UTC, error exit status 1
2016/12/22 23:35:18 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:34:19.252762761 +0000 UTC, error exit status 1
2016/12/22 23:41:45 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:41:21.375881143 +0000 UTC, error exit status 1
2016/12/22 23:41:45 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:41:24.491262797 +0000 UTC, error exit status 1
2016/12/22 23:42:49 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:42:42.4129767 +0000 UTC, error exit status 1
2016/12/22 23:43:33 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:43:21.715618171 +0000 UTC, error exit status 1
2016/12/22 23:50:13 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:50:07.207462063 +0000 UTC, error exit status 1
2016/12/22 23:50:14 Latest result too old to be useful: Result of last exec: , at 2016-12-22 23:49:23.943983715 +0000 UTC, error None.
2016/12/22 23:51:39 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:51:28.496814294 +0000 UTC, error exit status 1
2016/12/22 23:51:59 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:50:33.646504986 +0000 UTC, error exit status 1
2016/12/22 23:58:23 Latest result too old to be useful: Result of last exec: , at 2016-12-22 23:57:53.142920737 +0000 UTC, error None.
2016/12/22 23:59:14 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:58:33.136525764 +0000 UTC, error exit status 1
2016/12/22 23:59:14 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-22 23:59:14.071287464 +0000 UTC, error exit status 1
2016/12/23 00:01:59 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:01:34.170876544 +0000 UTC, error exit status 1
2016/12/23 00:02:00 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:01:57.631239647 +0000 UTC, error exit status 1
2016/12/23 00:10:05 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:08:38.920605539 +0000 UTC, error exit status 1
2016/12/23 00:10:05 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:08:34.835785246 +0000 UTC, error exit status 1
2016/12/23 00:10:07 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:10:05.738244378 +0000 UTC, error exit status 1
2016/12/23 00:10:07 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:10:05.630155978 +0000 UTC, error exit status 1
2016/12/23 00:17:14 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:16:27.609570957 +0000 UTC, error exit status 1
2016/12/23 00:17:14 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:17:10.312535772 +0000 UTC, error exit status 1
2016/12/23 00:19:05 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:18:27.379915937 +0000 UTC, error exit status 1
2016/12/23 00:19:07 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:18:13.569750542 +0000 UTC, error exit status 1
2016/12/23 00:26:15 Latest result too old to be useful: Result of last exec: , at 2016-12-23 00:25:10.874673104 +0000 UTC, error None.
2016/12/23 00:26:32 Healthz probe on /healthz-kubedns error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:26:32.234659445 +0000 UTC, error exit status 1
2016/12/23 00:27:08 Healthz probe on /healthz-dnsmasq error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
, at 2016-12-23 00:27:02.875679955 +0000 UTC, error exit status 1
The kube-dns pod just keeps failing forever, and DNS will not come back up until you restart minikube.
How to reproduce it:
Honestly, I'm unsure. A few things about my setup:
I kubectl port-forward into an nginx load balancer that uses the request path to proxy_pass to the appropriate service. I had some problems with nginx using the Kubernetes DNS server, so I had to run dnsmasq in the load-balancer pod. This works fine on GCE but is a little gross, so I figured it was worth mentioning.
I had a very similar problem (Linux, VirtualBox, docker 1.21.1), and it turned out that the minikube virtual machine had somehow lost its connection on the NAT interface, which was causing DNS to break. This also meant that minikube ssh failed to connect to the VM.
The workaround was to connect directly to the VM (double-click the VM in VirtualBox), kill udhcpc for eth0, and then request a new address with udhcpc -i eth0. Once the interface recovered, I restarted the kube-dns pod and everything started working once more.
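The steps above can be sketched roughly as follows (run the first part inside the VM console; the PID lookup is illustrative, and deleting the pod by its k8s-app=kube-dns label relies on the ReplicationController recreating it):

```shell
# Inside the VirtualBox console of the minikube VM:
# find the DHCP client holding the stale lease on eth0
ps | grep '[u]dhcpc'    # note the PID of the eth0 instance
kill <PID>              # <PID> is the number from the line above

# request a fresh address on eth0
udhcpc -i eth0

# Back on the host: delete the kube-dns pod so the
# ReplicationController recreates it with working networking
kubectl delete pod --namespace=kube-system -l k8s-app=kube-dns
```

This is only a sketch of the workaround described above, not an official fix; the exact interface name and labels may differ on other setups.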
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report
Minikube version (use minikube version): v0.14.0
Environment:
- VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName): xhyve
- docker -v: 1.12.5, build 7392c3b
What happened:
After initially working, kube-dns gets into a crash loop. Restarting minikube fixes it.
Working:
Not working (no endpoint):
kubectl describe pod kube-dns-v20-12vcw -n kube-system
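For reference, the "no endpoint" state can be checked like this (a sketch; it assumes the standard kube-dns service name in kube-system and that a busybox image can be pulled for the in-cluster test):

```shell
# A healthy cluster shows an IP:port pair in the ENDPOINTS column;
# an empty column matches the "not working (no endpoint)" state above.
kubectl get endpoints kube-dns --namespace=kube-system

# Cross-check resolution from inside the cluster with a throwaway pod
kubectl run dns-test --rm -it --image=busybox --restart=Never \
  -- nslookup kubernetes.default.svc.cluster.local
```

When kube-dns is crash-looping as in the logs above, the nslookup in the second command fails the same way the healthz probes do.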
The described behavior happens consistently and quickly (~30-60 seconds after being fixed)
Possibly related: #314