
Cannot pull or access any external resource with the hyperkit driver #5336

Closed
kvokka opened this issue Sep 13, 2019 · 10 comments
Labels
co/hyperkit (Hyperkit related issues) · kind/support (Categorizes issue or PR as a support question) · triage/needs-information (Indicates an issue needs more information in order to work on it)

Comments

@kvokka

kvokka commented Sep 13, 2019

This happens even after a complete re-install of the cluster. With the VirtualBox driver everything works well.
I do not have any proxy on the host system, but I have already tried the solution from here, with no luck.

The exact command to reproduce the issue:

export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16
minikube start --vm-driver=hyperkit
$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                        READY   STATUS             RESTARTS   AGE
kube-system   coredns-5c98db65d4-chlbk                    1/1     Running            0          3m6s
kube-system   coredns-5c98db65d4-lzhqv                    1/1     Running            0          3m6s
kube-system   etcd-minikube                               1/1     Running            0          2m4s
kube-system   heapster-7m9gx                              0/1     ImagePullBackOff   0          3m5s
kube-system   influxdb-grafana-7nxzj                      0/2     ErrImagePull       0          3m5s
kube-system   kube-addon-manager-minikube                 1/1     Running            0          116s
kube-system   kube-apiserver-minikube                     1/1     Running            0          2m1s
kube-system   kube-controller-manager-minikube            1/1     Running            0          2m4s
kube-system   kube-proxy-b2wrr                            1/1     Running            0          3m6s
kube-system   kube-scheduler-minikube                     1/1     Running            0          112s
kube-system   kubernetes-dashboard-7b8ddcb5d6-fk7m2       1/1     Running            0          3m5s
kube-system   nginx-ingress-controller-5d9cf9c69f-ghfl6   0/1     ImagePullBackOff   0          3m4s
kube-system   storage-provisioner                         1/1     Running            0          3m4s
$ minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl google.com
curl: (6) Could not resolve host: google.com
$ nslookup google.com
Server:    192.168.64.1
Address 1: 192.168.64.1

nslookup: can't resolve 'google.com'

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Fri 2019-09-13 05:42:58 UTC, end at Fri 2019-09-13 05:49:04 UTC. --
Sep 13 05:44:35 minikube dockerd[1904]: time="2019-09-13T05:44:35.934012432Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:57352->192.168.64.1:53: read: connection refused"
Sep 13 05:44:35 minikube dockerd[1904]: time="2019-09-13T05:44:35.934165435Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:57352->192.168.64.1:53: read: connection refused"
Sep 13 05:44:35 minikube dockerd[1904]: time="2019-09-13T05:44:35.934197650Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:57352->192.168.64.1:53: read: connection refused"
Sep 13 05:44:40 minikube dockerd[1904]: time="2019-09-13T05:44:40.932110564Z" level=warning msg="Error getting v2 registry: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:35153->192.168.64.1:53: read: connection refused"
Sep 13 05:44:40 minikube dockerd[1904]: time="2019-09-13T05:44:40.932640210Z" level=info msg="Attempting next endpoint for pull after error: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:35153->192.168.64.1:53: read: connection refused"
Sep 13 05:44:40 minikube dockerd[1904]: time="2019-09-13T05:44:40.932765095Z" level=error msg="Handler for POST /images/create returned error: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:35153->192.168.64.1:53: read: connection refused"
Sep 13 05:45:19 minikube dockerd[1904]: time="2019-09-13T05:45:19.929055313Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58986->192.168.64.1:53: read: connection refused"
Sep 13 05:45:19 minikube dockerd[1904]: time="2019-09-13T05:45:19.929466742Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58986->192.168.64.1:53: read: connection refused"
Sep 13 05:45:19 minikube dockerd[1904]: time="2019-09-13T05:45:19.929536234Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58986->192.168.64.1:53: read: connection refused"
Sep 13 05:45:19 minikube dockerd[1904]: time="2019-09-13T05:45:19.939947170Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:34241->192.168.64.1:53: read: connection refused"
Sep 13 05:45:19 minikube dockerd[1904]: time="2019-09-13T05:45:19.940163026Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:34241->192.168.64.1:53: read: connection refused"
Sep 13 05:45:19 minikube dockerd[1904]: time="2019-09-13T05:45:19.940263254Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:34241->192.168.64.1:53: read: connection refused"
Sep 13 05:45:22 minikube dockerd[1904]: time="2019-09-13T05:45:22.934272368Z" level=warning msg="Error getting v2 registry: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:46812->192.168.64.1:53: read: connection refused"
Sep 13 05:45:22 minikube dockerd[1904]: time="2019-09-13T05:45:22.934350947Z" level=info msg="Attempting next endpoint for pull after error: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:46812->192.168.64.1:53: read: connection refused"
Sep 13 05:45:22 minikube dockerd[1904]: time="2019-09-13T05:45:22.934379633Z" level=error msg="Handler for POST /images/create returned error: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:46812->192.168.64.1:53: read: connection refused"
Sep 13 05:45:28 minikube dockerd[1904]: time="2019-09-13T05:45:28.934821699Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58124->192.168.64.1:53: read: connection refused"
Sep 13 05:45:28 minikube dockerd[1904]: time="2019-09-13T05:45:28.935308961Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58124->192.168.64.1:53: read: connection refused"
Sep 13 05:45:28 minikube dockerd[1904]: time="2019-09-13T05:45:28.935439983Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58124->192.168.64.1:53: read: connection refused"
Sep 13 05:46:44 minikube dockerd[1904]: time="2019-09-13T05:46:44.929782215Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:43363->192.168.64.1:53: read: connection refused"
Sep 13 05:46:44 minikube dockerd[1904]: time="2019-09-13T05:46:44.929868265Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:43363->192.168.64.1:53: read: connection refused"
Sep 13 05:46:44 minikube dockerd[1904]: time="2019-09-13T05:46:44.929916120Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:43363->192.168.64.1:53: read: connection refused"
Sep 13 05:46:44 minikube dockerd[1904]: time="2019-09-13T05:46:44.935415116Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58667->192.168.64.1:53: read: connection refused"
Sep 13 05:46:44 minikube dockerd[1904]: time="2019-09-13T05:46:44.935505991Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58667->192.168.64.1:53: read: connection refused"
Sep 13 05:46:44 minikube dockerd[1904]: time="2019-09-13T05:46:44.935565227Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:58667->192.168.64.1:53: read: connection refused"
Sep 13 05:46:50 minikube dockerd[1904]: time="2019-09-13T05:46:50.929919745Z" level=warning msg="Error getting v2 registry: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:38946->192.168.64.1:53: read: connection refused"
Sep 13 05:46:50 minikube dockerd[1904]: time="2019-09-13T05:46:50.930422659Z" level=info msg="Attempting next endpoint for pull after error: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:38946->192.168.64.1:53: read: connection refused"
Sep 13 05:46:50 minikube dockerd[1904]: time="2019-09-13T05:46:50.930497588Z" level=error msg="Handler for POST /images/create returned error: Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.64.1:53: read udp 192.168.64.10:38946->192.168.64.1:53: read: connection refused"
Sep 13 05:47:01 minikube dockerd[1904]: time="2019-09-13T05:47:01.930713412Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:40044->192.168.64.1:53: read: connection refused"
Sep 13 05:47:01 minikube dockerd[1904]: time="2019-09-13T05:47:01.930847751Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:40044->192.168.64.1:53: read: connection refused"
Sep 13 05:47:01 minikube dockerd[1904]: time="2019-09-13T05:47:01.930875309Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:40044->192.168.64.1:53: read: connection refused"

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
6342cfce1fa35       4689081edb103       5 minutes ago       Running             storage-provisioner       0                   24ab772c5bcbe
050c39febaabc       eb516548c180f       5 minutes ago       Running             coredns                   0                   fc938acf2174a
68fd60b9ab866       eb516548c180f       5 minutes ago       Running             coredns                   0                   68cdb0735711e
7c0110a30bb59       f9aed6605b814       5 minutes ago       Running             kubernetes-dashboard      0                   ea8d9fea18ccf
e70179bcc5c01       167bbf6c93388       5 minutes ago       Running             kube-proxy                0                   a111829190be2
742eca7550b81       34a53be6c9a7e       5 minutes ago       Running             kube-apiserver            0                   1944b478e4bd1
58a2bdd72fe4a       2c4adeb21b4ff       5 minutes ago       Running             etcd                      0                   db2a72feb401e
7bd5885c78951       119701e77cbc4       5 minutes ago       Running             kube-addon-manager        0                   a5118ebda64b7
b84f7df0a9026       9f5df470155d4       5 minutes ago       Running             kube-controller-manager   0                   adc87e80987cf
7b40fa0c6ef2b       88fa9cb27bd2d       5 minutes ago       Running             kube-scheduler            0                   aa6cd8df64d67

==> coredns <==
.:53
2019-09-13T05:43:55.475Z [INFO] CoreDNS-1.3.1
2019-09-13T05:43:55.475Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-09-13T05:43:55.475Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
2019-09-13T05:43:57.521Z [ERROR] plugin/errors: 2 3469386322681028515.6675711715418277962. HINFO: read udp 172.17.0.6:39673->192.168.64.1:53: read: connection refused
2019-09-13T05:43:58.478Z [ERROR] plugin/errors: 2 3469386322681028515.6675711715418277962. HINFO: read udp 172.17.0.6:37176->192.168.64.1:53: read: connection refused

==> dmesg <==
[Sep13 05:42] ERROR: earlyprintk= earlyser already used
[  +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20170831/tbprint-211)
[  +0.000000] ACPI Error: Could not enable RealTimeClock event (20170831/evxfevnt-218)
[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20170831/evxface-654)
[  +0.006697] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.334818] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
[  +0.003546] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.448908] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.747701] vboxguest: loading out-of-tree module taints kernel.
[  +0.003050] vboxguest: PCI device not found, probably running on physical hardware.
[Sep13 05:43] systemd-fstab-generator[1829]: Ignoring "noauto" for root device
[ +21.389345] systemd-fstab-generator[2547]: Ignoring "noauto" for root device
[  +0.678967] systemd-fstab-generator[2729]: Ignoring "noauto" for root device
[ +10.569344] kauditd_printk_skb: 68 callbacks suppressed
[ +10.964961] tee (3439): /proc/3190/oom_adj is deprecated, please use /proc/3190/oom_score_adj instead.
[  +6.894223] kauditd_printk_skb: 20 callbacks suppressed
[Sep13 05:44] kauditd_printk_skb: 80 callbacks suppressed
[ +31.618312] kauditd_printk_skb: 2 callbacks suppressed
[Sep13 05:45] NFSD: Unable to end grace period: -110

==> kernel <==
 05:49:04 up 6 min,  0 users,  load average: 0.15, 0.32, 0.18
Linux minikube 4.15.0 #1 SMP Fri Aug 2 16:17:56 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"

==> kube-addon-manager <==
service/kubernetes-dashboard unchanged
service/monitoring-grafana unchanged
replicationcontroller/heapster unchanged
service/heapster unchanged
replicationcontroller/influxdb-grafana unchanged
service/monitoring-influxdb unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-09-13T05:47:54+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-09-13T05:48:52+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
service/monitoring-grafana unchanged
replicationcontroller/heapster unchanged
service/heapster unchanged
replicationcontroller/influxdb-grafana unchanged
service/monitoring-influxdb unchanged
deployment.extensions/nginx-ingress-controller unchanged
serviceaccount/nginx-ingress unchanged
clusterrole.rbac.authorization.k8s.io/system:nginx-ingress unchanged
role.rbac.authorization.k8s.io/system::nginx-ingress-role unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-09-13T05:48:53+00:00 ==

==> kube-apiserver <==
I0913 05:43:41.791285       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0913 05:43:41.791313       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0913 05:43:41.791326       1 controller.go:83] Starting OpenAPI controller
I0913 05:43:41.791333       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0913 05:43:41.791405       1 naming_controller.go:288] Starting NamingConditionController
I0913 05:43:41.791440       1 establishing_controller.go:73] Starting EstablishingController
I0913 05:43:41.791563       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
E0913 05:43:41.818914       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.10, ResourceVersion: 0, AdditionalErrorMsg: 
I0913 05:43:41.988663       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0913 05:43:41.988709       1 cache.go:39] Caches are synced for autoregister controller
I0913 05:43:41.990814       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0913 05:43:42.001600       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0913 05:43:42.097561       1 controller.go:606] quota admission added evaluator for: namespaces
I0913 05:43:42.787540       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0913 05:43:42.787646       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0913 05:43:42.788988       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0913 05:43:42.801675       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0913 05:43:42.809529       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0913 05:43:42.809568       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0913 05:43:43.597528       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0913 05:43:44.570635       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0913 05:43:44.759233       1 controller.go:606] quota admission added evaluator for: endpoints
I0913 05:43:44.852080       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0913 05:43:45.187823       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.64.10]
I0913 05:43:45.733349       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0913 05:43:46.534982       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0913 05:43:46.843907       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0913 05:43:52.384474       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0913 05:43:52.739657       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0913 05:43:53.779955       1 controller.go:606] quota admission added evaluator for: deployments.extensions

==> kube-proxy <==
W0913 05:43:53.621724       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0913 05:43:53.642736       1 server_others.go:143] Using iptables Proxier.
W0913 05:43:53.642978       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0913 05:43:53.643367       1 server.go:534] Version: v1.15.2
I0913 05:43:53.666550       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0913 05:43:53.669281       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0913 05:43:53.670138       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0913 05:43:53.685868       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0913 05:43:53.686188       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0913 05:43:53.687077       1 config.go:187] Starting service config controller
I0913 05:43:53.687120       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0913 05:43:53.687139       1 config.go:96] Starting endpoints config controller
I0913 05:43:53.687193       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0913 05:43:53.787805       1 controller_utils.go:1036] Caches are synced for service config controller
I0913 05:43:53.790353       1 controller_utils.go:1036] Caches are synced for endpoints config controller

==> kube-scheduler <==
W0913 05:43:38.149580       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0913 05:43:38.151198       1 server.go:142] Version: v1.15.2
I0913 05:43:38.151302       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0913 05:43:38.152404       1 authorization.go:47] Authorization is disabled
W0913 05:43:38.152438       1 authentication.go:55] Authentication is disabled
I0913 05:43:38.152448       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0913 05:43:38.153166       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0913 05:43:41.903963       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0913 05:43:41.904111       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0913 05:43:41.904240       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0913 05:43:41.904349       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0913 05:43:41.904570       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0913 05:43:41.904622       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0913 05:43:41.904790       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0913 05:43:41.904955       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0913 05:43:41.905009       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0913 05:43:41.904940       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0913 05:43:42.905096       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0913 05:43:42.909067       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0913 05:43:42.910240       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0913 05:43:42.912357       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0913 05:43:42.913939       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0913 05:43:42.915979       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0913 05:43:42.917206       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0913 05:43:42.917459       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0913 05:43:42.918182       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0913 05:43:42.919573       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0913 05:43:44.755469       1 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
I0913 05:43:44.763633       1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0913 05:43:52.411100       1 factory.go:702] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Fri 2019-09-13 05:42:58 UTC, end at Fri 2019-09-13 05:49:04 UTC. --
Sep 13 05:47:45 minikube kubelet[2770]: E0913 05:47:45.927954    2770 pod_workers.go:190] Error syncing pod eb7fc8a6-dabd-408f-b393-a2f94b10cf23 ("influxdb-grafana-7nxzj_kube-system(eb7fc8a6-dabd-408f-b393-a2f94b10cf23)"), skipping: [failed to "StartContainer" for "influxdb" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3\""
Sep 13 05:47:45 minikube kubelet[2770]: , failed to "StartContainer" for "grafana" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-grafana-amd64:v4.4.3\""
Sep 13 05:47:45 minikube kubelet[2770]: ]
Sep 13 05:47:46 minikube kubelet[2770]: E0913 05:47:46.925811    2770 pod_workers.go:190] Error syncing pod 4e42b819-27cb-4a83-87d5-d88e57f6d9ec ("heapster-7m9gx_kube-system(4e42b819-27cb-4a83-87d5-d88e57f6d9ec)"), skipping: failed to "StartContainer" for "heapster" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-amd64:v1.5.3\""
Sep 13 05:47:53 minikube kubelet[2770]: E0913 05:47:53.926455    2770 pod_workers.go:190] Error syncing pod 41686f4c-f2cf-4c89-bc4d-0a7a108a6261 ("nginx-ingress-controller-5d9cf9c69f-ghfl6_kube-system(41686f4c-f2cf-4c89-bc4d-0a7a108a6261)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with ImagePullBackOff: "Back-off pulling image \"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0\""
Sep 13 05:48:00 minikube kubelet[2770]: E0913 05:48:00.927955    2770 pod_workers.go:190] Error syncing pod eb7fc8a6-dabd-408f-b393-a2f94b10cf23 ("influxdb-grafana-7nxzj_kube-system(eb7fc8a6-dabd-408f-b393-a2f94b10cf23)"), skipping: [failed to "StartContainer" for "influxdb" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3\""
Sep 13 05:48:00 minikube kubelet[2770]: , failed to "StartContainer" for "grafana" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-grafana-amd64:v4.4.3\""
Sep 13 05:48:00 minikube kubelet[2770]: ]
Sep 13 05:48:01 minikube kubelet[2770]: E0913 05:48:01.926048    2770 pod_workers.go:190] Error syncing pod 4e42b819-27cb-4a83-87d5-d88e57f6d9ec ("heapster-7m9gx_kube-system(4e42b819-27cb-4a83-87d5-d88e57f6d9ec)"), skipping: failed to "StartContainer" for "heapster" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-amd64:v1.5.3\""
Sep 13 05:48:08 minikube kubelet[2770]: E0913 05:48:08.926733    2770 pod_workers.go:190] Error syncing pod 41686f4c-f2cf-4c89-bc4d-0a7a108a6261 ("nginx-ingress-controller-5d9cf9c69f-ghfl6_kube-system(41686f4c-f2cf-4c89-bc4d-0a7a108a6261)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with ImagePullBackOff: "Back-off pulling image \"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0\""
Sep 13 05:48:14 minikube kubelet[2770]: E0913 05:48:14.925991    2770 pod_workers.go:190] Error syncing pod 4e42b819-27cb-4a83-87d5-d88e57f6d9ec ("heapster-7m9gx_kube-system(4e42b819-27cb-4a83-87d5-d88e57f6d9ec)"), skipping: failed to "StartContainer" for "heapster" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-amd64:v1.5.3\""
Sep 13 05:48:15 minikube kubelet[2770]: E0913 05:48:15.927125    2770 pod_workers.go:190] Error syncing pod eb7fc8a6-dabd-408f-b393-a2f94b10cf23 ("influxdb-grafana-7nxzj_kube-system(eb7fc8a6-dabd-408f-b393-a2f94b10cf23)"), skipping: [failed to "StartContainer" for "influxdb" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3\""
Sep 13 05:48:15 minikube kubelet[2770]: , failed to "StartContainer" for "grafana" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-grafana-amd64:v4.4.3\""
Sep 13 05:48:15 minikube kubelet[2770]: ]
Sep 13 05:48:20 minikube kubelet[2770]: E0913 05:48:20.925925    2770 pod_workers.go:190] Error syncing pod 41686f4c-f2cf-4c89-bc4d-0a7a108a6261 ("nginx-ingress-controller-5d9cf9c69f-ghfl6_kube-system(41686f4c-f2cf-4c89-bc4d-0a7a108a6261)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with ImagePullBackOff: "Back-off pulling image \"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0\""
Sep 13 05:48:26 minikube kubelet[2770]: E0913 05:48:26.927436    2770 pod_workers.go:190] Error syncing pod eb7fc8a6-dabd-408f-b393-a2f94b10cf23 ("influxdb-grafana-7nxzj_kube-system(eb7fc8a6-dabd-408f-b393-a2f94b10cf23)"), skipping: [failed to "StartContainer" for "influxdb" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3\""
Sep 13 05:48:26 minikube kubelet[2770]: , failed to "StartContainer" for "grafana" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-grafana-amd64:v4.4.3\""
Sep 13 05:48:26 minikube kubelet[2770]: ]
Sep 13 05:48:28 minikube kubelet[2770]: E0913 05:48:28.925517    2770 pod_workers.go:190] Error syncing pod 4e42b819-27cb-4a83-87d5-d88e57f6d9ec ("heapster-7m9gx_kube-system(4e42b819-27cb-4a83-87d5-d88e57f6d9ec)"), skipping: failed to "StartContainer" for "heapster" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-amd64:v1.5.3\""
Sep 13 05:48:31 minikube kubelet[2770]: E0913 05:48:31.927910    2770 pod_workers.go:190] Error syncing pod 41686f4c-f2cf-4c89-bc4d-0a7a108a6261 ("nginx-ingress-controller-5d9cf9c69f-ghfl6_kube-system(41686f4c-f2cf-4c89-bc4d-0a7a108a6261)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with ImagePullBackOff: "Back-off pulling image \"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0\""
Sep 13 05:48:37 minikube kubelet[2770]: E0913 05:48:37.927462    2770 pod_workers.go:190] Error syncing pod eb7fc8a6-dabd-408f-b393-a2f94b10cf23 ("influxdb-grafana-7nxzj_kube-system(eb7fc8a6-dabd-408f-b393-a2f94b10cf23)"), skipping: [failed to "StartContainer" for "influxdb" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3\""
Sep 13 05:48:37 minikube kubelet[2770]: , failed to "StartContainer" for "grafana" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-grafana-amd64:v4.4.3\""
Sep 13 05:48:37 minikube kubelet[2770]: ]
Sep 13 05:48:42 minikube kubelet[2770]: E0913 05:48:42.926986    2770 pod_workers.go:190] Error syncing pod 4e42b819-27cb-4a83-87d5-d88e57f6d9ec ("heapster-7m9gx_kube-system(4e42b819-27cb-4a83-87d5-d88e57f6d9ec)"), skipping: failed to "StartContainer" for "heapster" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-amd64:v1.5.3\""
Sep 13 05:48:42 minikube kubelet[2770]: E0913 05:48:42.928291    2770 pod_workers.go:190] Error syncing pod 41686f4c-f2cf-4c89-bc4d-0a7a108a6261 ("nginx-ingress-controller-5d9cf9c69f-ghfl6_kube-system(41686f4c-f2cf-4c89-bc4d-0a7a108a6261)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with ImagePullBackOff: "Back-off pulling image \"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0\""
Sep 13 05:48:52 minikube kubelet[2770]: E0913 05:48:52.927873    2770 pod_workers.go:190] Error syncing pod eb7fc8a6-dabd-408f-b393-a2f94b10cf23 ("influxdb-grafana-7nxzj_kube-system(eb7fc8a6-dabd-408f-b393-a2f94b10cf23)"), skipping: [failed to "StartContainer" for "influxdb" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-influxdb-amd64:v1.3.3\""
Sep 13 05:48:52 minikube kubelet[2770]: , failed to "StartContainer" for "grafana" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-grafana-amd64:v4.4.3\""
Sep 13 05:48:52 minikube kubelet[2770]: ]
Sep 13 05:48:55 minikube kubelet[2770]: E0913 05:48:55.926944    2770 pod_workers.go:190] Error syncing pod 4e42b819-27cb-4a83-87d5-d88e57f6d9ec ("heapster-7m9gx_kube-system(4e42b819-27cb-4a83-87d5-d88e57f6d9ec)"), skipping: failed to "StartContainer" for "heapster" with ImagePullBackOff: "Back-off pulling image \"k8s.gcr.io/heapster-amd64:v1.5.3\""
Sep 13 05:48:55 minikube kubelet[2770]: E0913 05:48:55.926983    2770 pod_workers.go:190] Error syncing pod 41686f4c-f2cf-4c89-bc4d-0a7a108a6261 ("nginx-ingress-controller-5d9cf9c69f-ghfl6_kube-system(41686f4c-f2cf-4c89-bc4d-0a7a108a6261)"), skipping: failed to "StartContainer" for "nginx-ingress-controller" with ImagePullBackOff: "Back-off pulling image \"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0\""

==> kubernetes-dashboard <==
2019/09/13 05:43:54 Starting overwatch
2019/09/13 05:43:54 Using in-cluster config to connect to apiserver
2019/09/13 05:43:54 Using service account token for csrf signing
2019/09/13 05:43:54 Successful initial request to the apiserver, version: v1.15.2
2019/09/13 05:43:54 Generating JWE encryption key
2019/09/13 05:43:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/09/13 05:43:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/09/13 05:43:55 Storing encryption key in a secret
2019/09/13 05:43:55 Creating in-cluster Heapster client
2019/09/13 05:43:55 Serving insecurely on HTTP port: 9090
2019/09/13 05:43:55 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:44:25 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:44:55 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:45:25 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:45:55 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:46:25 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:46:55 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:47:25 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:47:55 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:48:25 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.
2019/09/13 05:48:55 Metric client health check failed: the server is currently unable to handle the request (get services heapster). Retrying in 30 seconds.

==> storage-provisioner <==

The operating system version:

macOS 10.14.6
minikube 1.3.1

@tstromberg
Contributor

tstromberg commented Sep 13, 2019

I don’t yet have a clear way to replicate this issue. Do you mind adding some more details? Here is the information that would be helpful:

  • The output of /usr/local/bin/docker-machine-driver-hyperkit version
  • The output of hyperkit -version
  • The exact minikube start command line used
  • The full output of the minikube start command

My apologies for minikube not working well for you yet with hyperkit. I think we can get to the bottom of this issue quickly. This seems to indicate that the built-in hyperkit DNS proxy isn't working:

Sep 13 05:47:01 minikube dockerd[1904]: time="2019-09-13T05:47:01.930875309Z" level=error msg="Handler for POST /images/create returned error: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.10:40044->192.168.64.1:53: read: connection refused"
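For context, those refused lookups go to whatever address the VM's resolver is configured with, which here is the hyperkit gateway. A quick way to confirm that (a sketch, based on the defaults visible in these logs) is:

# Inside the VM, the resolver should point at the hyperkit gateway, i.e. the
# same 192.168.64.1:53 endpoint that is refusing connections in the log above.
$ minikube ssh cat /etc/resolv.conf
nameserver 192.168.64.1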

@tstromberg tstromberg added co/hyperkit Hyperkit related issues triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Sep 13, 2019
@kvokka
Author

kvokka commented Sep 13, 2019

Thank you for such a quick response!

Yes, it looks like it does not work with external addresses.

Here are all outputs:

$ /usr/local/bin/docker-machine-driver-hyperkit version
version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631
$ hyperkit -version
hyperkit: v0.20190201-11-gc0dd46

Homepage: https://github.com/docker/hyperkit
License: BSD
minikube start --vm-driver=hyperkit
$ minikube start --vm-driver=hyperkit
😄  minikube v1.3.1 on Darwin 10.14.6
🔥  Creating hyperkit VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

@tstromberg
Contributor

OK, nothing unusual about your configuration as far as I can see yet. There likely isn't any need to set NO_PROXY as in your first example. The next step is to figure out if this is just DNS, or a failure to route any packets:

Do you mind sharing the output of:

minikube ssh ping 8.8.8.8
minikube ssh nslookup google.com 8.8.8.8
ps -afe | egrep -i 'hyperkit|InternetSharing'
sudo lsof -i :53

@kvokka
Author

kvokka commented Sep 13, 2019

NO_PROXY was an attempt to fix it; without this variable I got the same result.

After creating another minikube machine with minikube start -p foo --vm-driver=hyperkit, it works. Unfortunately, I cannot remove the VM in the default profile at the moment, but I may do the tests later.

If you need the tests, just let me know; otherwise you can close the issue.

Thank you for the help!

@kvokka
Author

kvokka commented Sep 16, 2019

$ minikube delete -p dev
🔥  Deleting "dev" in hyperkit ...
💔  The "dev" cluster has been deleted.
$ minikube start --vm-driver=hyperkit -p dev
😄  [dev] minikube v1.3.1 on Darwin 10.14.6
🏃  Using the running hyperkit "dev" VM ...
⌛  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🔄  Relaunching Kubernetes using kubeadm ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "dev"
$ minikube ssh -p dev ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=53 time=41.225 ms
$ minikube ssh -p dev nslookup google.com 8.8.8.8
Server:    8.8.8.8
Address 1: 8.8.8.8 dns.google

Name:      google.com
Address 1: 172.217.168.174 mad07s10-in-f14.1e100.net
Address 2: 2a00:1450:4003:809::200e mad08s05-in-x0e.1e100.net

$ ps -afe | egrep -i 'hyperkit|InternetSharing'
    0 69938     1   0  9:23am ??         0:00.79 /usr/libexec/InternetSharing
  501 79005 79001   0 12:25pm ??        57:42.39 com.docker.hyperkit -A -u -F vms/0/hyperkit.pid -c 10 -m 6144M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=vpnkit.eth.sock,uuid=85cedf19-2c30-42f6-a55b-cc542bb91825 -U c4493641-002d-4252-923c-f65f5956e74a -s 2:0,ahci-hd,/Users/mike/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw -s 3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525 -s 4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-desktop.iso -s 5,ahci-cd,vms/0/config.iso -s 6,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker.iso -s 7,virtio-rnd -l com1,autopty=vms/0/tty,asl -f bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,
    0   438     1   0  9:43am ttys011    3:26.61 /usr/local/bin/hyperkit -A -u -F /Users/mike/.minikube/machines/dev/hyperkit.pid -c 2 -m 2000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 043dbca8-d85e-11e9-b896-acde48001122 -s 2:0,virtio-blk,/Users/mike/.minikube/machines/dev/dev.rawdisk -s 3,ahci-cd,/Users/mike/.minikube/machines/dev/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/mike/.minikube/machines/dev/tty,log=/Users/mike/.minikube/machines/dev/console-ring -f kexec,/Users/mike/.minikube/machines/dev/bzimage,/Users/mike/.minikube/machines/dev/initrd,earlyprintk=serial loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=dev
  501  9726 96089   0  9:51am ttys011    0:00.00 egrep -i hyperkit|InternetSharing

$ sudo lsof -i :53
COMMAND PID   USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
dnsmasq  55 nobody    4u  IPv4 0xe69bc567e18dbecb      0t0  UDP localhost:domain
dnsmasq  55 nobody    5u  IPv4 0xe69bc567edaf5fcb      0t0  TCP localhost:domain (LISTEN)
dnsmasq  55 nobody    6u  IPv6 0xe69bc567e18dcf1b      0t0  UDP localhost:domain
dnsmasq  55 nobody    7u  IPv6 0xe69bc567edafa2cb      0t0  TCP localhost:domain (LISTEN)
dnsmasq  55 nobody    8u  IPv6 0xe69bc567e18dd1d3      0t0  UDP localhost:domain
dnsmasq  55 nobody    9u  IPv6 0xe69bc567edaf9d0b      0t0  TCP localhost:domain (LISTEN)

Reproduced the issue (with the same profile I got the same result). It looks like it dies after the second run.

With the VirtualBox driver everything still works.

@tstromberg
Contributor

Thank you for the additional info. I'd be willing to bet that hyperkit's DNS server is conflicting with dnsmasq. This is a known issue:

https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/#Issues

Here's some background: #3036

Can you try turning off dnsmasq to confirm? Alternatively, it should be possible to change dnsmasq to bind only to a certain IP, such as 127.0.0.1, so that it does not conflict. Please let me know how it goes.
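One minimal way to do that (a sketch, assuming a Homebrew-installed dnsmasq as used later in this thread; the config path may differ) is to restrict dnsmasq to loopback and restart it:

# /usr/local/etc/dnsmasq.conf
# Bind dnsmasq to the loopback address only, so it does not interfere with
# hyperkit's own DNS service.
listen-address=127.0.0.1
bind-interfaces

# restart the Homebrew service so the change takes effect
sudo launchctl stop homebrew.mxcl.dnsmasq
sudo launchctl start homebrew.mxcl.dnsmasq

Note that the fix the reporter eventually used (below) goes the other way: it makes dnsmasq itself answer on the hyperkit gateway address.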

@kvokka
Author

kvokka commented Sep 16, 2019

Thank you @tstromberg for the link! It helped; I will provide the steps that fixed it below.

Upgraded dnsmasq from 2.79 to 2.80 with brew upgrade dnsmasq

$ cat /usr/local/etc/dnsmasq.conf
address=/.loc/127.0.0.1

So I ran:

minikube delete && rm -rf ~/.minikube ~/.kube
sudo rm /var/db/dhcpd_leases
mkdir -p /usr/local/etc/dnsmasq.d/minikube              
echo 'listen-address=192.168.64.1' > /usr/local/etc/dnsmasq.d/minikube/minikube.conf
sudo launchctl stop homebrew.mxcl.dnsmasq
sudo launchctl start homebrew.mxcl.dnsmasq

I don't know exactly which step helped, but I hope this chunk of commands may help somebody.
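A quick verification pass (a sketch; it assumes the cluster was recreated after the steps above) would be:

# on the host, dnsmasq should now also answer on the hyperkit gateway
$ sudo lsof -i :53

# inside the VM, DNS and the image registries should be reachable again
$ minikube ssh nslookup google.com
$ minikube ssh curl -sI https://k8s.gcr.io/v2/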

@tstromberg
Contributor

Excellent. I'm glad you were able to get this worked out!

@kvokka
Author

kvokka commented Sep 24, 2019

The IP address of the minikube host may vary, so use your minikube IP instead for the listen-address setting.
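For example (a sketch, since the subnet depends on the local hyperkit setup), the address the VM resolves against is normally the .1 of the subnet reported by minikube ip:

$ minikube ip
192.168.64.10
# the host side of this network is 192.168.64.1 here, so that is the
# value to put in listen-address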

@MayukhSobo

This did not help me... I am moving to the VirtualBox driver.
