
MacOS + hyperkit + Cisco AnyConnect (or maybe mDNSRespo) = 192.168.64.1:53: read: connection refused #13497

Closed
alexec opened this issue Jan 26, 2022 · 6 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.


alexec commented Jan 26, 2022

What Happened?

(⎈ |N/A:default)➜  ~ minikube start --driver=hyperkit
😄  minikube v1.25.1 on Darwin 11.6.2
✨  Using the hyperkit driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
❗  Certificate client.crt has expired. Generating a new one...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
(⎈ |minikube:default)➜  ~ eval $(minikube -p minikube docker-env)

(⎈ |minikube:default)➜  ~ docker pull minio
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.14:48046->192.168.64.1:53: read: connection refused
(⎈ |minikube:default)➜  ~ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ curl https://k8s.gcr.io
curl: (6) Could not resolve host: k8s.gcr.io

Attach the log file

REMOVED

Operating System

macOS (Default)

Driver

HyperKit
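
As a host-side sanity check (sketched here for reference, not part of the original report), macOS can show which resolvers are actually in use and which process owns port 53:

    # On the macOS host: list the resolvers the system is configured with
    # (shows whether AnyConnect or another client has replaced them)
    scutil --dns | head -n 20
    # Show which process is bound to the DNS port (compare with the lsof output below)
    sudo lsof -nP -i :53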

alexec commented Jan 26, 2022

See #5336

alexec commented Jan 26, 2022

(⎈ |minikube:default)➜  ~  sudo lsof -ni:53

COMMAND    PID           USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
mDNSRespo  236 _mdnsresponder   55u  IPv6 0xf4e73e5eeda590b9      0t0  UDP *:domain
mDNSRespo  236 _mdnsresponder   57u  IPv6 0xf4e73e5ececb1419      0t0  TCP *:domain (LISTEN)
dnscrypt- 9135         nobody   44u  IPv4 0xf4e73e5ed25a3d79      0t0  UDP 127.0.0.1:domain
dnscrypt- 9135         nobody   50u  IPv4 0xf4e73e5ee32922a9      0t0  TCP 127.0.0.1:domain (LISTEN)
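
To confirm that the gateway resolver (rather than outbound connectivity) is what is failing, an illustrative check from inside the VM is to query it directly and then query a public resolver; the addresses below are taken from the error message above:

    # Query the hyperkit gateway resolver directly (expected to fail here)
    minikube ssh -- nslookup registry-1.docker.io 192.168.64.1
    # Query a public resolver to confirm outbound UDP/53 from the VM works
    minikube ssh -- nslookup registry-1.docker.io 8.8.8.8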

alexec commented Jan 26, 2022

I think this will fix it by bypassing the DNS server:

    minikube start --driver=hyperkit
    minikube ssh sudo resolvectl dns eth0 8.8.8.8 8.8.4.4
    minikube ssh sudo resolvectl dns docker0 8.8.8.8 8.8.4.4
    minikube ssh sudo resolvectl dns sit0 8.8.8.8 8.8.4.4
    eval $(minikube -p minikube docker-env)
    docker pull minio/minio
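
Note that per-link resolvectl settings last only until the next boot. A sketch of a more persistent variant, assuming the minikube VM runs systemd-resolved (which the working resolvectl calls suggest):

    # Set global DNS in systemd-resolved instead of per-link
    # (assumes systemd-resolved inside the VM; still lost if the VM is deleted and recreated)
    minikube ssh -- "echo 'DNS=8.8.8.8 8.8.4.4' | sudo tee -a /etc/systemd/resolved.conf"
    minikube ssh -- sudo systemctl restart systemd-resolved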

@alexec alexec changed the title MacOS + hyperkit + Cisco AnyConnect = 192.168.64.1:53: read: connection refused MacOS + hyperkit + Cisco AnyConnect (or maybe mDNSRespo) = 192.168.64.1:53: read: connection refused Jan 26, 2022
@HarikrishnanBalagopal

> I think this will fix it by bypassing the DNS server:
>
>     minikube start --driver=hyperkit
>     minikube ssh sudo resolvectl dns eth0 8.8.8.8 8.8.4.4
>     minikube ssh sudo resolvectl dns docker0 8.8.8.8 8.8.4.4
>     minikube ssh sudo resolvectl dns sit0 8.8.8.8 8.8.4.4
>     eval $(minikube -p minikube docker-env)
>     docker pull minio/minio

This works, although I had to run each command separately. Copy-pasting the entire thing resulted in:

$     minikube start --driver=hyperkit
    minikube ssh sudo resolvectl dns eth0 8.8.8.8 8.8.4.4
    minikube ssh sudo resolvectl dns docker0 8.8.8.8 8.8.4.4
    minikube ssh sudo resolvectl dns sit0 8.8.8.8 8.8.4.4
    eval $(minikube -p minikube docker-env)
    docker pull minio/minio
😄  minikube v1.24.0 on Darwin 12.1
✨  Using the hyperkit driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.3:50500->192.168.64.1:53: i/o timeout
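
One plausible explanation for the paste failing is that `minikube start` consumed the remaining pasted lines from stdin, so the resolvectl steps never ran. A sketch that chains the same commands so they can be pasted as one block:

    # Chain with && so each step runs only after the previous one succeeds
    minikube start --driver=hyperkit && \
    minikube ssh -- sudo resolvectl dns eth0 8.8.8.8 8.8.4.4 && \
    minikube ssh -- sudo resolvectl dns docker0 8.8.8.8 8.8.4.4 && \
    minikube ssh -- sudo resolvectl dns sit0 8.8.8.8 8.8.4.4 && \
    eval $(minikube -p minikube docker-env) && \
    docker pull minio/minio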

@HarikrishnanBalagopal

> I think this will fix it by bypassing the DNS server:
>
>     minikube start --driver=hyperkit
>     minikube ssh sudo resolvectl dns eth0 8.8.8.8 8.8.4.4
>     minikube ssh sudo resolvectl dns docker0 8.8.8.8 8.8.4.4
>     minikube ssh sudo resolvectl dns sit0 8.8.8.8 8.8.4.4
>     eval $(minikube -p minikube docker-env)
>     docker pull minio/minio

Also, for me the sit0 interface seems to be missing, though I didn't get a non-zero exit code:

$ minikube ssh ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:4C:E4:88:03  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1247 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1418 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:104654 (102.2 KiB)  TX bytes:135658 (132.4 KiB)

eth0      Link encap:Ethernet  HWaddr 8A:9C:7D:6E:CC:D5  
          inet addr:192.168.64.3  Bcast:192.168.64.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:375579 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23380 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:551643939 (526.0 MiB)  TX bytes:1927248 (1.8 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:162117 errors:0 dropped:0 overruns:0 frame:0
          TX packets:162117 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:31641902 (30.1 MiB)  TX bytes:31641902 (30.1 MiB)

vethbbc7cbe Link encap:Ethernet  HWaddr 12:02:0E:AB:9B:82  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1247 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1418 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:122112 (119.2 KiB)  TX bytes:135658 (132.4 KiB)

$ minikube ssh sudo resolvectl dns sit0 8.8.8.8 8.8.4.4
$ echo $?
0
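
Two caveats here: `ifconfig` without `-a` omits interfaces that are down, so sit0 may exist but be down rather than missing; and depending on the minikube version, `minikube ssh` may not propagate the remote command's exit status, so the 0 is not conclusive. An illustrative way to verify the settings actually applied:

    # List per-link DNS servers as systemd-resolved sees them
    minikube ssh -- resolvectl dns
    # Or inspect a single link in detail
    minikube ssh -- resolvectl status eth0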

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 8, 2022
@alexec alexec closed this as completed May 8, 2022