
[BUG] DNS not resolving #209

Closed
luisdavim opened this issue Mar 24, 2020 · 91 comments · Fixed by #721
Labels: bug, help wanted, priority/medium, runtime (Issue with the container runtime (docker))

Comments

@luisdavim

What did you do?

  • How was the cluster created?
 k3d create -n mycluster
  • What did you do afterwards?

Start a pod and try a DNS query:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='mycluster')"
$ kubectl run --restart=Never --rm -i --tty tmp --image=alpine -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup www.gmail.com
Server:         10.43.0.10
Address:        10.43.0.10:53

;; connection timed out; no servers could be reached

/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
/ # exit

Exec into the k3d container and do the same DNS query:

docker exec -it k3d-endpoint-server sh
/ # nslookup www.gmail.com
Server:         127.0.0.11
Address:        127.0.0.11:53

Non-authoritative answer:
www.gmail.com   canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name:   googlemail.l.google.com
Address: 172.217.164.101

Non-authoritative answer:
www.gmail.com   canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name:   googlemail.l.google.com
Address: 2607:f8b0:4005:80b::2005

/ # cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
/ # exit

What did you expect to happen?
I would expect the pods in the k3d cluster to be able to resolve DNS names

Which OS & Architecture?
MacOS 10.15.3

Which version of k3d?

  • output of k3d --version
k3d version v1.7.0

Which version of docker?

  • output of docker version
docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:22:34 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:29:19 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
@luisdavim luisdavim added the bug Something isn't working label Mar 24, 2020
@iwilltry42 (Member)

Hi there, thanks for opening this issue.
As mentioned in the related issue (#101 (comment)), CoreDNS is doing the name resolution inside the cluster.
I couldn't reproduce this on Linux (with the same docker and k3d versions), so I guess that the difference in DNS settings is caused by Docker for Desktop.

@iwilltry42 iwilltry42 added the runtime Issue with the container runtime (docker) label Mar 25, 2020
@luisdavim (Author)

To work around this issue, I'm patching the CoreDNS ConfigMap.

@iwilltry42 (Member)

@luisdavim what's the patch that you apply?

@consideRatio

@luisdavim I'm also interested in what patch you applied

@irizzant commented Jul 23, 2020

I'm pretty sure that adapting the forward . /etc/resolv.conf part is enough.

This lets k8s use the machine's DNS when a name cannot be resolved internally.

I think this should be the default behaviour.

@irizzant

After more investigation I found that this could be related to the way k3d creates the docker network.

Indeed, k3d creates a custom Docker network for each cluster, and when this happens name resolution goes through the Docker daemon. The requests are forwarded to the DNS servers configured in your host's resolv.conf, but through a single DNS server (Docker's embedded one).

This means that if your daemon.json is, like mine, not configured to provide extra DNS servers, it defaults to 8.8.8.8, which does not resolve any company-internal addresses, for example.

It would be useful to have a custom option to pass to k3d when it starts the cluster and specify the DNS servers there, as proposed in #165
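
For anyone taking the daemon.json route described above, a minimal sketch (10.0.0.53 is a placeholder for your own DNS server; the path assumes a standard Linux Docker setup):

cat /etc/docker/daemon.json
{
  "dns": ["10.0.0.53", "8.8.8.8"]
}

After editing the file, restart the Docker daemon (e.g. sudo systemctl restart docker) and re-create the cluster, so Docker's embedded DNS forwards unresolved names to these servers instead of falling back to 8.8.8.8.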

@iwilltry42 (Member)

Thanks for your additional input @irizzant , how would you add additional DNS servers here on k3d's side?
The network opts of docker don't seem to have such an option (after having a quick glance over what's available there) 🤔

@irizzant

Personally I fixed this by patching the CoreDNS ConfigMap, changing

forward . /etc/resolv.conf

to:

forward . /etc/resolv.conf xxx.xxx.xxx.xxx

replacing the x's with the IPs of your DNS servers.

I had a quick look at the Docker options and I confirm that I don't see an option to configure custom DNS servers on the Docker network.

Maybe a feasible option would be to add a custom flag to the k3d command which adds the custom DNS servers to the CoreDNS ConfigMap directly.
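
A minimal sketch of that manual patch, assuming the stock coredns ConfigMap in kube-system and using 10.0.0.53 as a placeholder DNS server IP:

kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
# edit the Corefile entry:  forward . /etc/resolv.conf  ->  forward . /etc/resolv.conf 10.0.0.53
kubectl -n kube-system apply -f coredns.yaml
kubectl -n kube-system rollout restart deployment coredns

Keep in mind that k3s may re-apply its bundled CoreDNS manifest (e.g. on server restart) and overwrite the change, so this is a workaround rather than a durable fix.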

@iwilltry42 (Member)

Currently, k3d doesn't interact with any Kubernetes resources inside the cluster (i.e. in k3s), and I tend to avoid doing so because of the huge Kubernetes library dependencies it could draw in. Upon cluster creation this could work, however, by modifying the chart that's being auto-deployed by k3s. Not sure if this could go into k3s itself instead 🤔

@iwilltry42 iwilltry42 added this to the 3.2.0 milestone Sep 2, 2020
@iwilltry42 iwilltry42 self-assigned this Sep 2, 2020
@irizzant commented Sep 2, 2020

Maybe interacting with k8s itself isn't needed.
k3s deploys whatever is under /var/lib/rancher/k3s/server/manifests, so you could add a valid CoreDNS configuration, customizing just the DNS part according to the command-line flag.
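
A rough sketch of that mechanism (this is not an existing k3d flag, just an illustration of the idea; the file name coredns-custom.yaml and whatever DNS values it would carry are placeholders): bind-mount a manifest into the k3s auto-deploy directory at cluster creation, e.g.

k3d cluster create mycluster \
  --volume "$PWD/coredns-custom.yaml:/var/lib/rancher/k3s/server/manifests/coredns-custom.yaml@server:0"

Whether such a manifest cleanly overrides the CoreDNS ConfigMap that k3s itself ships is untested here, since k3s may re-apply its own copy.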

@dminca commented Sep 10, 2020

This only happens to me if I deploy something in the default namespace; in other namespaces it worked just fine, I don't know why.

Later edit: I just noticed it doesn't matter which namespace you deploy to, it's about the network you're on. So if I'm in the office (company LAN) I get this issue, but when I try it from home, it simply works. And I can't say what network restrictions the company has applied 😄

Also, @Athosone's solution works for me now.

@YAMLcase

I seem to be running into this issue about every couple of weeks. This is the only workaround that seems to "just work" so I can get back to the job I'm paid to do:

sudo iptables -t nat -A PREROUTING -p udp -d 8.8.8.8  --dport 53 -j DNAT --to <your DNS server IP>
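
(As an aside: that rule DNATs any UDP DNS query addressed to 8.8.8.8, the default Docker DNS fallback, to your own server. The same rule spec with -D instead of -A removes it again, and it does not survive a reboot unless you persist it yourself:

sudo iptables -t nat -D PREROUTING -p udp -d 8.8.8.8  --dport 53 -j DNAT --to <your DNS server IP>
)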

@eigood commented Sep 23, 2020

This seems to be broken because the CoreDNS pod does not have an /etc/resolv.conf in it, while the ConfigMap is configured to forward to that file. All the docs I've found told me that CoreDNS will use the $HOST resolv.conf, but when I use k3d, which uses k3s, the CoreDNS "pod" doesn't run as a Docker container or as a process on the $HOST. It runs as a process under containerd, and therefore it doesn't get any of the correct settings.

@Athosone commented Oct 1, 2020

For those who have the problem, a simple fix is to mount your /etc/resolv.conf into the cluster:

k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf

@YAMLcase commented Oct 1, 2020

What does that --volume flag do exactly? I've used it to take advantage of other things (registries.yaml, etc.) but haven't taken the time to dig into what it all gets mapped to.

@Athosone commented Oct 1, 2020

From what I understand there is nothing special about it. It just mounts the volume into the container running the k3s server.
Thus I guess you could mount anything, and maybe even do an air-gapped setup.

@iwilltry42 (Member)

Hey folks, sorry for the radio silence, just getting back to k3d now...
Let me reply to some of the messages here.


@irizzant

Maybe interacting with k8s itself it's not needed.
k3s deploys whatever is under /var/lib/rancher/k3s/server/manifests, so you could add a valid CoreDns configuration just customizing the DNS part according to the command line flag.

That's a good starting point. Unfortunately, this would require us to write the file to disk and bind-mount it into the container, as exec'ing into it afterwards to update the ConfigMap manifest wouldn't update the actual thing inside the cluster (IIRC, there is no loop to do so). It's definitely doable, but we'd need to keep state somewhere and react to changes k3s makes to the auto-deploy manifests.


@dminca

This only happens to me if I deploy something in the default namespace; in other namespaces it worked just fine, I don't know why.

This is honestly the weirdest thing in this thread 🤔 No clue what's going on there...


@YAMLcase

I seem to be running into this issue about every couple of weeks. This is the only workaround that seems to "just work" so I can get back to the job I'm paid to do:

sudo iptables -t nat -A PREROUTING -p udp -d 8.8.8.8  --dport 53 -j DNAT --to <your DNS server IP>

Are you executing this on your local host (I assume so because of the sudo) to just route all the Google-DNS traffic (default Docker DNS) to your own DNS server?


@eigood & @Athosone

This seems to be broken because the CoreDNS pod does not have an /etc/resolv.conf in it, while the ConfigMap is configured to forward to that file. All the docs I've found told me that CoreDNS will use the $HOST resolv.conf, but when I use k3d, which uses k3s, the CoreDNS "pod" doesn't run as a Docker container or as a process on the $HOST. It runs as a process under containerd, and therefore it doesn't get any of the correct settings.

What do you mean by "it doesn't run as a container"? It surely is running in a container 🤔

For those who have the problem, a simple fix is to mount your /etc/resolv.conf into the cluster:

k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf

Also, those two statements seem to conflict, right?


@YAMLcase

What does that --volume flag do exactly? I've used it to take advantage of other things (registries.yaml, etc.) but haven't taken the time to dig into what it all gets mapped to.

It's doing basically the same as docker's --volume flag: bind-mounting a file or a directory into one or more containers, overlaying anything that might already be there (you can specify the access mode).
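
For illustration, a hedged example of the same flag with a node filter (registries.yaml is the file mentioned above, and /etc/rancher/k3s/registries.yaml is the path k3s reads it from):

k3d cluster create --volume "$PWD/registries.yaml:/etc/rancher/k3s/registries.yaml@server:0"

Without the @server:0 node filter, the bind mount is applied to all nodes of the cluster.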


Is anyone experiencing this on a Linux machine or in WSL2, i.e. in Docker versions which do not run inside a VM?
I could imagine that we could modify the CoreDNS ConfigMap in the running cluster, like we might do to inject the host.k3d.internal entry in #360.
However, just mounting in your resolv.conf might be the easiest solution 🤔

@Athosone commented Oct 2, 2020

Well, maybe it conflicts, but for me it solves the problem.
For information, I am running in WSL2.
I don't know why it doesn't manage to use the right resolv.conf if I don't mount it.

@iwilltry42 (Member)

@Athosone, I meant that CoreDNS does indeed pick up the resolv.conf, since your solution works 👍
I guess the problem is more that the "default" resolv.conf in the k3s container (node) does not work for it (and that's the one you're effectively replacing with the volume mount) 👍

@irizzant commented Oct 2, 2020

Just as a clarification: if you modify the Docker daemon configuration (daemon.json) to add the company DNS and then launch the k3d cluster, you can "docker exec" into the containers where k8s is running and you'll see that nslookup finds DNS entries served by the company DNS.
Consequently, the Docker container in which k8s is running picks up the right resolv.conf configuration.

The problem is that the default CoreDNS configuration in k8s just has:

forward . /etc/resolv.conf

which makes CoreDNS use the resolv.conf visible inside its own pod, bypassing the Docker one.

@eigood commented Oct 2, 2020

When I run k3d cluster create, I can certainly volume-mount $HOST files into the Docker containers (agent + server). However, containerd is then used to start the CoreDNS pod, running inside k8s. It is this internal container that needs to have an /etc/resolv.conf. Nothing I do to the k3d command will allow me to adjust the internal containerd pod that is created.

I did a bit of research to confirm that this was the case, by figuring out where containerd stores its filesystems.

@eshepelyuk commented Oct 22, 2020

I am running k3d in Docker that runs in a VM in VirtualBox (actually I'm using the Docker Toolbox for Windows product, which does all the setup).

And I'm experiencing the same problem:

  • inside the k3d Docker container (rancher/k3s:v1.18.8-k3s1), resolution of external and custom (private VPN) domains works
  • inside pods, it fails.

I can see that my VirtualBox VM and all containers running inside it are using the proper DNS configuration, but the pods are not.

@iwilltry42 iwilltry42 added the help wanted Extra attention is needed label Oct 22, 2020
@avodaqstephan commented Oct 29, 2020

Mounting the resolv.conf works, but this can't be the best solution. If you missed that opportunity at the beginning, you need to re-create the k3d cluster just to mount that volume.

forward . /etc/resolv.conf xxx.xxx.xxx.xxx

This one did not work for me.

Edit: forward is working, but I need to place my DNS server in front:

forward . xxx.xxx.xxx.xxx /etc/resolv.conf

@szapps commented Oct 31, 2020

For those who have the problem, a simple fix is to mount your /etc/resolv.conf into the cluster:

k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf

VERY HELPFUL HINT

@indistinctTalk

@iwilltry42, awesome, running the above has both external DNS and host.k3d.internal working without any patching required. I'll keep using it and see how it goes.

k3d: v5.0.0-rc.5
os: MacOS Big Sur 11.6
docker: Docker for Desktop 20.10.8

On a side note, I'm happy to help out in general so feel free to tag me.

@iwilltry42 (Member)

The fix is working and tested on:

  • Linux (Ubuntu): tested on my own machine and in a VM
  • MacOS 10 & 11: tested in a VM and on "real" machines by community
  • Windows w/ WSL2: tested on a "real" machine and in a VM

The K3D_FIX_DNS "feature" flag environment variable will stay at least until v5.1.0 to ensure it's stable. Once we see that it's working out for everyone, it will be promoted and made a fixed (and required) step of the cluster initialization.

@gioppoluca

Executed on Ubuntu 21.04 with a VPN using openconnect:
export K3D_FIX_DNS=1 && k3d cluster create test
But it cannot download images from resources inside the VPN.

@gioppoluca

@iwilltry42 seems that the fix is not working. I'm on 5.0.0-rc5

@iwilltry42 (Member)

@gioppoluca , can you please provide some more information?
Is your VPN automatically pushing DNS settings?
What are the logs there? (preferably with --verbose or --trace set)
What is the error you're getting there? What images are you referring to and from where do you try to download them?
Feel free to send me a message on Slack (slack.rancher.io) for easier communication 👍

@ghost commented Nov 5, 2021

The fix with the environment variable doesn't seem to be working for me either.
Apple laptop with an M1 CPU running Docker Desktop:

❯ system_profiler SPSoftwareDataType
Software:

    System Software Overview:

      System Version: macOS 11.6 (20G165)
      Kernel Version: Darwin 20.6.0

❯ k3d --version

k3d version v5.0.3
k3s version latest (default)
❯ k3d cluster create test


INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-test'
INFO[0000] Created volume 'k3d-test-images'
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-test-tools'
INFO[0001] Creating node 'k3d-test-server-0'
INFO[0001] Creating LoadBalancer 'k3d-test-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
WARN[0001] failed to resolve 'host.docker.internal' from inside the k3d-tools node: Failed to read address for 'host.docker.internal' from command output
INFO[0001] HostIP: using network gateway...
INFO[0001] Starting cluster 'test'
INFO[0001] Starting servers...
INFO[0001] Deleted k3d-test-tools
INFO[0001] Starting Node 'k3d-test-server-0'
INFO[0005] Starting agents...
INFO[0005] Starting helpers...
INFO[0005] Starting Node 'k3d-test-serverlb'
INFO[0012] Injecting '172.27.0.1 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0012] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
INFO[0012] Cluster 'test' created successfully!

Also, the fix with the volume mount doesn't seem to be working:

❯ k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf


WARN[0000] No node filter specified
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
WARN[0001] failed to resolve 'host.docker.internal' from inside the k3d-tools node: Failed to read address for 'host.docker.internal' from command output
INFO[0001] HostIP: using network gateway...
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Deleted k3d-k3s-default-tools
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
ERRO[0001] Failed Cluster Start: Failed to start server k3d-k3s-default-server-0: Node k3d-k3s-default-server-0 failed to get ready: error waiting for log line `k3s is up and running` from node 'k3d-k3s-default-server-0': stopped returning log lines
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'k3s-default'
INFO[0001] Deleted k3d-k3s-default-serverlb
INFO[0001] Deleted k3d-k3s-default-server-0
INFO[0001] Deleting cluster network 'k3d-k3s-default'
INFO[0001] Deleting image volume 'k3d-k3s-default-images'
FATA[0001] Cluster creation FAILED, all changes have been rolled back!

@iwilltry42 (Member)

Hi @parg0MakSystem, can you provide some more information as per #209 (comment), please?

Additionally, you're hitting another issue here:

Also, the fix with volume creation also doesn't seem to be working:
❯ k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf
ERRO[0001] Failed Cluster Start: Failed to start server k3d-k3s-default-server-0: Node k3d-k3s-default-server-0 failed to get ready: error waiting for log line k3s is up and running from node 'k3d-k3s-default-server-0': stopped returning log lines

This is because with K3D_FIX_DNS, k3d writes an entrypoint script that modifies /etc/resolv.conf inside the container, which won't work if it's a mounted file (just implemented a quick check in 3cc4c5c).
Even with K3D_FIX_DNS=0 this may fail, but this time at the loadbalancer, which you'd have to exclude from the volume mount (as otherwise it doesn't use the docker resolver and cannot resolve the names of the other containers anymore): k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf@server:0

@gbonnefille (Contributor)

Sorry for reopening this issue, but I'm really disappointed.

I'm running k3d behind a corporate DNS + HTTP proxy.
Of course, without any configuration, the cluster fails to download the images of the k8s components (traefik, coredns...).
When setting the proxy and K3D_FIX_DNS=1, the cluster state is much better, as all components run.
But when loading a workload requiring Internet access (in initContainers), it appears it is unable to resolve the DNS name of the proxy server, while it is able to resolve the DNS name of Google's server (as an example).

I suspect that CoreDNS is still unable to use my local host's DNS resolution (Ubuntu 20.04).
I'm still unable to explain how it is able to resolve public DNS names: my corporate network blocks any DNS requests, and the CoreDNS config file does not mention any DoH provider.

Any help appreciated.

PS: I restarted the cluster between its creation and adding the workload: is there any impact related to the K3D_FIX_DNS feature?

@gbonnefille (Contributor)

Sorry (again): it seems my previous issue was due to the Docker daemon. I'm using a VPN. It seems the Docker daemon picks up the DNS resolution config at startup (outside the VPN) and does not update it when the VPN is activated; it continues to use the DNS of my local network, not the corporate one.
A simple restart of the Docker daemon while the VPN is active solves the issue.

@bayeslearner commented Apr 1, 2022

I shouldn't start a separate thread, so I'm adding my comment here. The new feature works for me. My set-up is described in #1042. After setting the environment variable and creating a test cluster, I saw this:

export K3D_FIX_DNS=1 && k3d cluster create test
kubectl run -it --image=rockylinux rockylinux -- bash
[rockylinux@rockylinux8 infra_k3d]$ kubectl exec -it rockylinux -- bash
[root@rockylinux /]#
[root@rockylinux /]# ping splunkapp-preprod02.med.umich.edu
PING splunkapp-preprod02.med.umich.edu (172.20.30.106) 56(84) bytes of data.
From k3d-test-server-0 (172.20.0.2) icmp_seq=1 Destination Host Unreachable

It did resolve 172.20.30.106, but I can't reach it. I suspect there is an address conflict with the Docker network, 172.20.x.x?

So I tried again with k3d cluster create test --subnet 172.28.0.0/16 and it worked this time.
I do notice there are a few seconds of delay in getting the first ping response back.
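
(For anyone with the same suspicion, a quick way to check which subnet the cluster network was actually given; k3d-test is the network name k3d derives from the cluster name used here:

docker network inspect k3d-test --format '{{json .IPAM.Config}}'

If that range overlaps with addresses you need to reach, the --subnet flag used above, or Docker's default-address-pools daemon setting, avoids the clash.)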

@gbonnefille (Contributor)

I shouldn't start a separate thread, so I'm adding my comment here. The new feature doesn't work for me. My set-up is described in #1042. After setting the environment variable and creating a test cluster, I saw this:

export K3D_FIX_DNS=1 && k3d cluster create test
kubectl run -it --image=rockylinux rockylinux -- bash
[rockylinux@rockylinux8 infra_k3d]$ kubectl exec -it rockylinux -- bash
[root@rockylinux /]#
[root@rockylinux /]# ping splunkapp-preprod02.med.umich.edu
PING splunkapp-preprod02.med.umich.edu (172.20.30.106) 56(84) bytes of data.
From k3d-test-server-0 (172.20.0.2) icmp_seq=1 Destination Host Unreachable

It did resolve 172.20.30.106, but I can't reach it. I suspect there is an address conflict with the Docker network, 172.20.x.x?

If so, why not configure your Docker daemon (cf. the bip parameter in daemon.json) to avoid conflicts between the Docker and corporate address plans?
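
A minimal sketch of that daemon.json tweak (the values are illustrative; note that bip only affects the default docker0 bridge, while default-address-pools is what user-defined networks such as the ones k3d creates are allocated from):

{
  "bip": "192.168.200.1/24",
  "default-address-pools": [
    { "base": "10.100.0.0/16", "size": 24 }
  ]
}

After editing /etc/docker/daemon.json, restart the Docker daemon and re-create the cluster so the k3d network is carved out of the new pool.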

@jrbeilke commented Jul 1, 2022

FYI, I'm experiencing a similar issue on an M1 MacBook as @parg0MakSystem reported, although I'm using colima (which could be related, i.e. abiosoft/colima#341).

I tried to use K3D_FIX_DNS but still wasn't able to resolve DNS in pods.

I used k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf@server:0 instead and was able to get DNS resolution working.

@felipecrs

Today I migrated one of my projects from KinD to K3D, and I immediately found this issue. Using K3D_FIX_DNS=1 fixes the issue.

In my case, I'm not running behind Docker Desktop, or even a corporate VPN. It failed to resolve updates.jenkins.io.

I just wonder if K3D_FIX_DNS=1 shouldn't be promoted to default (I wonder how KinD handles it, because I never ran into this when using KinD for two years).

@irizzant commented Sep 14, 2022

K3D_FIX_DNS=1 works for us as well, maybe it's worth evaluating it as a new default

@iwilltry42 (Member)

It will definitely move to default 👍
@felipecrs KinD does it exactly the same way.

@felipecrs

Got it. Thank you!

@irizzant commented Dec 12, 2022

We've run into an issue with K3D_FIX_DNS=1: on the host running k3d, I noticed that after containerd received an upgrade, the Docker daemon was restarted and DNS resolution stopped working in the cluster.

This is because of 2 extra lines added to iptables for DNS NAT for host 127.0.0.11, which caused DNS resolution to fail.
Could this be caused by this change?
Maybe under certain conditions this script appends the wrong content to iptables?
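
(A hedged way to look for those entries on the host:

sudo iptables -t nat -S | grep 127.0.0.11

Normally Docker's embedded 127.0.0.11 resolver is NATed inside each container's own network namespace, so matching rules showing up in the host's tables would indeed be unexpected.)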

@iwilltry42 (Member)

@irizzant, I got a heads-up on this issue, so I'm just reading your reply now.
Is this still an issue? The DNS fix script is executed inside the container. The gateway IP used there comes from a simple name lookup inside the container, and you should be able to check it via docker exec to debug inside the K3s node containers.
If you want to dive into this, feel free to open a new discussion.

@Djaytan commented Sep 18, 2023

Thanks a lot for the great work in finding a solution for the issue! Using K3D_FIX_DNS=1 solves it definitively, even after restarting the Docker daemon (Docker in WSL on my side).

However, it seems that it's not yet the default behavior with k3d v5.6.0. Any plans on this front, @iwilltry42?

@iwilltry42 iwilltry42 moved this to Done in Networking Sep 27, 2023
@schlichtanders commented Oct 24, 2023

I am facing the issue that DNS does not work on a mobile network (mobile hotspot)... the browser works seamlessly.

Unfortunately, neither export K3D_FIX_DNS=1 nor --volume /etc/resolv.conf:/etc/resolv.conf@server:0 works for me to get k3d working on a mobile network...

Any hints/help are highly appreciated.

@jjba23 commented Nov 6, 2023

Hey, sorry to reopen this. At my company some scripts use export K3D_FIX_DNS=1, and that makes my cluster-internal DNS stop working. Once I turn it off, it all starts working perfectly. Really strange... Please don't make this the default!!!!

@ChristianCiach

Both K3D_FIX_DNS=1 and k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf@server:* break the internal image registry for me, because k3s fails to resolve the internal registry hostname in both cases.
