DNS resolution fails with dnsPolicy: ClusterFirstWithHostNet and hostNetwork: true #1827
+1, though this seems to happen exclusively on my … I'm running … To clarify, I've had this problem for a while now, but I face no such issue on my …

Node 1 information

Node 2 information
I have three amd64 nodes that suffer from this issue…
So the real issue here is that you cannot access ClusterIP services when using … Are you using Ubuntu's ufw or any other host-based firewall that might be interfering with this traffic?
We had a similar issue even with 1.18.2. We initially tried the host-gw option for flannel, but that didn't help, and we then followed the proposed alternatives from here: #751. Specifically, we followed this:
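For context, the alternative most often cited in #751 for this class of problem is disabling TX checksum offloading on flannel's VXLAN interface. A sketch as a systemd oneshot unit - the unit name is hypothetical, and the interface name `flannel.1`, the `ethtool` path, and the flag should all be verified against your node:

```ini
# /etc/systemd/system/flannel-checksum-off.service (hypothetical unit name)
[Unit]
Description=Disable TX checksum offload on flannel.1 (VXLAN checksum workaround)
After=network-online.target

[Service]
Type=oneshot
# Path to ethtool varies by distro; verify with `command -v ethtool`
ExecStart=/usr/sbin/ethtool -K flannel.1 tx-checksum-ip-generic off

[Install]
WantedBy=multi-user.target
```

This only masks the symptom on each node; it does not change the flannel backend itself.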
I do not have any specific firewall rules or ufw in place:
The node-local DNS cache fixed it for me. It would be great if the node-local DNS cache were included in k3s by default, or if the underlying issue were fixed; I think running hostNetwork pods is not uncommon.
Are you unable to reach ANY ClusterIP service when using host network, or is it only an issue with the coredns service?
In my case I was unable to reach ANY ClusterIP service.
(NOTE: Workaround/Solution at the end of this comment) I'm also up against this, running k3s on 3 x86 VMs under Proxmox. I get no DNS resolution at all from a Pod running with … From the affected Pod I'm able to do DNS queries out to the LAN DNS server (10.68.0.2) but not to the cluster DNS server (10.43.0.10). I can query ClusterIP services whose Pods are on other nodes if I do so by IP address.

DNS Queries for Internal and External Hosts from Sidecar
This shows queries to 10.43.0.10 failing, while queries to 10.68.0.2 succeed.
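The probe pattern described here (queries to 10.43.0.10 time out while 10.68.0.2 answers) can be reproduced without dig. A minimal sketch that hand-builds a DNS A query and checks whether a given server replies - the function names are mine, and the IPs in the usage comment are the ones from this comment:

```python
import socket
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (RFC 1035)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT=NSCOUNT=ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    return header + qname + struct.pack("!HH", 1, 1)

def probe(server: str, name: str, timeout: float = 2.0) -> bool:
    """Return True if `server` sends back any DNS reply within `timeout`."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        try:
            sock.sendto(build_dns_query(name), (server, 53))
            sock.recvfrom(512)  # any reply counts; we only test reachability
            return True
        except OSError:  # timeout or ICMP unreachable
            return False

# Usage from inside the affected Pod (IPs from the comment above):
#   probe("10.43.0.10", "kubernetes.default.svc.cluster.local")  # cluster DNS
#   probe("10.68.0.2", "example.com")                            # LAN DNS
```

In the broken state described here, the first probe should fail while the second succeeds.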
GET Request to ClusterIP Service by IP / Hostname
cURL requests to the IP succeed, but DNS resolution fails when making the request to a ClusterIP hostname (10.43.71.169 is the ClusterIP for the …
Digging around led me to this, which led me to this, in which the OP says that there is no route to 10.43.0.0/16 via the local … Sure enough, if this is added, it works:
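The exact route was not preserved in this thread; a hedged reconstruction of the kind of route the linked comments describe, where the device name `cni0` is an assumption that should be verified with `ip link` on your node:

```shell
# Check whether the node already has a route for the service CIDR
# (10.43.0.0/16 is the k3s default service CIDR):
ip route show | grep 10.43

# The change described in the linked comments: route the service CIDR via the
# CNI bridge. Device name is an assumption - verify with `ip link`.
ip route add 10.43.0.0/16 dev cni0
```

Note that a manually added route does not survive a reboot and addresses the symptom, not the underlying flannel behavior.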
Workaround / Solution
Following the suggestions in this comment I switched the flannel backend to … There's a lot of good troubleshooting later in that ticket about checksum validation changes in 1.17 versus 1.16, and it leads me to believe that this is Flannel's issue to resolve, not Kubernetes/K3s/Rancher's issue.
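k3s selects the flannel backend with the `--flannel-backend` server flag. A sketch of applying it via a systemd drop-in, assuming the host-gw backend discussed elsewhere in this thread (host-gw requires all nodes to share an L2 segment; the drop-in path and k3s binary path are assumptions to check against your install):

```ini
# /etc/systemd/system/k3s.service.d/flannel-backend.conf (hypothetical path)
[Service]
# Clear the packaged ExecStart, then start the server with the host-gw backend
ExecStart=
ExecStart=/usr/local/bin/k3s server --flannel-backend=host-gw
```

After creating the drop-in, run `systemctl daemon-reload && systemctl restart k3s` on the server node.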
Sharing my findings: I upgraded from v1.17.5+k3s1 to v1.18.4+k3s1 two days ago and have since stopped observing this issue.
I can't confirm that. I'm also on the v1.18.4+k3s1 release and still having this issue.
I'm also up against this. After adding …
@wpfnihao what are you deployed on? Does your infrastructure support host-gw?
In case anyone else stumbles on this: I had the same problem after upgrading from …
For the record, upgrading from …
Version:
k3s version v1.17.4+k3s1 (3eee8ac)
ubuntu 20.04
K3s arguments:
--no-deploy traefik --no-deploy=servicelb --kubelet-arg containerd=/run/k3s/containerd/containerd.sock
Describe the bug
DNS resolution does not work for my container, which runs with these settings:
DNS resolution works just fine if I do not use hostNetwork and do not change the DNS policy.
The CoreDNS service looks fine:
As you can see, I can successfully query the individual CoreDNS instances, but access via the cluster IP fails:
To Reproduce
Run a pod with host network and dns policy ClusterFirstWithHostNet.
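A minimal manifest matching these reproduction steps - the pod name and image are illustrative, not taken from the report:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-dns-test   # illustrative name
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: test
      image: busybox:1.36           # any image with nslookup works
      command: ["sleep", "3600"]
```

Then, to observe the failure, something like `kubectl exec hostnet-dns-test -- nslookup kubernetes.default.svc.cluster.local` should time out against the cluster DNS ClusterIP when the bug is present.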
Expected behavior
DNS resolution should work.
Actual behavior
DNS resolution does not work at all.
Additional context / logs
DNS resolution works fine with the container network: