[BUG] CoreDNS NodeHosts lost after adding a new node #1009
Comments
Hi @lerminou, thanks for opening this issue!
Hi @iwilltry42, if I check the k3s changelog, there are some changes about the CoreDNS config. I will try to reproduce it in a full 1.22 cluster.
Hey @lerminou, in the context of #1032 I just did several tests and ended up with the following NodeHosts:

```
172.21.0.2 k3d-test-server-0
172.21.0.5 k3d-testnode2-0
172.21.0.1 host.k3d.internal
172.21.0.6 k3d-testnode3-0
172.21.0.3 k3d-test-agent-0
172.21.0.4 k3d-test-serverlb
```

So I assume that the issue is limited to creating a new server node. Which, after taking another look at your original post, raises a question: you just ran …
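For anyone wanting to check this on their own cluster, a quick way to dump the NodeHosts block is the standard kubectl query below (the node names above are from my test setup):

```sh
# Print the NodeHosts entries that k3s maintains in the CoreDNS ConfigMap
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'
```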
Hi @iwilltry42, I agree, it only appears when adding a new node to an existing cluster.
Huh? The …
Also when adding an agent node?
Here is the relevant information from my config.yaml used when I create the cluster:
And the output of the container:
When I create a new AGENT node in this cluster, the NodeHosts ConfigMap is not modified; when I create a new SERVER node, the ConfigMap is broken.
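The actual config.yaml was not captured in this thread; for readers unfamiliar with k3d config files, the sketch below is only a minimal illustrative example of a k3d "Simple" config for v5.3.x (cluster name, apiVersion, and node counts are assumptions, not the reporter's values):

```sh
# Minimal illustrative k3d config; adjust names/counts to your setup
cat > config.yaml <<'EOF'
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: local
servers: 1
agents: 1
EOF
k3d cluster create --config config.yaml
```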
Okay, now that makes more sense. From your initial post I couldn't see that you were using a config file.
This is indeed the intended behavior on K3s' end (thanks @brandond for the explanation 🙏).
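As background on why this is K3s-side behavior: K3s ships CoreDNS as a bundled manifest and re-applies it itself. A hedged way to inspect that manifest, assuming a cluster named "local" (so the server container is k3d-local-server-0):

```sh
# K3s stores its bundled manifests here and re-applies them on startup,
# which is why NodeHosts entries written from outside get overwritten
docker exec k3d-local-server-0 cat /var/lib/rancher/k3s/server/manifests/coredns.yaml
```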
Hi, I found this issue while investigating why K3s containers created via k3d would not maintain the CoreDNS NodeHosts ConfigMap after a host machine reboot. Could it be the same issue? From my testing, the issue comes down to the K3s containers starting by themselves, without k3d orchestration, which rewrites the CoreDNS ConfigMap to only contain k3d-test-server-0 for my 1-server cluster.
Hi @jracabado, yes. It's exactly the same problem for me.
And I think I have this issue too, in a different scenario: just a k3s container restart, or a Docker restart, triggers the ConfigMap to be emptied... issue here: #1112
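A sketch of how to reproduce this restart variant, assuming a 1-server cluster named "local" (container name k3d-local-server-0 follows from that assumption):

```sh
# Restart the k3s node container directly, bypassing k3d
docker restart k3d-local-server-0
# Give the API server a few seconds to come back, then check NodeHosts:
# only the server's own entry remains; host.k3d.internal and the
# other nodes' entries are gone
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'
```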
What did you do
- `k3d cluster create`
- The host `host.k3d.internal` is present on the cluster creation

What did you do afterwards?
- `k3d node create newserver --cluster local --role server`
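To observe the regression while the new server node joins, one option is to watch the ConfigMap during the `k3d node create` call (a sketch; it assumes `kubectl` already points at the k3d cluster):

```sh
# Print the CoreDNS ConfigMap on every change while the node joins
kubectl -n kube-system get configmap coredns -o yaml --watch
```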
What did you expect to happen
The new host is added, but the other NodeHosts entries are not lost.
Screenshots or terminal output
Which OS & Architecture
Which version of `k3d`
- k3d version v5.3.0
- k3s version v1.22.6-k3s1 (default)
Which version of docker