-
Environmental Info:
Node(s) CPU architecture, OS, and Version:
Firewall rules:
Cluster Configuration:
Describe the bug:
I upgraded from 1.28.5+k3s1 and also did some Ubuntu package upgrades at the same time, and this issue cropped up.

Steps To Reproduce:
Installed K3s on all three nodes (with --cluster-init and --server as appropriate).

Expected behavior:
IPv6 traffic will flow correctly.

Actual behavior:
IPv6 traffic is blocked:
Additional context / logs:
Adding the following rules unblocks traffic:
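The rules themselves are not reproduced above. Purely as an illustration (not the reporter's exact rules), host-level ip6tables rules of roughly this shape unblock neighbor discovery; as the reply below notes, rules placed directly in kube-router-managed chains would be discarded on restart:

```
# Illustrative only: accept ICMPv6 (which carries NDP neighbor solicitation /
# advertisement) before it reaches the network policy chains.
ip6tables -I INPUT -p ipv6-icmp -j ACCEPT
ip6tables -I FORWARD -p ipv6-icmp -j ACCEPT
```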
-
We don't block any of the traffic in question; I suspect something changed in whatever host-based firewall you are using (firewalld/ufw) and the default rules are now more restrictive. Your firewall status output does specifically show a default-deny policy for IPv4 and IPv6 on eno1, so what you're reporting sounds logical. I would recommend against adding rules to the kube-router chains; these are managed by the kube-router network policy controller and will likely be discarded when you restart k3s. This doesn't appear to be related to k3s or our embedded network policy controller, so I am going to convert it to a discussion.
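To separate the two layers, it can help to compare the chains owned by the network policy controller with what the host firewall enforces; a small sketch (kube-router chain prefixes as they typically appear, ufw shown as one example of a host-based firewall):

```
# Chains created by the embedded network policy controller (kube-router);
# anything added to these by hand is liable to be rewritten when k3s restarts.
ip6tables -S | grep -E 'KUBE-ROUTER|KUBE-NWPLCY'

# Check whether the host firewall itself applies a default-deny policy
# (firewalld users would inspect their zone configuration instead).
ufw status verbose
```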
-
Some notes on my research: looking at the NFLOG entries, the packets do not seem to match any of the rules created by kube-router, and are thus logged as DROP:
Which means none of the rules marked the packet as ALLOW (0x10000):
Likely because when a specific protocol (in this case UDP) is selected in a NetworkPolicy, the ICMPv6 packets will be dropped by the rule. From what I can tell,
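To make the failure mode concrete, here is a minimal sketch of the kind of NetworkPolicy that triggers it (namespace, labels, and port are invented for illustration). Once the policy pins a protocol, the generated ALLOW rules only match that protocol, so ICMPv6 neighbor solicitations destined for the pod never get the 0x10000 mark and fall through to the NFLOG/DROP rule:

```
# Hypothetical policy: only UDP/53 ingress is allowed to the selected pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-udp-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: example
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: UDP
          port: 53
EOF
```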
-
I managed to resolve this connection failure by adding the NDP solicited-node address to the ipset:
Now it succeeds:
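The exact command isn't shown above; a sketch of what adding a solicited-node address to one of the kube-router ipsets could look like (the set name is a placeholder, and per the earlier reply such hand-edited sets are regenerated when k3s restarts):

```
# Find the IPv6 sets maintained by kube-router and pick the one referenced by
# the rule that should have matched the destination.
ipset list -n | grep -i kube

# Example mapping: for a unicast address 2001:db8::1234:5678 the solicited-node
# multicast address is ff02::1:ff34:5678 (ff02::1:ff00:0/104 plus the low 24
# bits of the unicast address). Set name and address below are placeholders.
ipset add example-kube-router-set ff02::1:ff34:5678
```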
-
The specific case where this seems to be an issue:
-
@manuelbuil do you have any thoughts on this?
-
I investigated a bit on this. It is clearly related to the fix for the ipset rule on IPv6, and it is specific to Flannel, which, unlike the other CNIs, creates an L2 network between the pods; for managing L2 communication, the IPv6 protocol uses multicast packets (neighbor discovery) differently from IPv4 (which uses ARP). So the possible solutions could be:
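A quick way to see this multicast behaviour on a node (cni0 as flannel's bridge name is an assumption, adjust to your setup; ICMPv6 type 135 is a neighbor solicitation):

```
# Neighbor solicitations are sent to the target's solicited-node multicast
# group rather than to its unicast address, which is why an ipset holding only
# unicast pod addresses never matches them.
tcpdump -ni cni0 'icmp6 and ip6[40] == 135'
```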
-
For those who use firewalld:
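The exact commands aren't reproduced here; a sketch of the kind of firewalld configuration that helps, assuming the default zone and the default k3s IPv4 pod/service CIDRs (the IPv6 CIDRs have no fixed default and must match your --cluster-cidr/--service-cidr):

```
# Allow ICMPv6 (which carries neighbor discovery); protocol name as listed in
# /etc/protocols.
firewall-cmd --permanent --add-rich-rule='rule family="ipv6" protocol value="ipv6-icmp" accept'

# The k3s docs also suggest trusting the pod and service CIDRs; add the IPv6
# equivalents for your cluster as well.
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
firewall-cmd --reload
```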
I opened a PR on our kube-router fork: k3s-io/kube-router#85