K3s does not clean up kube-router iptables rules when restarting with --disable-kube-router
#7244
How did you restart K3s?
I guess a reboot or uninstall/reinstall would also clear everything out, yeah, but if that's what we want to recommend we at least need to document it. I wasn't aware that we weren't cleaning them up when the controller is disabled.
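For reference, a minimal sketch of the restart paths being discussed, assuming a default systemd-managed install done with the upstream install script (note that the uninstall script also removes the node's local cluster data):

```sh
# Restart the running service in place; this does not touch existing iptables state.
sudo systemctl restart k3s

# Or remove the install entirely and reinstall from scratch.
sudo /usr/local/bin/k3s-uninstall.sh
curl -sfL https://get.k3s.io | sh -
```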
I'll leave it unassigned, sorry. We can talk it through with the team and work out how to address it.
I am having these exact issues... KUBE-ROUTER-INPUT is huge:
Well, maybe not exact... What is going on here?
Also not just that, but this:
It seems the return rule duplicates itself ad infinitum...?
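One quick way to check for that kind of duplication, assuming the chain in question is the `KUBE-ROUTER-INPUT` chain shown above:

```sh
# List the chain's rules and count identical entries; counts above 1 indicate
# rules that were re-appended on a restart instead of being replaced.
sudo iptables -S KUBE-ROUTER-INPUT | sort | uniq -c | sort -rn | head
```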
Did you ever reboot this node? How many times did you restart K3s?
I'll leave these here for visibility, but: this was k3s 1.23.16+k3s1, on Rocky Linux 8.7. The root cause for my errors above was adding the … This is a pretty horrible issue with OS-provided nftables and k3s!
Yes, I have no idea why distros continue to package such a broken version of nftables.
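For anyone hitting this, a hedged way to check which iptables build and backend the OS is providing (older 1.8.x nf_tables builds are the usual suspects for this kind of breakage):

```sh
# Prints the version and, in parentheses, whether it is the nf_tables or legacy backend,
# e.g. "iptables v1.8.4 (nf_tables)".
iptables --version
```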
At this point, this issue probably needs to be added to the docs before we can resolve it. @rbrtbnfgl Would you be willing to do that?
If kube-router is still enabled when K3s is restarted, it should clean up all the previous rules. The install script was updated to clean those rules. I don't know if we support changing the configuration of an already running instance.
We just need to document that kube-router leaves rules behind when disabled, and that they can be cleaned up with a manual set of commands and/or by re-running the install script. I don't see any reason why we wouldn't support toggling it on or off after the fact; users just need to know how to clean up after it, since it won't clean up after itself.
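As a starting point for that documentation, a rough cleanup sketch. It assumes the leftover chains follow kube-router's usual `KUBE-ROUTER-*` / `KUBE-NWPLCY-*` naming in the filter table; this is not an official k3s procedure, so verify the names with `iptables-save` on the affected node before running anything:

```sh
#!/bin/sh
# Hypothetical cleanup of kube-router network policy rules left behind after the
# controller was disabled. Run as root; inspect `iptables-save` output before using.

# 1. Delete the jump rules that link the built-in chains to the kube-router chains.
for chain in INPUT FORWARD OUTPUT; do
  iptables -S "$chain" | grep 'KUBE-ROUTER' | sed 's/^-A //' |
    xargs -r -L1 iptables -D
done

# 2. Flush every kube-router-owned chain first (they jump to one another),
#    then delete them.
chains=$(iptables -S | awk '$1 == "-N" && $2 ~ /^KUBE-(ROUTER|NWPLCY)/ {print $2}')
for c in $chains; do iptables -F "$c"; done
for c in $chains; do iptables -X "$c"; done

# The policy controller also creates ipsets; `ipset list -n` will show any that
# remain, and `ipset destroy <name>` removes them.
```

Re-running the install script, as noted above, is the less error-prone option, since its cleanup was updated to handle these rules.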
If K3s is initially started with the network policy controller enabled, and it is subsequently disabled, the controller's iptables rules are left in place, with a snapshot of whatever policies were last applied.
For example, after adding `disable-network-policy: true` to `/etc/rancher/k3s/config.yaml` and restarting K3s, I still see the kube-router rules in place, and linked from the main `INPUT` chain.
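A minimal sketch of that reproduction, assuming a systemd-managed install and the default config path named above:

```sh
# Disable the network policy controller on a node that previously ran with it enabled...
echo 'disable-network-policy: true' | sudo tee -a /etc/rancher/k3s/config.yaml
sudo systemctl restart k3s

# ...and the kube-router chains are still present and referenced from INPUT.
sudo iptables -S INPUT | grep KUBE-ROUTER
sudo iptables -S | grep '^-N KUBE-ROUTER'
```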
The KUBE-ROUTER chains and rules should all be removed when the controller is disabled.