ExternalIP allows access to node #282

Since the external IPs are also added to the kube-dummy-if, they now allow access to the node from outside of the cluster network if the iptables rules don't forbid this.
I would suggest adding a DROP rule to the INPUT chain for all traffic coming in on kube-dummy-if.

Comments
@thoro wouldn't that break the whole setup aka not letting any traffic flow in?
I think the IPVS handling happens before the INPUT chain in iptables is hit; otherwise rules for each service need to be added, or some manual possibilities for handling this. #167 added support for fwmark in IPVS, which could also solve it, if it's an issue. Edit: Actually, based on this: http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.filter_rules.html the INPUT chain is hit, which may lead to additional work, e.g. add one rule per service and set it to drop (possibly with kube-dummy-if as the interface - that shouldn't affect the other interfaces)
This is especially pesky with ports 10250 and 10255, see here: https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c
@thoro @lavajnv I am going to take a shot at this issue. Off the top of my head, something like this should work:
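(The snippet originally posted here was not preserved; as a rough sketch of the direction the eventual fix in #604 took, per the discussion below, an explicit permit rule per service in the INPUT chain, with hypothetical addresses:)

```sh
# Hypothetical sketch: explicitly permit traffic addressed to known service
# endpoints. One rule per (VIP, protocol, port) combination of a service:
iptables -A INPUT -d 10.96.0.10 -p tcp --dport 53 -j ACCEPT
iptables -A INPUT -d 10.96.0.10 -p udp --dport 53 -j ACCEPT
# ...repeated for every cluster IP, external IP and node port that is programmed
```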
Will update once I have a solution.
@murali-reddy Any updates on this issue?
@rmb938 sorry, I did not get a chance to work on this issue. It is an important enough issue that somehow slipped attention all the while. I will prioritise it for one of the upcoming releases.
No problem :) I actually found this independently while I was testing some things today, was going to make an issue and realized there was this one.
@murali-reddy Did you have time to look into this? This problem seems to render the possibility of advertising cluster/external IPs useless, or at least dangerous. Or am I missing something here?
Could we work around all this by having a dedicated netns and dummy iface just for the IPVS stuff? That would solve this problem for good, no? e.g. something like they mention here
@murali-reddy yes, the external IP allows access to the host itself
The net effect of this is that all services running on the host that listen on 0.0.0.0 are directly accessible via any advertised service or external IP. e.g. I can:
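(The original example was not captured; hypothetically, with sshd listening on 0.0.0.0:22 on the node and 10.255.0.10 being the advertised external IP of some unrelated service, something like this would work:)

```sh
# Hypothetical demonstration: 10.255.0.10 is assigned to kube-dummy-if, so the
# node accepts packets for it locally and hands them to whatever host daemon
# is bound to that port, even though the service has nothing to do with it.
ssh admin@10.255.0.10
# Likewise for the kubelet read-only port mentioned above:
curl http://10.255.0.10:10255/pods
```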
I'm not sure if this would also cause problems if a service port is already bound by some daemon on the host. In other words: what if my service port is 22 and the sshd on the host is listening on 0.0.0.0?
Hi, at this point I can't find a way to block this with network policy either. Host ports seem to be always available on all registered external IPs (unless you make a svc to take that traffic)
#604 does not fully cover this issue, so reopening.
@murali-reddy what is missing? I'd try to contribute to get this done.
Please see earlier comment #282 (comment). The fix added by @bazuchan in #604 adds an explicit rule to match the service VIP (cluster IP, node port, external IP), protocol and port combination, and PERMIT that traffic in the INPUT chain of the filter table. Alternatively, from what is reported in #602,
So from a completeness point of view, if we add a rule to DROP traffic destined for
So something like this?
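(The snippet is missing here; a minimal sketch of such a rule pair, with hypothetical ipset names: first the kind of ACCEPT that #604 adds for valid service tuples, then a catch-all DROP for anything else addressed to a service IP:)

```sh
# Hypothetical sketch, not kube-router's actual set or chain names.
# Valid (service IP, protocol:port) tuples, kept in sync by the proxy:
ipset create svc-allow hash:ip,port
ipset add svc-allow 10.96.0.10,tcp:53       # example DNS cluster IP service
# All service IPs, regardless of port:
ipset create svc-ips hash:ip
ipset add svc-ips 10.96.0.10
# Permit real service traffic, then drop everything else aimed at a service IP:
iptables -A INPUT -m set --match-set svc-allow dst,dst -j ACCEPT
iptables -A INPUT -m set --match-set svc-ips dst -j DROP
```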
It's a bit more difficult: you need to make one more ipset of TypeHashIP containing only the IP addresses of the above ipset minus the node IPs. Or take all IPs from the kube-dummy-if interface. Don't know which one is better.
@bazuchan this should do the trick, right?
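(Again the snippet itself is missing; a sketch of the variant @bazuchan describes, where the deny set is built only from the addresses on kube-dummy-if so that node IPs are never caught by it. Names are hypothetical:)

```sh
# Hypothetical sketch: populate the deny set from kube-dummy-if addresses only
# (cluster/external IPs live there, the node's own IPs do not), so traffic to
# the node's real IPs stays unaffected. Assumes the ACCEPT rules for valid
# service tuples from above are already in place ahead of this DROP.
ipset create svc-ips hash:ip
for ip in $(ip -4 -o addr show dev kube-dummy-if | awk '{sub(/\/.*$/, "", $4); print $4}'); do
    ipset add svc-ips "$ip"
done
iptables -A INPUT -m set --match-set svc-ips dst -j DROP
```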
Figured it out: The following works as expected
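(The setup block is not preserved; hypothetically, the 'host-service' referenced below could be reproduced with any plain listener on the node, for example:)

```sh
# Hypothetical stand-in for the 'host-service': listen on 0.0.0.0:2022 on the node
python3 -m http.server 2022 --bind 0.0.0.0
# From a client, any advertised service/external IP of the node now serves it:
curl http://10.255.0.10:2022/
```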
With this in place I can reach the 'host-service' (port 2022) on any service IP.
With these additional rules in place I can no longer reach host-service via service IPs. I'll put this into Go code later tonight.
I use Kube-Router's own healthcheck on 20244 as an external check as well, since it acts as a great layer 7 check. Although I check it via the actual node's IP:20244. Not sure if this would inadvertently block that.
@MarkDeckert services running on the node's IP (non-service IPs) should not be affected by this.
- on startup create ipsets and firewall rules
- on sync update ipsets
- on cleanup remove firewall rules and ipsets
Fixes cloudnativelabs#282. Signed-off-by: Steven Armstrong <[email protected]>
Not the same but related to #623
…#618)
* prevent host services from being accessible through service IPs
  - on startup create ipsets and firewall rules
  - on sync update ipsets
  - on cleanup remove firewall rules and ipsets
  Fixes #282. Signed-off-by: Steven Armstrong <[email protected]>
* ensure iptables rules are also available during cleanup
  Signed-off-by: Steven Armstrong <[email protected]>
* first check if chain exists
  Signed-off-by: Steven Armstrong <[email protected]>
* err not a new variable
  Signed-off-by: Steven Armstrong <[email protected]>
* more redeclared vars
  Signed-off-by: Steven Armstrong <[email protected]>
* maintain a ipset for local addresses and exclude those from our default deny rule
  Signed-off-by: Steven Armstrong <[email protected]>
* copy/paste errors
  Signed-off-by: Steven Armstrong <[email protected]>