
ExternalIP allows access to node #282

Closed
thoro opened this issue Jan 12, 2018 · 24 comments · Fixed by #604

@thoro
Contributor

thoro commented Jan 12, 2018

Since the external IPs are also added to kube-dummy-if, they now allow access to the node from outside of the cluster network if the iptables rules don't forbid this.

I would suggest adding a DROP rule to the INPUT chain for all traffic coming in on kube-dummy-if.
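
Taken literally, that suggestion would be something like the sketch below (illustrative only; note that packets addressed to these IPs normally arrive on the node's physical interface, and the discussion below eventually settles on matching destination addresses instead of the interface):

# sketch: drop all INPUT traffic that arrived on the IPVS dummy interface
iptables -I INPUT -i kube-dummy-if -j DROP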

@roffe
Collaborator

roffe commented Jan 12, 2018

@thoro wouldn't that break the whole setup, i.e. not let any traffic flow in?

@thoro
Contributor Author

thoro commented Jan 12, 2018

I think IPVS handles the traffic before the INPUT chain in iptables is hit; otherwise rules for each service would need to be added, or some manual way of handling this.

#167 added support for fwmark in IPVS, which could also solve it, if it's an issue.

Edit: Actually, based on this: http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.filter_rules.html the INPUT chain is hit, which may lead to additional work.

E.g. add one rule per service and drop the rest (possibly with kube-dummy-if as the interface, which shouldn't affect the other interfaces), as sketched below.
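
For illustration, a per-service version of that might look like this (the address and port are made up):

# explicitly accept each advertised service ...
iptables -A INPUT -d 10.96.0.10 -p tcp --dport 443 -j ACCEPT
# ... then drop everything else addressed to that service IP
iptables -A INPUT -d 10.96.0.10 -j DROP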

@thoro
Contributor Author

thoro commented Apr 17, 2018

This is especially pesky with ports 10250 and 10255, see here:

https://medium.com/handy-tech/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c

@murali-reddy
Member

@thoro @lavajnv

I am going to take a shot at this issue. Off the top of my head, something like this should work:

  • create an ipset with all current service VIPs
  • deny traffic whose destination matches the created ipset in the INPUT chain
  • override the drop rule with an explicit permit rule for each service IP and port combination (see the sketch below).

Will update once I have a solution.
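
In ipset/iptables terms that plan might look roughly like this (a sketch only; the set name, address and port are illustrative, not what kube-router actually uses):

# one set holding every current service VIP
ipset create kube-service-vips hash:ip
ipset add kube-service-vips 10.96.0.10

# explicit permit per service IP and port combination ...
iptables -A INPUT -d 10.96.0.10 -p tcp --dport 53 -j ACCEPT
# ... followed by a catch-all deny for anything else destined to a service VIP
iptables -A INPUT -m set --match-set kube-service-vips dst -j DROP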

@rmb938
Contributor

rmb938 commented Jul 29, 2018

@murali-reddy Any updates on this issue?

@murali-reddy
Member

@rmb938 sorry, I did not get a chance to work on this issue. It is important enough, but it somehow slipped attention all this while. I will prioritise it for one of the upcoming releases.

@rmb938
Contributor

rmb938 commented Jul 30, 2018

No problem :)

I actually found this independently while I was testing some things today; I was going to open an issue and realized this one already existed.

@asteven
Contributor

asteven commented Oct 30, 2018

@murali-reddy Did you have time to look into this? This problem seems to render the ability to advertise cluster/external IPs useless, or at least dangerous. Or am I missing something here?

@asteven
Contributor

asteven commented Oct 30, 2018

Could we work around all this by having a dedicated netns and dummy iface just for the ipvs stuff? That would solve this problem for good, no?

e.g. something like they mention here
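
For reference, the bare bones of that idea might look like this (purely a sketch of the suggestion; the namespace name is made up and this is not something kube-router implements):

ip netns add kube-ipvs
ip netns exec kube-ipvs ip link add kube-dummy-if type dummy
ip netns exec kube-ipvs ip link set kube-dummy-if up
ip netns exec kube-ipvs ip addr add 10.10.10.10/32 dev kube-dummy-if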

@murali-reddy
Member

@asteven No, I could not look into this issue.

@thoro correct me if I am wrong: your concern is that the advertised VIPs are accessible on all ports instead of just the port on which the service is available?

There is another issue, #561, which is basically about service VIPs being accessible as well.

@thoro
Contributor Author

thoro commented Oct 31, 2018

@murali-reddy yes, the external IP allows access to the host itself.

@asteven
Contributor

asteven commented Oct 31, 2018

The net effect of this is that all services running on the host that listen on 0.0.0.0 are directly accessible via any advertised service or external IP.

e.g. I can ssh $service_ip and get a login prompt from the sshd running on the host.
Same with the kubelet ports like 10250 and 10255.

@asteven
Contributor

asteven commented Oct 31, 2018

I'm not sure if this would also cause problems if a service port is already bound by some daemon on the host. In other words: what if my service port is 22 and the sshd on the host is listening on 0.0.0.0?

@noillir
Contributor

noillir commented Nov 20, 2018

Hi,

At this point I can't find a way to block this with network policy either. Host ports always seem to be available on all registered external IPs (unless you create a svc to take that traffic).

@murali-reddy
Member

#604 does not fully cover this issue, so reopening.

@asteven
Contributor

asteven commented Dec 13, 2018

@murali-reddy what is missing? I'd try to contribute to get this done.

@murali-reddy
Member

murali-reddy commented Dec 13, 2018

Please see earlier comment #282 (comment)

So the fix added by @bazuchan in #604 adds an explicit rule that matches the service VIP (cluster IP, node port, external IP), protocol and port combination and PERMITs the traffic in the INPUT chain of the filter table. If the default policy of the INPUT chain in the filter table is to drop packets, then we don't run into the issue reported in this bug.

Alternatively, as reported in #602, firewalld adds a rule that rejects the packets:

 177K   60M REJECT     all  --  any    any     anywhere             anywhere             reject-with icmp-host-prohibited

So, to complete this, if we add a rule to DROP traffic destined for KUBE-SVC-ALL that does not match a permitted protocol and port combination, then from kube-router's point of view only allowed traffic can reach the service VIPs.

@asteven
Contributor

asteven commented Dec 13, 2018

So something like this?

% git d -W | cat                                        
diff --git a/pkg/controllers/proxy/network_services_controller.go b/pkg/controllers/proxy/network_services_controller.go
index 1251120d..d7d03e73 100644
--- a/pkg/controllers/proxy/network_services_controller.go
+++ b/pkg/controllers/proxy/network_services_controller.go
@@ -372,54 +372,61 @@ func (nsc *NetworkServicesController) sync() error {
 func (nsc *NetworkServicesController) setupIpvsFirewall() error {
 	// Add ipset containg all SVCs
 	ipSetHandler, err := utils.NewIPSet(false)
 	if err != nil {
 		return err
 	}
 
 	svcIpSet, err := ipSetHandler.Create(svcIpSetName, utils.TypeHashIPPort, utils.OptionTimeout, "0")
 	if err != nil {
 		return fmt.Errorf("failed to create ipset: %s", err.Error())
 	}
 
 	ipvsSvcs, err := nsc.ln.ipvsGetServices()
 	if err != nil {
 		return errors.New("Failed to list IPVS services: " + err.Error())
 	}
 
 	svcSets := make([]string, 0, len(ipvsSvcs))
 	for _, ipvsSvc := range ipvsSvcs {
 		protocol := "udp"
 		if ipvsSvc.Protocol == syscall.IPPROTO_TCP {
 			protocol = "tcp"
 		}
 		set := fmt.Sprintf("%s,%s:%d", ipvsSvc.Address.String(), protocol, ipvsSvc.Port)
 		svcSets = append(svcSets, set)
 	}
 
 	err = svcIpSet.Refresh(svcSets, utils.OptionTimeout, "0")
 	if err != nil {
 		return fmt.Errorf("failed to sync ipset: %s", err.Error())
 	}
 
 	// Add iptables rule to allow input traffic to ipvs services
 	iptablesCmdHandler, err := iptables.New()
 	if err != nil {
 		return errors.New("Failed to initialize iptables executor" + err.Error())
 	}
 
 	comment := "allow input traffic to ipvs services"
 	args := []string{"-m", "comment", "--comment", comment, "-m", "set", "--match-set", svcIpSetName, "dst,dst", "-j", "ACCEPT"}
 	exists, err := iptablesCmdHandler.Exists("filter", "INPUT", args...)
 	if err != nil {
 		return fmt.Errorf("Failed to run iptables command: %s", err.Error())
 	}
 	if !exists {
 		err := iptablesCmdHandler.Insert("filter", "INPUT", 1, args...)
 		if err != nil {
 			return fmt.Errorf("Failed to run iptables command: %s", err.Error())
 		}
 	}
 
+	comment = "reject all unexpected input traffic"
+	args = []string{"-m", "comment", "--comment", comment, "-m", "set", "--match-set", svcIpSetName, "dst", "-j", "REJECT", "--reject-with", "icmp-port-unreachable"}
+	err = iptablesCmdHandler.AppendUnique("filter", "INPUT", args...)
+	if err != nil {
+		return fmt.Errorf("Failed to run iptables command: %s", err.Error())
+	}
+
 	return nil
 }

@bazuchan
Contributor

bazuchan commented Dec 14, 2018

It's a bit more difficult: you need to make one more ipset of TypeHashIP containing only the IP addresses of the above ipset minus the node IPs. Or take all IPs from the kube-dummy-if interface. I don't know which one is better.
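
The second option could be scripted roughly like this (a sketch only; the deny-set name is illustrative and happens to match the one used further down the thread):

# build an address-only deny set from whatever is currently assigned to kube-dummy-if
ipset create KUBE-SVC-ALL-deny hash:ip
for ip in $(ip -o -4 addr show dev kube-dummy-if | awk '{print $4}' | cut -d/ -f1); do
    ipset add KUBE-SVC-ALL-deny "$ip"
done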

@asteven
Contributor

asteven commented Dec 14, 2018

@bazuchan this should do the trick, right?

ipset create KUBE-SVC-ALL hash:ip,port timeout 0
ipset add KUBE-SVC-ALL 10.10.10.10,tcp:2048
ipset add KUBE-SVC-ALL 10.10.10.20,tcp:4096

iptables -A INPUT -m "comment" --comment "allow input traffic to ipvs services" \
   -m set --match-set KUBE-SVC-ALL dst,dst -j ACCEPT

iptables -A INPUT -m "comment" --comment "reject all unexpected traffic to kube-dummy-if" \
   -i kube-dummy-if -j REJECT --reject-with icmp-port-unreachable

@asteven
Contributor

asteven commented Dec 14, 2018

Figured it out: The following works as expected

modprobe dummy

ip link add kube-dummy-if type dummy

ip addr add 10.10.10.10 dev kube-dummy-if
ip addr add 10.10.10.20 dev kube-dummy-if

# host service
ncat -l 2022 &

# ipvs service
ncat 10.10.10.10 -l 2048 &
ncat 10.10.10.20 -l 4096 &

ipset create KUBE-SVC-ALL hash:ip,port timeout 0
ipset add KUBE-SVC-ALL 10.10.10.10,tcp:2048
ipset add KUBE-SVC-ALL 10.10.10.20,tcp:4096

iptables -A INPUT -m "comment" --comment "allow input traffic to ipvs services" \
   -m set --match-set KUBE-SVC-ALL dst,dst -j ACCEPT

With this in place I can reach the 'host-service' (port 2022) on any service IP.

ipset create KUBE-SVC-ALL-deny hash:ip timeout 0
ipset add KUBE-SVC-ALL-deny 10.10.10.10
ipset add KUBE-SVC-ALL-deny 10.10.10.20

iptables -A INPUT -m "comment" --comment "reject all unexpected traffic" \
   -m set --match-set KUBE-SVC-ALL-deny dst -j REJECT --reject-with icmp-port-unreachable

With these additional rules in place I can no longer reach host-service via service IPs.

I'll put this into Go code later tonight.

@MarkDeckert
Contributor

I use Kube-Router's own healthcheck on 20244 as an external check as well, since it acts as a great layer 7 check, although I check it via the actual node's IP:20244. Not sure if this would inadvertently block that.

@asteven
Contributor

asteven commented Dec 14, 2018

@MarkDeckert services running on the node's IP (non-service IPs) should not be affected by this.

asteven added a commit to asteven/kube-router that referenced this issue Dec 19, 2018
- on startup create ipsets and firewall rules
- on sync update ipsets
- on cleanup remove firewall rules and ipsets

Fixes cloudnativelabs#282.

Signed-off-by: Steven Armstrong <[email protected]>
asteven added a commit to asteven/kube-router that referenced this issue Dec 20, 2018
- on startup create ipsets and firewall rules
- on sync update ipsets
- on cleanup remove firewall rules and ipsets

Fixes cloudnativelabs#282.

Signed-off-by: Steven Armstrong <[email protected]>
@asteven
Contributor

asteven commented Dec 27, 2018

Not the same but related to #623

murali-reddy pushed a commit that referenced this issue Jan 10, 2019
…#618)

* prevent host services from being accessible through service IPs

- on startup create ipsets and firewall rules
- on sync update ipsets
- on cleanup remove firewall rules and ipsets

Fixes #282.

Signed-off-by: Steven Armstrong <[email protected]>

* ensure iptables rules are also available during cleanup

Signed-off-by: Steven Armstrong <[email protected]>

* first check if chain exists

Signed-off-by: Steven Armstrong <[email protected]>

* err not a new variable

Signed-off-by: Steven Armstrong <[email protected]>

* more redeclared vars

Signed-off-by: Steven Armstrong <[email protected]>

* maintain a ipset for local addresses and exclude those from our default deny rule

Signed-off-by: Steven Armstrong <[email protected]>

* copy/paste errors

Signed-off-by: Steven Armstrong <[email protected]>