review cross-nodegroup ingress rules #419
I was wondering if we could just open up everything across nodegroups, while suggesting that users use k8s network policies with Calico or Cilium (or whatever) for fine-grained access control.
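As an aside, the kind of fine-grained, pod-level control suggested here is expressed with the standard Kubernetes NetworkPolicy API. A minimal sketch, assuming a policy-capable CNI (Calico, Cilium, etc.) and an illustrative `sandbox` namespace:

```yaml
# Illustrative only: default-deny ingress for a namespace, then allow
# traffic solely from pods in the same namespace. Enforced by whichever
# policy-capable CNI (Calico, Cilium, ...) is installed in the cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: sandbox            # illustrative namespace
spec:
  podSelector: {}               # applies to all pods in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: sandbox
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}       # any pod in the same namespace
```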
In my experience network segregation in Kubernetes is generally applicable at the Namespace/Pod/Service level, rarely if ever by nodegroup. The standard expectation is that as a cluster user I don't have to think about nodes at all, and I can talk to another pod wherever it happens to be launched. So +1 to a shared SG for all Node Groups.

Re: using the VPC CIDR, I think it's worth thinking about the "existing VPC" case, and whether that might include other resources not in the cluster, which possibly shouldn't have full access. Alternatively, we could introduce a Cluster CIDR config, which would be a subset of the VPC CIDR used for the security group as well as creation of the original Subnets (i.e. #379).
I think there should be no difference between node-to-node rules within a nodegroup vs across nodegroups. It's really a question of which node ports should be open between nodes.

However, I do think we should provide a way for users to implement advanced security by blocking all ports that are not used to provide vital functionality. The use-case would be running a sandboxed application that should not be able to access (or be accessed by) anything else inside the cluster at all, or applications that have limited egress and no ingress access at all. I agree that the Kubernetes network policy API serves many of those use-cases well, but I also know that in some cases regulatory policies are defined around security groups, so there needs to be an option for that when it's needed. Arguably, one may wish to use network policy as well as SG isolation, which provides additional insurance and is implemented on the underlay network that lives completely outside the Kubernetes cluster. It's unfortunate that pods cannot be segregated into an SG of their own, separate from the host network SG.

In any case, the question here should really be about which ports should be kept closed between nodes by default. At the moment we open SSH (only when needed), we always open DNS, and then only high ports are open between nodegroups and to the outside world. I believe the outside-world case should remain the same, and we should actually add a mode where none of the ports are open, yet without resorting to private subnets (because NAT doesn't come for free - #392). But we should certainly allow use of all ports starting with e.g. 21, so that one can run an FTP server if they must, and we can provide a config parameter that lets the user specify a range of ports or something like that. I don't think these ports have to be open outside the VPC; the VPC CIDR is sufficient. As I said above, I believe we should only open higher ports outside the VPC by default, and we do need a way for users to close those easily.
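To make the config-parameter idea concrete, here is a purely hypothetical sketch; none of the port-range fields below exist in eksctl's ClusterConfig schema, they only illustrate the kind of knob being proposed:

```yaml
# Hypothetical sketch only - the openPorts* fields do not exist in eksctl;
# they illustrate a per-nodegroup port-range setting as discussed above.
apiVersion: eksctl.io/v1alpha5   # schema version illustrative
kind: ClusterConfig
metadata:
  name: example-cluster          # illustrative name
  region: us-west-2
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 3
    # hypothetical: range of node ports reachable from within the VPC CIDR
    openPortsWithinVPC:
      from: 21
      to: 65535
    # hypothetical: keep only high ports reachable from outside the VPC
    openPortsExternal:
      from: 1025
      to: 65535
```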
I suspect there is a bug at the moment, but I've not checked it. I think right now you can run a pod listening on e.g. port 80 and you will get access to it between nodes in one nodegroup, but not between nodegroups. But maybe I'm wrong - could someone on this thread look into it please?
I think we should have a shared SG, as well as a per-nodegroup SG. By default:
Options for sealing a nodegroup:
We should add the shared SG in new clusters, or during cluster upgrades. When it's missing, we will provide instructions for users on how to add it, perhaps provide a
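For reference, a minimal sketch of what a shared SG with a self-referencing ingress rule might look like in CloudFormation; resource names and the VPC reference are illustrative, not eksctl's actual template:

```yaml
# Illustrative sketch of a shared security group with a self-referencing
# ingress rule, so that any node carrying this SG can reach any other on
# all ports. Names are hypothetical.
Resources:
  SharedNodeSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Communication between all nodes in the cluster
      VpcId: !Ref VPC                        # assumes a VPC resource/parameter
  SharedNodeIngress:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref SharedNodeSecurityGroup
      SourceSecurityGroupId: !Ref SharedNodeSecurityGroup
      IpProtocol: "-1"                       # all protocols and ports
```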
I have the same feelings as yours. And that's why I was excited about aws/amazon-vpc-cni-k8s#165, and I've opened aws/amazon-vpc-cni-k8s#208. How about implementing what we can with the former today? Also, let's +1 the latter to see its progress there.
An overlay network does work in certain use-cases. I've been using one for a long time. But today I'm trying to figure out how I can get meaningful AWS VPC flow logs from a k8s workload, and I believe I can't use an overlay network in that case.
DNS can be closed if we could segregate node SGs and pod SGs. aws/amazon-vpc-cni-k8s#208 will allow that in a straightforward way.
So my idea seems orthogonal to yours. I'm fine if we could say the shared SG you've suggested is for pods: it will initially be associated to all the nodes (and hence pods, since pods currently share the node's SGs). Introducing aws/amazon-vpc-cni-k8s#165 will allow eksctl to associate the shared SG solely to pods, which provides more security.
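For the node-level association available today, this could amount to attaching the shared SG alongside the nodegroup's own SG on the instances. A rough CloudFormation sketch, with hypothetical resource and export names:

```yaml
# Illustrative sketch: a nodegroup's launch configuration carrying both its
# own SG and the shared SG. Until pod-level SG association (see
# aws/amazon-vpc-cni-k8s#165) exists, pods inherit these node SGs.
Resources:
  NodeGroupLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0            # placeholder EKS-optimized AMI
      InstanceType: m5.large
      SecurityGroups:
        - !Ref NodeGroupSecurityGroup           # per-nodegroup SG (hypothetical)
        - !ImportValue SharedNodeSecurityGroup  # shared SG exported by cluster stack
```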
@mumoshu thanks a lot for the insight on AWS CNI, I've not had the time to dig in, and I tend to naturally gravitate towards Weave Net to be honest also. I'll start working on the shared SG today. Adding a shared SG means that we will need to handle some backwards-compatibility cases and provide a way to make updates to the cluster stack, so some plumbing needs to be done.
At present, we only allow access to the majority of ports below 1025 within a single nodegroup. We need to review this, as users may wish to run pods that listen on port 80, for example. We probably need to open this up; perhaps we can use a shared SG in the cluster stack, or simply allow access on the basis of the VPC CIDR (which is what we have for DNS - #418).
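For illustration, the VPC-CIDR option could look roughly like the following; the security group reference and CIDR are placeholders, not the actual eksctl template:

```yaml
# Illustrative sketch: open the low port range between nodes by allowing
# ingress from the whole VPC CIDR, analogous to the DNS rules added in #418.
Resources:
  IngressLowPortsWithinVPC:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref NodeSecurityGroup   # hypothetical nodegroup SG
      IpProtocol: tcp
      FromPort: 0
      ToPort: 1024
      CidrIp: 192.168.0.0/16            # placeholder VPC CIDR
```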