[EKS]: EKS Support for IPv6 #835
Milestone scenarios (each outlined as a separate comment, per the issue description below):

- External IPv6 clients communicating with the EKS API server
- External IPv6 clients communicating with pods
- Pod to external IPv6 clients / pod to pod (dual-stack)
- External IPv6 clients to node / node to external IPv6 clients (dual-stack)
- IPv6-only pods
  - Note: Pods might still have access to IPv4 using an egress protocol-translation (NAT) device, or using a private (to the pod/node) IPv4 address NATed to the node's IPv4 address (using iptables). In this mode, IPv6-only pods do not consume an IPv4 address, allowing you to scale clusters far beyond any IPv4 addressing plan.
- IPv6-only nodes
Hello Mike, I would say that any VPC resource should be accessible via IPv6.
Hello folks, do you believe we can expect this at the beginning of the second half of 2020 already?
It is due to attitudes like this that we have been fighting this battle for 20 years for IPv6 to be where it should already be. It seems developers (always them) still can't get IPv6 into their minds. What a pity we still have to go through these scenarios, and that keeps giving people their chain of excuses for not supporting IPv6 in something.
Please consider providing a managed NAT64 service so that pods can run single-stack, IPv6-only (this helps administration and design) while maintaining access to the IPv4 Internet. A few related ideas:

- It would be extremely useful to provide a /64 routed subnet to an EC2 instance, to avoid the mess of fiddling with multiple ENIs and adding/removing individual addresses (the current approach in the AWS CNI plugin).
- Another extremely useful feature would be a /48 assignment per VPC (or at least multiple /56s) to support that /64 per EC2 instance.
- Another feature I'd love to see is dual-stack support for Kubernetes clusters that process both IPv4 and IPv6 ingress traffic from the Internet (that is: dual-stack at the ingress, single-stack for all other communication within the cluster).
- IPv6 support in the Network Load Balancer is obviously the next item on the IPv6 support agenda. :-)

Thank you for considering any of these ideas.
Honestly I'd be delighted with 100% IPv6-only on the inside, with dual-stack LBs on the outside to support those poor people who are still stuck on the olde IPv4 internet of the early 2000s 😛 |
Fargate IPv6 support would be really handy for a bunch of different reasons. |
Isn't this already possible today with the ALB ingress controller? See e.g. kubernetes-sigs/aws-load-balancer-controller#991, merged 08/2019.
Since CloudFront and/or ALB Ingress or Service LB could handle any IPv4-only clients, it would be great to have a fully IPv6 backend network option, but this isn't even possible in plain VPC & EC2 yet, never mind EKS.
How much longer are we going to wait for IPv6 support? This should have been here from day zero, and its absence has been the excuse people use for not offering IPv6 on the public-facing side of many products.
Hello, do you have any update on IPv6?
Any updates here? A roadmap? I recently came across this when I wanted to create my first EKS cluster and, of course, was setting up dual-stack VPC/subnets as a basis. I was literally shocked to discover the outright non-existence of IPv6 support in such a major infrastructure product in the cloud era. I spent half a day trying to find a solution because my brain refused to believe this could be true.
It's unbelievable how badly people can still treat IPv6 support in new products nowadays, as if it were something "optional, for later, less important, or cosmetic". Meanwhile a lot of new platforms, among them some quite large ones that generate a fair amount of traffic, remain without IPv6 support because of this issue.
If it helps get this stuff out the door faster, I'm more than happy to dedicate some after-work time to review, pair, etc. -- just point me in the right direction. I saw this PR a few days ago but haven't seen much traction, so I don't know if it's part of the main IPv6 effort or not. IPv6-only pods would be the biggest win for me, but I'll help wherever I can.
Hey all, IPv6 is a major priority for the EKS team, but it's also a major project that requires changes in nearly every component of EKS, along with hard dependencies on a few other AWS service feature launches. We have come a long way in our design since originally opening this issue, and it will look similar to @zajdee's comment above.

At cluster creation time, you will have the option to choose IPv4 or IPv6 as the pod IP address family. If you choose IPv6, pods will only get IPv6 addresses. When new nodes are launched in the cluster, they will each be assigned a /80 IPv6 CIDR block to be used by pods.

However, IPv4 connections will still be possible at the boundaries of the cluster. With dual-stack ALB (available now) and NLB (coming soon), you can accept IPv4 traffic that will be routed to pods. For egress, we will use a NAT64 solution that will allow pods to communicate with IPv4 services outside the cluster.

This design solves all the pod density and IP exhaustion challenges faced today, without requiring all IPv4 services in your environment to first be migrated to IPv6. Again, this design requires some features to be launched first by other AWS service teams, so there is no timeline to share right now, but it is a project we are actively working on.

-Mike
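(For anyone trying to picture the "choose the IP family at cluster creation time" flow Mike describes, here is a minimal sketch using boto3. The role ARN and subnet IDs are placeholders, and the `kubernetesNetworkConfig`/`ipFamily` parameter reflects how the API eventually shipped; treat this as illustrative, not official documentation.)

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Placeholder role ARN and subnet IDs; the subnets must belong to a
# dual-stack VPC with IPv6 CIDRs assigned.
response = eks.create_cluster(
    name="ipv6-demo",
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
    # Selecting ipv6 here means pods get only IPv6 addresses;
    # the default family is ipv4.
    kubernetesNetworkConfig={"ipFamily": "ipv6"},
)
print(response["cluster"]["status"])  # e.g. "CREATING"
```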
If I understand the last comment correctly, the nodes themselves, i.e. for NodePort Services, will be unaffected by this, as they will depend on the Node's IPv4/IPv6/Dual-Stack setup and can forward to the Service's ClusterIP in whatever the chosen Pod IP address family is? I'm assuming ClusterIP will match PodIP address family. I ask mostly because our use-case has lots of UDP NodePort Services (Agones-managed), but partly because it seems that if I'm right, NLB instance-routing (i.e. without #981) talks to NodePort Services and so should be isolated from this change. |
There have been a number of fixes for the open-source components, and the 1.22 release of https://github.com/kubernetes/cloud-provider-aws will have the necessary fixes for running IPv6. kOps already supports IPv6 with non-global IPv6 addresses. There is also a PR for using the new IPv6 prefix functionality natively with any CNI that can use the Kubernetes IPAM (which most can). This will give all pods global IPv6 addresses. (I do have a screenshot of this too: https://twitter.com/olemarkus/status/1424426211393122311.) So at least EKS-D should be able to use this functionality.
AWS released support for IPv6-only VPCs, subnets, and NLBs/ALBs. This would be very interesting to see in EKS as well. IPv4-only clients, if they still exist, can still be served by CloudFront edge locations. It would be a big improvement for us to no longer have to assign IPv4 ranges to VPCs, since for TGW purposes those ranges need to be unique.
@autarchprinceps I think there is still a need for a NAT64 appliance and/or maybe some STS dual-stack endpoints. Pretty annoying to not have IRSA working on IPv6. |
AWS has also deployed the DNS64 support on the VPC DNS infrastructure, although it wasn't communicated publicly (yet). |
I agree 100%. IPv6-only is a step further, but the reality is most of us need dual-stack (particularly for egress). |
@zajdee I am aware of the DNS64 support, though that can be done from CoreDNS too. You still need the NAT64 part for it to be of any use.
NAT64 was announced in a few regions: |
EKS support for IPv6 is now available!

A point I want to emphasize is that to solve the IPv4 exhaustion problem, we have gone with an approach of single-stack IPv6 Kubernetes, but with dual-stack pods. We give pods an 'egress-only', node-internal private IPv4 address that gets SNATed to the node's primary VPC IPv4 address on the way out when necessary. Kubernetes doesn't know anything about this non-unique IPv4 address.

Note: EKS IPv6 support is not available in the Beijing or Ningxia regions, as VPC does not support prefix assignment there.

Resources associated with launch:
|
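(A quick way to see the "single-stack from Kubernetes' point of view" behavior in practice: list pod IPs with the Kubernetes Python client. This is a sketch assuming kubeconfig access to an IPv6-family EKS cluster; every pod IP reported by the API should be IPv6, while the egress-only IPv4 address never appears.)

```python
from kubernetes import client, config

# Assumes a kubeconfig pointing at an IPv6-family EKS cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(limit=10).items:
    # On an IPv6 cluster, status.pod_ip is a global IPv6 address;
    # the node-internal egress IPv4 address is invisible to Kubernetes.
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.pod_ip}")
```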
I'm guessing that any workload that's aware of RFC6052 NAT64 can also send traffic to the IPv4 internet provided the VPC is set up for that. |
Yes, IPv4 endpoints are supported. EKS takes a different approach, supporting IPv4 endpoints via an egress-only IPv4 solution for pods: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/10-aws.conflist#L14. Details to be published soon.
(And yes, any workload that's aware of RFC6052 NAT64 can just send those IPv6 packets out to its NAT64 gateway directly; it will be treated the same as any other v6 traffic to/from the pod.)
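(For the curious, RFC 6052 address synthesis is simple enough to do by hand: the IPv4 address is embedded in the low 32 bits of the NAT64 prefix. A small illustration in Python, using the well-known 64:ff9b::/96 prefix:)

```python
import ipaddress

def nat64_synthesize(ipv4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix (RFC 6052)."""
    net = ipaddress.IPv6Network(prefix)
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(net.network_address) | int(v4))

# 198.51.100.7 -> 0xc6336407 -> 64:ff9b::c633:6407
print(nat64_synthesize("198.51.100.7"))
```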
I've set up a dual-stack VPC with IPv6 subnets and enabled DNS64. Given that DNS64 is enabled, can someone assist me in identifying what might be missing here? Thank you!
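(One suggestion for debugging a setup like this: verify DNS64 is actually answering before suspecting the NAT64 path. RFC 7050 reserves ipv4only.arpa, a name that has only A records, so a working DNS64 resolver will synthesize AAAA answers for it. A minimal check, run from a host inside the VPC:)

```python
import socket

# ipv4only.arpa has only A records (192.0.0.170/171). If the VPC
# resolver is doing DNS64, getaddrinfo returns synthesized AAAA
# answers under the NAT64 prefix, e.g. 64:ff9b::c000:aa.
for *_, sockaddr in socket.getaddrinfo("ipv4only.arpa", None, socket.AF_INET6):
    print(sockaddr[0])
```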
Hi all,
Looking to get input on your IPv6 requirements as we develop our strategy here. IPv6 for EKS is a broad topic and your feedback will help us prioritize the most requested IPv6 scenarios.
Some topics that would be especially useful to get clarity on:
We have identified various milestones and they are outlined separately in the initial comments below. Please upvote this issue if you are interested in IPv6 in general, but also add a +1 to any of the milestone comments below that matter most to you.
For anything you feel is not listed as a milestone below, please open a separate feature request issue on the roadmap.
Looking forward to hearing your thoughts here!