
[EKS]: EKS Support for IPv6 #835

Closed
mikestef9 opened this issue Apr 14, 2020 · 44 comments

Labels: EKS Amazon Elastic Kubernetes Service

@mikestef9
Contributor

mikestef9 commented Apr 14, 2020

Hi all,

Looking to get input on your IPv6 requirements as we develop our strategy here. IPv6 for EKS is a broad topic and your feedback will help us prioritize the most requested IPv6 scenarios.

Some topics that would be especially useful to get clarity on:

  • What type of VPC resources do you want to access over IPv6?
  • Are you interested in dual stack (IPv4+IPv6) or do you need IPv6-only (IPv4 disallowed) access?
  • Are you planning to use IPv6 only within your VPC(s), or are you also planning to connect your pods to IPv6 internet?
  • Do you require image pulls from ECR over IPv6?
  • Anything else that is important to you!

We have identified various milestones and they are outlined separately in the initial comments below. Please upvote this issue if you are interested in IPv6 in general, but also add a +1 to any of the milestone comments below that matter most to you.

For anything you feel is not listed as a milestone below, please open a separate feature request issue on the roadmap.

Looking forward to hearing your thoughts here!

mikestef9 added the "Proposed (Community submitted issue)" and "EKS (Amazon Elastic Kubernetes Service)" labels on Apr 14, 2020
@mikestef9
Contributor Author

External IPv6 clients communicating with EKS API server
Note: This is separate from API server access from pods within the cluster (via X-ENI), which depends on pod addressing.

@mikestef9
Contributor Author

External IPv6 clients communicating with pods
Services deployed on EKS are accessible from the IPv6 Internet. This includes Ingress via ALB and ALB Ingress Controller, and Services of type=LoadBalancer via NLB and the AWS cloud provider. Pods may run IPv4.
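As a rough illustration of what this could look like from the user side once available, here is a sketch using the Python kubernetes client; the annotation names are assumptions based on the out-of-tree AWS Load Balancer Controller, not a committed interface:

```python
# Hypothetical sketch: expose an IPv4 workload to IPv6 clients through a
# dual-stack NLB. Annotation names assume the AWS Load Balancer Controller.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="web",
        annotations={
            "service.beta.kubernetes.io/aws-load-balancer-type": "external",
            "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "instance",
            "service.beta.kubernetes.io/aws-load-balancer-ip-address-type": "dualstack",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

The load balancer terminates IPv6 from external clients and forwards to IPv4 pod or node targets, so no in-cluster IPv6 is required for this milestone.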

@mikestef9
Contributor Author

Pod to external IPv6 clients / Pod to pod (dual-stack)
Pods are able to connect to IPv6 addresses outside the cluster, for both ingress and egress (depending on security group policy). Serves as a good intermediate testing ground for IPv6-only, or for serving IPv6 external users. Every pod would still require an IPv4 address, which is the upstream Kubernetes “dual-stack” feature (https://kubernetes.io/docs/concepts/services-networking/dual-stack/).
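For reference, the upstream feature surfaces as two new Service fields. A short sketch with the Python kubernetes client (the snake_case attribute names are my best guess at the client's mapping of ipFamilies/ipFamilyPolicy):

```python
# Sketch of a dual-stack Service spec, per the upstream Kubernetes dual-stack
# feature: every pod/Service keeps an IPv4 address and may also get IPv6.
from kubernetes import client

dual_stack_spec = client.V1ServiceSpec(
    selector={"app": "web"},
    ports=[client.V1ServicePort(port=80)],
    ip_family_policy="PreferDualStack",  # fall back to single-stack if unsupported
    ip_families=["IPv4", "IPv6"],        # order expresses preference
)
```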

@mikestef9
Contributor Author

External IPv6 clients to node / Node to external IPv6 clients (dual-stack)
Nodes (i.e., EC2 instances) can connect to IPv6 addresses outside the cluster, depending on security group policy. Pods running in the host network namespace can use this IPv6 connectivity even though the rest of Kubernetes is IPv4-only. Anything connecting to kube-proxy NodePorts (including NLB/ALB) can ingress over IPv6 and be proxied to an IPv4 pod. This requires IPv6 to be enabled at the VPC level and within the host operating system.

@mikestef9
Contributor Author

mikestef9 commented Apr 14, 2020

IPv6 only Pods
Requirements we have in mind:

  • VPC CNI plugin supports IPv6.
  • EKS API server is available via IPv6 to in-cluster clients (either via x-eni or NLB).
  • EKS API server can connect to IPv6 pods/services via x-eni for exec/logs and aggregated API server features.
  • CoreDNS and other add-ons support IPv6.
  • IPv6 CIDRs supported in managed nodes.
  • EKS/Fargate IPv6 support.
  • All customer container workloads need to support IPv6.
  • Nodes (kubelets) require node-local IPv6 to perform container health checks.

Note: Pods might still have access to IPv4, either via an egress protocol-translation/NAT device, or by using a private (pod/node-local) IPv4 address that is NATed to the node's IPv4 address (using iptables).

In this mode, IPv6-only pods do not consume an IPv4 address, allowing you to scale clusters far beyond any IPv4 addressing plan.

@mikestef9
Contributor Author

IPv6 only nodes
Worker nodes only get IPv6 addresses. We think this feature will only be useful after IPv6-only pods.

@ffrediani

ffrediani commented Apr 14, 2020

Hello Mike,
Most of these things shouldn't be optional. Anything that communicates must have IPv6 support nowadays. If the intent is to choose priorities, then fine.

I would say that any VPC resource should be accessible via IPv6.
Dual-stack is and will remain the most common approach, but IPv6-only is becoming a reality in IPv6-only datacenters like Facebook's. Being fully IPv6-only is, however, the kind of feature that can surely be a lower priority than basic IPv6 support.
Certainly pods must be able to connect to the IPv6 internet as well. They must be able to serve data directly or via load balancers, so there are no more blockers or excuses for people not to make their content available over IPv6 too.

mikestef9 removed the "Proposed (Community submitted issue)" label on Apr 20, 2020
mikestef9 self-assigned this on Apr 20, 2020
@ffrediani

Hello folks,
Do you have any update on this so far?
I often get excuses for not having IPv6 on something, mostly due to the lack of IPv6 in EKS. Not nice to have to deal with these kinds of arguments, really.

Do you believe we can expect it at the beginning of the next half of 2020 already?

@ffrediani

It is because of attitudes like this that the battle for IPv6 to be where it should already be has been going on for 20 years. It still seems that developers (always them) can't get IPv6 into their minds.
They truly believe it's OK to release a product to market in 2020 without IPv6 support, just because "nobody uses it, so it's unimportant and why should I care?" That is usually the most absurd part. Anyone developing a product in 2020 should be ashamed to release it without IPv6, regardless of the reasons.

What a pity we still have to go through these scenarios. And that keeps giving people their chain of excuses for not having IPv6 on something.

@zajdee

zajdee commented Jun 3, 2020

Please consider providing a managed NAT64 service so that the pods can run single-stack, IPv6-only (this helps administration and design) while maintaining access to the IPv4 Internet.

Also, it would be extremely useful to provide a /64 routed subnet to an EC2 instance, to avoid the mess of fiddling with multiple ENIs and adding/removing individual addresses (as is the current approach in the AWS CNI plugin).

Another extremely useful feature would be a /48 assignment per VPC (or at least multiple /56s) to support that /64 per EC2 instance.

Another feature I'd love to see is dual-stack support for those Kubernetes clusters that process both IPv4 and IPv6 ingress traffic from the Internet (that is: dual-stack the ingress, single-stack the rest of the communication within the cluster).

IPv6 support in the Network Load Balancer is obviously the next item on the IPv6 support agenda. :-)

Thank you for considering any of those ideas.

@NoseyNick

Honestly I'd be delighted with 100% IPv6-only on the inside, with dual-stack LBs on the outside to support those poor people who are still stuck on the olde IPv4 internet of the early 2000s 😛

@sftim

sftim commented Jul 23, 2020

Fargate IPv6 support would be really handy for a bunch of different reasons.

@heidemn

heidemn commented Jul 23, 2020

External IPv6 clients communicating with pods
Services deployed on EKS are accessible from the IPv6 Internet. This includes Ingress via ALB and ALB Ingress Controller

Isn't this already possible today with the ALB ingress controller? I saw e.g. this PR: kubernetes-sigs/aws-load-balancer-controller#991, merged 08/2019.
(not sure, since we're currently using Nginx + ELBv1)

@autarchprinceps

Since CloudFront and/or an ALB Ingress or Service LB could handle any IPv4-only clients, it would be great to have a fully IPv6 backend network option, but this isn't even possible in plain VPC & EC2 yet, never mind EKS.
Since the reverse is also possible (i.e., using CloudFront or ALB to serve IPv6 clients from IPv4-only internals), the main point inside the VPC would be to reduce or eliminate the need for IPv4 ranges in VPCs, especially in Transit Gateway, VPC Peering, etc. scenarios, where reusing the same IP range for different VPCs over and over again causes problems.

@ffrediani

How much longer are we going to wait for IPv6 support? This should have been there from day zero, and it has been the excuse people use for not offering IPv6 publicly on many products.

@DanielDamito

Hello,

Do you have any update about IPv6?

@ghost

ghost commented Oct 6, 2020

Any updates here? Roadmap?

I recently came across this as I wanted to create my first EKS cluster and - of course - was setting up dual-stack VPC/subnets as a basis. I was literally shocked to see the complete absence of IPv6 support in such a major infrastructure product in the cloud era. I spent half a day trying to find a solution because my brain refused to believe this could be true.

@ffrediani

It's unbelievable how badly people can still treat IPv6 support in new products nowadays, as if it were something "optional, for later, less important, or cosmetic". Meanwhile a lot of new platforms, among them some quite large ones that generate a fair amount of traffic, remain without IPv6 support because of this issue.

@rkenney525

If it helps get this stuff out the door faster, I'm more than happy to dedicate some after-work time to review, pair, etc. -- just point me in the right direction. I saw this PR a few days ago but haven't seen much traction, so I don't know if it's part of the main IPv6 effort or not. IPv6-only pods would be the biggest win for me, but I'll help wherever I can.

@mikestef9
Contributor Author

mikestef9 commented Oct 8, 2020

Hey all,

IPv6 is a major priority for the EKS team, but it's also a major project that requires changes in nearly every component of EKS, along with hard dependencies on a few other AWS service feature launches.

We have come a long way in our design since originally opening this issue, and it will look similar to @zajdee's comment above.

At cluster creation time, you will have the option to choose IPv4 or IPv6 as the pod IP address family. If you choose IPv6, pods will only get IPv6 addresses. When new nodes are launched in the cluster, they will each be assigned a /80 IPv6 CIDR block to be used by pods. However, IPv4 connections will still be possible at the boundaries of the cluster. With dual stack ALB (available now) and NLB (coming soon), you can accept IPv4 traffic that will be routed to pods. For egress, we will use a NAT64 solution that will allow pods to communicate with IPv4 services outside the cluster.

This design solves all pod density and IP exhaustion challenges faced today, without requiring all IPv4 services in your environment to first be migrated to IPv6.

Again, this design requires some features to be first launched from other AWS service teams, so no timeline to share right now, but it is a project we are actively working on.

-Mike

@TBBle

TBBle commented Oct 8, 2020

If I understand the last comment correctly, the nodes themselves, i.e. for NodePort Services, will be unaffected by this, as they will depend on the Node's IPv4/IPv6/Dual-Stack setup and can forward to the Service's ClusterIP in whatever the chosen Pod IP address family is? I'm assuming ClusterIP will match PodIP address family.

I ask mostly because our use-case has lots of UDP NodePort Services (Agones-managed), but partly because it seems that if I'm right, NLB instance-routing (i.e. without #981) talks to NodePort Services and so should be isolated from this change.

@olemarkus

There have been a number of fixes to the open source components, and the 1.22 release of https://github.com/kubernetes/cloud-provider-aws will have the necessary fixes for running IPv6.

kOps already supports IPv6 with non-global IPv6 addresses.

There is also a PR for using the new IPv6 prefix functionality natively with any CNI that can use the Kubernetes IPAM (which most can). This will give all pods global IPv6 addresses. (I do have a screenshot of this too: https://twitter.com/olemarkus/status/1424426211393122311)

So at least EKS-D should be able to use this functionality.

@autarchprinceps

autarchprinceps commented Nov 24, 2021

AWS released support for IPv6-only VPCs, subnets, and NLBs/ALBs. This would be very interesting to see in EKS as well. IPv4-only clients, if they still exist, can still be served by CloudFront edge locations. It would be a big improvement for us to no longer have to assign IPv4 ranges to VPCs, since for TGW purposes they then need to be unique again.
Especially since this will likely take some time to propagate into the various components, having it possible in the core should be planned early on.

@hakman

hakman commented Nov 24, 2021

@autarchprinceps I think there is still a need for a NAT64 appliance and/or maybe some STS dual-stack endpoints. Pretty annoying to not have IRSA working on IPv6.

@zajdee

zajdee commented Nov 24, 2021

AWS has also deployed the DNS64 support on the VPC DNS infrastructure, although it wasn't communicated publicly (yet).

@marcuz

marcuz commented Nov 24, 2021

@autarchprinceps I think there is still a need for a NAT64 appliance and/or maybe some STS dual-stack endpoints. Pretty annoying to not have IRSA working on IPv6.

I agree 100%. IPv6-only is a step further, but the reality is most of us need dual-stack (particularly for egress).

@hakman

hakman commented Nov 24, 2021

@zajdee I am aware of the DNS64 support, though that can also be done from CoreDNS. You still need the NAT64 part for it to be of any use.
I would add that the IPv6 rollout in AWS is a little slow in general. Even the recent EC2 dual-stack API is only available in very few regions. Hopefully things will speed up a little.

@tvi

tvi commented Nov 27, 2021

NAT64 was announced in a few regions:
https://aws.amazon.com/about-aws/whats-new/2021/11/aws-nat64-dns64-communication-ipv6-ipv4-services/

@mikestef9
Contributor Author

mikestef9 commented Jan 6, 2022

EKS support for IPv6 is now available!

A point I want to emphasize is that to solve the IPv4 exhaustion problem, we have gone with an approach of single-stack IPv6 K8s, but with dual-stack pods. We give pods an 'egress-only' node-internal private IPv4 address that gets SNATed to the node's primary VPC IPv4 address on the way out if necessary. Kubernetes doesn't know anything about this non-unique v4 address.

Note: EKS IPv6 support is not available in Beijing or Ningxia regions, as VPC does not support prefix assignment in those regions.
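For reference, the cluster-level switch is the new IP family setting at creation time. A minimal boto3 sketch (role ARN and subnet IDs below are placeholders, and the parameter names reflect my reading of the CreateCluster API at launch):

```python
# Minimal sketch: create an EKS cluster with the IPv6 address family.
# Role ARN and subnet IDs are placeholders; the subnets must be in a VPC that
# has an IPv6 CIDR block associated.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_cluster(
    name="ipv6-demo",
    roleArn="arn:aws:iam::111122223333:role/eksClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
    kubernetesNetworkConfig={"ipFamily": "ipv6"},  # the new option
)
```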

Resources associated with launch:

@sftim

sftim commented Jan 6, 2022

I'm guessing that any workload that's aware of RFC6052 NAT64 can also send traffic to the IPv4 internet provided the VPC is set up for that.

@sheetaljoshi

Yes, IPv4 endpoints are supported. EKS takes a different approach, supporting IPv4 endpoints via an egress-only IPv4 solution for pods: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/10-aws.conflist#L14. Details to be published soon.

@anguslees

(and yes, any workload that's aware of RFC6052 NAT64, can just send those IPv6 packets out to their nat64 gateway directly - it will be treated the same as any other v6 traffic to/from the pod.)
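To make that concrete, the RFC 6052 mapping is just the well-known 64:ff9b::/96 prefix with the IPv4 address in the low 32 bits; a quick Python sketch (the example addresses are arbitrary):

```python
# Sketch of the RFC 6052 mapping: embed an IPv4 address in the NAT64
# well-known prefix 64:ff9b::/96, so an IPv6-only pod can reach an IPv4-only
# destination through the NAT64 gateway even without DNS64.
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_nat64(ipv4: str) -> ipaddress.IPv6Address:
    """Return the IPv6 address that NAT64 will translate back to `ipv4`."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(synthesize_nat64("192.0.2.10"))  # -> 64:ff9b::c000:20a
```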

@GouravSingh2580

I've set up a dual-stack VPC with IPv6 subnets and enabled DNS64.
However, I'm facing connectivity issues with the Stripe API, which I believe is still IPv4.


Given that DNS64 is enabled with the 64:ff9b::/96 prefix in the subnet, it should ideally allow access to any IPv4 service, correct?

Can someone assist me in identifying what might be missing here?
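In case it's relevant, this is roughly how I've been checking the subnet's route table for the NAT64 route (boto3 sketch; the subnet ID is a placeholder), since my understanding is that DNS64 only synthesizes AAAA answers and traffic to 64:ff9b::/96 still needs a route to a NAT gateway:

```python
# boto3 sketch: does the subnet's route table have the NAT64 route?
# (DNS64 only rewrites DNS answers; packets to 64:ff9b::/96 still need a
# route to a NAT gateway.) The subnet ID below is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

tables = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": ["subnet-aaaa1111"]}]
)["RouteTables"]

for table in tables:
    for route in table["Routes"]:
        if route.get("DestinationIpv6CidrBlock") == "64:ff9b::/96":
            print("NAT64 route found ->", route.get("NatGatewayId"))
```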

Thank you!
