Add IPv6 support #348
Conversation
Hi @aojea. Thanks for your PR. I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
thanks for the PR, i've added some generic comments.
```diff
@@ -74,11 +76,14 @@ clusterName: "{{.ClusterName}}"
 bootstrapTokens:
 - token: "{{ .Token }}"
 {{ if .ControlPlaneEndpoint -}}
-controlPlaneEndpoint: {{ .ControlPlaneEndpoint }}
+controlPlaneEndpoint: "{{ .ControlPlaneEndpoint }}"
 {{- end }}
 # we use a well know port for making the API server discoverable inside docker network.
 # from the host machine such port will be accessible via a random local port instead.
```
this comment is about the port.
I don't understand the comment. I had to quote the ControlPlaneEndpoint because it failed to create the JSON from the template when using IPv6 addresses.
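To illustrate the failure mode: an IPv6 endpoint is bracketed (e.g. [fd00::1]:6443), and a value starting with an unquoted [ is parsed by YAML/JSON as the start of a flow sequence, not a string. A minimal sketch of the quoted template (hypothetical values, not kind's actual code):

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// quoting the template value keeps "[fd00::1]:6443" a plain string
	// instead of letting the leading '[' open a YAML flow sequence
	tmpl := template.Must(template.New("cfg").Parse(
		`controlPlaneEndpoint: "{{ .ControlPlaneEndpoint }}"` + "\n"))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"ControlPlaneEndpoint": "[fd00::1]:6443", // hypothetical IPv6 endpoint
	})
	// prints: controlPlaneEndpoint: "[fd00::1]:6443"
}
```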
i wanted to say that the comment `# we use a well know port for making...` is now above advertiseAddress, while it applies to the port below.
I think that the comment is ok; it's above the api section, not above the port field, to explain that the apiEndpoint uses a well-known port.
```go
if err != nil {
	return errors.Wrap(err, "failed to get IP for bootstrap node")
}

// get the control plane endpoint, in case the cluster has an external load balancer in
// front of the control-plane nodes
controlPlaneEndpoint, err := getControlPlaneEndpoint(ec)
```
`getControlPlaneEndpoint` might need modifications based on IPv6. have you tested an HA setup?
didn't test it, but will do. I've already modified the function `getControlPlaneEndpoint` to support IPv6, check L135.
@neolit123 results of an HA setup test

```
# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                NAMES
7b37bcbcf3b6   kindest/node:v1.13.3   "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes                                        kind-worker2
89fa42fd404e   kindest/node:v1.13.3   "/usr/local/bin/entr…"   12 minutes ago   Up 12 minutes                                        kind-worker1
0d0caea990ca   kindest/node:v1.13.3   "/usr/local/bin/entr…"   12 minutes ago   Up 12 minutes   33453/tcp, 0.0.0.0:33453->6443/tcp   kind-control-plane3
422bd7c52d53   kindest/node:v1.13.3   "/usr/local/bin/entr…"   13 minutes ago   Up 12 minutes   34151/tcp, 0.0.0.0:34151->6443/tcp   kind-control-plane2
780f8cde9d6d   kindest/node:v1.13.3   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes   37927/tcp, 0.0.0.0:37927->6443/tcp   kind-control-plane1
3f2e867b33b6   kindest/node:v1.13.3   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes   40189/tcp, 0.0.0.0:40189->6443/tcp   kind-lb

# kubectl get endpoints --all-namespaces
NAMESPACE     NAME                      ENDPOINTS                                                                                    AGE
default       kubernetes                [2001:db8:1::242:ac11:3]:6443,[2001:db8:1::242:ac11:4]:6443,[2001:db8:1::242:ac11:5]:6443   11m
kube-system   kube-controller-manager   <none>                                                                                      10m
kube-system   kube-dns                  10.32.0.2:53,10.32.0.3:53,10.32.0.2:53 + 1 more...                                          10m
kube-system   kube-scheduler            <none>                                                                                      10m
```
with current parameters:

```
./kind create cluster --ipv6 --config config-ipv6.yaml
```

and config-ipv6.yaml:

```yaml
# this config file contains all config fields with comments
kind: Config
apiVersion: kind.sigs.k8s.io/v1alpha2
# 3 control plane nodes and 2 workers
nodes:
# the control plane node config
- role: control-plane
  replicas: 3
  # patch the generated kubeadm config with some extra settings
  kubeadmConfigPatches:
  - |
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    metadata:
      name: config
    networking:
      serviceSubnet: "fd00:1234::/112"
# the two workers
- role: worker
  # replicas specifies the number of nodes to create with this configuration
  replicas: 2
- role: external-load-balancer
```
@BenTheElder I'd like to add some e2e scenarios, could you help me or guide me?
Should those tests be part of this PR?
@aojea
hm, given weave supposedly does not support ipv6, how are you testing this?

> ./kind create cluster --ipv6

we need to remove the `--ipv6` flag from the CLI; it should be config only. the CLI is only for the basic functionality.

> @BenTheElder I'd like to add some e2e scenarios, could you help me or guide me? Should those tests be part of this PR?

let's get the support in kind first, and then we can think about e2e.
ideally things like ipv6 should be owned by sig-network, but i think we might be able to add this to the sig-cluster-lifecycle dashboard too.
i will try your PR later today myself, hopefully.
@neolit123 seems that the cluster control plane is ipv6 but the data plane is ipv4; let me do more tests changing to the CNI bridge plugin and come back to you.
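For reference, "changing to the CNI bridge plugin" means a config along these lines; a sketch with an assumed subnet value and file path, not the file actually tested here:

```sh
# hypothetical sketch: a bridge + host-local CNI config with an IPv6 range
# (subnet and path are assumptions, not from this PR)
cat > /etc/cni/net.d/10-bridge.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "kind-bridge",
  "plugins": [
    {
      "type": "bridge",
      "isGateway": true,
      "ipam": {
        "type": "host-local",
        "ranges": [ [ { "subnet": "fd00:100::/64" } ] ]
      }
    }
  ]
}
EOF
```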
@aojea nice.

> podSubnet: "fd00:100::/64" serviceSubnet: "fd00:1234::/112"

do we have to instruct the users to patch the config with IPv6 CIDRs if they want IPv6 support?

> kind create cluster --ipv6

ideally this should be a flag in the kind Config.
Where would be the best place to add the ipv6 flag?
What do you think about adding it to the new networking section of the v1alpha3 API (#340)?
> What do you think about adding it to the new networking section of the v1alpha3 API

yes, makes sense.

and how about this?

> do we have to instruct the users to patch the config with IPv6 CIDRs if they want IPv6 support?
@neolit123 that's a good question, but I lack context here; I think it could be a topic to discuss in kubeadm.
I could check that kubeadm sets a default serviceSubnet if none is provided. Should we do the same for IPv6? If that's the case, we would need to add a flag to kubeadm.
> I could check that kubeadm sets a default serviceSubnet if none is provided. Should we do the same for IPv6? If that's the case, we would need to add a flag to kubeadm.

here are the kubeadm defaults for these CIDRs:
https://github.com/kubernetes/kubernetes/blob/b66e332d3c19ce78e00dd7c904fa29e8f6784ba0/cmd/kubeadm/app/apis/kubeadm/v1beta1/defaults.go#L31-L34
it's not very likely that we will add a flag or a field in the kubeadm config for this.
IPv6 is explicit on the kubeadm side, and it's up to the administrator to decide which stack to use, so kind as a kubeadm wrapper might have to handle this explicitly (as your patch is doing here).
at least that's how i see it for now.
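For reference, the constants behind that link are IPv4-only; paraphrased below (check the link for the exact source):

```go
// paraphrased from kubeadm's v1beta1 defaults.go; not a verbatim copy
const (
	// DefaultServiceDNSDomain defines the default cluster-internal domain name for Services and Pods
	DefaultServiceDNSDomain = "cluster.local"
	// DefaultServicesSubnet defines the default service subnet range; note it is an IPv4 CIDR,
	// so an IPv6 cluster has to set its subnets explicitly
	DefaultServicesSubnet = "10.96.0.0/12"
)
```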
@BenTheElder it's failing to verify the docker image. Do I need to recreate the file? How can I do it?
/hold until I test the haproxy scenario; I also want to test it changing the CNI plugin to verify that IPv6 works ok. Please add more comments and suggestions.
[intend to start doing some more tests around this ~tomorrow] Thanks again for working on this, apologies for the review latency on my end. I've been looking deeper into the CNI options FWIW; intend to do some more testing with calico 👍
@BenTheElder after playing with different CNI plugins with unsatisfactory results, I explored other options.
force-pushed from 911c3bb to f694fd1
jobs are failing because of …
it seems related to this #126 🤔 are the CI jobs using a different …
/test pull-kind-conformance-parallel-ipv6
force-pushed from 426d602 to d5c22dc
/test pull-kind-conformance-parallel-ipv6
/test pull-kind-conformance-parallel-ipv6
/retest
/test pull-kind-conformance-parallel-ipv6
Adds an environment variable (IP_FAMILY) to the script that runs the e2e tests in the CI, to allow choosing the Kubernetes IP family used in the clusters. By default the variable chooses ipv4.
This commit allows kind to create IPv6 Kubernetes clusters and makes the code ready to implement dual-stack support. For simplicity, only one address of each IP family is considered. It adds 2 new options to the v1alpha3 API: ipFamily and serviceSubnet.
/test pull-kind-conformance-parallel-ipv6
/hold
/test pull-kind-conformance-parallel-ipv6
@aojea: The following test failed, say /retest to rerun it:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
```diff
@@ -130,6 +133,34 @@ run_tests() {
   KUBECONFIG="$(kind get kubeconfig-path)"
   export KUBECONFIG

+  if [[ "${IP_FAMILY:-ipv4}" == "ipv6" ]]; then
```
this is not great, but not exactly our fault; we can revisit it later, and it's orthogonal to the actual CLI work for ipv6.
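The guarded block is CI-specific setup; the variable name IP_FAMILY comes from the diff above, but the body here is an illustrative sketch only, not the PR's actual script:

```sh
if [[ "${IP_FAMILY:-ipv4}" == "ipv6" ]]; then
  # illustrative only: IPv6 runs need extra host prep in CI,
  # e.g. ensuring IPv6 is enabled and forwarded on the host
  sysctl -w net.ipv6.conf.all.disable_ipv6=0
  sysctl -w net.ipv6.conf.all.forwarding=1
fi
```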
```diff
@@ -319,12 +355,24 @@ controlPlaneEndpoint: {{ .ControlPlaneEndpoint }}
 # so we need to ensure the cert is valid for localhost so we can talk
 # to the cluster after rewriting the kubeconfig to point to localhost
 apiServer:
-  certSANs: [localhost, {{.APIServerAddress}}]
+  certSANs: [localhost, "{{.APIServerAddress}}"]
```
controlPlaneEndpoint above needs quoting
I had to add a tiny commit to fix the quoting in the config (#636), but otherwise this is good to go. It's merged now! 🎉
This PR allows creating IPv6 Kubernetes clusters with kind, with a future dual-stack implementation in mind; for simplicity, only one address of each protocol is considered.

It adds a new option, ipFamily, to the v1alpha3 API that allows choosing the IP family of the cluster.

To avoid issues with the different networking options, the podSubnet and serviceSubnet kubeadm values are predefined (the IPv6 values quoted earlier in the thread: podSubnet "fd00:100::/64", serviceSubnet "fd00:1234::/112").

We can create a Kubernetes IPv6 cluster with the following config:
Test results with IPv4 and IPv6
References:
Fixes #280