[docs] Kubernetes support matrix #1518
With the current state of things (v1alpha2+), the real restriction is based on the versions supported by CABPK through the exposed kubeadm configuration. For official support, I would like to state plainly that we do not support any Kubernetes versions < n-2, since those fall out of support by Kubernetes proper. I'm not sure that v1.17 will support kubeadm v1beta1 with the adoption of v1beta3 (kubernetes/kubeadm#1796). We can likely transition to using the v1beta2 config for supporting v1.15-v1.17, but that will require us to have a migration path to convert the existing KubeadmConfig and KubeadmConfigTemplate types to support the v1beta2 config rather than the v1beta1 config we use today.
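A minimal sketch of what such a migration shim might look like, assuming the common case where the v1beta1 fields in use carry over unchanged to v1beta2. The function name here is hypothetical, not actual CABPK code, and any fields that were renamed or removed between the two versions would need real conversion logic:

```go
package main

import (
	"fmt"
	"strings"
)

// migrateKubeadmAPIVersion is a hypothetical helper that rewrites the
// apiVersion of a serialized kubeadm configuration document from v1beta1
// to v1beta2. It assumes the fields in use are identical across the two
// versions; renamed or removed fields would need dedicated conversion code.
func migrateKubeadmAPIVersion(doc string) string {
	return strings.ReplaceAll(doc,
		"apiVersion: kubeadm.k8s.io/v1beta1",
		"apiVersion: kubeadm.k8s.io/v1beta2")
}

func main() {
	in := `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.16.0`
	fmt.Println(migrateKubeadmAPIVersion(in))
}
```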
we marked v1beta1 in kubeadm as deprecated this cycle (1.17), so it will be removed in 1.20.
Nice
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen
We should try to tackle this in v0.3.x now that we're focusing on alpha3 for a little longer. /help
@vincepri: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
If we're officially supporting and deprecating versions, we should also consider adding validation on Kubernetes versions to reject creates or updates with unsupported versions. Also, we might want a way for users to get a list of supported Kubernetes versions with clusterctl, based on the versions supported by the installed version of CABPK.
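A rough sketch of what that validation could look like. The bounds and the function name are illustrative assumptions, not actual Cluster API code; a real implementation would derive the bounds from the kubeadm config versions the installed CABPK supports. It uses github.com/blang/semver, which Cluster API already depends on:

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

// Illustrative bounds; a real implementation would compute these.
var (
	minSupported = semver.MustParse("1.15.0")
	maxSupported = semver.MustParse("1.17.0")
)

// validateKubernetesVersion rejects creates/updates that request a version
// outside the supported window. ParseTolerant accepts the "v" prefix used
// in cluster specs.
func validateKubernetesVersion(v string) error {
	parsed, err := semver.ParseTolerant(v)
	if err != nil {
		return fmt.Errorf("invalid Kubernetes version %q: %v", v, err)
	}
	if parsed.LT(minSupported) {
		return fmt.Errorf("version %s is below the minimum supported version %s", v, minSupported)
	}
	// Compare minors only for the upper bound, so patch releases of the
	// newest supported minor are still accepted.
	if parsed.Major > maxSupported.Major ||
		(parsed.Major == maxSupported.Major && parsed.Minor > maxSupported.Minor) {
		return fmt.Errorf("version %s is above the maximum supported minor %d.%d",
			v, maxSupported.Major, maxSupported.Minor)
	}
	return nil
}

func main() {
	fmt.Println(validateKubernetesVersion("v1.13.2")) // error: below lower bound
	fmt.Println(validateKubernetesVersion("v1.16.3")) // <nil>
	fmt.Println(validateKubernetesVersion("v1.17.5")) // <nil>: within the max minor
}
```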
I'll work on the docs. Once that's there, I can open issues for validation and clusterctl support. /assign
I think this gets a bit tricky when we compare:
1. The versions the implementation can technically support (i.e. the kubeadm configuration versions that the KubeadmControlPlane and KubeadmConfig controllers understand).
2. The versions that are officially supported upstream by Kubernetes.
I think validating on the lower bounds for 1 should be relatively non-controversial, since the implementations of the KubeadmControlPlane and KubeadmConfig controllers would fail to work. I'm not sure we can validate on the upper bounds, since we don't yet know which version will remove support for the kubeadm v1beta1 config. If we wanted to validate on 2, we'd need to provide a way to override that for users and downstream consumers. It would also create challenges around how we keep this up to date, since the Kubernetes release cycle is disjoint from ours and a Kubernetes release can go out of support upstream in the middle of a Cluster API release cycle.
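For the override mentioned above, one conceivable shape is a controller flag that relaxes only the upper bound, since the lower bound reflects a hard implementation limit. The flag and function names here are purely hypothetical:

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical escape hatch: downstream consumers can opt out of the
// upper-bound check, while the lower bound stays enforced because the
// controllers genuinely cannot work below it.
var allowUnsupportedVersions = flag.Bool(
	"allow-unsupported-kubernetes-versions", false,
	"skip the upper-bound check on Kubernetes versions (lower bound is always enforced)")

func validateUpperBound(minor, maxSupportedMinor int) error {
	if *allowUnsupportedVersions {
		return nil
	}
	if minor > maxSupportedMinor {
		return fmt.Errorf("minor version %d exceeds the maximum supported minor %d",
			minor, maxSupportedMinor)
	}
	return nil
}

func main() {
	flag.Parse()
	fmt.Println(validateUpperBound(18, 17)) // fails unless the flag is set
}
```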
/kind documentation
Write down the supported Kubernetes versions. The question I would like answered is: which Kubernetes versions does each Cluster API version support?
To understand why this is a difficult question, let's examine Cluster API v1alpha2. When we started it, CAPI v1alpha1 supported Kubernetes v1.13, v1.14, and v1.15. By the time work on v1alpha3 began, v1alpha2 supported v1.13, v1.14, v1.15, and v1.16.

Do we stick with an n-2 versioning scheme? Or do we do something more complex, like supporting all versions spanning n-2, n-1, and n (the current version), as well as every version released during the CAPI development cycle? For example, CAPI v1a2 supports v1.13-v1.16 because those were the valid Kubernetes versions during its development (see the sketch below). This is just an idea. We could certainly drop support as Kubernetes rolls out new versions, but then we get into a weird place where we drop support in the middle of a CAPI development cycle. Realistically, though, that should be fine.
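To make the trade-off concrete, here is a tiny sketch of the plain n-2 window; the helper is hypothetical, not anything in the codebase, and the extended policy would additionally keep every minor released during the CAPI development cycle:

```go
package main

import "fmt"

// supportedMinors returns the n-2 window for a given newest supported minor.
func supportedMinors(newest int) []string {
	var out []string
	for m := newest - 2; m <= newest; m++ {
		out = append(out, fmt.Sprintf("v1.%d", m))
	}
	return out
}

func main() {
	fmt.Println(supportedMinors(16)) // [v1.14 v1.15 v1.16]
}
```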