Use Kubernetes 1.24 in quickstart and CAPD, bump kind to v0.14
* Use Kubernetes 1.24 in quickstart and CAPD
* Bump kind to v0.14, which supports v1.24, as the kind management cluster
* Migrate the CI upgrade tests to ClusterClass and version-aware patches for the cgroupDriver
* Implement a workaround for DockerMachinePools, which are not supported by ClusterClass, so that the upgrade tests use the correct cgroupDriver

Co-authored-by: killianmuldoon <[email protected]>
Co-authored-by: sbueringer <[email protected]>
3 people committed May 27, 2022
1 parent 4886050 commit 40952f4
Showing 37 changed files with 396 additions and 423 deletions.
28 changes: 12 additions & 16 deletions docs/book/src/user/quick-start.md
@@ -45,7 +45,7 @@ a target [management cluster] on the selected [infrastructure provider].

[kind] is not designed for production use.

-**Minimum [kind] supported version**: v0.9.0
+**Minimum [kind] supported version**: v0.14.0

Note for macOS users: you may need to [increase the memory available](https://docs.docker.com/docker-for-mac/#resources) for containers (recommend 6Gb for CAPD).

@@ -268,10 +268,16 @@ The Docker provider is not designed for production use and is intended for devel
</aside>
-The Docker provider does not require additional prerequisites.
-You can run:
+The Docker provider requires the `ClusterTopology` feature to deploy ClusterClass-based clusters. We
+only support ClusterClass-based cluster templates in this quickstart, as ClusterClass makes it possible to
+adapt configuration based on the Kubernetes version. This is required to install Kubernetes clusters < v1.24, and
+for the upgrade from v1.23 to v1.24, as different cgroupDrivers have to be used depending on the Kubernetes version.

```
+# Enable the experimental Cluster topology feature.
+export CLUSTER_TOPOLOGY=true
+
+# Initialize the management cluster
clusterctl init --infrastructure docker
```
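For context on how a ClusterClass adapts the cgroupDriver to the Kubernetes version: such version-aware patches have roughly the shape sketched below. This is a sketch of the mechanism only; the patch name, `enabledIf` expression, and JSON paths are illustrative rather than the exact definitions shipped in the CAPD templates.

```yaml
# Sketch of a version-aware ClusterClass patch (illustrative, not the exact
# CAPD definition). The enabledIf guard pins the cgroupfs cgroupDriver only
# while the control plane version is still below v1.24.
- name: cgroupDriver-controlPlane
  enabledIf: '{{ semverCompare "<= v1.23" .builtin.controlPlane.version }}'
  definitions:
  - selector:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      matchResources:
        controlPlane: true
    jsonPatches:
    - op: add
      path: /spec/template/spec/kubeadmConfigSpec/initConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
      value: cgroupfs
    - op: add
      path: /spec/template/spec/kubeadmConfigSpec/joinConfiguration/nodeRegistration/kubeletExtraArgs/cgroup-driver
      value: cgroupfs
```

Because the guard keys off `.builtin.controlPlane.version`, the same ClusterClass serves clusters on both sides of the v1.24 boundary, which is what makes the v1.23 to v1.24 upgrade work without editing templates.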
@@ -706,7 +712,7 @@ For the purpose of this tutorial, we'll name our cluster capi-quickstart.
```bash
clusterctl generate cluster capi-quickstart \
-  --kubernetes-version v1.23.3 \
+  --kubernetes-version v1.24.0 \
--control-plane-machine-count=3 \
--worker-machine-count=3 \
> capi-quickstart.yaml
@@ -725,17 +731,7 @@ The Docker provider is not designed for production use and is intended for devel
```bash
-clusterctl generate cluster capi-quickstart --flavor development \
-  --kubernetes-version v1.23.3 \
-  --control-plane-machine-count=3 \
-  --worker-machine-count=3 \
-  > capi-quickstart.yaml
-```
-
-To create a Cluster with ClusterClass:
-
-```bash
clusterctl generate cluster capi-quickstart --flavor development-topology \
-  --kubernetes-version v1.23.3 \
+  --kubernetes-version v1.24.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > capi-quickstart.yaml
@@ -795,7 +791,7 @@ You should see an output is similar to this:
```bash
NAME                            INITIALIZED  API SERVER AVAILABLE  VERSION  REPLICAS  READY  UPDATED  UNAVAILABLE
-capi-quickstart-control-plane   true                               v1.23.3  3         3      3
+capi-quickstart-control-plane   true                               v1.24.0  3         3      3
```
<aside class="note warning">
2 changes: 1 addition & 1 deletion hack/ensure-kind.sh
@@ -21,7 +21,7 @@ set -o pipefail
set -x

GOPATH_BIN="$(go env GOPATH)/bin/"
-MINIMUM_KIND_VERSION=v0.11.0
+MINIMUM_KIND_VERSION=v0.14.0
goarch="$(go env GOARCH)"
goos="$(go env GOOS)"

1 change: 1 addition & 0 deletions test/e2e/Makefile
@@ -81,6 +81,7 @@ cluster-templates-v1beta1: $(KUSTOMIZE) ## Generate cluster templates for v1beta
$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-machine-pool --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-machine-pool.yaml
$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-node-drain --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-node-drain.yaml
$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-upgrades --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-upgrades.yaml
+$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-upgrades-cgroupfs --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-upgrades-cgroupfs.yaml
$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-kcp-scale-in --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-kcp-scale-in.yaml
$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-ipv6 --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-ipv6.yaml
$(KUSTOMIZE) build $(DOCKER_TEMPLATES)/v1beta1/cluster-template-topology --load-restrictor LoadRestrictionsNone > $(DOCKER_TEMPLATES)/v1beta1/cluster-template-topology.yaml
56 changes: 32 additions & 24 deletions test/e2e/cluster_upgrade_test.go
@@ -20,59 +20,67 @@ limitations under the License.
package e2e

import (
+	"github.com/blang/semver"
	. "github.com/onsi/ginkgo"
+	. "github.com/onsi/gomega"
	"k8s.io/utils/pointer"

	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

-var _ = Describe("When upgrading a workload cluster and testing K8S conformance [Conformance] [K8s-Upgrade]", func() {
+var _ = Describe("When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade]", func() {
	ClusterUpgradeConformanceSpec(ctx, func() ClusterUpgradeConformanceSpecInput {
+		// "upgrades" is the same as the "topology" flavor but with an additional MachinePool.
+		flavor := pointer.String("upgrades")
+		// For KubernetesVersionUpgradeFrom < v1.24 we have to use the upgrades-cgroupfs flavor.
+		// This is because kind and CAPD only support:
+		// * cgroupDriver cgroupfs for Kubernetes < v1.24
+		// * cgroupDriver systemd for Kubernetes >= v1.24.
+		// Notes:
+		// * We always use a ClusterClass-based cluster-template for the upgrade test.
+		// * The ClusterClass will automatically adjust the cgroupDriver for KCP and MDs.
+		// * We have to handle the MachinePool ourselves.
+		// * The upgrades-cgroupfs flavor uses an MP which is pinned to cgroupfs.
+		// * During the upgrade UpgradeMachinePoolAndWait automatically drops the cgroupfs pinning
+		//   when the target version is >= v1.24.
+		// We can remove this as soon as we don't test upgrades from Kubernetes < v1.24 anymore with CAPD,
+		// or MachinePools are supported in ClusterClass.
+		version, err := semver.ParseTolerant(e2eConfig.GetVariable(KubernetesVersionUpgradeFrom))
+		Expect(err).ToNot(HaveOccurred(), "Invalid argument, KUBERNETES_VERSION_UPGRADE_FROM is not a valid version")
+		if version.LT(semver.MustParse("1.24.0")) {
+			// "upgrades-cgroupfs" is the same as the "upgrades" flavor but with the MachinePool
+			// pinned to the cgroupfs cgroupDriver.
+			flavor = pointer.String("upgrades-cgroupfs")
+		}
+
		return ClusterUpgradeConformanceSpecInput{
			E2EConfig:             e2eConfig,
			ClusterctlConfigPath:  clusterctlConfigPath,
			BootstrapClusterProxy: bootstrapClusterProxy,
			ArtifactFolder:        artifactFolder,
			SkipCleanup:           skipCleanup,
+			Flavor:                flavor,
		}
	})
})

-var _ = Describe("When upgrading a workload cluster using ClusterClass", func() {
-	ClusterUpgradeConformanceSpec(ctx, func() ClusterUpgradeConformanceSpecInput {
-		return ClusterUpgradeConformanceSpecInput{
-			E2EConfig:             e2eConfig,
-			ClusterctlConfigPath:  clusterctlConfigPath,
-			BootstrapClusterProxy: bootstrapClusterProxy,
-			ArtifactFolder:        artifactFolder,
-			SkipCleanup:           skipCleanup,
-			Flavor:                pointer.String("topology"),
-			// This test is run in CI in parallel with other tests. To keep the test duration reasonable
-			// the conformance tests are skipped.
-			SkipConformanceTests: true,
-		}
-	})
-})
-
-var _ = Describe("When upgrading a workload cluster with a single control plane machine", func() {
+var _ = Describe("When upgrading a workload cluster using ClusterClass", func() {
	ClusterUpgradeConformanceSpec(ctx, func() ClusterUpgradeConformanceSpecInput {
		return ClusterUpgradeConformanceSpecInput{
			E2EConfig:             e2eConfig,
			ClusterctlConfigPath:  clusterctlConfigPath,
			BootstrapClusterProxy: bootstrapClusterProxy,
			ArtifactFolder:        artifactFolder,
			SkipCleanup:           skipCleanup,
			// This test is run in CI in parallel with other tests. To keep the test duration reasonable
			// the conformance tests are skipped.
			SkipConformanceTests:     true,
			ControlPlaneMachineCount: pointer.Int64(1),
-			WorkerMachineCount:       pointer.Int64(1),
-			Flavor:                   pointer.String(clusterctl.DefaultFlavor),
+			WorkerMachineCount:       pointer.Int64(2),
+			Flavor:                   pointer.String("topology"),
		}
	})
})

-var _ = Describe("When upgrading a workload cluster with a HA control plane", func() {
+var _ = Describe("When upgrading a workload cluster using ClusterClass with a HA control plane", func() {
	ClusterUpgradeConformanceSpec(ctx, func() ClusterUpgradeConformanceSpecInput {
		return ClusterUpgradeConformanceSpecInput{
			E2EConfig:             e2eConfig,
@@ -85,12 +93,12 @@ var _ = Describe("When upgrading a workload cluster with a HA control plane", func() {
			SkipConformanceTests:     true,
			ControlPlaneMachineCount: pointer.Int64(3),
			WorkerMachineCount:       pointer.Int64(1),
-			Flavor:                   pointer.String(clusterctl.DefaultFlavor),
+			Flavor:                   pointer.String("topology"),
		}
	})
})

-var _ = Describe("When upgrading a workload cluster with a HA control plane using scale-in rollout", func() {
+var _ = Describe("When upgrading a workload cluster using ClusterClass with a HA control plane using scale-in rollout", func() {
	ClusterUpgradeConformanceSpec(ctx, func() ClusterUpgradeConformanceSpecInput {
		return ClusterUpgradeConformanceSpecInput{
			E2EConfig:             e2eConfig,
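The comments in the hunk above rely on `UpgradeMachinePoolAndWait` dropping the cgroupfs pinning during the upgrade. A minimal sketch of that version gate, assuming the hypothetical helper name `dropCgroupfsPinning` (the real logic lives inside the test framework):

```go
// Sketch of the version gate described above. dropCgroupfsPinning is a
// hypothetical helper name, not the actual framework API.
package framework

import (
	"github.com/blang/semver"

	bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1"
)

// dropCgroupfsPinning removes the pinned cgroup-driver kubelet flag from a
// kubeadm join configuration once the target version reaches v1.24, where
// kind and CAPD images expect the systemd cgroupDriver.
func dropCgroupfsPinning(joinCfg *bootstrapv1.JoinConfiguration, targetVersion string) error {
	v, err := semver.ParseTolerant(targetVersion)
	if err != nil {
		return err
	}
	// Below v1.24 the cgroupfs pinning has to stay in place.
	if v.LT(semver.MustParse("1.24.0")) {
		return nil
	}
	delete(joinCfg.NodeRegistration.KubeletExtraArgs, "cgroup-driver")
	return nil
}
```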
15 changes: 8 additions & 7 deletions test/e2e/config/docker.yaml
@@ -187,6 +187,7 @@ providers:
- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-machine-pool.yaml"
- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-node-drain.yaml"
- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-upgrades.yaml"
+- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-upgrades-cgroupfs.yaml"
- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-kcp-scale-in.yaml"
- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-ipv6.yaml"
- sourcePath: "../data/infrastructure-docker/v1beta1/cluster-template-topology.yaml"
@@ -199,12 +200,12 @@ variables:
# allowing the same e2e config file to be re-used in different Prow jobs e.g. each one with a K8s version permutation.
# The following Kubernetes versions should be the latest versions with already published kindest/node images.
# This avoids building node images in the default case which improves the test duration significantly.
-KUBERNETES_VERSION_MANAGEMENT: "v1.23.3"
-KUBERNETES_VERSION: "v1.23.3"
-KUBERNETES_VERSION_UPGRADE_FROM: "v1.22.4"
-KUBERNETES_VERSION_UPGRADE_TO: "v1.23.3"
-ETCD_VERSION_UPGRADE_TO: "3.5.1-0"
-COREDNS_VERSION_UPGRADE_TO: "1.8.4"
+KUBERNETES_VERSION_MANAGEMENT: "v1.24.0"
+KUBERNETES_VERSION: "v1.24.0"
+KUBERNETES_VERSION_UPGRADE_FROM: "v1.23.6"
+KUBERNETES_VERSION_UPGRADE_TO: "v1.24.0"
+ETCD_VERSION_UPGRADE_TO: "3.5.3-0"
+COREDNS_VERSION_UPGRADE_TO: "v1.8.6"
DOCKER_SERVICE_DOMAIN: "cluster.local"
IP_FAMILY: "IPv4"
DOCKER_SERVICE_CIDRS: "10.128.0.0/12"
@@ -224,7 +225,7 @@ variables:
# NOTE: We test the latest release with a previous contract.
INIT_WITH_BINARY: "https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.4.7/clusterctl-{OS}-{ARCH}"
INIT_WITH_PROVIDERS_CONTRACT: "v1alpha4"
-INIT_WITH_KUBERNETES_VERSION: "v1.23.3"
+INIT_WITH_KUBERNETES_VERSION: "v1.24.0"

intervals:
default/wait-controllers: ["3m", "10s"]
@@ -67,16 +67,10 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
joinConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
version: "${KUBERNETES_VERSION}"
3 changes: 0 additions & 3 deletions test/e2e/data/infrastructure-docker/v1alpha3/bases/md.yaml
@@ -24,9 +24,6 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
---
# MachineDeployment object with
@@ -68,16 +68,10 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
joinConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
version: "${KUBERNETES_VERSION}"
3 changes: 0 additions & 3 deletions test/e2e/data/infrastructure-docker/v1alpha4/bases/md.yaml
@@ -24,9 +24,6 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
---
# MachineDeployment object
3 changes: 0 additions & 3 deletions test/e2e/data/infrastructure-docker/v1alpha4/bases/mp.yaml
@@ -36,7 +36,4 @@ spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
@@ -48,17 +48,11 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
joinConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
---
# cp0 Machine
@@ -68,16 +68,10 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
joinConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
version: "${KUBERNETES_VERSION}"
3 changes: 0 additions & 3 deletions test/e2e/data/infrastructure-docker/v1beta1/bases/md.yaml
@@ -24,9 +24,6 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
---
# MachineDeployment object
3 changes: 0 additions & 3 deletions test/e2e/data/infrastructure-docker/v1beta1/bases/mp.yaml
@@ -36,7 +36,4 @@ spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
@@ -48,17 +48,11 @@ spec:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
joinConfiguration:
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
kubeletExtraArgs:
-# We have to pin the cgroupDriver to cgroupfs as kubeadm >=1.21 defaults to systemd
-# kind will implement systemd support in: https://github.com/kubernetes-sigs/kind/issues/1726
-cgroup-driver: cgroupfs
eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'
---
# cp0 Machine

This file was deleted.

@@ -0,0 +1,5 @@
+- op: add
+  path: /spec/topology/variables/-
+  value:
+    name: kubeadmControlPlaneMaxSurge
+    value: "0"
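This variable only has an effect if the ClusterClass defines a patch that consumes it. A sketch of what such a patch can look like (illustrative shape, not the exact CAPD definition): with `maxSurge: 0`, KCP rolls out the upgrade by scaling in first, i.e. it deletes an old machine before creating its replacement.

```yaml
# Sketch of a ClusterClass patch consuming the kubeadmControlPlaneMaxSurge
# variable (illustrative; the exact CAPD patch definition may differ).
- name: kubeadmControlPlaneMaxSurge
  definitions:
  - selector:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      matchResources:
        controlPlane: true
    jsonPatches:
    - op: add
      path: /spec/template/spec/rolloutStrategy/rollingUpdate/maxSurge
      valueFrom:
        template: "{{ .kubeadmControlPlaneMaxSurge }}"
```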
@@ -1,7 +1,10 @@
bases:
- ../bases/crs.yaml
-- ../bases/md.yaml
-- ../bases/cluster-with-kcp.yaml
+- ../bases/cluster-with-topology.yaml

-patchesStrategicMerge:
-- ./cluster-with-kcp.yaml
+patches:
+- path: ./kcp-scale-in-variable.yaml
+  target:
+    group: cluster.x-k8s.io
+    version: v1beta1
+    kind: Cluster
@@ -1,3 +1,3 @@
resources:
-- ../bases/cluster-with-topology.yaml
- ../bases/crs.yaml
+- ../bases/cluster-with-topology.yaml
