📖 Update upgrade docs for v1alpha4 #4849

Merged
25 changes: 14 additions & 11 deletions docs/book/src/clusterctl/commands/upgrade.md
Running `clusterctl upgrade plan` produces an output similar to this:

```shell
Checking cert-manager version...
Cert-Manager will be upgraded from "v0.11.0" to "v1.1.0"

Checking new release availability...

Management group: capi-system/cluster-api, latest release available for the v1alpha4 API Version of Cluster API (contract):

NAME                    NAMESPACE                           TYPE                     CURRENT VERSION   NEXT VERSION
bootstrap-kubeadm       capi-kubeadm-bootstrap-system       BootstrapProvider        v0.4.0            v0.4.1
control-plane-kubeadm   capi-kubeadm-control-plane-system   ControlPlaneProvider     v0.4.0            v0.4.1
cluster-api             capi-system                         CoreProvider             v0.4.0            v0.4.1
infrastructure-azure    capz-system                         InfrastructureProvider   v0.4.0            v0.4.1


You can now apply the upgrade by executing the following command:

clusterctl upgrade apply --contract v1alpha4
```
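When automating upgrades, the NEXT VERSION column from the plan output can be extracted with ordinary text processing. A minimal sketch, using a hard-coded sample of the table above in place of a live `clusterctl upgrade plan` call:

```shell
# Sample of the plan table shown above; in practice, pipe the real output
# of `clusterctl upgrade plan` into the awk command below.
plan_output='NAME                NAMESPACE                       TYPE                CURRENT VERSION   NEXT VERSION
bootstrap-kubeadm   capi-kubeadm-bootstrap-system   BootstrapProvider   v0.4.0            v0.4.1
cluster-api         capi-system                     CoreProvider        v0.4.0            v0.4.1'

# Skip the header row, then print provider name and target (last) column.
echo "$plan_output" | awk 'NR>1 {print $1, $NF}'
```

The same pattern works for any fixed-column `clusterctl` table, since awk splits on runs of whitespace regardless of column alignment.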

The output contains the latest release available for each API Version of Cluster API (contract). You can then use the following command to upgrade all the providers in the management cluster. This upgrades all the providers to the latest stable releases.

```shell
clusterctl upgrade apply --contract v1alpha4
```

The upgrade process is composed of three steps:
To upgrade providers to specific versions instead of the latest releases, pin the versions explicitly with a command like the following:

```shell
clusterctl upgrade apply \
--core capi-system/cluster-api:v0.4.1 \
--bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.4.1 \
--control-plane capi-kubeadm-control-plane-system/kubeadm:v0.4.1 \
--infrastructure capv-system/vsphere:v0.7.0-alpha.0
```
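When pinning versions, it can help to build the command from a single version variable so the core components move in lockstep. A sketch, assuming the default namespaces shown above and an illustrative `CAPI_VERSION`; the command is echoed rather than executed so it can be reviewed first:

```shell
# Illustrative version; infrastructure providers often release on their own
# schedule, so they are typically pinned separately (as in the example above).
CAPI_VERSION="v0.4.1"

# Build the upgrade command once so core, bootstrap, and control-plane
# providers are guaranteed to target the same release.
cmd="clusterctl upgrade apply \
  --core capi-system/cluster-api:${CAPI_VERSION} \
  --bootstrap capi-kubeadm-bootstrap-system/kubeadm:${CAPI_VERSION} \
  --control-plane capi-kubeadm-control-plane-system/kubeadm:${CAPI_VERSION}"

# Print for review; run it with `eval "$cmd"` once it looks right.
echo "$cmd"
```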

110 changes: 8 additions & 102 deletions docs/book/src/tasks/upgrading-cluster-api-versions.md
New releases of Cluster API contain various fixes, new features and improvements.
## Considerations

If moving between different API versions, there may be additional tasks that you need to complete. See below for
instructions moving between v1alpha3 and v1alpha4.

Ensure that the version of Cluster API is compatible with the Kubernetes version of the management cluster.
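A compatibility pre-check can be scripted. The sketch below uses an illustrative v1.19 minimum and a sample version string; the real supported range comes from the Cluster API version support matrix, and in practice the version would be read from the cluster with `kubectl`:

```shell
# Illustrative floor only -- consult the Cluster API version support matrix
# for the actual supported Kubernetes range.
min_minor=19

# Extract the minor component from a version like "v1.21.2" -> "21".
minor_of() {
  echo "${1#v}" | cut -d. -f2
}

# Sample value for illustration. In practice, read it from the cluster, e.g.:
#   server=$(kubectl version -o json | sed -n 's/.*"gitVersion": *"\(v[^"]*\)".*/\1/p' | head -n1)
server="v1.21.2"

if [ "$(minor_of "$server")" -ge "$min_minor" ]; then
  echo "Kubernetes $server meets the assumed v1.$min_minor+ floor"
else
  echo "Kubernetes $server is below the assumed v1.$min_minor+ floor"
fi
```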

## Upgrading to newer versions of 0.4.x

Use [clusterctl to upgrade between versions of Cluster API 0.4.x](../clusterctl/commands/upgrade.md).

## Upgrading from Cluster API v1alpha3 (0.3.x) to Cluster API v1alpha4 (0.4.x)

For detailed information about the changes from `v1alpha3` to `v1alpha4`, please refer to the [Cluster API v1alpha3 compared to v1alpha4 section].

Use [clusterctl to upgrade from Cluster API v0.3.x to Cluster API 0.4.x](../clusterctl/commands/upgrade.md).

You should now be able to manage your resources using the `v1alpha4` version of the Cluster API components.
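One way to confirm the upgrade is to check that `v1alpha4` appears among the versions on the Cluster CRD. A sketch, using a sample value in place of a live `kubectl` query:

```shell
# Sample value for illustration. In practice, read the versions from the
# management cluster, e.g.:
#   served=$(kubectl get crd clusters.cluster.x-k8s.io -o jsonpath='{.spec.versions[*].name}')
served="v1alpha3 v1alpha4"

# Pad with spaces so the match is on the whole token, not a substring.
case " $served " in
  *" v1alpha4 "*) echo "v1alpha4 is available on the Cluster CRD" ;;
  *)              echo "v1alpha4 not found; the upgrade may not have completed" ;;
esac
```

Note that `{.spec.versions[*].name}` lists every version the CRD declares, including stored-but-unserved ones, which is usually sufficient for a quick post-upgrade sanity check.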

<!-- links -->
[components]: ../reference/glossary.md#provider-components
[management cluster]: ../reference/glossary.md#management-cluster
[AWS provider]: https://github.com/kubernetes-sigs/cluster-api-provider-aws
[clusterctl init]: ../clusterctl/commands/init.md
[Cluster API v1alpha3 compared to v1alpha4 section]: ../developer/providers/v1alpha3-to-v1alpha4.md