📖 clusterctl: more docs #2266

Merged

Changes from all commits
12 changes: 10 additions & 2 deletions docs/book/src/clusterctl/commands/adopt.md
@@ -1,11 +1,19 @@
# clusterctl adopt

The `clusterctl adopt` command lets users start using clusterctl on management clusters originally
created by installing providers with `kubectl apply <components-yaml>` instead of `clusterctl init`.

The adoption process must be repeated for each provider installed in the cluster, thus allowing clusterctl to re-create
the providers inventory as described in the `clusterctl init` [documentation](init.md#additional-information).

## Pre-requisites

In order for `clusterctl adopt` to work, ensure the components are correctly
labeled. Please see the [provider contract labels][provider-contract-labels] for reference.
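
For instance, a quick check that the components carry the expected label (a sketch; the label key comes from the provider contract, while namespaces and component names will vary by provider):

```shell
# List deployments carrying the provider contract label; components to be
# adopted should show up here once labeled correctly.
kubectl get deployments --all-namespaces -l cluster.x-k8s.io/provider
```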

<!-- links -->
[provider-contract-labels]: ../provider-contract.md#labels

## Adopting a provider

TODO
97 changes: 97 additions & 0 deletions docs/book/src/clusterctl/commands/config-cluster.md
@@ -1 +1,98 @@
# clusterctl config cluster

The `clusterctl config cluster` command returns a YAML template for creating a workload cluster.

For example

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 > my-cluster.yaml
```

This creates a YAML file named `my-cluster.yaml` with a predefined list of Cluster API objects (Cluster, Machines,
MachineDeployments, etc.) to be deployed in the current namespace (if needed, use the `--target-namespace` flag to
specify a different target namespace).

Then, the file can be modified using your editor of choice; when ready, run the following command
to apply the cluster manifest.

```
kubectl apply -f my-cluster.yaml
```

### Selecting the infrastructure provider to use

The `clusterctl config cluster` command uses smart defaults in order to simplify the user experience; in the example above,
it detects that there is only an `aws` infrastructure provider in the current management cluster and so it automatically
selects a cluster template from the `aws` provider's repository.

In case there is more than one infrastructure provider, the following syntax can be used to select which infrastructure
provider to use for the workload cluster:

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
--infrastructure aws > my-cluster.yaml
```

or

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
--infrastructure aws:v0.4.1 > my-cluster.yaml
```

### Flavors

Infrastructure provider authors can provide different types of cluster templates, or flavors; use the `--flavor` flag
to specify which flavor to use; e.g.

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
--flavor high-availability > my-cluster.yaml
```

Please refer to the provider's documentation for more info about the available flavors.

### Alternative source for cluster templates

clusterctl uses the provider's repository as the primary source for cluster templates; the following alternative sources
for cluster templates can be used as well:

#### ConfigMaps

Use the `--from-config-map` flag to read cluster templates stored in a Kubernetes ConfigMap; e.g.

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
--from-config-map my-templates > my-cluster.yaml
```

The following flags are also available: `--from-config-map-namespace` (defaults to the current namespace) and
`--from-config-map-key` (defaults to `template`).

> **Contributor:** `--from-config-map` is currently not a supported flag. Is there another PR adding this functionality in?
>
> **Member (author):** 😄 #2265
>
> **Contributor:** We should document the other flags `--from-config-map-key` and `--from-config-map-namespace`. I was testing this out and was about to open issues for those scenarios but then I saw the other flags in the help output. 😄
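
To store a template in a ConfigMap, one option is to create it from a file (a sketch; the ConfigMap and file names are illustrative, and the key matches the `--from-config-map-key` default):

```shell
# Create a ConfigMap holding a cluster template under the default key "template".
kubectl create configmap my-templates --from-file=template=cluster-template.yaml
```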

#### GitHub or local file system folder

Use the `--from` flag to read cluster templates stored in a GitHub repository or in a local file system folder; e.g.

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
--from https://github.com/my-org/my-repository/blob/master/my-template.yaml > my-cluster.yaml
```

or

```
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 \
--from ~/my-template.yaml > my-cluster.yaml
```

> **Contributor (@wfernandes, Feb 6, 2020):** `--from` is currently not a flag. Is there another PR adding this functionality in?
>
> **Member (author):** 😄 #2265

### Variables

If the selected cluster template expects some environment variables, the user should ensure those variables are set in advance.

e.g. if the `AWS_CREDENTIALS` variable is expected for a cluster template targeting the `aws` infrastructure, you
should ensure the corresponding environment variable is set before executing `clusterctl config cluster`.

Please refer to the provider's documentation for more info about the required variables, or use the
`clusterctl config cluster --list-variables` flag to get the list of variable names required by a cluster template.

The [clusterctl configuration](configuration.md) file can be used as an alternative to environment variables.
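
Putting it together, a possible workflow (a sketch; the variable name is taken from the example above):

```shell
# Discover the variables the selected template requires...
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 --list-variables

# ...set them, then generate the cluster manifest.
export AWS_CREDENTIALS=<your-credentials>
clusterctl config cluster my-cluster --kubernetes-version v1.16.3 > my-cluster.yaml
```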
26 changes: 26 additions & 0 deletions docs/book/src/clusterctl/commands/move.md
@@ -35,3 +35,29 @@
The `Cluster` object created in the target management cluster instead will be actuated only after the move
process completes.

</aside>

## Pivot

Pivoting is a process for moving the provider components and declared Cluster API resources from a source management
cluster to a target management cluster.

This can now be achieved with the following procedure:

1. Use `clusterctl init` to install the provider components into the target management cluster
2. Use `clusterctl move` to move the Cluster API resources from the source management cluster to the target management cluster

## Bootstrap & Pivot

The pivot process can be combined with the creation of a temporary bootstrap cluster
used to provision a target management cluster.

This can now be achieved with the following procedure (a command-line sketch follows the list):

1. Create a temporary bootstrap cluster, e.g. using kind or Minikube
2. Use `clusterctl init` to install the provider components into the bootstrap cluster
3. Use `clusterctl config cluster ... | kubectl apply -f -` to provision a target management cluster
4. Wait for the target management cluster to be up and running
5. Get the kubeconfig for the new target management cluster
6. Use `clusterctl init` with the new cluster's kubeconfig to install the provider components there
7. Use `clusterctl move` to move the Cluster API resources from the bootstrap cluster to the target management cluster
8. Delete the bootstrap cluster
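
A minimal sketch of this flow, assuming kind, the `aws` provider, and a `<cluster-name>-kubeconfig` secret for step 5 (all names are illustrative and provider-specific details will vary):

```shell
kind create cluster --name bootstrap                               # 1. temporary bootstrap cluster
clusterctl init --infrastructure aws                               # 2. install providers into it
clusterctl config cluster prod --kubernetes-version v1.16.3 \
  | kubectl apply -f -                                             # 3. provision the target cluster
# 4./5. wait for the cluster, then retrieve its kubeconfig
kubectl get secret prod-kubeconfig -o jsonpath='{.data.value}' | base64 -d > prod.kubeconfig
clusterctl init --kubeconfig prod.kubeconfig --infrastructure aws  # 6. install providers there
clusterctl move --to-kubeconfig prod.kubeconfig                    # 7. pivot the Cluster API resources
kind delete cluster --name bootstrap                               # 8. clean up
```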
74 changes: 73 additions & 1 deletion docs/book/src/clusterctl/commands/upgrade.md
@@ -1 +1,73 @@
# clusterctl upgrade

The `clusterctl upgrade` command can be used to upgrade the version of the Cluster API providers (CRDs, controllers)
installed into a management cluster.

## Background info: management groups

The upgrade procedure is designed to ensure that all the providers in a *management group* use the same
API Version of Cluster API (contract), e.g. the v1alpha3 Cluster API contract.

A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure
providers watching objects in the same namespace.

Usually, in a management cluster there is only one management group, but in case of [n-core multi tenancy](init.md#multi-tenancy)
there can be more than one.

## upgrade plan

The `clusterctl upgrade plan` command can be used to identify possible targets for upgrades.


```shell
clusterctl upgrade plan
```

Produces an output similar to this:

```shell
Checking new release availability...

Management group: capi-system/cluster-api, latest release available for the v1alpha3 API Version of Cluster API (contract):

NAME                NAMESPACE                       TYPE                     CURRENT VERSION   TARGET VERSION
kubeadm-bootstrap   capi-kubeadm-bootstrap-system   BootstrapProvider        v0.3.0            v0.3.1
cluster-api         capi-system                     CoreProvider             v0.3.0            v0.3.1
docker              capd-system                     InfrastructureProvider   v0.3.0            v0.3.1


You can now apply the upgrade by executing the following command:

clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
```

The output lists, for each management group in the cluster and for each API Version of Cluster API (contract),
the latest release currently available.

## upgrade apply

After choosing the desired option for the upgrade, you can run the provided command.

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
```

The upgrade process is composed of two steps:

* Delete the current version of the provider components, while preserving the namespace where the provider components
are hosted and the provider's CRDs.
* Install the new version of the provider components.

Please note that clusterctl does not upgrade Cluster API objects (Clusters, MachineDeployments, Machines, etc.); upgrading
such objects is the responsibility of the provider's controllers.

<aside class="note warning">

<h1>Warning!</h1>

The current implementation of the upgrade process does not preserve controller flags that are not set through the
components YAML at installation time.

The user is required to re-apply such flag values after the upgrade completes.

</aside>
2 changes: 2 additions & 0 deletions docs/book/src/clusterctl/configuration.md
@@ -29,6 +29,8 @@ providers:
type: "CoreProvider"
```

See [provider contract](provider-contract.md) for instructions about how to set up a provider repository.
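
For instance, a sketch of registering a custom provider in the clusterctl configuration file (the default path `~/.cluster-api/clusterctl.yaml` is assumed; the name and URL are illustrative):

```shell
cat >> ~/.cluster-api/clusterctl.yaml <<'EOF'
providers:
  - name: "my-infra-provider"
    url: "https://github.com/my-org/my-repo/releases/latest/infrastructure-components.yaml"
    type: "InfrastructureProvider"
EOF
```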

## Variables

When installing a provider `clusterctl` reads a YAML file that is published in the provider repository; while executing
15 changes: 7 additions & 8 deletions docs/book/src/clusterctl/overview.md
@@ -56,21 +56,14 @@
See the [Minikube documentation](https://minikube.sigs.k8s.io/) for more details.
{{#/tab }}
{{#tab Production}}

{{#tabs name:"tab-create-production-cluster" tabs:"Pre-Existing cluster"}}
{{#tab Pre-Existing cluster}}

For production use cases a "real" Kubernetes cluster should be used, with appropriate backup and DR policies and procedures in place.

```bash
export KUBECONFIG=<...>
```
{{#/tab }}
{{#/tabs }}

@@ -224,6 +217,9 @@
it detects that there is only an `aws` infrastructure provider and so it uses the
The `clusterctl config cluster` uses cluster templates which are provided by the infrastructure providers.
See the provider's documentation for more information.

See [`clusterctl config cluster`](commands/config-cluster.md) for details about how to use alternative sources
for cluster templates.

</aside>

<aside class="note warning">
@@ -233,6 +229,9 @@
See the provider's documentation for more information.
If the cluster template defined by the infrastructure provider expects some environment variables, the user
should ensure those variables are set in advance.

See [`clusterctl config cluster`](commands/config-cluster.md) for details about how to discover the list of
variables required by a cluster template.

</aside>

For example
21 changes: 21 additions & 0 deletions docs/book/src/clusterctl/provider-contract.md
@@ -34,6 +34,27 @@
It is possible to customize the list of providers for `clusterctl` by changing the clusterctl configuration.

</aside>

#### Creating a provider repository on GitHub

You can use a GitHub release to package your provider artifacts for other people to use.

A GitHub release can be used as a provider repository if:

* The release tag is a valid semantic version number
* The components YAML, the metadata YAML and, optionally, the workload cluster templates are included in the release assets.

See the [GitHub help](https://help.github.com/en/github/administering-a-repository/creating-releases) for more information
about how to create a release.

#### Creating a local provider repository

clusterctl supports reading from a repository defined on the local file system.

A local repository can be defined by creating a `<provider-name>` folder with a `<version>` sub-folder for each hosted release;
the sub-folder name MUST be a valid semantic version number.

Each version sub-folder MUST contain the corresponding components YAML, the metadata YAML and, optionally, the workload cluster templates.
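
For instance, a minimal sketch of laying out a local repository (paths and file names are illustrative):

```shell
mkdir -p ~/local-repo/infrastructure-my-provider/v0.1.0
cp infrastructure-components.yaml metadata.yaml cluster-template.yaml \
   ~/local-repo/infrastructure-my-provider/v0.1.0/
```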

### Metadata YAML

The provider is required to generate a **metadata YAML** file and publish it to the provider's repository.