Document multi-tenancy contract
fabriziopandini committed Jan 20, 2021
1 parent daba8fe commit bb54134
Showing 8 changed files with 78 additions and 131 deletions.
2 changes: 2 additions & 0 deletions docs/book/src/SUMMARY.md
@@ -43,6 +43,8 @@
- [MachineHealthCheck](./developer/architecture/controllers/machine-health-check.md)
- [Control Plane](./developer/architecture/controllers/control-plane.md)
- [MachinePool](./developer/architecture/controllers/machine-pool.md)
- [Multi-tenancy](./developer/architecture/controllers/multi-tenancy.md)
- [Support multiple instances](./developer/architecture/controllers/support-multiple-instances.md)
- [Provider Implementers](./developer/providers/implementers.md)
- [v1alpha1 to v1alpha2](./developer/providers/v1alpha1-to-v1alpha2.md)
- [v1alpha2 to v1alpha3](./developer/providers/v1alpha2-to-v1alpha3.md)
60 changes: 0 additions & 60 deletions docs/book/src/clusterctl/commands/init.md
@@ -125,66 +125,6 @@ same namespace.

</aside>

#### Multi-tenancy

*Multi-tenancy* for Cluster API means a management cluster where multiple instances of the same provider are installed.

The user can achieve multi-tenancy configurations with `clusterctl` by a combination of:

- Multiple calls to `clusterctl init`;
- Usage of the `--target-namespace` flag;
- Usage of the `--watching-namespace` flag;

The `clusterctl` command officially supports the following multi-tenancy configurations:

{{#tabs name:"tab-multi-tenancy" tabs:"n-Infra, n-Core"}}
{{#tab n-Infra}}
A management cluster with <em>n (n>1)</em> instances of an infrastructure provider, and <em>only one</em> instance
of the Cluster API core provider, bootstrap provider, and control plane provider (optional).

For example:

* Cluster API core provider installed in the `capi-system` namespace, watching objects in all namespaces;
* The kubeadm bootstrap provider in `capbpk-system`, watching all namespaces;
* The kubeadm control plane provider in `cacpk-system`, watching all namespaces;
* The `aws` infrastructure provider in `aws-system1`, watching objects in `aws-system1` only;
* The `aws` infrastructure provider in `aws-system2`, watching objects in `aws-system2` only;
* etc. (more instances of the `aws` provider)
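
A possible sketch of the `clusterctl init` calls behind this layout (namespace names follow the example above for the
`aws` instances; the credentials/environment variables for each `aws` instance are set before the corresponding call):

```shell
# First call: install the core, kubeadm bootstrap and kubeadm control plane providers,
# watching all namespaces.
clusterctl init

# Additional calls: one aws provider instance per namespace, each watching its own namespace only.
clusterctl init --infrastructure aws --target-namespace aws-system1 --watching-namespace aws-system1
clusterctl init --infrastructure aws --target-namespace aws-system2 --watching-namespace aws-system2
```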

{{#/tab }}
{{#tab n-Core}}
A management cluster with <em>n (n>1)</em> instances of the Cluster API core provider, each one with <em>a dedicated</em>
instance of the infrastructure provider, bootstrap provider, and control plane provider (optional).

For example:

* A Cluster API core provider installed in the `capi-system1` namespace, watching objects in `capi-system1` only, and with:
  * The kubeadm bootstrap provider in `capi-system1`, watching `capi-system1`;
  * The kubeadm control plane provider in `capi-system1`, watching `capi-system1`;
  * The `aws` infrastructure provider in `capi-system1`, watching objects in `capi-system1`;
* A Cluster API core provider installed in the `capi-system2` namespace, watching objects in `capi-system2` only, and with:
  * The kubeadm bootstrap provider in `capi-system2`, watching `capi-system2`;
  * The kubeadm control plane provider in `capi-system2`, watching `capi-system2`;
  * The `aws` infrastructure provider in `capi-system2`, watching objects in `capi-system2`;
* etc. (more instances of the Cluster API core provider and the dedicated providers)
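
A possible sketch of the corresponding `clusterctl init` calls (namespace names follow the example above, and each call
is run with the environment variables/credentials for that instance):

```shell
# Each call installs a dedicated set of providers in a single namespace,
# watching that namespace only.
clusterctl init --infrastructure aws --target-namespace capi-system1 --watching-namespace capi-system1
clusterctl init --infrastructure aws --target-namespace capi-system2 --watching-namespace capi-system2
```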


{{#/tab }}
{{#/tabs }}


<aside class="note warning">

<h1>Warning</h1>

It is possible to achieve many other multi-tenancy configurations with `clusterctl`.

However, the user should be aware that configurations not listed above are not verified by the `clusterctl` tests,
and support will be provided on a best-effort basis only.

</aside>


## Provider repositories

To access provider specific information, such as the components YAML to be used for installing a provider,
64 changes: 0 additions & 64 deletions docs/book/src/clusterctl/commands/upgrade.md
@@ -3,17 +3,6 @@
The `clusterctl upgrade` command can be used to upgrade the version of the Cluster API providers (CRDs, controllers)
installed into a management cluster.

## Background info: management groups

The upgrade procedure is designed to ensure all the providers in a *management group* use the same
API version of Cluster API (contract), e.g. the v1alpha3 Cluster API contract.

A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure
providers watching objects in the same namespace.

Usually, in a management cluster there is only one management group, but in case of [n-core multi-tenancy](init.md#multi-tenancy)
there can be more than one.

# upgrade plan

The `clusterctl upgrade plan` command can be used to identify possible targets for upgrades.
@@ -106,56 +95,3 @@ clusterctl upgrade apply --management-group capi-system/cluster-api \
In this case, the versions of all the providers must be explicitly stated.

</aside>

## Upgrading a Multi-tenancy management cluster

[Multi-tenancy](init.md#multi-tenancy) for Cluster API means a management cluster where multiple instances of the same
provider are installed. This is achieved by multiple calls to `clusterctl init`, in most cases each one with
different environment variables for customizing the provider instances.

In order to upgrade a multi-tenancy management cluster and preserve the instance-specific settings, you should follow
the same approach during upgrades and execute multiple calls to `clusterctl upgrade apply`, each one with different
environment variables.

For instance, in case of a management cluster with n>1 instances of an infrastructure provider, and only one instance
of the Cluster API core provider, bootstrap provider, and control plane provider, you should:

Run `clusterctl upgrade apply` once for the core provider, the bootstrap provider, and the control plane provider;
this can be achieved by using the `--core`, `--bootstrap`, and `--control-plane` flags followed by the upgrade target
for each of those providers, e.g.

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api \
--core capi-system/cluster-api:v0.3.1 \
--bootstrap capi-kubeadm-bootstrap-system/kubeadm:v0.3.1 \
--control-plane capi-kubeadm-control-plane-system/kubeadm:v0.3.1
```

Run `clusterctl upgrade apply` for each infrastructure provider instance, using the `--infrastructure` flag,
taking care to provide different environment variables for each call (as in the initial setup), e.g.

Set the environment variables for instance 1 and then run:

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api \
--infrastructure instance1/docker:v0.3.1
```

Afterwards, set the environment variables for instance 2 and then run:

```shell
clusterctl upgrade apply --management-group capi-system/cluster-api \
--infrastructure instance2/docker:v0.3.1
```

etc.

<aside class="note warning">

<h1>Tip</h1>

As an alternative to using multiple sets of environment variables, it is possible to use
multiple config files and pass them to the different `clusterctl upgrade apply` calls
using the `--config` flag.
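
For example, a possible sketch (the config file names are illustrative):

```shell
clusterctl upgrade apply --config instance1-config.yaml \
    --management-group capi-system/cluster-api \
    --infrastructure instance1/docker:v0.3.1

clusterctl upgrade apply --config instance2-config.yaml \
    --management-group capi-system/cluster-api \
    --infrastructure instance2/docker:v0.3.1
```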

</aside>
4 changes: 1 addition & 3 deletions docs/book/src/clusterctl/provider-contract.md
@@ -283,8 +283,6 @@ Provider authors should be aware of the following transformations that `clusterc
* Enforcement of target namespace:
  * The name of the namespace object is set;
  * The namespace field of all the objects is set (with the exception of cluster-wide objects, e.g. ClusterRoles);
  * ClusterRole and ClusterRoleBinding are renamed by adding a “${namespace}-“ prefix to the name; this change reduces the risk
    of conflicts between several instances of the same provider in case of multi-tenancy;
* Enforcement of watching namespace;
* All components are labeled;

@@ -307,7 +305,7 @@ If, for any reason, the provider authors/YAML designers decide not to comply wit
* implement links to external objects from a cluster template (e.g. secrets, configMaps NOT included in the cluster template)

The provider authors/YAML designers should be aware that it is their responsibility to ensure the proper
functioning of all the `clusterctl` features both in single tenancy or multi-tenancy scenarios and/or document known limitations.
functioning of `clusterctl` when using non-compliant component YAML or cluster templates.

### Move

13 changes: 13 additions & 0 deletions docs/book/src/developer/architecture/controllers/multi-tenancy.md
@@ -0,0 +1,13 @@
# Multi-tenancy

Multi-tenancy in Cluster API is defined as the capability of an infrastructure provider to manage different credentials, each
one of them corresponding to an infrastructure tenant.

## Contract

In order to support multi-tenancy, the following rules apply:

- Infrastructure providers MUST be able to manage different sets of credentials (if any).
- Providers SHOULD deploy and run any kind of webhook (validation, admission, conversion)
following Cluster API codebase best practices for the same release.
- Providers MUST create and publish a `{type}-components.yaml` accordingly.
41 changes: 41 additions & 0 deletions docs/book/src/developer/architecture/controllers/support-multiple-instances.md
@@ -0,0 +1,41 @@
# Support running multiple instances of the same provider

Up until v1alpha3, the need to support [multiple credentials](../../../reference/glossary.md#multi-tenancy) was addressed by running multiple
instances of the same provider, each one with its own set of credentials while watching different namespaces.

However, running multiple instances of the same provider proved to be complicated for several reasons:

- Complexity in packaging providers: CustomResourceDefinitions (CRDs) are global resources, and they may have a reference
to a service that can be used to convert between CRD versions (conversion webhooks). Only one of these services should
be running at any given time; this requirement previously led us to split the webhook code into a separate deployment
and namespace.
- Complexity in deploying providers, due to the requirement to ensure consistency of the management cluster, e.g.
controllers watching the same namespaces.
- The introduction of the concept of management groups in clusterctl, with impacts on the user experience/documentation.
- Complexity in managing the co-existence of different versions of the same provider, while there could be only
one version of CRDs and webhooks. Please note that this constraint generates a risk, because some versions of the provider
were de facto forced to run with CRDs and webhooks deployed from a different version.

Nevertheless, we want to make it possible for users to choose to deploy multiple instances of the same provider,
in case the above limitations/extra complexity are acceptable to them.

## Contract

In order to make it possible for users to deploy multiple instances of the same provider:

- Providers MUST support the `--namespace` flag in their controllers.

⚠️ Users selecting this deployment model, please be aware:

- Support should be considered best-effort.
- Cluster API (incl. every provider managed under `kubernetes-sigs`) won't release a specialized components file
supporting the scenario described above; however, users should be able to create such a deployment model from
the `/config` folder (see the sketch after this list).
- Cluster API (incl. every provider managed under `kubernetes-sigs`) testing infrastructure won't run test cases
with multiple instances of the same provider.
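
A possible sketch of how a user could build a customized components file from a provider's `/config` folder (the
repository, output file name, and tooling choices are illustrative, not an officially released artifact):

```shell
# Render the provider's default kustomization into a components file that can then be
# adapted (target namespace, the controller's --namespace flag, credentials) for an
# additional instance of the provider.
git clone https://github.com/kubernetes-sigs/cluster-api-provider-aws
cd cluster-api-provider-aws
kustomize build config/default > aws-instance2-components.yaml
```

The generated YAML can then be patched, e.g. with a kustomize overlay, to set a dedicated namespace and the
controller's `--namespace` flag before being applied to the management cluster.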

In conclusion, given the increasingly complex task of managing multiple instances of the same controllers,
the Cluster API community can provide only best-effort support for users who choose this model.

As always, if some members of the community would like to take on the responsibility of managing this model,
please reach out through the usual communication channels; we'll make sure to guide you down the right path.
13 changes: 13 additions & 0 deletions docs/book/src/developer/providers/v1alpha3-to-v1alpha4.md
@@ -41,3 +41,16 @@ the delegating client by default under the hood, so this can be now removed.
- The functions `fake.NewFakeClientWithScheme` and `fake.NewFakeClient` have been deprecated.
- Switch to `fake.NewClientBuilder().WithObjects().Build()` instead, which provides a cleaner interface
to create a new fake client with objects, lists, or a scheme.

## Multi-tenancy

Up until v1alpha3, the need to support multiple credentials was addressed by running multiple
instances of the same provider, each one with its own set of credentials while watching different namespaces.

Starting from v1alpha4, we are instead going to require that an infrastructure provider manages different credentials,
each one of them corresponding to an infrastructure tenant.

See [Multi-tenancy](../architecture/controllers/multi-tenancy.md) and [Support multiple instances](../architecture/controllers/support-multiple-instances.md) for
more details.

Specific changes related to this topic will be detailed in this document.
12 changes: 8 additions & 4 deletions docs/book/src/reference/glossary.md
@@ -142,11 +142,15 @@ Perform create, scale, upgrade, or destroy operations on the cluster.

The cluster where one or more Infrastructure Providers run, and where resources (e.g. Machines) are stored. Typically referred to when you are provisioning multiple workload clusters.

### Management group
### Multi-tenancy

A management group is a group of providers composed of a CoreProvider and a set of Bootstrap/ControlPlane/Infrastructure providers
watching objects in the same namespace. For example, a management group can be used for upgrades, in order to ensure all the providers
in a management group support the same Cluster API version.
Multi-tenancy in Cluster API is defined as the capability of an infrastructure provider to manage different credentials, each
one of them corresponding to an infrastructure tenant.

Please note that up until v1alpha3 this concept had a different meaning, referring to the capability to run multiple
instances of the same provider, each one with its own credentials; starting from v1alpha4 we are disambiguating the two concepts.

See [Multi-tenancy](../developer/architecture/controllers/multi-tenancy.md) and [Support multiple instances](../developer/architecture/controllers/support-multiple-instances.md).

# N
---
