diff --git a/docs/book/src/SUMMARY.md b/docs/book/src/SUMMARY.md
index 1ef7d40b6f22..bd72d1eb7463 100644
--- a/docs/book/src/SUMMARY.md
+++ b/docs/book/src/SUMMARY.md
@@ -8,6 +8,14 @@
- [Certificate Management](./tasks/certs/index.md)
- [Using Custom Certificates](./tasks/certs/using-custom-certificates.md)
- [Generating a Kubeconfig](./tasks/certs/generate-kubeconfig.md)
+- [clusterctl](./clusterctl/intro.md)
+ - [init](./clusterctl/init.md)
+ - [move](./clusterctl/move.md)
+ - [update](./clusterctl/update.md)
+  - [config cluster](./clusterctl/config-cluster.md)
+  - [adopt](./clusterctl/adopt.md)
+  - [clusterctl configuration](./clusterctl/configuring-clusterctl.md)
+  - [providers contract](./clusterctl/providers-contract.md)
- [Developer Guide](./architecture/developer-guide.md)
- [Repository Layout](./architecture/repository-layout.md)
- [Rapid iterative development with Tilt](./developer/tilt.md)
@@ -33,6 +41,5 @@
- [Reference](./reference/reference.md)
- [Glossary](./reference/glossary.md)
- [Provider List](./reference/providers.md)
- - [clusterctl CLI](./tooling/clusterctl.md)
- [Code of Conduct](./code-of-conduct.md)
- [Contributing](./CONTRIBUTING.md)
diff --git a/docs/book/src/clusterctl/adopt.md b/docs/book/src/clusterctl/adopt.md
new file mode 100644
index 000000000000..30404ce4c546
--- /dev/null
+++ b/docs/book/src/clusterctl/adopt.md
@@ -0,0 +1 @@
+TODO
\ No newline at end of file
diff --git a/docs/book/src/clusterctl/config-cluster.md b/docs/book/src/clusterctl/config-cluster.md
new file mode 100644
index 000000000000..1333ed77b7e1
--- /dev/null
+++ b/docs/book/src/clusterctl/config-cluster.md
@@ -0,0 +1 @@
+TODO
diff --git a/docs/book/src/clusterctl/configuring-clusterctl.md b/docs/book/src/clusterctl/configuring-clusterctl.md
new file mode 100644
index 000000000000..17ba72d27904
--- /dev/null
+++ b/docs/book/src/clusterctl/configuring-clusterctl.md
@@ -0,0 +1,24 @@
+# The clusterctl config file
+
+The clusterctl config file can be used for:
+
+- Customizing the list of providers and provider repositories that an instance of clusterctl can use
+- Providing variable values to be used for variable substitution when installing components YAML or cluster templates
+
+## Provider repositories
+
+The `clusterctl` command is designed to work with all the providers implementing
+the [clusterctl provider contract](providers-contract.md).
+
+Each provider is expected to define a provider repository, that is, a well-known place where the release assets for
+a provider are published. An example of a provider repository is the [GitHub release assets for Cluster API](https://github.com/kubernetes-sigs/cluster-api/releases).
+
+The `clusterctl` command ships with a pre-defined list of provider repositories that includes all the provider implementations
+developed as SIG Cluster Lifecycle projects.
+
+The user can change this list by adding new providers or by changing the
+repository address of a pre-defined provider by using ....
+
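+As a minimal sketch, assuming the config file lives at `~/.cluster-api/clusterctl.yaml` and uses a `providers` list
+(both the location and the exact schema are assumptions for illustration), a custom entry could look like:
+
+```yaml
+providers:
+  # Override the repository address for a pre-defined provider.
+  - name: "aws"
+    type: "InfrastructureProvider"
+    url: "https://github.com/my-org/my-aws-fork/releases/latest/infrastructure-components.yaml"
+  # Add a new custom provider.
+  - name: "my-infra"
+    type: "InfrastructureProvider"
+    url: "https://github.com/my-org/my-infra-provider/releases/latest/infrastructure-components.yaml"
+```
+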
+## Variables
+
+TODO
\ No newline at end of file
diff --git a/docs/book/src/clusterctl/init.md b/docs/book/src/clusterctl/init.md
new file mode 100644
index 000000000000..c57703790f1e
--- /dev/null
+++ b/docs/book/src/clusterctl/init.md
@@ -0,0 +1,245 @@
+# clusterctl init
+
+The `clusterctl init` command installs the Cluster API components and transforms the Kubernetes cluster
+into a management cluster.
+
+This document provides more detail on how `clusterctl init` works and on the supported options for customizing your
+management cluster.
+
+## Defining the management cluster
+
+The `clusterctl init` command accepts as input a list of providers to install.
+
+
+
+#### Automatically installed providers
+
+The `clusterctl init` command automatically adds the Cluster API core provider and
+the kubeadm bootstrap provider to the list of providers to install. This allows you to use a concise command syntax for
+initializing a management cluster, e.g. the command:
+
+`clusterctl init --infrastructure aws`
+
+installs the `aws` infrastructure provider, the Cluster API core provider and the kubeadm bootstrap provider.
+
+
+
+
+#### Provider version
+
+The `clusterctl init` command by default installs the latest version available for each selected provider.
+
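+If a different version is required, a sketch of how it could be selected (the `name:version` syntax is an
+assumption for illustration):
+
+```shell
+# Install a specific version of the aws infrastructure provider
+# instead of the latest available one (assumed syntax).
+clusterctl init --infrastructure aws:v0.4.1
+```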
+
+
+#### Target namespace
+
+The `clusterctl init` command by default installs each provider in the default target namespace that is defined in
+the provider's components YAML file, e.g. `capi-system` for the Cluster API core provider.
+
+See the provider documentation for more details.
+
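+A sketch of how the default can be overridden, using the `--target-namespace` flag described in the
+multi-tenancy section below:
+
+```shell
+# Install the aws infrastructure provider in a custom target namespace.
+clusterctl init --infrastructure aws --target-namespace aws-system1
+```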
+
+
+
+
+#### Watching namespace
+
+The `clusterctl init` command by default installs each provider configured for watching objects in the default watching
+namespace that is defined in the provider's components YAML file, e.g. `` (empty, all namespaces) for the Cluster API core provider.
+
+See the provider documentation for more details.
+
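+A sketch of how the default can be overridden, using the `--watching-namespace` flag described in the
+multi-tenancy section below:
+
+```shell
+# Install the aws infrastructure provider configured to watch
+# objects in a single namespace only.
+clusterctl init --infrastructure aws --watching-namespace aws-system1
+```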
+
+
+
+
+#### Multi-tenancy
+
+With the term *multi-tenant* we indicate a management cluster where multiple instances of one or more providers are installed.
+
+The user can achieve multi-tenancy configurations with `clusterctl` by using a combination of:
+
+- Multiple calls to `clusterctl init`;
+- Usage of the `--target-namespace` flag;
+- Usage of the `--watching-namespace` flag;
+
+The `clusterctl` command officially supports the following multi-tenancy configurations:
+
+{{#tabs name:"tab-multi-tenancy" tabs:"n-Infra,n-Core"}}
+{{#tab n-Infra}}
+A management cluster with n (n>1) instances of an infrastructure provider, and only one instance
+of the Cluster API core provider, bootstrap provider and control plane provider (optional).
+
+For example:
+
+* Cluster API core provider installed in the `capi-system` namespace, watching objects in all namespaces;
+* The kubeadm bootstrap provider in `cabpk-system`, watching all namespaces;
+* The kubeadm control plane provider in `cacpk-system`, watching all namespaces;
+* An `aws` infrastructure provider in `aws-system1`, watching objects in `aws-system1` only;
+* An `aws` infrastructure provider in `aws-system2`, watching objects in `aws-system2` only;
+* etc. (more instances of the `aws` provider)
+
+{{#/tab }}
+{{#tab n-Core}}
+A management cluster with n (n>1) instances of the Cluster API core provider, each one with a dedicated
+instance of infrastructure provider, bootstrap provider, and control plane provider (optional).
+
+For example:
+
+* A Cluster API core provider installed in the `capi-system1` namespace, watching objects in `capi-system1` only with a dedicated:
+ * kubeadm bootstrap provider in `capi-system1`, watching `capi-system1`;
+ * kubeadm control plane provider in `capi-system1`, watching `capi-system1`;
+  * `aws` infrastructure provider in `capi-system1`, watching objects in `capi-system1`;
+* A Cluster API core provider installed in the `capi-system2` namespace, watching objects in `capi-system2` only with a dedicated:
+ * kubeadm bootstrap provider in `capi-system2`, watching `capi-system2`;
+ * kubeadm control plane provider in `capi-system2`, watching `capi-system2`;
+  * `aws` infrastructure provider in `capi-system2`, watching objects in `capi-system2`;
+* etc. (more instances of the Cluster API core provider and the dedicated providers)
+
+
+{{#/tab }}
+{{#/tabs }}
+
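+As a minimal sketch, the n-Infra configuration above could be obtained with a sequence of calls similar to the
+following (namespace names are examples only, and the exact interaction between the flags and the automatically
+installed providers is an assumption for illustration):
+
+```shell
+# First call: the Cluster API core provider and the kubeadm bootstrap
+# provider are added automatically, watching all namespaces by default.
+clusterctl init --infrastructure aws --target-namespace aws-system1 --watching-namespace aws-system1
+
+# Subsequent calls: add more instances of the infrastructure provider,
+# each one confined to its own namespace.
+clusterctl init --infrastructure aws --target-namespace aws-system2 --watching-namespace aws-system2
+```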
+
+
+
+
+## Provider repositories
+
+In order to access provider specific information, like the components YAML to be used for installing a provider,
+`clusterctl init` accesses the **provider repositories**, that are well-known places where the release assets for
+a provider are published.
+
+See the [clusterctl configuration](configuring-clusterctl.md) page for more info about provider repository configurations.
+
+
+
+## Variable substitution
+
+Providers can use variables in the components YAML published in the provider's repository.
+
+During `clusterctl init`, those variables are replaced with values read from environment variables or from the
+[clusterctl configuration](configuring-clusterctl.md).
+
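+As a sketch, assuming the `aws` provider's components YAML contains a `${ AWS_B64ENCODED_CREDENTIALS }` variable:
+
+```shell
+# Set the variable before running clusterctl init; the corresponding
+# token in the components YAML is replaced with this value during install.
+export AWS_B64ENCODED_CREDENTIALS=<your-base64-encoded-credentials>
+clusterctl init --infrastructure aws
+```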
+
+
+
+
+## Additional information
+
+When installing a provider the `clusterctl init` command takes the following additional steps in order to simplify
+the lifecycle management of the provider's components.
+
+* All the provider's components are labeled, so they can be easily identified at
+subsequent stages of the provider's lifecycle, e.g. during upgrades. Applied labels are:
+ - `clusterctl.cluster.x-k8s.io`
+ - `clusterctl.cluster.x-k8s.io/provider=`
+
+* An additional `Provider` object is created in the target namespace where the provider is installed.
+This object keeps track of the provider version, the watching namespace, and other useful information
+for the inventory of the providers currently installed in the management cluster.
+
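+As a sketch of how the labels could be used (the `aws` label value is an example only):
+
+```shell
+# List all the deployments installed by clusterctl, across all namespaces.
+kubectl get deployments --all-namespaces -l clusterctl.cluster.x-k8s.io
+
+# List only the components of a specific provider.
+kubectl get deployments --all-namespaces -l clusterctl.cluster.x-k8s.io/provider=aws
+```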
+
diff --git a/docs/book/src/clusterctl/intro.md b/docs/book/src/clusterctl/intro.md
new file mode 100644
index 000000000000..04bb1728063b
--- /dev/null
+++ b/docs/book/src/clusterctl/intro.md
@@ -0,0 +1,235 @@
+# clusterctl
+
+The `clusterctl` CLI tool handles the lifecycle of a Cluster API management cluster.
+
+## Day 1
+
+The `clusterctl` user interface is specifically designed for providing a simple "Day 1 experience" and a
+quick start with Cluster API.
+
+### Prerequisites
+
+* Cluster API requires an existing Kubernetes cluster accessible via kubectl;
+
+{{#tabs name:"tab-create-cluster" tabs:"Development,Production"}}
+{{#tab Development}}
+
+{{#tabs name:"tab-create-development-cluster" tabs:"kind,minikube"}}
+{{#tab kind}}
+TODO
+{{#/tab }}
+{{#tab minikube}}
+TODO
+{{#/tab }}
+{{#/tabs }}
+
+{{#/tab }}
+{{#tab Production}}
+TODO
+{{#/tab }}
+{{#/tabs }}
+
+* If the provider of your choice expects some preliminary steps to be executed, users should take care of them in advance;
+* If the provider of your choice expects some environment variables, e.g. the `AWS_CREDENTIALS` for the `aws`
+infrastructure provider, users should ensure those variables are set in advance.
+
+{{#tabs name:"tab-installation-infrastructure" tabs:"AWS,Azure,Docker,GCP,vSphere,OpenStack"}}
+{{#tab AWS}}
+
+#### Preliminary steps
+
+Download the latest binary of `clusterawsadm` from the [AWS provider releases](https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases) and make sure to place it in your path.
+
+
+
+#### Environment variables
+
+```bash
+# Create the base64 encoded credentials using clusterawsadm.
+# This command uses your environment variables and encodes
+# them in a value to be stored in a Kubernetes Secret.
+export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)
+```
+
+{{#/tab }}
+{{#tab Azure}}
+
+#### Preliminary steps
+
+#### Environment variables
+
+```bash
+# Create the base64 encoded credentials
+export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
+export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
+export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
+export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"
+```
+
+
+
+{{#/tab }}
+{{#tab Docker}}
+
+#### Preliminary steps
+
+#### Environment variables
+
+{{#/tab }}
+{{#tab GCP}}
+
+#### Preliminary steps
+
+#### Environment variables
+
+```bash
+# Create the base64 encoded credentials from your GCP credentials json.
+# This command reads the credentials file and encodes its content
+# in a value to be stored in a Kubernetes Secret.
+export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
+```
+
+{{#/tab }}
+{{#tab vSphere}}
+
+#### Preliminary steps
+
+It is required to use an official CAPV machine image for your vSphere VM templates. See [Uploading CAPV Machine Images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md#uploading-the-capv-machine-image) for instructions on how to do this.
+
+For more information about prerequisites, credentials management, or permissions for vSphere, visit the [getting started guide](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md).
+
+#### Environment variables
+
+
+{{#/tab }}
+{{#tab OpenStack}}
+
+#### Preliminary steps
+
+#### Environment variables
+
+{{#/tab }}
+{{#/tabs }}
+
+### 1. Initialize the management cluster
+
+The `clusterctl init` command installs the Cluster API components and transforms the Kubernetes cluster
+into a management cluster.
+
+```shell
+clusterctl init --infrastructure aws
+```
+
+The command accepts as input a list of providers to install; when executed for the first time, `clusterctl init`
+automatically adds the Cluster API core provider to the list and, if a bootstrap provider is not specified, it also
+adds the kubeadm bootstrap provider.
+
+The output of `clusterctl init` is similar to this:
+
+```shell
+performing init...
+ - cluster-api CoreProvider installed (v0.2.8)
+ - aws InfrastructureProvider installed (v0.4.1)
+
+Your cluster API management cluster has been initialized successfully!
+
+You can now create your first workload cluster by running the following:
+
+ clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
+```
+
+### 2. Create the first workload cluster
+
+Once the management cluster is ready, you can create the first workload cluster.
+
+The `clusterctl config cluster` command returns a YAML template for creating a workload cluster.
+Store it locally, optionally customize it, and then apply it to start provisioning the workload cluster.
+
+
+
+
+
+
+
+For example:
+
+```
+clusterctl config cluster my-cluster1 --kubernetes-version v1.16.3 > my-cluster1.yaml
+```
+
+This creates a YAML file named `my-cluster1.yaml` with a predefined list of Cluster API objects: Cluster, Machines,
+MachineDeployments, etc.
+
+The file can then be modified using your editor of choice; when ready, run the following command
+to apply the cluster manifest.
+
+```
+kubectl apply -f my-cluster1.yaml
+```
+
+The output is similar to this:
+
+```
+kubeadmconfig.bootstrap.cluster.x-k8s.io/my-cluster1-controlplane-0 created
+kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/my-cluster1-worker created
+cluster.cluster.x-k8s.io/my-cluster1 created
+machine.cluster.x-k8s.io/my-cluster1-controlplane-0 created
+machinedeployment.cluster.x-k8s.io/my-cluster1-worker created
+awscluster.infrastructure.cluster.x-k8s.io/my-cluster1 created
+awsmachine.infrastructure.cluster.x-k8s.io/my-cluster1-controlplane-0 created
+awsmachinetemplate.infrastructure.cluster.x-k8s.io/my-cluster1-worker created
+```
+
+## Day 2 operations
+
+The `clusterctl` command also supports day 2 operations:
+
+* use `clusterctl init` to install additional Cluster API providers
+* use `clusterctl upgrade` to upgrade Cluster API providers
+* use `clusterctl delete` to delete Cluster API providers
+
+* use `clusterctl config cluster` to spec out additional workload clusters
+* use `clusterctl move` to migrate the objects defining a workload cluster (e.g. Cluster, Machines) from one management cluster to another management cluster
+
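+As a minimal sketch of a day 2 workflow (the `--to-kubeconfig` flag name is an assumption for illustration):
+
+```shell
+# Add another infrastructure provider to the existing management cluster.
+clusterctl init --infrastructure vsphere
+
+# Migrate the objects defining a workload cluster to another management
+# cluster, identified by its kubeconfig file (assumed flag name).
+clusterctl move --to-kubeconfig=target-mgmt.kubeconfig
+```
+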
+## What's next
+
+* Deep dive on [`clusterctl init`](init.md)
+* Focus on [`clusterctl upgrade`](update.md)
+* Focus on [`clusterctl move`](move.md)
diff --git a/docs/book/src/clusterctl/move.md b/docs/book/src/clusterctl/move.md
new file mode 100644
index 000000000000..30404ce4c546
--- /dev/null
+++ b/docs/book/src/clusterctl/move.md
@@ -0,0 +1 @@
+TODO
\ No newline at end of file
diff --git a/docs/book/src/clusterctl/providers-contract.md b/docs/book/src/clusterctl/providers-contract.md
new file mode 100644
index 000000000000..dc4b09b2c8d9
--- /dev/null
+++ b/docs/book/src/clusterctl/providers-contract.md
@@ -0,0 +1,189 @@
+# The `clusterctl` provider contract
+
+The `clusterctl` command is designed to work with all the providers compliant with the following rules.
+
+## Provider Repositories
+
+Each provider MUST define a **provider repository**, that is, a well-known place where the release assets for
+a provider are published.
+
+The provider repository should contain the following files:
+
+* The metadata YAML
+* The Component YAML
+* Workload cluster templates
+
+An example of a provider repository is the [GitHub release assets for Cluster API](https://github.com/kubernetes-sigs/cluster-api/releases).
+
+
+
+
+
+### Metadata YAML
+
+The provider is required to generate a **metadata YAML** file and publish it to the provider's repository.
+
+The metadata YAML file documents the release series of each provider and maps each release series to a Cluster API version.
+
+For example:
+
+```yaml
+TODO: add example
+```
+
+
+
+### Components YAML
+
+The provider is required to generate a **components YAML** file, that is a single YAML with _all_ the components required
+for installing the provider itself (CRDs, controllers, RBAC etc.), and publish it to the provider's repository.
+The following rules apply:
+
+#### Naming conventions
+
+It is strongly recommended that:
+* Core providers release a file called `core-components.yaml`
+* Infrastructure providers release a file called `infrastructure-components.yaml`
+* Bootstrap providers release a file called `bootstrap-components.yaml`
+* Control plane providers release a file called `control-plane-components.yaml`
+
+#### Target namespace
+
+The components YAML should contain one Namespace object, which will be used as the default target namespace
+when creating the provider components.
+
+All the objects in the components YAML MUST belong to the target namespace, with the exception of objects that
+are not namespaced, like ClusterRoles/ClusterRoleBinding and CRD objects.
+
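+A minimal sketch of the expected layout, assuming `capi-system` as an example target namespace:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  # Used as the default target namespace when creating the provider components.
+  name: capi-system
+---
+# All the namespaced objects that follow MUST set metadata.namespace to the
+# target namespace; non-namespaced objects such as ClusterRoles,
+# ClusterRoleBindings and CRDs are the exception.
+```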
+
+
+#### Controllers & Watching namespace
+
+Each provider is expected to deploy controllers using a Deployment.
+
+While defining the Deployment Spec, the container that executes the controller binary MUST be called `manager`.
+
+The manager MUST support a command line argument named `--namespace` for specifying the namespace where the controller
+will look for objects to reconcile.
+
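+A minimal sketch of a compliant Deployment (names and image are examples only):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: provider-controller-manager
+  namespace: provider-system
+spec:
+  selector:
+    matchLabels:
+      control-plane: controller-manager
+  template:
+    metadata:
+      labels:
+        control-plane: controller-manager
+    spec:
+      containers:
+        # The container that executes the controller binary MUST be called manager.
+        - name: manager
+          image: example.com/provider-controller:v0.1.0
+          args:
+            # Namespace to watch for objects to reconcile;
+            # an empty value means all namespaces.
+            - "--namespace="
+```
+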
+#### Variables
+
+The components YAML can contain environment variables matching the regexp `\${\s*([A-Z0-9_]+)\s*}`; it is highly
+recommended to prefix the variable name with the provider name e.g. `${ AWS_CREDENTIALS }`
+
+Each provider SHOULD create user facing documentation with the list of required variables and with all the additional
+notes that are required to assist the user in defining the value for each variable.
+
+### Workload cluster templates
+
+Infrastructure providers can publish **cluster template** files to be used by `clusterctl config cluster`.
+Each cluster template is a single YAML file with _all_ the objects required to create a new workload cluster.
+The following rules apply:
+
+#### Naming conventions
+
+Cluster templates MUST be stored in the same folder as the components YAML and adhere to the following naming convention:
+1. The default cluster template should be named `config-{bootstrap}.yaml`. e.g. `config-kubeadm.yaml`
+2. Additional cluster templates should be named `config-{flavor}-{bootstrap}.yaml`. e.g. `config-production-kubeadm.yaml`
+
+`{bootstrap}` is the name of the bootstrap provider used in the template; `{flavor}` is the name the user can pass to the
+`clusterctl config cluster --flavor` flag to identify the specific template to use.
+
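+For example, assuming kubeadm is the bootstrap provider in use, the following call would select the
+`config-production-kubeadm.yaml` template (cluster name and flavor are examples only):
+
+```shell
+clusterctl config cluster my-cluster --flavor production
+```
+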
+Each provider SHOULD create user facing documentation with the list of available cluster templates.
+
+#### Target namespace
+
+The cluster template YAML MUST assume the target namespace already exists.
+
+All the objects in the cluster template YAML MUST be deployed in the same namespace.
+
+#### Variables
+
+The cluster templates YAML can also contain environment variables (following the same rules as the components YAML).
+
+Each provider SHOULD create user facing documentation with the list of required variables and with all the additional
+notes that are required to assist the user in defining the value for each variable.
+
+
+
+## Additional notes
+
+### Components YAML transformations
+
+Provider owners should be aware of the following transformations that `clusterctl` applies during components installation:
+
+* Variables substitution;
+* Enforcement of target namespace:
+ * The name of the namespace object is set;
+  * The namespace field of all the objects is set (with the exception of cluster-wide objects, e.g. ClusterRoles);
+  * ClusterRole and ClusterRoleBinding are renamed by adding a "namespace-" prefix to the name; this change reduces the risks
+  of conflicts between several instances of the same provider in case of multi-tenancy;
+* Enforcement of watching namespace;
+* All components are labeled;
+
+### Cluster template transformations
+
+Provider owners should be aware of the following transformations that `clusterctl` applies when processing a cluster template:
+
+* Variables substitution;
+* Enforcement of target namespace:
+ * The namespace field of all the objects is set;
+
+### Links to external objects
+
+The `clusterctl` command requires that both the components YAML and the cluster templates contain _all_ the required
+objects.
+
+If, for any reason, the provider's owners/YAML designers decide not to comply with this recommendation and e.g. to
+
+* implement links to external objects from a components YAML (e.g. secrets, aggregated ClusterRoles NOT included in the components YAML)
+* implement links to external objects from a cluster template (e.g. secrets, configMaps NOT included in the cluster template)
+
+then the provider's owners/YAML designers should be aware that it is their responsibility to ensure the proper
+functioning of all the `clusterctl` features in both single-tenancy and multi-tenancy scenarios, and/or to document known limitations.
+
+### Move constraints
+
+TODO: document current assumptions
+
+### Adopt
+
+TODO: document current assumptions
\ No newline at end of file
diff --git a/docs/book/src/clusterctl/update.md b/docs/book/src/clusterctl/update.md
new file mode 100644
index 000000000000..30404ce4c546
--- /dev/null
+++ b/docs/book/src/clusterctl/update.md
@@ -0,0 +1 @@
+TODO
\ No newline at end of file
diff --git a/docs/book/src/providers/clusterctl.md b/docs/book/src/providers/clusterctl.md
deleted file mode 100644
index 76fe8e51d8cb..000000000000
--- a/docs/book/src/providers/clusterctl.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# `clusterctl`
-
-## clusterctl v1alpha3 (clusterctl redesign)
-
-`clusterctl` is a CLI tool for handling the lifecycle of a cluster-API management cluster.
-
-The v1alpha3 release is designed for providing a simple day 1 experience; `clusterctl` is bundled with Cluster API and can be reused across providers
-that are compliant with the following rules.
-
-### Components YAML
-
-The provider is required to generate a single YAML file with all the components required for installing the provider
-itself (CRD, Controller, RBAC etc.).
-
-Infrastructure providers MUST release a file called `infrastructure-components.yaml`, while bootstrap provider MUST
-release a file called ` bootstrap-components.yaml` (exception for CABPK, which is included in CAPI by default).
-
-The components YAML should contain one Namespace object, which will be used as the default target namespace
-when creating the provider components.
-
-> If the generated component YAML does't contain a Namespace object, user will need to provide one to `clusterctl init` using
-> the `--target-namespace` flag.
-
-> In case there is more than one Namespace object in the components YAML, `clusterctl` will generate an error and abort
-> the provider installation.
-
-The components YAML can contain environment variables matching the regexp `\${\s*([A-Z0-9_]+)\s*}`; it is highly
-recommended to prefix the variable name with the provider name e.g. `{ $AWS_CREDENTIALS }`
-
-> Users are required to ensure that environment variables are set in advance before running `clusterctl init`; if a variable
-> is missing, `clusterctl` will generate an error and abort the provider installation.
-
-### Workload cluster templates
-
-Infrastructure provider could publish cluster templates to be used by `clusterctl config cluster`.
-
-Cluster templates MUST be stored in the same folder of the component YAML and adhere to the the following naming convention:
-1. The default cluster template should be named `config-{bootstrap}.yaml`. e.g `config-kubeadm.yaml`
-2. Additional cluster template should be named `config-{flavor}-{bootstrap}.yaml`. e.g `config-production-kubeadm.yaml`
-
-`{bootstrap}` is the name of the bootstrap provider used in the template; `{flavor}` is the name the user can pass to the
-`clusterctl config cluster --flavor` flag to identify the specific template to use.
-
-## Previous versions (unsupported)
-
-### v1alpha1
-
-`clusterctl` was a command line tool packaged with v1alpha1 providers. The goal of this tool was to go from nothing to a
-running management cluster in whatever environment the provider was built for. For example, Cluster-API-Provider-AWS
-packaged a `clusterctl` that created a Kubernetes cluster in EC2 and installed the necessary controllers to respond to
-Cluster API's APIs.
-
-### v1alpha2
-
-`clusterctl` was likely becoming provider-agnostic meaning one clusterctl was bundled with Cluster API and can be reused
-across providers. Work here is still being figured out but providers will not be packaging their own `clusterctl`
-anymore.