Update documentation now that the provider specific code has been removed from this repository. (kubernetes-sigs#445)
roberthbailey authored and k8s-ci-robot committed Jul 25, 2018
1 parent 0b2f26f commit 0980af3
Showing 8 changed files with 81 additions and 91 deletions.
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -13,7 +13,7 @@ Fixes #

**Special notes for your reviewer**:

1. Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
_Please confirm that if this PR changes any image versions, then that's the sole change this PR makes._

**Release note**:
<!-- Write your release note:
16 changes: 7 additions & 9 deletions CONTRIBUTING.md
@@ -12,7 +12,7 @@ Please fill out either the individual or corporate Contributor License Agreement

## Finding Things That Need Help

If you're new to the project and want to help, but don't know where to start, we have a semi-curated list of issues that should not need deep knowledge of the system. [Have a look and see if anything sounds interesting](https://github.com/kubernetes-sigs/cluster-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). Alternatively, read some of the docs on other controllers and try to write your own, file and fix any/all issues that come up, including gaps in documentation!
If you're new to the project and want to help, but don't know where to start, we have a semi-curated list of issues that should not need deep knowledge of the system. [Have a look and see if anything sounds interesting](https://github.com/kubernetes-sigs/cluster-api/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). Alternatively, read some of the docs on other controllers and try to write your own, file and fix any/all issues that come up, including gaps in documentation!

## Contributing a Patch

@@ -26,12 +26,10 @@ All changes must be code reviewed. Coding conventions and standards are explaine

Cluster API maintainers may add "LGTM" (Looks Good To Me) or an equivalent comment to indicate that a PR is acceptable. Any change requires at least one LGTM. No pull requests can be merged until at least one Cluster API maintainer signs off with an LGTM.

## Cloud Provider Dev Guide
## Cloud Provider Developer Guide

### Overview

The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.

This document is meant to help OSS contributors implement support for providers (cloud or on-prem).

As part of adding support for a provider (cloud or on-prem), you will need to:
@@ -53,7 +51,8 @@ To minimize code duplication and maximize flexibility, bootstrap clusters with a

### A new Machine can be created in a declarative way

**A new Machine can be created in a declarative way, including Kubernetes version and container runtime version. It should also be able to specify provider-specific information such as OS image, instance type, disk configuration, etc., though this will not be portable.**
A new Machine can be created in a declarative way, specifying versions of various components such as the kubelet.
It should also be able to specify provider-specific information such as OS image, instance type, disk configuration, etc., though this will not be portable.
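As a rough sketch of what such a declarative Machine might look like (the `cluster.k8s.io/v1alpha1` API group and the exact field names are assumptions and may differ per provider):

```yaml
# Hypothetical Machine manifest; field names are illustrative assumptions.
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Machine
metadata:
  name: my-first-machine
spec:
  versions:
    kubelet: 1.9.4
    controlPlane: 1.9.4  # typically set only on master machines
  providerConfig:
    value:
      # Provider-specific, non-portable configuration, e.g. for GCE:
      machineType: n1-standard-2
      os: ubuntu-1604-lts
```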

When a cluster is first created with a cluster config file, there is no master node or API server yet, so the user will need to bootstrap a cluster. While the implementation details are specific to the provider, the following guidance should help you:

@@ -66,14 +65,14 @@ When a cluster is first created with a cluster config file, there is no master n

While not mandatory, new providers are encouraged to support configurable machine setups for creating new machines.
This allows flexibility in which startup scripts are used and which versions are supported, instead of hardcoding startup scripts into the machine controller.
You can find an example implementation for GCE [here](https://github.com/kubernetes-sigs/cluster-api/blob/master/cloud/google/machinesetup/config_types.go).
You can find an example implementation for GCE [here](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/ee60efd89c4d0129a6d42b40d069c0b41d2c4987/cloud/google/machinesetup/config_types.go).

##### GCE Implementation

For GCE, a [config map](https://github.com/kubernetes-sigs/cluster-api/blob/6aecf9c80a1ca29b45cb43ebfd50ac0d57eb7132/clusterctl/examples/google/provider-components.yaml.template#L118) holds the list of valid machine setup configs,
For GCE, a [config map](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/c0ac09e86b6630bd65c277120883719e514cfdf5/clusterctl/examples/google/provider-components.yaml.template#L151) holds the list of valid machine setup configs,
and the yaml file is volume mounted into the machine controller using a ConfigMap named `machine-setup`.

A [config type](https://github.com/kubernetes-sigs/cluster-api/blob/master/cloud/google/machinesetup/config_types.go#L45) defines a set of parameters that can be taken from the machine object being created, and maps those parameters to startup scripts and other relevant information.
A [config type](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/ee60efd89c4d0129a6d42b40d069c0b41d2c4987/cloud/google/machinesetup/config_types.go#L70) defines a set of parameters that can be taken from the machine object being created, and maps those parameters to startup scripts and other relevant information.
In GCE, the OS, machine roles, and version info are the parameters that map to a GCP image path and metadata (which contains the startup script).
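For illustration, a machine setup config entry in that ConfigMap might resemble the following (the real schema is defined in `config_types.go`; these exact field names are assumptions):

```yaml
# Hypothetical machine setup config entry mapping machine parameters
# to a GCP image path and startup-script metadata.
items:
- machineParams:
    os: ubuntu-1604-lts
    roles:
    - Master
    versions:
      kubelet: 1.9.4
      controlPlane: 1.9.4
  image: projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts
  metadata:
    startupScript: |
      #!/bin/bash
      # install and configure the control plane components here
```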

When creating a new machine, there should be a check for whether the machine setup is supported.
@@ -93,7 +92,6 @@ When the client deletes a Machine object, your controller's reconciler should tr
These include:

* A specific Machine can have its kubelet version upgraded or downgraded.
* A specific Machine can have its container runtime changed, or its version upgraded or downgraded.
* A specific Machine can have its OS image upgraded or downgraded.
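For example, a kubelet upgrade can be requested declaratively by editing the Machine object (e.g. with `kubectl edit machine <name>`) so that only the version field changes. A sketch of the relevant spec fragment (field layout assumed):

```yaml
# Only spec.versions.kubelet changes; the provider's machine controller
# is responsible for reconciling the node to the new version.
spec:
  versions:
    kubelet: 1.9.5  # was 1.9.4
```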

A sample implementation for an upgrader is [provided here](https://github.com/kubernetes-sigs/cluster-api/blob/master/tools/upgrader/util/upgrade.go). Each machine is upgraded serially, which can amount to:
10 changes: 7 additions & 3 deletions README.md
@@ -22,14 +22,18 @@ To learn more, see the [Cluster API KEP][cluster-api-kep].

* Chat with us on [Slack](http://slack.k8s.io/): #cluster-api

* Pointers to repositories and PRs where some Cluster API provisioners are being
developed.
## Provider Implementations

The code in this repository is independent of any specific deployment environment.
Provider-specific code is being developed in separate repositories, some of which
are also sponsored by SIG-cluster-lifecycle:

* AWS, https://github.com/kubernetes-sigs/cluster-api-provider-aws
* AWS/Openshift, https://github.com/openshift/cluster-operator
* Azure, https://github.com/platform9/azure-provider
* GCE, https://github.com/kubernetes-sigs/cluster-api-provider-gcp
* OpenStack, https://github.com/kubernetes-sigs/cluster-api-provider-openstack
* vSphere, https://github.com/kubernetes-sigs/cluster-api/tree/master/cloud/vsphere
* vSphere, https://github.com/roberthbailey/cluster-api-provider-vsphere

## Getting Started
### Prerequisites
31 changes: 16 additions & 15 deletions clusterctl/CONTRIBUTING.md
@@ -1,26 +1,27 @@
# Contributing Guidelines

1. Follow the [Getting Started]((https://github.com/kubernetes-sigs/cluster-api/blob/master/cluster-api/clusterctl/README.md)) steps to create a cluster.
Before submitting a PR you should run the unit and integration tests.

# Development
## Building

Before submitting an PR you should run the unit and integration tests. Instructions for doing so are given in the [Testing](#Testing) section.
To build the go code, run

## Testing
```shell
./scripts/ci-build.sh
```

### Unit Tests
When changing this application, you will often end up modifying other packages above this folder in the project tree. You
should run all the unit tests in the repository. To run the unit tests, run the following command from the root,
`cluster-api`, folder of the repo.
To verify that the code still builds into docker images, run

```
go test ./...
```

```shell
./scripts/ci-make.sh
```

### Integration Tests
## Testing

When changing this application, you will often end up modifying other packages above this folder in the project tree.
You should run all the tests in the repository. To run the tests, run the following command from the root,
`cluster-api`, folder of the repo.

To run the integration tests, run the following command from this folder. The integration tests are for sanity checking
that clusterctl's basic functionality is working.
```shell
./scripts/ci-test.sh
```
```
go test -tags=integration -v
```
45 changes: 32 additions & 13 deletions clusterctl/README.md
@@ -6,35 +6,50 @@ Read the [experience doc here](https://docs.google.com/document/d/1-sYb3EdkRga49

## Getting Started

**Due to the [limitations](#Limitations) described below, you must currently compile and run a `clusterctl` binary
from your chosen [provider implementation](../README.md#provider-implementations) rather than using the binary from
this repository.**


### Prerequisites

1. Install [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
2. Install a [driver](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md) for minikube. For Linux, we recommend kvm2. For MacOS, we recommend VirtualBox.
2. Build the `clusterctl` tool

```bash
$ git clone https://github.com/kubernetes-sigs/cluster-api.git $GOPATH/src/sigs.k8s.io/cluster-api
$ git clone https://github.com/kubernetes-sigs/cluster-api $GOPATH/src/sigs.k8s.io/cluster-api
$ cd $GOPATH/src/sigs.k8s.io/cluster-api/clusterctl/
$ go build
```

### Limitations
TBD

`clusterctl` can only use a provider that is compiled in. As provider-specific code has been moved out
of this repository, running the `clusterctl` binary compiled from this repository isn't particularly useful.

Work is ongoing to rectify this issue, centered on removing the
[`ProviderDeployer` interface](https://github.com/kubernetes-sigs/cluster-api/blob/b90c541b315ecbac096fa371b4436d60ce5715a9/clusterctl/clusterdeployer/clusterdeployer.go#L33-L40)
from the `clusterdeployer` package. The two tracking issues for removing the two functions in the interface are
https://github.com/kubernetes-sigs/cluster-api/issues/158 and https://github.com/kubernetes-sigs/cluster-api/issues/160.

### Creating a cluster
1. Create the `cluster.yaml`, `machines.yaml`, `provider-components.yaml`, and `addons.yaml` files configured for your cluster. See the provider specific templates and generation tools at `$GOPATH/src/sigs.k8s.io/cluster-api/clusterctl/examples/<provider>`.
2. Create a cluster

```shell
clusterctl create cluster --provider [google/vsphere] -c cluster.yaml -m machines.yaml -p provider-components.yaml -a addons.yaml
```
1. Create the `cluster.yaml`, `machines.yaml`, `provider-components.yaml`, and `addons.yaml` files configured for your cluster.
See the provider specific templates and generation tools for your chosen [provider implementation](../README.md#provider-implementations).

To choose a specific minikube driver, please use the `--vm-driver` command line parameter. For example to use the kvm2 driver with clusterctl you woud add `--vm-driver kvm2`
1. Create a cluster:

```shell
./clusterctl create cluster --provider <provider> -c cluster.yaml -m machines.yaml -p provider-components.yaml -a addons.yaml
```

To choose a specific minikube driver, use the `--vm-driver` command line parameter. For example, to use the kvm2 driver with clusterctl you would add `--vm-driver kvm2`.

Additional advanced flags can be found via help.

```shell
clusterctl create cluster --help
./clusterctl create cluster --help
```

### Interacting with your cluster
@@ -50,7 +65,8 @@ $ kubectl --kubeconfig kubeconfig get machines -o yaml

#### Scaling your cluster

**NOT YET SUPPORTED!**
You can scale your cluster by adding additional individual Machines, or by adding a MachineSet or MachineDeployment
and changing the number of replicas.
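A minimal MachineSet sketch (assuming a v1alpha1 API shape that mirrors ReplicaSets; the labels and fields shown are illustrative):

```yaml
apiVersion: "cluster.k8s.io/v1alpha1"
kind: MachineSet
metadata:
  name: my-machineset
spec:
  replicas: 3  # scale the cluster by changing this value
  selector:
    matchLabels:
      set: node
  template:
    metadata:
      labels:
        set: node
    spec:
      versions:
        kubelet: 1.9.4
      providerConfig:
        value: {}  # provider-specific configuration goes here
```

Applying an updated `replicas` value (e.g. via `kubectl --kubeconfig kubeconfig edit machineset my-machineset`) should cause the controller to add or remove Machines accordingly.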

#### Upgrading your cluster

@@ -62,11 +78,14 @@ $ kubectl --kubeconfig kubeconfig get machines -o yaml

### Deleting a cluster

**NOT YET SUPPORTED!**
When you are ready to remove your cluster, you can use clusterctl to delete the cluster:

clusterctl does not yet support deletion, please see provider specific deletion guides:
```shell
./clusterctl delete cluster --kubeconfig kubeconfig
```

- [google](../cloud/google/README.md#Cluster-Deletion)
Please also check the documentation for your [provider implementation](../README.md#provider-implementations)
to determine if any additional steps need to be taken to completely clean up your cluster.

## Contributing

62 changes: 15 additions & 47 deletions docs/proposals/machine-api-proposal.md
@@ -9,21 +9,17 @@ and add optional machine management features to Kubernetes clusters.

This API strives to be able to add these capabilities:

1. A new Node can be created in a declarative way, including Kubernetes version
and container runtime version. It should also be able to specify
provider-specific information such as OS image, instance type, disk
configuration, etc., though this will not be portable.
1. A new Node can be created in a declarative way, including Kubernetes version.
It should also be able to specify provider-specific information such as OS image,
instance type, disk configuration, etc., though this will not be portable.

2. A specific Node can be deleted, freeing external resources associated with
1. A specific Node can be deleted, freeing external resources associated with
it.

3. A specific Node can have its kubelet version upgraded or downgraded in a
1. A specific Node can have its kubelet version upgraded or downgraded in a
declarative way\*.

4. A specific Node can have its container runtime changed, or its version
upgraded or downgraded, in a declarative way\*.

5. A specific Node can have its OS image upgraded or downgraded in a declarative
1. A specific Node can have its OS image upgraded or downgraded in a declarative
way\*.

\* It is an implementation detail of the provider if these operations are
@@ -43,10 +39,9 @@ with a new one matching the updated spec. If a Machine object is deleted, the
corresponding Node should have its external resources released by the
provider-specific controller, and should be deleted as well.

Fields like the kubelet version, the container runtime to use, and its version,
are modeled as fields on the Machine's spec. Any other information that is
provider-specific, though, is part of an opaque ProviderConfig string that is
not portable between different providers.
Fields like the kubelet version are modeled as fields on the Machine's spec.
Any other information that is provider-specific, though, is part of an opaque
ProviderConfig string that is not portable between different providers.
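As a sketch of how this looks in practice (the provider config type name and its fields here are hypothetical):

```yaml
# The generic Machine controller treats providerConfig as opaque; only
# the provider-specific controller deserializes it.
apiVersion: "cluster.k8s.io/v1alpha1"
kind: Machine
metadata:
  name: gce-node-1
spec:
  versions:
    kubelet: 1.9.4
  providerConfig:
    value:
      # A serialized API object owned by the provider (names assumed):
      apiVersion: "gceproviderconfig/v1alpha1"
      kind: GCEMachineProviderConfig
      zone: us-central1-f
      machineType: n1-standard-1
```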

The ProviderConfig is recommended to be a serialized API object in a format
owned by that provider, akin to the [Component Config](https://goo.gl/opSc2o)
@@ -98,35 +93,7 @@ update, or if a full Node replacement is necessary.

## Omitted Capabilities

* A scalable representation of a group of nodes

Given the existing targeted capabilities, this functionality could easily be
built client-side via label selectors to find groups of Nodes and using (1) and
(2) to add or delete instances to simulate this scaling.

It is natural to extend this API in the future to introduce the concepts of
MachineSets and MachineDeployments that mirror ReplicaSets and Deployments, but
an initial goal is to first solidify the definition and behavior of a single
Machine, similar to how Kubernetes first solidified Pods.

A nice property of this proposal is that if provider controllers are written
solely against Machines, the concept of MachineSets can be implemented in a
provider-agnostic way with a generic controller that uses the MachineSet
template to create and delete Machine instances. All Machine-based provider
controllers will continue to work, and will get full MachineSet functionality
for free without modification. Similarly, a MachineDeployment controller could
then be introduced to generically operate on MachineSets without having to know
about Machines or providers. Provider-specific controllers that are actually
responsible for creating and deleting hosts would only ever have to worry about
individual Machine objects, unless they explicitly opt into watching
higher-level APIs like MachineSets in order to take advantage of
provider-specific features like AutoScalingGroups or Managed Instance Groups.

However, this leaves the barrier to entry very low for adding new providers:
simply implement creation and deletion of individual Nodes, and get Sets and
Deployments for free.

* A provider-agnostic mechanism to request new nodes
### A provider-agnostic mechanism to request new nodes

In this proposal, only certain attributes of Machines are provider-agnostic and
can be operated on in a generic way. In other iterations of similar proposals,
@@ -136,9 +103,10 @@ support usecases around automated Machine scaling. This introduced a lot of
upfront complexity in the API proposals.

This proposal starts much more minimalistic, but doesn't preclude the option of
extending the API to support these advanced concepts in the future.
extending the API to support these advanced concepts in the future (see
https://github.com/kubernetes-sigs/cluster-api/issues/22).

* Dynamic API endpoint
### Dynamic API endpoint

This proposal lacks the ability to declaratively update the kube-apiserver
endpoint for the kubelet to register with. This feature could be added later,
@@ -150,7 +118,7 @@ endpoint into any hosts it provisions.

## Conditions

Brian Grant and Eric Tune have indicated that the API pattern of having
Brian Grant (@bgrant0607) and Eric Tune (@erictune) have indicated that the API pattern of having
"Conditions" lists in object statuses is soon to be deprecated. These have
generally been used as a timeline of state transitions for the object's
reconciliation, and are difficult to consume for clients that just want a meaningful
@@ -161,4 +129,4 @@ revisit the specifics when new patterns start to emerge in core.

## Types

Please see the full types [here](types.go).
Please see the full types [here](https://github.com/kubernetes-sigs/cluster-api/blob/master/pkg/apis/cluster/v1alpha1/machine_types.go).
2 changes: 1 addition & 1 deletion tools/repair/README.md
@@ -13,7 +13,7 @@ $ go build
```

## Run
1) Create a cluster using the `gcp-deployer` tool.
1) Create a cluster using the `clusterctl` tool.
2) To do a dry run of detecting broken nodes and seeing what needs to be
repaired, run `./repair --dryrun true`.
3) To actually repair the nodes in cluster, run `./repair` without the
4 changes: 2 additions & 2 deletions tools/upgrader/README.md
@@ -14,5 +14,5 @@ $ go build
```

## Running
1) First, create a cluster using the `gcp-deployer` tool (the default Kubernetes version should be `1.8.3`)
2) To update the entire cluster to `v1.9.4`, run `./upgrader -v 1.9.4`
1) First, create a cluster using the `clusterctl` tool (the default Kubernetes version should be `1.9.4`)
2) To update the entire cluster to `v1.9.5`, run `./upgrader -v 1.9.5`
