Update kubeadm ha installation for 1.12 (#10264)
* Update kubeadm ha installation for 1.12

Signed-off-by: Chuck Ha <[email protected]>

* update stable version

Signed-off-by: Chuck Ha <[email protected]>

* Update stacked control plane for v1.12 (#2)

* use v1alpha3

Signed-off-by: Chuck Ha <[email protected]>

* more v1alpha3 (#4)

* updates

Signed-off-by: Chuck Ha <[email protected]>
chuckha authored and k8s-ci-robot committed Sep 20, 2018
1 parent 035fc88 commit ce69248
Showing 1 changed file with 72 additions and 61 deletions: content/en/docs/setup/independent/high-availability.md
@@ -16,7 +16,7 @@ and control plane nodes are co-located.
- With an external etcd cluster. This approach requires more infrastructure. The
control plane nodes and etcd members are separated.

Your clusters must run Kubernetes version 1.12 or later. You should also be aware that
setting up HA clusters with kubeadm is still experimental. You might encounter issues
with upgrading your clusters, for example. We encourage you to try either approach,
and provide feedback.
@@ -139,15 +139,18 @@ different configuration.
### Bootstrap the first stacked control plane node
{{< note >}}
**Note**: Optionally replace `stable` with a different version of Kubernetes, for example `v1.12.0`.
{{< /note >}}
1. Create a `kubeadm-config.yaml` template file:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
local:
extraArgs:
@@ -166,9 +169,6 @@ different configuration.
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"
1. Replace the following variables in the template with the appropriate
values for your cluster:
@@ -224,13 +224,12 @@ done

1. Create a second, different `kubeadm-config.yaml` template file:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
local:
extraArgs:
@@ -250,9 +249,6 @@ done
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"


1. Replace the following variables in the template with the appropriate values for your cluster:

- `LOAD_BALANCER_DNS`
@@ -296,7 +292,7 @@ done
export CP1_IP=10.0.0.8
export CP1_HOSTNAME=cp1
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
kubeadm alpha phase etcd local --config kubeadm-config.yaml
```
@@ -317,13 +313,12 @@ done

1. Create a third, different `kubeadm-config.yaml` template file:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
local:
extraArgs:
@@ -343,9 +338,6 @@ done
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"


1. Replace the following variables in the template with the appropriate values for your cluster:

- `LOAD_BALANCER_DNS`
@@ -391,7 +383,7 @@ done
export CP2_IP=10.0.0.9
export CP2_HOSTNAME=cp2
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
kubeadm alpha phase etcd local --config kubeadm-config.yaml
```
@@ -411,39 +403,48 @@ done
- Follow [these instructions](/docs/setup/independent/setup-ha-etcd-with-kubeadm/)
to set up the etcd cluster.

#### Copy required files from an etcd node to all control plane nodes

In the following example, replace `USER` and `CONTROL_PLANE_HOSTS` with values
appropriate for your environment.

```sh
# Make a list of required etcd certificate files
cat << EOF > etcd-pki-files.txt
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
EOF
# create the archive
tar -czf etcd-pki.tar.gz -T etcd-pki-files.txt
# copy the archive to the control plane nodes
USER=ubuntu
CONTROL_PLANE_HOSTS="10.0.0.7 10.0.0.8 10.0.0.9"
for host in $CONTROL_PLANE_HOSTS; do
scp etcd-pki.tar.gz "${USER}"@$host:
done
```
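Before copying the archive around, it can help to list its contents and confirm all three files were picked up. A self-contained sketch follows; it builds a scratch copy of the expected layout in a temporary directory rather than touching the real `/etc/kubernetes`, so the paths and file contents are stand-ins:

```shell
#!/bin/sh
set -e
# Build a scratch tree mimicking the expected pki layout (stand-in contents).
WORK=$(mktemp -d)
mkdir -p "$WORK/etc/kubernetes/pki/etcd"
echo fake-ca  > "$WORK/etc/kubernetes/pki/etcd/ca.crt"
echo fake-crt > "$WORK/etc/kubernetes/pki/apiserver-etcd-client.crt"
echo fake-key > "$WORK/etc/kubernetes/pki/apiserver-etcd-client.key"

# Archive the three files, then list the archive to verify they are all present.
cd "$WORK"
tar -czf etcd-pki.tar.gz \
  etc/kubernetes/pki/etcd/ca.crt \
  etc/kubernetes/pki/apiserver-etcd-client.crt \
  etc/kubernetes/pki/apiserver-etcd-client.key
tar -tzf etcd-pki.tar.gz
```

Because tar stores the paths without the leading `/`, extracting later with `--strip-components=3` drops the `etc/kubernetes/pki` prefix, so `etcd/ca.crt` and the two client files land directly under the target `/etc/kubernetes/pki`.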

### Set up the first control plane node

1. Extract the etcd certificates:

tar -xzf etcd-pki.tar.gz -C /etc/kubernetes/pki --strip-components=3

1. Create a `kubeadm-config.yaml`:

{{< note >}}
**Note**: Optionally replace `stable` with a different version of Kubernetes, for example `v1.12.0`.
{{< /note >}}

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
etcd:
external:
endpoints:
@@ -457,9 +458,6 @@ done
# This CIDR is a Calico default. Substitute or remove for your CNI provider.
podSubnet: "192.168.0.0/16"


1. Replace the following variables in the template with the appropriate values for your cluster:

- `LOAD_BALANCER_DNS`
@@ -469,11 +467,12 @@ done
- `ETCD_2_IP`

1. Run `kubeadm init --config kubeadm-config.yaml`
1. Copy the output join command.
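Filling in the template placeholders can be scripted instead of edited by hand. A minimal sketch using `sed`, where the load balancer name and port are hypothetical examples rather than values from this guide:

```shell
#!/bin/sh
set -e
# Hypothetical values for this sketch.
LB_DNS=lb.example.com
LB_PORT=6443

# A trimmed-down template containing the placeholders used in this guide.
cat << 'EOF' > kubeadm-config.yaml.tpl
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "LOAD_BALANCER_DNS"
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
EOF

# Substitute every occurrence of each placeholder.
sed -e "s/LOAD_BALANCER_DNS/${LB_DNS}/g" \
    -e "s/LOAD_BALANCER_PORT/${LB_PORT}/g" \
    kubeadm-config.yaml.tpl > kubeadm-config.yaml
grep controlPlaneEndpoint kubeadm-config.yaml
# → controlPlaneEndpoint: "lb.example.com:6443"
```

The same substitution approach works for the `CP*_IP`, `CP*_HOSTNAME`, and `ETCD_*_IP` placeholders in the other templates.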

### Copy required files to the correct locations

The following PKI files were created during the `kubeadm init` step and must be shared with
all other control plane nodes:

- `/etc/kubernetes/pki/ca.crt`
- `/etc/kubernetes/pki/ca.key`
@@ -486,24 +485,34 @@ In the following example, replace the list of
`CONTROL_PLANE_IPS` values with the IP addresses of the other control plane nodes.

```sh
# Make a list of required Kubernetes certificate files
cat << EOF > certificate_files.txt
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
EOF
# create the archive
tar -czf control-plane-certificates.tar.gz -T certificate_files.txt
USER=ubuntu # customizable
CONTROL_PLANE_IPS="10.0.0.7 10.0.0.8"
for host in ${CONTROL_PLANE_IPS}; do
scp control-plane-certificates.tar.gz "${USER}"@$host:
done
```

{{< note >}}
**Note**: Remember that your config may differ from this example.
{{< /note >}}

### Set up the other control plane nodes

1. Extract the required certificates:

mkdir -p /etc/kubernetes/pki
tar -xzf etcd-pki.tar.gz -C /etc/kubernetes/pki --strip-components 3
tar -xzf control-plane-certificates.tar.gz -C /etc/kubernetes/pki --strip-components 3

1. Verify the location of the copied files.
Your `/etc/kubernetes` directory should look like this:

@@ -517,8 +526,10 @@ done
- `/etc/kubernetes/pki/sa.pub`
- `/etc/kubernetes/pki/etcd/ca.crt`

1. Run the `kubeadm join` command that you copied earlier, adding the `--experimental-control-plane` flag.
The final command will look something like this:

kubeadm join ha.k8s.example.com:6443 --token 5ynki1.3erp9i3yo7gqg1nv --discovery-token-ca-cert-hash sha256:a00055bd8c710a9906a3d91b87ea02976334e1247936ac061d867a0f014ecd81 --experimental-control-plane
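Since the only difference from a worker join is the extra flag, the copied command can be reused verbatim with the flag appended. A sketch, where the endpoint, token, and hash are placeholders rather than real credentials:

```shell
#!/bin/sh
# Placeholder join command as printed by `kubeadm init` (values are fake).
JOIN_CMD='kubeadm join ha.k8s.example.com:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0000000000000000000000000000000000000000000000000000000000000000'

# Append the control-plane flag rather than retyping the whole command.
CP_JOIN_CMD="${JOIN_CMD} --experimental-control-plane"
echo "${CP_JOIN_CMD}"
```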

## Common tasks after bootstrapping control plane

