diff --git a/content/en/docs/setup/multiple-zones.md b/content/en/docs/setup/multiple-zones.md
index da31844a01b9a..7b1af187acbe1 100644
--- a/content/en/docs/setup/multiple-zones.md
+++ b/content/en/docs/setup/multiple-zones.md
@@ -5,8 +5,17 @@ reviewers:
 - quinton-hoole
 title: Running in Multiple Zones
 weight: 90
+content_template: templates/concept
 ---
 
+{{% capture overview %}}
+
+This page describes how to run a cluster in multiple zones.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
 ## Introduction
 
 Kubernetes 1.2 adds support for running a single cluster in multiple failure zones
@@ -27,8 +36,6 @@
 add similar support for other clouds or even bare metal, by simply
 arranging for the appropriate labels to be added to nodes and volumes).
 
-{{< toc >}}
-
 ## Functionality
 
 When nodes are started, the kubelet automatically adds labels to them with
@@ -122,9 +129,12 @@
 labels are `failure-domain.beta.kubernetes.io/region` for the region, and
 `failure-domain.beta.kubernetes.io/zone` for the zone:
 
 ```shell
-> kubectl get nodes --show-labels
+kubectl get nodes --show-labels
+```
+The output is similar to this:
+```shell
 NAME                     STATUS                     ROLES    AGE   VERSION   LABELS
 kubernetes-master        Ready,SchedulingDisabled            6m    v1.13.0   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
 kubernetes-minion-87j9   Ready                               6m    v1.13.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-87j9
@@ -158,8 +168,12 @@
 View the nodes again; 3 more nodes should have launched and be tagged
 in us-central1-b:
 
 ```shell
-> kubectl get nodes --show-labels
+kubectl get nodes --show-labels
+```
+
+The output is similar to this:
+```shell
 NAME                     STATUS                     ROLES    AGE   VERSION   LABELS
 kubernetes-master        Ready,SchedulingDisabled            16m   v1.13.0   beta.kubernetes.io/instance-type=n1-standard-1,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-master
 kubernetes-minion-281d   Ready                               2m    v1.13.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
@@ -211,7 +225,12 @@ was addressed in 1.3+.
 Now let's validate that Kubernetes automatically labeled the zone & region the PV was created in.
 
 ```shell
-> kubectl get pv --show-labels
+kubectl get pv --show-labels
+```
+
+The output is similar to this:
+
+```shell
 NAME           CAPACITY   ACCESSMODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE   LABELS
 pv-gce-mj4gm   5Gi        RWO           Retain           Bound    default/claim1   manual                  46s   failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a
 ```
@@ -244,9 +263,20 @@ Note that the pod was automatically created in the same zone as the volume, as
 cross-zone attachments are not generally permitted by cloud providers:
 
 ```shell
-> kubectl describe pod mypod | grep Node
+kubectl describe pod mypod | grep Node
+```
+
+```shell
 Node:        kubernetes-minion-9vlv/10.240.0.5
-> kubectl get node kubernetes-minion-9vlv --show-labels
+```
+
+And check node labels:
+
+```shell
+kubectl get node kubernetes-minion-9vlv --show-labels
+```
+
+```shell
 NAME                     STATUS    AGE   VERSION          LABELS
 kubernetes-minion-9vlv   Ready     22m   v1.6.0+fff5156   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
 ```
@@ -283,12 +313,20 @@ find kubernetes/examples/guestbook-go/ -name '*.json' | xargs -I {} kubectl crea
 The pods should be spread across all 3 zones:
 
 ```shell
-> kubectl describe pod -l app=guestbook | grep Node
+kubectl describe pod -l app=guestbook | grep Node
+```
+
+```shell
 Node:        kubernetes-minion-9vlv/10.240.0.5
 Node:        kubernetes-minion-281d/10.240.0.8
 Node:        kubernetes-minion-olsh/10.240.0.11
+```
 
- > kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
+```shell
+kubectl get node kubernetes-minion-9vlv kubernetes-minion-281d kubernetes-minion-olsh --show-labels
+```
+
+```shell
 NAME                     STATUS   ROLES   AGE   VERSION   LABELS
 kubernetes-minion-9vlv   Ready            34m   v1.13.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv
 kubernetes-minion-281d   Ready            20m   v1.13.0   beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-b,kubernetes.io/hostname=kubernetes-minion-281d
@@ -300,15 +338,42 @@ Load-balancers span all zones in a cluster; the guestbook-go example includes an
 example load-balanced service:
 
 ```shell
-> kubectl describe service guestbook | grep LoadBalancer.Ingress
+kubectl describe service guestbook | grep LoadBalancer.Ingress
+```
+
+The output is similar to this:
+
+```shell
 LoadBalancer Ingress:   130.211.126.21
+```
+
+Set the above IP:
+
+```shell
+export IP=130.211.126.21
+```
+
+Explore with curl via IP:
 
-> ip=130.211.126.21
+```shell
+curl -s http://${IP}:3000/env | grep HOSTNAME
+```
 
-> curl -s http://${ip}:3000/env | grep HOSTNAME
+The output is similar to this:
+
+```shell
   "HOSTNAME": "guestbook-44sep",
+```
 
-> (for i in `seq 20`; do curl -s http://${ip}:3000/env | grep HOSTNAME; done) | sort | uniq
+Again, explore multiple times:
+
+```shell
+(for i in `seq 20`; do curl -s http://${IP}:3000/env | grep HOSTNAME; done) | sort | uniq
+```
+
+The output is similar to this:
+
+```shell
   "HOSTNAME": "guestbook-44sep",
   "HOSTNAME": "guestbook-hum5n",
   "HOSTNAME": "guestbook-ppm40",
@@ -335,3 +400,5 @@ KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2c k
 KUBERNETES_PROVIDER=aws KUBE_USE_EXISTING_MASTER=true KUBE_AWS_ZONE=us-west-2b kubernetes/cluster/kube-down.sh
 KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a kubernetes/cluster/kube-down.sh
 ```
+
+{{% /capture %}}
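
Aside from the patch itself: the zone and region labels shown in the `--show-labels` output are plain comma-separated `key=value` pairs, so they can be post-processed with standard shell tools. A minimal sketch, with the label string hard-coded from the sample output above so it runs without a live cluster:

```shell
# Extract the zone from a node's comma-separated label list, as printed by
# `kubectl get nodes --show-labels`. The label string below is copied from
# the sample output above rather than fetched from a real cluster.
labels="beta.kubernetes.io/instance-type=n1-standard-2,failure-domain.beta.kubernetes.io/region=us-central1,failure-domain.beta.kubernetes.io/zone=us-central1-a,kubernetes.io/hostname=kubernetes-minion-9vlv"

# One label per line, keep only the zone label, then strip the key.
zone=$(printf '%s\n' "$labels" | tr ',' '\n' \
  | grep '^failure-domain.beta.kubernetes.io/zone=' | cut -d= -f2)
echo "$zone"    # prints: us-central1-a
```

On a live cluster, the same extraction can be done server-side with a label selector, e.g. `kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=us-central1-a`.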