From f120439575e852a69f1f5a11d11f8312ccbb6185 Mon Sep 17 00:00:00 2001 From: Alexey Pyltsyn Date: Mon, 28 Oct 2019 04:49:25 +0300 Subject: [PATCH] Improve Concepts section (#17013) Signed-off-by: Alexey Pyltsyn --- .../configuration/scheduling-framework.md | 2 +- .../containers/container-lifecycle-hooks.md | 2 +- .../compute-storage-net/device-plugins.md | 2 +- .../extend-kubernetes/extend-cluster.md | 2 +- .../en/docs/concepts/policy/limit-range.md | 129 +++++++++--------- .../connect-applications-service.md | 8 +- .../workloads/controllers/deployment.md | 26 ++-- 7 files changed, 89 insertions(+), 82 deletions(-) diff --git a/content/en/docs/concepts/configuration/scheduling-framework.md b/content/en/docs/concepts/configuration/scheduling-framework.md index 7e1fd970e3ad0..58fb36b192307 100644 --- a/content/en/docs/concepts/configuration/scheduling-framework.md +++ b/content/en/docs/concepts/configuration/scheduling-framework.md @@ -10,7 +10,7 @@ weight: 70 {{< feature-state for_k8s_version="1.15" state="alpha" >}} -The scheduling framework is a new plugable architecture for Kubernetes Scheduler +The scheduling framework is a new pluggable architecture for Kubernetes Scheduler that makes scheduler customizations easy. It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md index 08d855732fabb..3d4f81152d20e 100644 --- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md @@ -99,7 +99,7 @@ Here is some example output of events from running this command: ``` Events: - FirstSeen LastSeen Count From SubobjectPath Type Reason Message + FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md index 8e4b9434cf9b7..b731b8e02f025 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins.md @@ -13,7 +13,7 @@ Kubernetes provides a [device plugin framework](https://github.com/kubernetes/co that you can use to advertise system hardware resources to the {{< glossary_tooltip term_id="kubelet" >}}. -Instead of customising the code for Kubernetes itself, vendors can implement a +Instead of customizing the code for Kubernetes itself, vendors can implement a device plugin that you deploy either manually or as a {{< glossary_tooltip term_id="daemonset" >}}. 
The targeted devices include GPUs, high-performance NICs, FPGAs, InfiniBand adapters,
and other similar computing resources that may require vendor specific initialization
diff --git a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md
index d5ab77a28b55f..b75d37e335821 100644
--- a/content/en/docs/concepts/extend-kubernetes/extend-cluster.md
+++ b/content/en/docs/concepts/extend-kubernetes/extend-cluster.md
@@ -147,7 +147,7 @@ Kubernetes provides several built-in authentication methods, and an [Authenticat
 
 ### Authorization
 
-- [Authorization](/docs/reference/access-authn-authz/webhook/) determines whether specific users can read, write, and do other operations on API resources. It just works at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, and [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make an authorization decision.
+[Authorization](/docs/reference/access-authn-authz/authorization/) determines whether specific users can read, write, and do other operations on API resources. It works only at the level of whole resources -- it doesn't discriminate based on arbitrary object fields. If the built-in authorization options don't meet your needs, an [Authorization webhook](/docs/reference/access-authn-authz/webhook/) allows calling out to user-provided code to make the authorization decision.
 
 ### Dynamic Admission Control
 
diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md
index 5fe14e18dc946..b49c012ca42aa 100644
--- a/content/en/docs/concepts/policy/limit-range.md
+++ b/content/en/docs/concepts/policy/limit-range.md
@@ -24,7 +24,7 @@ A limit range, defined by a `LimitRange` object, provides constraints that can:
 - Enforce a ratio between request and limit for a resource in a namespace.
 - Set default request/limit for compute resources in a namespace and automatically inject them into Containers at runtime.
 
-## Enabling Limit Range 
+## Enabling Limit Range
 
 Limit Range support is enabled by default for many Kubernetes distributions. It is
 enabled when the apiserver `--enable-admission-plugins=` flag has the `LimitRanger` admission controller as
@@ -40,8 +40,8 @@ A limit range is enforced in a particular namespace when there is a
 - The `LimitRanger` admission controller enforces default limits for all Pods and Containers that do not set compute resource requirements, and tracks usage to ensure it does not exceed the resource minimum, maximum, and ratio defined in any `LimitRange` present in the namespace.
 - If creating or updating a resource (Pod, Container, PersistentVolumeClaim) violates a limit range constraint, the request to the API server will fail with HTTP status code `403 FORBIDDEN` and a message explaining the constraint that would have been violated.
 - If a limit range is activated in a namespace for compute resources like `cpu` and `memory`, users must specify
-  requests or limits for those values; otherwise, the system may reject pod creation.
-- LimitRange validations occurs only at Pod Admission stage, not on Running pods.
+  requests or limits for those values; otherwise, the system may reject Pod creation.
+- LimitRange validation occurs only at the Pod admission stage, not on running Pods (see the sketch below).
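+For orientation, here is a minimal sketch of what a `LimitRange` object looks like.
+The memory values mirror the `limit-mem-cpu-per-container` example used later on this
+page; the object name is illustrative, and CPU is omitted for brevity:
+
+```yaml
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: mem-limit-range-sketch  # illustrative name
+spec:
+  limits:
+  - type: Container        # these constraints apply to each Container
+    max:
+      memory: 1Gi          # no container may set a memory limit above this
+    min:
+      memory: 99Mi         # no container may set a memory request below this
+    default:
+      memory: 900Mi        # injected as the limit when a container sets none
+    defaultRequest:
+      memory: 111Mi        # injected as the request when a container sets none
+```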
Examples of policies that could be created using limit range are:
@@ -54,19 +54,19 @@ there may be contention for resources; The Containers or Pods will not be creat
 
 Neither contention nor changes to a LimitRange will affect already created resources.
 
-## Limiting Container compute resources 
+## Limiting Container compute resources
 
 The following section discusses the creation of a LimitRange acting at Container level.
-A Pod with 04 containers is first created; each container within the Pod has a specific `spec.resource` configuration
+A Pod with four containers is first created; each container within the Pod has a specific `spec.resources` configuration, and
 each container within the pod is handled differently by the LimitRanger admission controller.
 
- Create a namespace `limitrange-demo` using the following kubectl command
+Create a namespace `limitrange-demo` using the following kubectl command:
 
 ```shell
 kubectl create namespace limitrange-demo
 ```
 
-To avoid passing the target limitrange-demo in your kubectl commands, change your context with the following command
+To avoid passing the target `limitrange-demo` namespace in your kubectl commands, change your context with the following command:
 
 ```shell
 kubectl config set-context --current --namespace=limitrange-demo
@@ -77,16 +77,15 @@ Here is the configuration file for a LimitRange object:
 This object defines minimum and maximum Memory/CPU limits, default CPU/Memory requests, and default limits for CPU/Memory resources to be applied to containers.
 
-Create the `limit-mem-cpu-per-container` LimitRange in the `limitrange-demo` namespace with the following kubectl command.
+Create the `limit-mem-cpu-per-container` LimitRange in the `limitrange-demo` namespace with the following kubectl command:
+
 ```shell
 kubectl create -f https://k8s.io/examples/admin/resource/limit-mem-cpu-container.yaml -n limitrange-demo
 ```
 
- ```shell
- kubectl describe limitrange/limit-mem-cpu-per-container -n limitrange-demo
- ```
-
+```shell
+kubectl describe limitrange/limit-mem-cpu-per-container -n limitrange-demo
+```
 
 ```shell
 Type        Resource  Min   Max   Default Request  Default Limit  Max Limit/Request Ratio
@@ -95,21 +94,20 @@ Container   cpu       100m  800m  110m             700m           -
 Container   memory    99Mi  1Gi   111Mi            900Mi          -
 ```
 
-
-Here is the configuration file for a Pod with 04 containers to demonstrate LimitRange features :
+Here is the configuration file for a Pod with four containers to demonstrate LimitRange features:
 
 {{< codenew file="admin/resource/limit-range-pod-1.yaml" >}}
 
-Create the `busybox1` Pod :
+Create the `busybox1` Pod:
 
 ```shell
 kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-1.yaml -n limitrange-demo
 ```
-### Container spec with valid CPU/Memory requests and limits
-View the `busybox-cnt01` resource configuration
 
-```shell 
+### Container spec with valid CPU/Memory requests and limits
+
+View the `busybox-cnt01` resource configuration:
+
+```shell
 kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[0].resources"
 ```
 
@@ -127,7 +125,7 @@ kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[0].re
 ```
 
 - The `busybox-cnt01` Container inside the `busybox1` Pod defined `requests.cpu=100m` and `requests.memory=100Mi`.
-- `100m <= 500m <= 800m` , The container cpu limit (500m) falls inside the authorized CPU limit range.
+- `100m <= 500m <= 800m`, the container CPU limit (500m) falls inside the authorized CPU limit range.
 - `99Mi <= 200Mi <= 1Gi`, the container memory limit (200Mi) falls inside the authorized memory limit range.
 - There is no request/limit ratio validation for CPU/Memory, thus the container is valid and created (its declared `resources` stanza is sketched below).
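+As a cross-check, here is a sketch of the `resources` stanza that `busybox-cnt01`
+declares, reconstructed from the values discussed above (the authoritative version
+lives in `limit-range-pod-1.yaml`):
+
+```yaml
+resources:
+  requests:
+    cpu: 100m       # at the LimitRange minimum (100m)
+    memory: 100Mi   # above the LimitRange minimum (99Mi)
+  limits:
+    cpu: 500m       # below the LimitRange maximum (800m)
+    memory: 200Mi   # below the LimitRange maximum (1Gi)
+```
+
+Because both requests and limits are declared, the LimitRanger admission controller
+injects no defaults here; it only validates the declared values against the ranges.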
@@ -136,7 +134,7 @@ kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[1].re
 
 View the `busybox-cnt02` resource configuration:
 
-```shell 
+```shell
 kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[1].resources"
 ```
 
@@ -154,7 +152,7 @@ kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[1].re
 ```
 
 - The `busybox-cnt02` Container inside the `busybox1` Pod defined `requests.cpu=100m` and `requests.memory=100Mi`, but no limits for CPU and memory.
 - The container does not have a `limits` section, so the default limits defined in the `limit-mem-cpu-per-container` LimitRange object are injected into this container: `limits.cpu=700m` and `limits.memory=900Mi`.
-- `100m <= 700m <= 800m` , The container cpu limit (700m) falls inside the authorized CPU limit range.
+- `100m <= 700m <= 800m`, the container CPU limit (700m) falls inside the authorized CPU limit range.
 - `99Mi <= 900Mi <= 1Gi`, the container memory limit (900Mi) falls inside the authorized memory limit range.
 - No request/limit ratio is set, thus the container is valid and created.
 
 
 ### Container spec with valid CPU/Memory limits but no requests
 
 View the `busybox-cnt03` resource configuration:
-```shell 
+```shell
 kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[2].resources"
 ```
-```json 
+```json
 {
   "limits": {
     "cpu": "500m",
@@ -180,18 +178,19 @@ kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[2].re
 
 - The `busybox-cnt03` Container inside the `busybox1` Pod defined `limits.cpu=500m` and `limits.memory=200Mi` but no `requests` for CPU and memory.
 - The container does not define a `requests` section. In this case the `defaultRequest` from the `limit-mem-cpu-per-container` LimitRange is not used; instead, the limits defined by the container are copied as its requests: `requests.cpu=500m` and `requests.memory=200Mi`.
-- `100m <= 500m <= 800m` , The container cpu limit (500m) falls inside the authorized CPU limit range.
-- `99Mi <= 200Mi <= 1Gi` , The container memory limit (200Mi) falls inside the authorized Memory limit range.
+- `100m <= 500m <= 800m`, the container CPU limit (500m) falls inside the authorized CPU limit range.
+- `99Mi <= 200Mi <= 1Gi`, the container memory limit (200Mi) falls inside the authorized memory limit range.
 - No request/limit ratio is set, thus the container is valid and created.
 
-### Container spec with no CPU/Memory requests/limits
-View the `busybox-cnt04` resource configuration
-```shell 
+### Container spec with no CPU/Memory requests/limits
+
+View the `busybox-cnt04` resource configuration:
+
+```shell
 kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[3].resources"
 ```
-```json 
+
+```json
 {
   "limits": {
     "cpu": "700m",
@@ -205,29 +204,34 @@ kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[3].re
 ```
 
 - The `busybox-cnt04` Container inside `busybox1` defines neither `limits` nor `requests`.
-- The container do not define a limit section, the default limit defined in the limit-mem-cpu-per-container LimitRange is used to fill its request
+- The container does not define a `limits` section, so the default limit defined in the `limit-mem-cpu-per-container` LimitRange is used to fill its limits section:
 `limits.cpu=700m` and `limits.memory=900Mi`.
- The container does not define a `requests` section, so the `defaultRequest` defined in the `limit-mem-cpu-per-container` LimitRange is used to fill its requests section: `requests.cpu=110m` and `requests.memory=111Mi`.
-- `100m <= 700m <= 800m` , The container cpu limit (700m) falls inside the authorized CPU limit range.
+- `100m <= 700m <= 800m`, the container CPU limit (700m) falls inside the authorized CPU limit range.
- `99Mi <= 900Mi <= 1Gi`, the container memory limit (900Mi) falls inside the authorized memory limit range.
- No request/limit ratio is set, thus the container is valid and created.

All containers defined in the `busybox1` Pod passed LimitRange validation, thus the Pod is valid and created in the namespace.

-## Limiting Pod compute resources
+## Limiting Pod compute resources
+
 The following section discusses how to constrain resources at the Pod level.
 
 {{< codenew file="admin/resource/limit-mem-cpu-pod.yaml" >}}
 
-Without having to delete `busybox1` Pod, create the `limit-mem-cpu-pod` LimitRange in the `limitrange-demo` namespace
+Without having to delete the `busybox1` Pod, create the `limit-mem-cpu-pod` LimitRange in the `limitrange-demo` namespace:
+
 ```shell
 kubectl apply -f https://k8s.io/examples/admin/resource/limit-mem-cpu-pod.yaml -n limitrange-demo
 ```
-The limitrange is created and limits CPU to 2 Core and Memory to 2Gi per Pod.
-```shell
+The LimitRange is created and limits CPU to 2 cores and memory to 2Gi per Pod:
+
+```shell
 limitrange/limit-mem-cpu-per-pod created
 ```
-Describe the `limit-mem-cpu-per-pod` limit object using the following kubectl command
+
+Describe the `limit-mem-cpu-per-pod` limit object using the following kubectl command:
+
 ```shell
 kubectl describe limitrange/limit-mem-cpu-per-pod
 ```
@@ -239,51 +243,56 @@ Type Resource Min Max Default Request Default Limit Max Limit/Reques
 ----  --------  ---  ---  ---------------  -------------  -----------------------
 Pod   cpu       -    2    -                -              -
 Pod   memory    -    2Gi  -                -              -
-```
-Now create the `busybox2` Pod.
+```
+
+Now create the `busybox2` Pod:
 
 {{< codenew file="admin/resource/limit-range-pod-2.yaml" >}}
 
 ```shell
 kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-2.yaml -n limitrange-demo
 ```
-The `busybox2` Pod definition is identical to `busybox1` but an error is reported since Pod's resources are now limited
+
+The `busybox2` Pod definition is identical to `busybox1`, but an error is reported since the Pod's resources are now limited:
+
 ```shell
 Error from server (Forbidden): error when creating "limit-range-pod-2.yaml": pods "busybox2" is forbidden: [maximum cpu usage per Pod is 2, but limit is 2400m., maximum memory usage per Pod is 2Gi, but limit is 2306867200.]
 ```
 
 ```shell
-kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[].resources.limits.memory"
+kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[].resources.limits.memory"
+```
+
+```shell
 "200Mi"
 "900Mi"
 "200Mi"
 "900Mi"
 ```
 
-`busybox2` Pod will not be admitted on the cluster since the total memory limit of its container is greater than the limit defined in the LimitRange.
-`busybox1` will not be evicted since it was created and admitted on the cluster before the LimitRange creation.
+The `busybox2` Pod will not be admitted to the cluster, since the total memory limit of its containers is greater than the limit defined in the LimitRange (the arithmetic is worked out below).
+`busybox1` will not be evicted, since it was created and admitted to the cluster before the LimitRange was created.
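+As a sanity check, here is the arithmetic behind the rejection, using the
+per-container limits established above for `busybox1` (`busybox2` is identical):
+
+```yaml
+# Per-container limits in busybox2 (cnt01..cnt04):
+#   cpu:    500m + 700m + 500m + 700m     = 2400m  > 2 (Pod max cpu)
+#   memory: 200Mi + 900Mi + 200Mi + 900Mi = 2200Mi
+#   2200Mi = 2200 * 1024 * 1024 bytes     = 2306867200 > 2Gi (Pod max memory)
+# Both totals exceed the Pod-level maximums, so admission fails.
+```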
## Limiting Storage resources
 
-You can enforce minimum and maximum size of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested by each PersistentVolumeClaim in a namespace using a LimitRange.
+You can enforce the minimum and maximum size of [storage resources](/docs/concepts/storage/persistent-volumes/) that can be requested by each PersistentVolumeClaim in a namespace using a LimitRange:
 
 {{< codenew file="admin/resource/storagelimits.yaml" >}}
 
-Apply the YAML using `kubectl create`.
+Apply the YAML using `kubectl create`:
 
 ```shell
-kubectl create -f https://k8s.io/examples/admin/resource/storagelimits.yaml -n limitrange-demo
+kubectl create -f https://k8s.io/examples/admin/resource/storagelimits.yaml -n limitrange-demo
 ```
 
 ```shell
 limitrange/storagelimits created
 ```
-Describe the created object,
+
+Describe the created object:
 
 ```shell
-kubectl describe limits/storagelimits
+kubectl describe limits/storagelimits
 ```
-the output should look like
+
+The output should look like:
 
 ```shell
 Name:  storagelimits
@@ -297,31 +306,31 @@ PersistentVolumeClaim storage 1Gi 2Gi - - -
 
 ```shell
 kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-lower.yaml -n limitrange-demo
-```
+```
 
-While creating a PVC with `requests.storage` lower than the Min value in the LimitRange, an Error thrown by the server
+While creating a PVC with `requests.storage` lower than the Min value in the LimitRange, an error is thrown by the server:
 
 ```shell
 Error from server (Forbidden): error when creating "pvc-limit-lower.yaml": persistentvolumeclaims "pvc-limit-lower" is forbidden: minimum storage usage per PersistentVolumeClaim is 1Gi, but request is 500Mi.
 ```
 
-Same behaviour is noted if the `requests.storage` is greater than the Max value in the LimitRange
+The same behavior occurs if `requests.storage` is greater than the Max value in the LimitRange:
 
 {{< codenew file="admin/resource/pvc-limit-greater.yaml" >}}
 
 ```shell
 kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-greater.yaml -n limitrange-demo
-```
+```
 
 ```shell
 Error from server (Forbidden): error when creating "pvc-limit-greater.yaml": persistentvolumeclaims "pvc-limit-greater" is forbidden: maximum storage usage per PersistentVolumeClaim is 2Gi, but request is 5Gi.
 ```
 
-## Limits/Requests Ratio
+## Limits/Requests Ratio
 
 If `LimitRangeItem.maxLimitRequestRatio` is specified in the `LimitRangeSpec`, the named resource must have a request and limit that are both non-zero, and the limit divided by the request must be less than or equal to the enumerated value.
 
- the following `LimitRange` enforces memory limit to be at most twice the amount of the memory request for any pod in the namespace.
+The following `LimitRange` enforces the memory limit to be at most twice the amount of the memory request for any Pod in the namespace (for example, a Pod whose containers request `100Mi` in total may set memory limits totaling at most `200Mi`).
{{< codenew file="admin/resource/limit-memory-ratio-pod.yaml" >}}
 
@@ -335,7 +344,7 @@ Describe the LimitRange with the following kubectl comm
 $ kubectl describe limitrange/limit-memory-ratio-pod
 ```
 
-```shell 
+```shell
 Name:                  limit-memory-ratio-pod
 Namespace:             limitrange-demo
 Type                   Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
@@ -343,30 +352,28 @@ Type Resource Min Max Default Request Default Limit Max Limit/Reques
 ----                   --------  ---  ---  ---------------  -------------  -----------------------
 Pod                    memory    -    -    -                -              2
 ```
 
+Let's create a pod with `requests.memory=100Mi` and `limits.memory=300Mi`:
 
-Let's create a pod with `requests.memory=100Mi` and `limits.memory=300Mi`
 {{< codenew file="admin/resource/limit-range-pod-3.yaml" >}}
 
-
 ```shell
 kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-3.yaml
 ```
 
 The Pod creation fails, as the ratio here (`3`) is greater than the enforced limit (`2`) in the `limit-memory-ratio-pod` LimitRange:
 
-
 ```shell
 Error from server (Forbidden): error when creating "limit-range-pod-3.yaml": pods "busybox3" is forbidden: memory max limit to request ratio per Pod is 2, but provided ratio is 3.000000.
 ```
 
+### Clean up
+
+Delete the `limitrange-demo` namespace to free all resources:
 
-### Clean up
-Delete the `limitrange-demo` namespace to free all resources
 ```shell
 kubectl delete ns limitrange-demo
 ```
 
-## Examples
+## Examples
 
 - See [a tutorial on how to limit compute resources per namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/).
 
diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md
index 96d7cc195b1f0..4a1c2b1e8c9b8 100644
--- a/content/en/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/en/docs/concepts/services-networking/connect-applications-service.md
@@ -136,8 +136,8 @@ and DNS. The former works out of the box while the latter requires the
 [CoreDNS cluster addon](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns).
 {{< note >}}
 If the service environment variables are not desired (because of possible clashes with expected program variables,
-too many variables to process, only using DNS, etc) you can disable this mode by setting the `enableServiceLinks`
-flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
+too many variables to process, only using DNS, and so on), you can disable this mode by setting the `enableServiceLinks`
+flag to `false` on the [pod spec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core).
 {{< /note >}}
 
@@ -254,9 +254,9 @@ nginxsecret Opaque 2 1m
 Following are the manual steps to follow in case you run into problems running make (on Windows, for example):
 
 ```shell
-#create a public private key pair
+# Create a public-private key pair
 openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
-#convert the keys to base64 encoding
+# Convert the keys to base64 encoding
 cat /d/tmp/nginx.crt | base64
 cat /d/tmp/nginx.key | base64
 ```
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index a428b84819e48..975131b996c10 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -249,7 +249,7 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
```shell kubectl describe deployments ``` - The output is similar to this: + The output is similar to this: ``` Name: nginx-deployment Namespace: default @@ -407,12 +407,12 @@ rolled back. Kubernetes by default sets the value to 25%. {{< /note >}} -* Get the description of the Deployment: +* Get the description of the Deployment: ```shell kubectl describe deployment ``` - The output is similar to this: + The output is similar to this: ``` Name: nginx-deployment Namespace: default @@ -441,7 +441,7 @@ rolled back. OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) Events: - FirstSeen LastSeen Count From SubobjectPath Type Reason Message + FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 @@ -459,11 +459,11 @@ rolled back. Follow the steps given below to check the rollout history: -1. First, check the revisions of this Deployment: +1. First, check the revisions of this Deployment: ```shell kubectl rollout history deployment.v1.apps/nginx-deployment ``` - The output is similar to this: + The output is similar to this: ``` deployments "nginx-deployment" REVISION CHANGE-CAUSE @@ -483,7 +483,7 @@ Follow the steps given below to check the rollout history: kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2 ``` - The output is similar to this: + The output is similar to this: ``` deployments "nginx-deployment" revision 2 Labels: app=nginx @@ -508,7 +508,7 @@ Follow the steps given below to rollback the Deployment from the current version kubectl rollout undo deployment.v1.apps/nginx-deployment ``` - The output is similar to this: + The output is similar to this: ``` deployment.apps/nginx-deployment ``` @@ -518,7 +518,7 @@ Follow the steps given below to rollback the Deployment from the current version kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 ``` - The output is similar to this: + The output is similar to this: ``` deployment.apps/nginx-deployment ``` @@ -533,7 +533,7 @@ Follow the steps given below to rollback the Deployment from the current version kubectl get deployment nginx-deployment ``` - The output is similar to this: + The output is similar to this: ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 3 3 3 3 30m @@ -542,7 +542,7 @@ Follow the steps given below to rollback the Deployment from the current version ```shell kubectl describe deployment nginx-deployment ``` - The output is similar to this: + The output is similar to this: ``` Name: nginx-deployment Namespace: default @@ -662,13 +662,13 @@ ReplicaSet with the most replicas. ReplicaSets with zero replicas are not scaled In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process should eventually move all replicas to the new ReplicaSet, assuming -the new replicas become healthy. To confirm this, run: +the new replicas become healthy. To confirm this, run: ```shell kubectl get deploy ``` -The output is similar to this: +The output is similar to this: ``` NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE nginx-deployment 15 18 7 8 7m