From 7f4f2e142905feacff4fcac350340da52fb88091 Mon Sep 17 00:00:00 2001
From: Juan Diego Palomino
Date: Mon, 28 Jan 2019 09:48:27 -0800
Subject: [PATCH 1/9] First draft of the updates to the ReplicaSet Docs

To start with, I tried to clean up the docs to adhere to the style guide
https://kubernetes.io/docs/contribute/style/style-guide/. I then added a
description of the ReplicaSet-Pod link via the owner reference field and
the behavior it drives. I also clarified the ReplicaSet demo, where the
selector is intentionally redundant in order to demonstrate the different
forms of usage. I consider this draft incomplete, as I still lack
knowledge of how the pod labels affect the behavior.

---
 .../workloads/controllers/replicaset.md       | 60 ++++++++++++++-----
 content/en/examples/controllers/frontend.yaml |  2 +
 2 files changed, 46 insertions(+), 16 deletions(-)

diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md
index 9da14d35a525d..1b046e7445c79 100644
--- a/content/en/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -10,17 +10,27 @@ weight: 10
 
 {{% capture overview %}}
 
-ReplicaSet is the next-generation Replication Controller. The only difference
+ReplicaSet is a type of Replication Controller. The only difference
 between a _ReplicaSet_ and a
-[_Replication Controller_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is
+[_ReplicationController_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is
 the selector support. ReplicaSet supports the new set-based selector requirements
 as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
 whereas a Replication Controller only supports equality-based selector requirements.
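To make the selector distinction above concrete, here is an illustrative sketch, not taken from the patch itself (the `environment` label is a hypothetical example):

```yaml
# ReplicationController: equality-based selectors only
selector:
  tier: frontend

# ReplicaSet: equality-based requirements under matchLabels,
# set-based requirements under matchExpressions
selector:
  matchLabels:
    tier: frontend
  matchExpressions:
    - {key: environment, operator: NotIn, values: [dev]}
```

The set-based operators are `In`, `NotIn`, `Exists`, and `DoesNotExist`; all `matchLabels` and `matchExpressions` requirements are ANDed together.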
+ {{% /capture %}} {{% capture body %}} +## How a ReplicaSet Works + +Just like a ReplicationController, a ReplicaSet's purpose is to maintain a stable set of replica Pods available at any +given time. It does so by creating and deleting a pods as needed to reach the desired number. The link a ReplicaSet has +to its Pods is via the [metadata.ownerReferences][/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents] +field, which specifies one resource dependent on another. All these _Pods_ have their owning ReplicaSet's identifying +information within their ownerReferences field. It's through this link that the ReplicaSet knows of the state of the +Pods it is maintaining and plan accordingly. + ## How to use a ReplicaSet Most [`kubectl`](/docs/user-guide/kubectl/) commands that support @@ -32,9 +42,9 @@ instead. Also, the imperative whereas Deployments are declarative, so we recommend using Deployments through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command. -While ReplicaSets can be used independently, today it's mainly used by -[Deployments](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod -creation, deletion and updates. When you use Deployments you don't have to worry +While ReplicaSets can be used independently, it could also be used by other resources. For example, +[Deployments](/docs/concepts/workloads/controllers/deployment/) use them as a mechanism to orchestrate +pod creation, deletion and updates. When you use Deployments you don't have to worry about managing the ReplicaSets that they create. Deployments own and manage their ReplicaSets. @@ -53,13 +63,25 @@ use a Deployment instead, and define your application in the spec section. 
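The `frontend.yaml` manifest pulled in by the shortcode below is not reproduced inline in this patch. Pieced together from the fragments that do appear in the series, it looks roughly like the following at this stage; the `apiVersion` and the container details are assumptions:

```yaml
apiVersion: apps/v1      # assumed; `kubectl create` later reports replicaset.apps/frontend
kind: ReplicaSet
metadata:
  name: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    # The matchLabels and matchExpressions achieve the same effect, and the redundancy
    # is to demonstrate the usage of both equality-based and set-based selectors
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: frontend            # hypothetical container name
        image: example/frontend   # hypothetical image
```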
{{< codenew file="controllers/frontend.yaml" >}} -Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should +Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will create the defined ReplicaSet and the pods that it manages. ```shell -$ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml +kubectl create -f http://k8s.io/examples/controllers/frontend.yaml +``` + +With the output of +```shell replicaset.apps/frontend created -$ kubectl describe rs/frontend +``` + +You can then check on the state of the replicaset: +```shell +kubectl describe rs/frontend +``` + +And you will see output similar to: +```shell Name: frontend Namespace: default Selector: tier=frontend,tier in (frontend) @@ -88,7 +110,15 @@ Events: 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-qhloh 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-dnjpy 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-9si5l -$ kubectl get pods +``` + +And lastly checking on the pods brought up: +```shell +kubectl get pods +``` + +Will yield pod information similar to +```shell NAME READY STATUS RESTARTS AGE frontend-9si5l 1/1 Running 0 1m frontend-dnjpy 1/1 Running 0 1m @@ -104,8 +134,8 @@ A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contrib ### Pod Template -The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a -[pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a +The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a +[pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/concepts/workloads/pods/pod/), except that it is nested and does not have an `apiVersion` or `kind`. 
In addition to required fields of a pod, a pod template in a ReplicaSet must specify appropriate @@ -130,8 +160,8 @@ be rejected by the API. In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. -Also you should not normally create any pods whose labels match this selector, either directly, with -another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it +Also you should not normally create any pods whose labels match this selector, either directly, with +another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it created the other pods. Kubernetes does not stop you from doing this. If you do end up with multiple controllers that have overlapping selectors, you @@ -183,7 +213,7 @@ To update pods to a new spec in a controlled way, use a [rolling update](#rollin ### Isolating pods from a ReplicaSet -Pods may be removed from a ReplicaSet's target set by changing their labels. This technique may be used to remove pods +Pods may be removed from a ReplicaSet's target set by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically ( assuming that the number of replicas is not also changed). @@ -242,5 +272,3 @@ to a machine lifetime: the pod needs to be running on the machine before other p safe to terminate when the machine is otherwise ready to be rebooted/shutdown. 
{{% /capture %}} - - diff --git a/content/en/examples/controllers/frontend.yaml b/content/en/examples/controllers/frontend.yaml index f9dba82b7ef23..69484b1edd613 100644 --- a/content/en/examples/controllers/frontend.yaml +++ b/content/en/examples/controllers/frontend.yaml @@ -9,6 +9,8 @@ spec: # modify replicas according to your case replicas: 3 selector: + # The matchLabels and matchExpressions achieve the same effect, and the redundancy + # is to demonstrate the usage of both equality-based and set-based selectors matchLabels: tier: frontend matchExpressions: From 56d9ff2710610bcf984c7aab4989acbac3a09430 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Tue, 29 Jan 2019 09:32:47 -0800 Subject: [PATCH 2/9] Clearing up RC refs & explaining acquisition behavior I'm beginning to address the cr by cleaning up references to the ReplicationController and making it clear that RCs are discouraged/old. I then expanded on the behavior of ReplicaSet in the presence of pods it can acquire but are not created directly by it. --- .../workloads/controllers/replicaset.md | 106 ++++++++++++++---- content/en/examples/controllers/frontend.yaml | 4 - content/en/examples/controllers/pod-rs.yaml | 23 ++++ 3 files changed, 107 insertions(+), 26 deletions(-) create mode 100644 content/en/examples/controllers/pod-rs.yaml diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 1b046e7445c79..069d6d4c1abed 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -10,12 +10,8 @@ weight: 10 {{% capture overview %}} -ReplicaSet is a type of Replication Controller. The only difference -between a _ReplicaSet_ and a -[_ReplicationController_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is -the selector support. 
ReplicaSet supports the new set-based selector requirements -as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors) -whereas a Replication Controller only supports equality-based selector requirements. +A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often +used to guarantee the availability of a specified number of identical pods. {{% /capture %}} @@ -24,12 +20,19 @@ whereas a Replication Controller only supports equality-based selector requireme ## How a ReplicaSet Works -Just like a ReplicationController, a ReplicaSet's purpose is to maintain a stable set of replica Pods available at any -given time. It does so by creating and deleting a pods as needed to reach the desired number. The link a ReplicaSet has -to its Pods is via the [metadata.ownerReferences][/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents] -field, which specifies one resource dependent on another. All these _Pods_ have their owning ReplicaSet's identifying -information within their ownerReferences field. It's through this link that the ReplicaSet knows of the state of the -Pods it is maintaining and plan accordingly. +A ReplicaSet is defined with fields including, a selector which specifies how to identify pods it can acquire, a number +of replicas indicating how many pods it should be maintaining, and a pod template specifying the data of new pods +it should bring bring up to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating +and deleting a Pods as needed to reach the desired number, using its given pod template for the creation. + +The link a ReplicaSet has to its Pods is via the [metadata.ownerReferences][/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents] +field, which specifies one resource dependent on another. 
All _Pods_ acquired by a ReplicaSet have their owning +ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet +knows of the state of the Pods it is maintaining and plan accordingly. + +ReplicaSets identify new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the +OwnerReference is not a Controller and it matches a ReplicaSet's selector, it will be immediately acquired by said +ReplicaSet. ## How to use a ReplicaSet @@ -43,10 +46,10 @@ imperative whereas Deployments are declarative, so we recommend using Deployment through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command. While ReplicaSets can be used independently, it could also be used by other resources. For example, -[Deployments](/docs/concepts/workloads/controllers/deployment/) use them as a mechanism to orchestrate -pod creation, deletion and updates. When you use Deployments you don't have to worry -about managing the ReplicaSets that they create. Deployments own and manage -their ReplicaSets. +[Deployments](/docs/concepts/workloads/controllers/deployment/) are the recommended way of using ReplicaSets. +When you use Deployments you don't have to worry about managing the ReplicaSets that they create. Deployments own +and manage their ReplicaSets. + ## When to use a ReplicaSet @@ -70,11 +73,6 @@ create the defined ReplicaSet and the pods that it manages. kubectl create -f http://k8s.io/examples/controllers/frontend.yaml ``` -With the output of -```shell -replicaset.apps/frontend created -``` - You can then check on the state of the replicaset: ```shell kubectl describe rs/frontend @@ -125,7 +123,65 @@ frontend-dnjpy 1/1 Running 0 1m frontend-qhloh 1/1 Running 0 1m ``` -## Writing a ReplicaSet Spec +## Non-Template Acquisitions + +A ReplicaSet is not limited to owning pods specified by its template. 
Take the previous frontend ReplicaSet example, +and the Pods specified in the following manifest: + +{{< codenew file="controllers/pod-rs.yaml" >}} + +As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the forntend +ReplicaSet, they will immediately be acquired by it. + +Creating the Pods after the frontend ReplicaSet has been deployed and set up its initial Pod replicas to fulfill its +replica count requirement: + +```shell +kubectl create -f http://k8s.io/examples/controllers/pod-rs.yaml +``` + +Will cause the new Pods to be acquired by the ReplicaSet, then immediately terminated as the ReplicaSet would be over +its desired count. Fetching the pods: +```shell +kubectl get pods +``` + +Will show that the new Pods are either already terminated, or in the process of being terminated +```shell +NAME READY STATUS RESTARTS AGE +frontend-9si5l 1/1 Running 0 1m +frontend-dnjpy 1/1 Running 0 1m +frontend-qhloh 1/1 Running 0 1m +pod2 0/1 Terminating 0 4s +``` + +If we create the Pods first: +```shell +kubectl create -f http://k8s.io/examples/controllers/pod-rs.yaml +``` + +And then create the ReplicaSet however: +```shell +kubectl create -f http://k8s.io/examples/controllers/frontend.yaml +``` + +We shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the +number of its new Pods and the original matches its desired count. As fetching the pods: +```shell +kubectl get pods +``` + +Will reveal in its output: +```shell +NAME READY STATUS RESTARTS AGE +frontend-pxj4r 1/1 Running 0 5s +pod1 1/1 Running 0 13s +pod2 1/1 Running 0 13s +``` + +In this manner, a ReplicaSet can own a non-homogenous set of Pods + +## Writing a ReplicaSet Manifest As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. 
For general information about working with manifests, see [object management using kubectl](/docs/concepts/overview/object-management-kubectl/overview/). @@ -271,4 +327,10 @@ machine-level function, such as machine monitoring or machine logging. These po to a machine lifetime: the pod needs to be running on the machine before other pods start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown. +### ReplicationController +ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/). +The two serve the same purpose, and behave seimilarly except that a ReplicationController does not support set-based +selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors). +As such, ReplicaSets are preferred over ReplicationControllers + {{% /capture %}} diff --git a/content/en/examples/controllers/frontend.yaml b/content/en/examples/controllers/frontend.yaml index 69484b1edd613..70219488fe942 100644 --- a/content/en/examples/controllers/frontend.yaml +++ b/content/en/examples/controllers/frontend.yaml @@ -9,12 +9,8 @@ spec: # modify replicas according to your case replicas: 3 selector: - # The matchLabels and matchExpressions achieve the same effect, and the redundancy - # is to demonstrate the usage of both equality-based and set-based selectors matchLabels: tier: frontend - matchExpressions: - - {key: tier, operator: In, values: [frontend]} template: metadata: labels: diff --git a/content/en/examples/controllers/pod-rs.yaml b/content/en/examples/controllers/pod-rs.yaml new file mode 100644 index 0000000000000..df7b390597c49 --- /dev/null +++ b/content/en/examples/controllers/pod-rs.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Pod +metadata: + name: pod1 + labels: + tier: frontend +spec: + containers: + - name: hello1 + image: gcr.io/google-samples/hello-app:2.0 + +--- + +apiVersion: v1 +kind: Pod +metadata: + 
name: pod2 + labels: + tier: frontend +spec: + containers: + - name: hello2 + image: gcr.io/google-samples/hello-app:1.0 From 7e7a84011d85a700736a80593b8635bf4890ef31 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Wed, 30 Jan 2019 09:38:48 -0800 Subject: [PATCH 3/9] Mismatched link seems to have disappeared from preview "As with all other Kubernetes API objects," etc... is present in the sibling concepts/workloads/controllers/ files, so I am hesitant to change that w/o changing the others, but I did abbreviate it. "The `.spec.template` is the only required field of the `.spec`." is false, we also need the selector Trying to address passive voice Cleaned up Writing a ReplicaSet Manifest section removed How to use a ReplicaSet section as it has redundant info from the examples and Alternatives section Expanded examples a bit Cleared up passive voice --- .../workloads/controllers/replicaset.md | 108 +++++++----------- .../{controllers => pods}/pod-rs.yaml | 0 2 files changed, 43 insertions(+), 65 deletions(-) rename content/en/examples/{controllers => pods}/pod-rs.yaml (100%) diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 069d6d4c1abed..5af74feafd50e 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -22,11 +22,11 @@ used to guarantee the availability of a specified number of identical pods. A ReplicaSet is defined with fields including, a selector which specifies how to identify pods it can acquire, a number of replicas indicating how many pods it should be maintaining, and a pod template specifying the data of new pods -it should bring bring up to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating -and deleting a Pods as needed to reach the desired number, using its given pod template for the creation. 
+it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating +and deleting Pods as needed to reach the desired number, using its given pod template for the creation. The link a ReplicaSet has to its Pods is via the [metadata.ownerReferences][/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents] -field, which specifies one resource dependent on another. All _Pods_ acquired by a ReplicaSet have their owning +field, which specifies what resource the current object is owned by. All _Pods_ acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet knows of the state of the Pods it is maintaining and plan accordingly. @@ -34,23 +34,6 @@ ReplicaSets identify new Pods to acquire by using its selector. If there is a Po OwnerReference is not a Controller and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet. -## How to use a ReplicaSet - -Most [`kubectl`](/docs/user-guide/kubectl/) commands that support -Replication Controllers also support ReplicaSets. One exception is the -[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) command. If -you want the rolling update functionality please consider using Deployments -instead. Also, the -[`rolling-update`](/docs/reference/generated/kubectl/kubectl-commands#rolling-update) command is -imperative whereas Deployments are declarative, so we recommend using Deployments -through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command. - -While ReplicaSets can be used independently, it could also be used by other resources. For example, -[Deployments](/docs/concepts/workloads/controllers/deployment/) are the recommended way of using ReplicaSets. -When you use Deployments you don't have to worry about managing the ReplicaSets that they create. 
Deployments own -and manage their ReplicaSets. - - ## When to use a ReplicaSet A ReplicaSet ensures that a specified number of pod replicas are running at any given @@ -73,7 +56,18 @@ create the defined ReplicaSet and the pods that it manages. kubectl create -f http://k8s.io/examples/controllers/frontend.yaml ``` -You can then check on the state of the replicaset: +You can then get the current ReplicaSets deployed: +```shell +kubectl get rs +``` + +And see the frontend one we just created: +```shell +NAME DESIRED CURRENT READY AGE +frontend 3 3 3 6s +``` + +You can also check on the state of the replicaset: ```shell kubectl describe rs/frontend ``` @@ -137,7 +131,7 @@ Creating the Pods after the frontend ReplicaSet has been deployed and set up its replica count requirement: ```shell -kubectl create -f http://k8s.io/examples/controllers/pod-rs.yaml +kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml ``` Will cause the new Pods to be acquired by the ReplicaSet, then immediately terminated as the ReplicaSet would be over @@ -183,57 +177,39 @@ In this manner, a ReplicaSet can own a non-homogenous set of Pods ## Writing a ReplicaSet Manifest -As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. For -general information about working with manifests, see [object management using kubectl](/docs/concepts/overview/object-management-kubectl/overview/). +As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. +For ReplicaSets, the kind is always just ReplicaSet. +In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. +Please refer to the first lines of the `frontend.yaml` example for guidance. A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). 
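Putting those required top-level fields together, a minimal skeleton might look like this (the name is a placeholder):

```yaml
apiVersion: apps/v1   # current as of Kubernetes 1.9; apps/v1beta2 is deprecated
kind: ReplicaSet      # the kind is always ReplicaSet
metadata:
  name: frontend
spec:
  # the selector, replicas, and pod template fields are described in the sections below
```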
### Pod Template -The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a -[pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a -[pod](/docs/concepts/workloads/pods/pod/), except that it is nested and does not have an `apiVersion` or `kind`. +The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates) which also requires +to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`. +Be careful not to overlap with the selectors of other controllers, lest they try to adopt this pod. -In addition to required fields of a pod, a pod template in a ReplicaSet must specify appropriate -labels and an appropriate restart policy. - -For labels, make sure to not overlap with other controllers. For more information, see [pod selector](#pod-selector). - -For [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy), the only allowed value for `.spec.template.spec.restartPolicy` is `Always`, which is the default. - -For local container restarts, ReplicaSet delegates to an agent on the node, -for example the [Kubelet](/docs/admin/kubelet/) or Docker. +For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field, +`.spec.template.spec.restartPolicy`, the only allowed value for is `Always`, which is the default. ### Pod Selector -The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). A ReplicaSet -manages all the pods with labels that match the selector. It does not distinguish -between pods that it created or deleted and pods that another person or process created or -deleted. This allows the ReplicaSet to be replaced without affecting the running pods. +The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). 
As discussed +[earlier](#how-a-replicaset-works) these are the labels used to identify potential pods to acquire. In our +`frontend.yaml` example, the selector was: +```shell +matchLabels: + tier: frontend +``` The `.spec.template.metadata.labels` must match the `.spec.selector`, or it will be rejected by the API. -In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated. - -Also you should not normally create any pods whose labels match this selector, either directly, with -another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it -created the other pods. Kubernetes does not stop you from doing this. - -If you do end up with multiple controllers that have overlapping selectors, you -will have to manage the deletion yourself. - -### Labels on a ReplicaSet - -The ReplicaSet can itself have labels (`.metadata.labels`). Typically, you -would set these the same as the `.spec.template.metadata.labels`. However, they are allowed to be -different, and the `.metadata.labels` do not affect the behavior of the ReplicaSet. - ### Replicas -You can specify how many pods should run concurrently by setting `.spec.replicas`. The number running at any time may be higher -or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully -shut down, and a replacement starts early. +You can specify how many pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete +its pods to match this number. If you do not specify `.spec.replicas`, then it defaults to 1. @@ -269,9 +245,9 @@ To update pods to a new spec in a controlled way, use a [rolling update](#rollin ### Isolating pods from a ReplicaSet -Pods may be removed from a ReplicaSet's target set by changing their labels. 
This technique may be used to remove pods +You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically ( - assuming that the number of replicas is not also changed). +assuming that the number of replicas is not also changed). ### Scaling a ReplicaSet @@ -306,10 +282,12 @@ kubectl autoscale rs frontend --max=10 ### Deployment (Recommended) -[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is a higher-level API object that updates its underlying ReplicaSets and their Pods -in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality, -because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features. For more information on running a stateless -application using a Deployment, please read [Run a Stateless Application Using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/). +[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update +them and their Pods via declarative, server-side rolling updates. +While ReplicaSets can be used independently, today it’s mainly used by Deployments as a mechanism to orchestrate pod +creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that +they create. Deployments own and manage their ReplicaSets. +As such, it is recommended to use Deployments when you want ReplicaSets. ### Bare Pods @@ -329,7 +307,7 @@ safe to terminate when the machine is otherwise ready to be rebooted/shutdown. ### ReplicationController ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/). 
-The two serve the same purpose, and behave seimilarly except that a ReplicationController does not support set-based +The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors). As such, ReplicaSets are preferred over ReplicationControllers diff --git a/content/en/examples/controllers/pod-rs.yaml b/content/en/examples/pods/pod-rs.yaml similarity index 100% rename from content/en/examples/controllers/pod-rs.yaml rename to content/en/examples/pods/pod-rs.yaml From 2a9dee26b01aa1261d14e2e1b81f0f9c9dd90a27 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Wed, 30 Jan 2019 09:41:36 -0800 Subject: [PATCH 4/9] refactoring link to example yaml --- content/en/docs/concepts/workloads/controllers/replicaset.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 5af74feafd50e..af031a544e91e 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -122,7 +122,7 @@ frontend-qhloh 1/1 Running 0 1m A ReplicaSet is not limited to owning pods specified by its template. Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest: -{{< codenew file="controllers/pod-rs.yaml" >}} +{{< codenew file="pods/pod-rs.yaml" >}} As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the forntend ReplicaSet, they will immediately be acquired by it. 
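Once acquired, each Pod carries its owner's identity in its metadata. A sketch of what `pod2`'s metadata might look like after the frontend ReplicaSet acquires it (the `uid` is an illustrative placeholder):

```yaml
metadata:
  name: pod2
  labels:
    tier: frontend
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend
    uid: 00000000-0000-0000-0000-000000000000   # illustrative placeholder
    controller: true
    blockOwnerDeletion: true
```

You can inspect this field with `kubectl get pod pod2 -o yaml`.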
@@ -151,7 +151,7 @@ pod2 0/1 Terminating 0 4s If we create the Pods first: ```shell -kubectl create -f http://k8s.io/examples/controllers/pod-rs.yaml +kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml ``` And then create the ReplicaSet however: From 32ceb9c4f95914b75efeaeafbbb49a07adc1db37 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Wed, 30 Jan 2019 09:54:43 -0800 Subject: [PATCH 5/9] adding pod-rs test case --- content/en/examples/examples_test.go | 1 + 1 file changed, 1 insertion(+) diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go index 0dd16589d006b..3d0fefdc25580 100644 --- a/content/en/examples/examples_test.go +++ b/content/en/examples/examples_test.go @@ -453,6 +453,7 @@ func TestExampleObjectSchemas(t *testing.T) { "private-reg-pod": {&api.Pod{}}, "share-process-namespace": {&api.Pod{}}, "simple-pod": {&api.Pod{}}, + "pod-rs": {&api.Pod{}, &api.Pod{}}, "two-container-pod": {&api.Pod{}}, }, "pods/config": { From bb185a6cba2a4606f80c3de5c0a2d6ee9f17bc74 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Thu, 31 Jan 2019 09:35:15 -0800 Subject: [PATCH 6/9] Addressing Steve Perry's comments Capitalize Pod throughout. Link is not rendering correctly. Use () instead of [] for the path. Ending with "for the creation" seems vague to me. Maybe this: "...reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template." Suggestion: "is via the Pod's metadata.ownerReferences field." That way the reader won't jump to the incorrect conclusion that we're talking about the ReplicaSet's metadata.ownerReferences field. with fields, including a selector that and plans accordingly Our style for headings is sentence case. So this heading would be "How a ReplicaSet works". Several headings in this topic need to be converted to sentence case. 
cleaned up frontend.yaml example
added example checking the Pod's owner reference being set to its parent ReplicaSet
---
 .../workloads/controllers/replicaset.md | 153 +++++++++++-------
 content/en/examples/controllers/frontend.yaml | 15 --
 2 files changed, 94 insertions(+), 74 deletions(-)

diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md
index af031a544e91e..4188be614fb97 100644
--- a/content/en/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -11,34 +11,35 @@ weight: 10
 {{% capture overview %}}

 A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
-used to guarantee the availability of a specified number of identical pods.
+used to guarantee the availability of a specified number of identical Pods.

 {{% /capture %}}

 {{% capture body %}}

-## How a ReplicaSet Works
+## How a ReplicaSet works

-A ReplicaSet is defined with fields including, a selector which specifies how to identify pods it can acquire, a number
-of replicas indicating how many pods it should be maintaining, and a pod template specifying the data of new pods
+A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number
+of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods
 it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating
-and deleting Pods as needed to reach the desired number, using its given pod template for the creation.
+and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
+template.
-The link a ReplicaSet has to its Pods is via the [metadata.ownerReferences][/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents] +The link a ReplicaSet has to its Pods is via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents) field, which specifies what resource the current object is owned by. All _Pods_ acquired by a ReplicaSet have their owning ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet -knows of the state of the Pods it is maintaining and plan accordingly. +knows of the state of the Pods it is maintaining and plans accordingly. -ReplicaSets identify new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the -OwnerReference is not a Controller and it matches a ReplicaSet's selector, it will be immediately acquired by said +A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the +OwnerReference is not a controller and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet. ## When to use a ReplicaSet A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and -provides declarative updates to pods along with a lot of other useful features. +provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all. @@ -50,7 +51,7 @@ use a Deployment instead, and define your application in the spec section. {{< codenew file="controllers/frontend.yaml" >}} Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will -create the defined ReplicaSet and the pods that it manages. 
+create the defined ReplicaSet and the Pods that it manages.

 ```shell
 kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
@@ -61,7 +62,7 @@ You can then get the current ReplicaSets deployed:
 kubectl get rs
 ```

-And see the frontend one we just created:
+And see the frontend one you created:
 ```shell
 NAME DESIRED CURRENT READY AGE
 frontend 3 3 3 6s
@@ -104,12 +105,12 @@ Events:
 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-9si5l
 ```

-And lastly checking on the pods brought up:
+And lastly you can check for the Pods brought up:
 ```shell
 kubectl get pods
 ```

-Will yield pod information similar to
+You should see Pod information similar to
 ```shell
 NAME READY STATUS RESTARTS AGE
 frontend-9si5l 1/1 Running 0 1m
@@ -117,30 +118,62 @@ frontend-dnjpy 1/1 Running 0 1m
 frontend-qhloh 1/1 Running 0 1m
 ```

-## Non-Template Acquisitions
+You can also verify that the owner reference of these Pods is set to the frontend ReplicaSet.
+To do this, get the yaml of one of the Pods running:
+```shell
+kubectl get pods frontend-9si5l -o yaml
+```
+
+The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field:
+```shell
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: 2019-01-31T17:20:41Z
+  generateName: frontend-
+  labels:
+    tier: frontend
+  name: frontend-9si5l
+  namespace: default
+  ownerReferences:
+  - apiVersion: extensions/v1beta1
+    blockOwnerDeletion: true
+    controller: true
+    kind: ReplicaSet
+    name: frontend
+    uid: 892a2330-257c-11e9-aecd-025000000001
+...
+```
+
+## Non-Template Pod acquisitions

-A ReplicaSet is not limited to owning pods specified by its template. Take the previous frontend ReplicaSet example,
-and the Pods specified in the following manifest:
+While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have
+labels which match the selector of one of you ReplicaSets. 
This is because a ReplicaSet is not limited
+to owning Pods specified by its template; it can acquire other Pods in the manner specified in the previous sections.

-{{< codenew file="pods/pod-rs.yaml" >}}
+Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:

-As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the forntend
+{{< codenew file="Pods/pod-rs.yaml" >}}
+
+As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
 ReplicaSet, they will immediately be acquired by it.

-Creating the Pods after the frontend ReplicaSet has been deployed and set up its initial Pod replicas to fulfill its
-replica count requirement:
+Suppose you create the Pods after the frontend ReplicaSet has been deployed and had set up its initial Pod replicas to
+fulfill its replica count requirement:

 ```shell
-kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml
+kubectl create -f http://k8s.io/examples/Pods/pod-rs.yaml
 ```

-Will cause the new Pods to be acquired by the ReplicaSet, then immediately terminated as the ReplicaSet would be over
-its desired count. Fetching the pods:
+The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over
+its desired count. 
Fetching the Pods:
 ```shell
 kubectl get pods
 ```

-Will show that the new Pods are either already terminated, or in the process of being terminated
+The output shows that the new Pods are either already terminated, or in the process of being terminated
 ```shell
 NAME READY STATUS RESTARTS AGE
 frontend-9si5l 1/1 Running 0 1m
@@ -149,9 +182,9 @@ frontend-dnjpy 1/1 Running 0 1m
 frontend-qhloh 1/1 Running 0 1m
 pod2 0/1 Terminating 0 4s
 ```

-If we create the Pods first:
+If you create the Pods first:
 ```shell
-kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml
+kubectl create -f http://k8s.io/examples/Pods/pod-rs.yaml
 ```

 And then create the ReplicaSet however:
@@ -160,9 +193,9 @@ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
 ```

 We shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
-number of its new Pods and the original matches its desired count. As fetching the pods:
+number of its new and original Pods matches its desired count. Fetching the Pods:
 ```shell
 kubectl get pods
 ```

 Will reveal in its output:
@@ -175,28 +208,28 @@ pod2 1/1 Running 0 13s

 In this manner, a ReplicaSet can own a non-homogenous set of Pods

-## Writing a ReplicaSet Manifest
+## Writing a ReplicaSet manifest

 As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
 For ReplicaSets, the kind is always just ReplicaSet.
 In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
-Please refer to the first lines of the `frontend.yaml` example for guidance.
+Refer to the first lines of the `frontend.yaml` example for guidance.

 A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).
### Pod Template

-The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates) which also requires
-to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
-Be careful not to overlap with the selectors of other controllers, lest they try to adopt this pod.
+The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates) which is also
+required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
+Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.

-For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field,
-`.spec.template.spec.restartPolicy`, the only allowed value for is `Always`, which is the default.
+For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field,
+`.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default.

 ### Pod Selector

 The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). As discussed
-[earlier](#how-a-replicaset-works) these are the labels used to identify potential pods to acquire. In our
+[earlier](#how-a-replicaset-works) these are the labels used to identify potential Pods to acquire. In our
 `frontend.yaml` example, the selector was:
 ```shell
 matchLabels:
 	tier: frontend
 ```
@@ -208,8 +241,8 @@ be rejected by the API.

 ### Replicas

-You can specify how many pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
-its pods to match this number.
+You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
+its Pods to match this number.

 If you do not specify `.spec.replicas`, then it defaults to 1.

@@ -219,7 +252,8 @@ If you do not specify `.spec.replicas`, then it defaults to 1.
To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default. -When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option. e.g. : +When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option. +For example: ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \ @@ -229,8 +263,9 @@ curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/repli ### Deleting just a ReplicaSet -You can delete a ReplicaSet without affecting any of its pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option. -When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`, e.g. : +You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option. +When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`. +For example: ```shell kubectl proxy --port=8080 curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \ @@ -239,22 +274,22 @@ curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/repli ``` Once the original is deleted, you can create a new ReplicaSet to replace it. As long -as the old and new `.spec.selector` are the same, then the new one will adopt the old pods. -However, it will not make any effort to make existing pods match a new, different pod template. 
-To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates). +as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods. +However, it will not make any effort to make existing Pods match a new, different pod template. +To update Pods to a new spec in a controlled way, use a [rolling update](#rolling-updates). -### Isolating pods from a ReplicaSet +### Isolating Pods from a ReplicaSet -You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove pods +You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove Pods from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically ( assuming that the number of replicas is not also changed). ### Scaling a ReplicaSet A ReplicaSet can be easily scaled up or down by simply updating the `.spec.replicas` field. The ReplicaSet controller -ensures that a desired number of pods with a matching label selector are available and operational. +ensures that a desired number of Pods with a matching label selector are available and operational. -### ReplicaSet as an Horizontal Pod Autoscaler Target +### ReplicaSet as a Horizontal Pod Autoscaler Target A ReplicaSet can also be a target for [Horizontal Pod Autoscalers (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/). That is, @@ -265,7 +300,7 @@ the ReplicaSet we created in the previous example. Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage -of the replicated pods. +of the replicated Pods. 
```shell kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml @@ -280,29 +315,29 @@ kubectl autoscale rs frontend --max=10 ## Alternatives to ReplicaSet -### Deployment (Recommended) +### Deployment (recommended) [`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. -While ReplicaSets can be used independently, today it’s mainly used by Deployments as a mechanism to orchestrate pod +While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that they create. Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets. ### Bare Pods -Unlike the case where a user directly created pods, a ReplicaSet replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker). +Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. 
A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker). ### Job -Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicaSet for pods that are expected to terminate on their own +Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicaSet for Pods that are expected to terminate on their own (that is, batch jobs). ### DaemonSet -Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicaSet for pods that provide a -machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied -to a machine lifetime: the pod needs to be running on the machine before other pods start, and are +Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicaSet for Pods that provide a +machine-level function, such as machine monitoring or machine logging. These Pods have a lifetime that is tied +to a machine lifetime: the Pod needs to be running on the machine before other Pods start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown. ### ReplicationController diff --git a/content/en/examples/controllers/frontend.yaml b/content/en/examples/controllers/frontend.yaml index 70219488fe942..b9f31044ec103 100644 --- a/content/en/examples/controllers/frontend.yaml +++ b/content/en/examples/controllers/frontend.yaml @@ -14,23 +14,8 @@ spec: template: metadata: labels: - app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google_samples/gb-frontend:v3 - resources: - requests: - cpu: 100m - memory: 100Mi - env: - - name: GET_HOSTS_FROM - value: dns - # If your cluster config does not include a dns service, then to - # instead access environment variables to find service host - # info, comment out the 'value: dns' line above, and uncomment the - # line below. 
- # value: env - ports: - - containerPort: 80 From f5d639669ab14003aaf34980c90141d77f72a4c0 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Thu, 31 Jan 2019 09:38:10 -0800 Subject: [PATCH 7/9] Previous commit broke Pod example links due to casing --- .../en/docs/concepts/workloads/controllers/replicaset.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 4188be614fb97..985c2b0fe90ef 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -153,7 +153,7 @@ to owning Pods specified by its template-- it can acquire other Pods in the mann Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest: -{{< codenew file="Pods/pod-rs.yaml" >}} +{{< codenew file="pods/pod-rs.yaml" >}} As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend ReplicaSet, they will immediately be acquired by it. 
@@ -162,7 +162,7 @@ Suppose you create the Pods after the frontend ReplicaSet has been deployed and fulfill its replica count requirement: ```shell -kubectl create -f http://k8s.io/examples/Pods/pod-rs.yaml +kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml ``` The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over @@ -184,7 +184,7 @@ pod2 0/1 Terminating 0 4s If you create the Pods first: ```shell -kubectl create -f http://k8s.io/examples/Pods/pod-rs.yaml +kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml ``` And then create the ReplicaSet however: From a08dd12b57e3aa4b71c4f27c40e381494757dfc6 Mon Sep 17 00:00:00 2001 From: Juan Diego Palomino Date: Thu, 31 Jan 2019 09:40:27 -0800 Subject: [PATCH 8/9] Forgot 1 comment Suggestion: In the ReplicaSet, .spec.template.metadata.labels must match spec.selector, or ... --- content/en/docs/concepts/workloads/controllers/replicaset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 985c2b0fe90ef..94568a2c50f4c 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -236,7 +236,7 @@ matchLabels: tier: frontend ``` -The `.spec.template.metadata.labels` must match the `.spec.selector`, or it will +In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will be rejected by the API. 
### Replicas

From 12d901c75cb967706f021220b29240c38bc2b8a3 Mon Sep 17 00:00:00 2001
From: Juan Diego Palomino
Date: Fri, 1 Feb 2019 09:06:48 -0800
Subject: [PATCH 9/9] Addressing grammar/syntax errors

---
 .../concepts/workloads/controllers/replicaset.md | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md
index 94568a2c50f4c..adeaadc469674 100644
--- a/content/en/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -27,7 +27,7 @@ and deleting Pods as needed to reach the desired number. When a ReplicaSet needs
 template.

 The link a ReplicaSet has to its Pods is via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
-field, which specifies what resource the current object is owned by. All _Pods_ acquired by a ReplicaSet have their owning
+field, which specifies what resource the current object is owned by. All Pods acquired by a ReplicaSet have their owning
 ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
 knows of the state of the Pods it is maintaining and plans accordingly.

@@ -110,7 +110,7 @@ And lastly you can check for the Pods brought up:
 kubectl get pods
 ```

-You should see Pod information similar to
+You should see Pod information similar to:
 ```shell
 NAME READY STATUS RESTARTS AGE
 frontend-9si5l 1/1 Running 0 1m
@@ -148,7 +148,7 @@ metadata:
 ## Non-Template Pod acquisitions

 While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have
-labels which match the selector of one of you ReplicaSets. This is because a ReplicaSet is not limited
+labels which match the selector of one of your ReplicaSets. 
This is because a ReplicaSet is not limited
 to owning Pods specified by its template; it can acquire other Pods in the manner specified in the previous sections.

 Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:

@@ -158,7 +158,7 @@ Take the previous frontend ReplicaSet example, and the Pods specified in the fo
 As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
 ReplicaSet, they will immediately be acquired by it.

-Suppose you create the Pods after the frontend ReplicaSet has been deployed and had set up its initial Pod replicas to
+Suppose you create the Pods after the frontend ReplicaSet has been deployed and has set up its initial Pod replicas to
 fulfill its replica count requirement:

@@ -173,7 +173,7 @@ Fetching the Pods:
 kubectl get pods
 ```

-The output shows that the new Pods are either already terminated, or in the process of being terminated
+The output shows that the new Pods are either already terminated, or in the process of being terminated:
 ```shell
 NAME READY STATUS RESTARTS AGE
 frontend-9si5l 1/1 Running 0 1m
@@ -192,7 +192,7 @@ And then create the ReplicaSet however:
 kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
 ```

-We shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
+You can see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
 number of its new and original Pods matches its desired count. Fetching the Pods:
 ```shell
 kubectl get pods
 ```
@@ -252,7 +252,8 @@ If you do not specify `.spec.replicas`, then it defaults to 1.

 To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete).
The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the
 dependent Pods by default.

-When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option.
+When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in
+the `-d` option of the delete request.
 For example:
 ```shell
 kubectl proxy --port=8080