Addressing Steve Perry's comments
Capitalize Pod throughout.

Link is not rendering correctly. Use () instead of [] for the path.

Ending with "for the creation" seems vague to me. Maybe this:
"...reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template."

Suggestion: "is via the Pod's metadata.ownerReferences field." That way the reader won't jump to the incorrect conclusion that we're talking about the ReplicaSet's metadata.ownerReferences field.

with fields, including a selector that

and plans accordingly

Our style for headings is sentence case. So this heading would be "How a ReplicaSet works".

Several headings in this topic need to be converted to sentence case.

cleaned up frontend.yaml example

added an example checking that the Pod's owner reference is set to its parent ReplicaSet
juandiegopalomino committed Jan 31, 2019
1 parent 32ceb9c commit bb185a6
Showing 2 changed files with 94 additions and 74 deletions.
153 changes: 94 additions & 59 deletions content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -11,34 +11,35 @@ weight: 10
{{% capture overview %}}

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
-used to guarantee the availability of a specified number of identical pods.
+used to guarantee the availability of a specified number of identical Pods.


{{% /capture %}}

{{% capture body %}}

-## How a ReplicaSet Works
+## How a ReplicaSet works

-A ReplicaSet is defined with fields including, a selector which specifies how to identify pods it can acquire, a number
-of replicas indicating how many pods it should be maintaining, and a pod template specifying the data of new pods
+A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number
+of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods
it should create to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating
-and deleting Pods as needed to reach the desired number, using its given pod template for the creation.
+and deleting Pods as needed to reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod
+template.

-The link a ReplicaSet has to its Pods is via the [metadata.ownerReferences][/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents]
+The link a ReplicaSet has to its Pods is via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All _Pods_ acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
-knows of the state of the Pods it is maintaining and plan accordingly.
+knows of the state of the Pods it is maintaining and plans accordingly.

-ReplicaSets identify new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
-OwnerReference is not a Controller and it matches a ReplicaSet's selector, it will be immediately acquired by said
+A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
+OwnerReference is not a controller and it matches a ReplicaSet's selector, it will be immediately acquired by said
ReplicaSet.
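
Putting these pieces together, a ReplicaSet manifest has roughly the following shape (a minimal sketch for illustration only; the name, label, and image below are placeholders, not part of the original examples):

```shell
# Minimal sketch of the three key fields: replicas, selector, and template.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-rs            # placeholder name
spec:
  replicas: 3                 # how many Pods the ReplicaSet should maintain
  selector:
    matchLabels:
      app: example            # which Pods the ReplicaSet may acquire
  template:                   # Pod template used when new Pods must be created
    metadata:
      labels:
        app: example          # must satisfy .spec.selector
    spec:
      containers:
      - name: app
        image: nginx
EOF
```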

## When to use a ReplicaSet

A ReplicaSet ensures that a specified number of pod replicas are running at any given
time. However, a Deployment is a higher-level concept that manages ReplicaSets and
-provides declarative updates to pods along with a lot of other useful features.
+provides declarative updates to Pods along with a lot of other useful features.
Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless
you require custom update orchestration or don't require updates at all.

@@ -50,7 +51,7 @@ use a Deployment instead, and define your application in the spec section.
{{< codenew file="controllers/frontend.yaml" >}}

Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will
-create the defined ReplicaSet and the pods that it manages.
+create the defined ReplicaSet and the Pods that it manages.

```shell
kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
@@ -61,7 +62,7 @@ You can then get the current ReplicaSets deployed:
kubectl get rs
```

-And see the frontend one we just created:
+And see the frontend one you created:
```shell
NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       6s
@@ -104,43 +105,75 @@ Events:
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-9si5l
```

-And lastly checking on the pods brought up:
+And lastly you can check for the Pods brought up:
```shell
-kubectl get pods
+kubectl get Pods
```

-Will yield pod information similar to
+You should see Pod information similar to
```shell
NAME             READY   STATUS    RESTARTS   AGE
frontend-9si5l   1/1     Running   0          1m
frontend-dnjpy   1/1     Running   0          1m
frontend-qhloh   1/1     Running   0          1m
```

-## Non-Template Acquisitions
+You can also verify that the owner reference of these pods is set to the frontend ReplicaSet.
+To do this, get the yaml of one of the Pods running:
+```shell
+kubectl get pods frontend-9si5l -o yaml
+```
+
+The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field:
+```shell
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: 2019-01-31T17:20:41Z
+  generateName: frontend-
+  labels:
+    tier: frontend
+  name: frontend-9si5l
+  namespace: default
+  ownerReferences:
+  - apiVersion: extensions/v1beta1
+    blockOwnerDeletion: true
+    controller: true
+    kind: ReplicaSet
+    name: frontend
+    uid: 892a2330-257c-11e9-aecd-025000000001
+...
+```
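
If you only want the owning object rather than the whole manifest, a `jsonpath` query is a shorter route (a sketch; the Pod name comes from the sample output above):

```shell
# Print just the kind and name of the Pod's first owner reference.
kubectl get pod frontend-9si5l -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
```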

+## Non-Template Pod acquisitions

-A ReplicaSet is not limited to owning pods specified by its template. Take the previous frontend ReplicaSet example,
-and the Pods specified in the following manifest:
+While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have
+labels which match the selector of one of your ReplicaSets. The reason for this is that a ReplicaSet is not limited
+to owning Pods specified by its template; it can acquire other Pods in the manner specified in the previous sections.

{{< codenew file="pods/pod-rs.yaml" >}}
Take the previous frontend ReplicaSet example, and the Pods specified in the following manifest:

-As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the forntend
+{{< codenew file="Pods/pod-rs.yaml" >}}

+As those Pods do not have a Controller (or any object) as their owner reference and match the selector of the frontend
ReplicaSet, they will immediately be acquired by it.

-Creating the Pods after the frontend ReplicaSet has been deployed and set up its initial Pod replicas to fulfill its
-replica count requirement:
+Suppose you create the Pods after the frontend ReplicaSet has been deployed and had set up its initial Pod replicas to
+fulfill its replica count requirement:

```shell
-kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml
+kubectl create -f http://k8s.io/examples/Pods/pod-rs.yaml
```

-Will cause the new Pods to be acquired by the ReplicaSet, then immediately terminated as the ReplicaSet would be over
-its desired count. Fetching the pods:
+The new Pods will be acquired by the ReplicaSet, and then immediately terminated as the ReplicaSet would be over
+its desired count.
+
+Fetching the Pods:
```shell
-kubectl get pods
+kubectl get Pods
```

-Will show that the new Pods are either already terminated, or in the process of being terminated
+The output shows that the new Pods are either already terminated, or in the process of being terminated
```shell
NAME             READY   STATUS    RESTARTS   AGE
frontend-9si5l   1/1     Running   0          1m
@@ -149,9 +182,9 @@ frontend-qhloh 1/1 Running 0 1m
pod2             0/1     Terminating   0          4s
```

-If we create the Pods first:
+If you create the Pods first:
```shell
-kubectl create -f http://k8s.io/examples/pods/pod-rs.yaml
+kubectl create -f http://k8s.io/examples/Pods/pod-rs.yaml
```

And then create the ReplicaSet however:
@@ -160,9 +193,9 @@ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
```

We shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
-number of its new Pods and the original matches its desired count. As fetching the pods:
+number of its new Pods and the original matches its desired count. As fetching the Pods:
```shell
-kubectl get pods
+kubectl get Pods
```

Will reveal in its output:
@@ -175,28 +208,28 @@ pod2 1/1 Running 0 13s

In this manner, a ReplicaSet can own a non-homogenous set of Pods
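
One way to see that mixed ownership at a glance is a `custom-columns` query that lists each Pod next to its owner (a sketch, not part of the original page):

```shell
# Show each Pod alongside the name of its owning controller, if any.
kubectl get pods -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name'
```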

-## Writing a ReplicaSet Manifest
+## Writing a ReplicaSet manifest

As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields.
For ReplicaSets, the kind is always just ReplicaSet.
In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.
-Please refer to the first lines of the `frontend.yaml` example for guidance.
+Refer to the first lines of the `frontend.yaml` example for guidance.

A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status).

### Pod Template

-The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates) which also requires
-to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
-Be careful not to overlap with the selectors of other controllers, lest they try to adopt this pod.
+The `.spec.template` is a [pod template](/docs/concepts/workloads/Pods/pod-overview/#pod-templates) which is also
+required to have labels in place. In our `frontend.yaml` example we had one label: `tier: frontend`.
+Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.

-For the template's [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) field,
-`.spec.template.spec.restartPolicy`, the only allowed value for is `Always`, which is the default.
+For the template's [restart policy](/docs/concepts/workloads/Pods/pod-lifecycle/#restart-policy) field,
+`.spec.template.spec.restartPolicy`, the only allowed value is `Always`, which is the default.

### Pod Selector

The `.spec.selector` field is a [label selector](/docs/concepts/overview/working-with-objects/labels/). As discussed
-[earlier](#how-a-replicaset-works) these are the labels used to identify potential pods to acquire. In our
+[earlier](#how-a-replicaset-works) these are the labels used to identify potential Pods to acquire. In our
`frontend.yaml` example, the selector was:
```shell
matchLabels:
@@ -208,8 +241,8 @@ be rejected by the API.

### Replicas

-You can specify how many pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
-its pods to match this number.
+You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
+its Pods to match this number.

If you do not specify `.spec.replicas`, then it defaults to 1.

@@ -219,7 +252,8 @@ If you do not specify `.spec.replicas`, then it defaults to 1.

To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default.

-When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option. e.g. :
+When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option.
+For example:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \
@@ -229,8 +263,9 @@ curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/repli

### Deleting just a ReplicaSet

-You can delete a ReplicaSet without affecting any of its pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
-When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`, e.g. :
+You can delete a ReplicaSet without affecting any of its Pods using [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete) with the `--cascade=false` option.
+When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Orphan`.
+For example:
```shell
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/replicasets/frontend' \
@@ -239,22 +274,22 @@ curl -X DELETE 'localhost:8080/apis/extensions/v1beta1/namespaces/default/repli
```
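
The `kubectl` equivalent, using the `--cascade=false` option mentioned above, would be along these lines:

```shell
# Delete only the ReplicaSet object; its Pods are left running, unowned.
kubectl delete rs frontend --cascade=false
```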

Once the original is deleted, you can create a new ReplicaSet to replace it. As long
-as the old and new `.spec.selector` are the same, then the new one will adopt the old pods.
-However, it will not make any effort to make existing pods match a new, different pod template.
-To update pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).
+as the old and new `.spec.selector` are the same, then the new one will adopt the old Pods.
+However, it will not make any effort to make existing Pods match a new, different pod template.
+To update Pods to a new spec in a controlled way, use a [rolling update](#rolling-updates).

-### Isolating pods from a ReplicaSet
+### Isolating Pods from a ReplicaSet

-You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove pods
+You can remove Pods from a ReplicaSet by changing their labels. This technique may be used to remove Pods
from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (
assuming that the number of replicas is not also changed).
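
For instance, relabeling one Pod so it no longer matches the selector detaches it from the ReplicaSet (a sketch; the Pod name is illustrative, and a replacement Pod will be created):

```shell
# Overwrite the matching label; the Pod drops out of the selector's scope.
kubectl label pod frontend-9si5l tier=debug --overwrite
```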

### Scaling a ReplicaSet

A ReplicaSet can be easily scaled up or down by simply updating the `.spec.replicas` field. The ReplicaSet controller
-ensures that a desired number of pods with a matching label selector are available and operational.
+ensures that a desired number of Pods with a matching label selector are available and operational.
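
Besides editing `.spec.replicas` in the manifest and re-applying it, the same field can be set imperatively (a quick sketch):

```shell
# Scale the frontend ReplicaSet to 5 replicas.
kubectl scale rs frontend --replicas=5
```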

-### ReplicaSet as an Horizontal Pod Autoscaler Target
+### ReplicaSet as a Horizontal Pod Autoscaler Target

A ReplicaSet can also be a target for
[Horizontal Pod Autoscalers (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/). That is,
@@ -265,7 +300,7 @@ the ReplicaSet we created in the previous example.

Saving this manifest into `hpa-rs.yaml` and submitting it to a Kubernetes cluster should
create the defined HPA that autoscales the target ReplicaSet depending on the CPU usage
-of the replicated pods.
+of the replicated Pods.

```shell
kubectl create -f https://k8s.io/examples/controllers/hpa-rs.yaml
@@ -280,29 +315,29 @@ kubectl autoscale rs frontend --max=10

## Alternatives to ReplicaSet

-### Deployment (Recommended)
+### Deployment (recommended)

[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
them and their Pods via declarative, server-side rolling updates.
-While ReplicaSets can be used independently, today it’s mainly used by Deployments as a mechanism to orchestrate pod
+While ReplicaSets can be used independently, today they're mainly used by Deployments as a mechanism to orchestrate Pod
creation, deletion and updates. When you use Deployments you don’t have to worry about managing the ReplicaSets that
they create. Deployments own and manage their ReplicaSets.
As such, it is recommended to use Deployments when you want ReplicaSets.

### Bare Pods

-Unlike the case where a user directly created pods, a ReplicaSet replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).
+Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker).

### Job

-Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicaSet for pods that are expected to terminate on their own
+Use a [`Job`](/docs/concepts/jobs/run-to-completion-finite-workloads/) instead of a ReplicaSet for Pods that are expected to terminate on their own
(that is, batch jobs).

### DaemonSet

-Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicaSet for pods that provide a
-machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
-to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
+Use a [`DaemonSet`](/docs/concepts/workloads/controllers/daemonset/) instead of a ReplicaSet for Pods that provide a
+machine-level function, such as machine monitoring or machine logging. These Pods have a lifetime that is tied
+to a machine lifetime: the Pod needs to be running on the machine before other Pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

### ReplicationController
15 changes: 0 additions & 15 deletions content/en/examples/controllers/frontend.yaml
@@ -14,23 +14,8 @@ spec:
  template:
    metadata:
      labels:
-        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
-        resources:
-          requests:
-            cpu: 100m
-            memory: 100Mi
-        env:
-        - name: GET_HOSTS_FROM
-          value: dns
-          # If your cluster config does not include a dns service, then to
-          # instead access environment variables to find service host
-          # info, comment out the 'value: dns' line above, and uncomment the
-          # line below.
-          # value: env
-        ports:
-        - containerPort: 80
