
Trying to update ReplicaSet Docs as per issue 12081 #12409

Merged
merged 9 commits into from
Feb 6, 2019
138 changes: 114 additions & 24 deletions content/en/docs/concepts/workloads/controllers/replicaset.md
Expand Up @@ -10,17 +10,30 @@ weight: 10

{{% capture overview %}}

ReplicaSet is the next-generation Replication Controller. The only difference
between a _ReplicaSet_ and a
[_Replication Controller_](/docs/concepts/workloads/controllers/replicationcontroller/) right now is
the selector support. ReplicaSet supports the new set-based selector requirements
as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
whereas a Replication Controller only supports equality-based selector requirements.
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often
used to guarantee the availability of a specified number of identical pods.
Capitalize Pod throughout.



{{% /capture %}}

{{% capture body %}}

## How a ReplicaSet Works
Our style for headings is sentence case. So this heading would be "How a ReplicaSet works".

Several headings in this topic need to be converted to sentence case.


A ReplicaSet is defined with fields, including a selector that specifies how to identify Pods it can acquire, a number
"with fields, including a selector that ..."

of replicas indicating how many Pods it should be maintaining, and a pod template specifying the data of new Pods
it should bring up to meet the number of replicas criteria. A ReplicaSet then fulfills its purpose by creating
and deleting Pods as needed to reach the desired number, using its given pod template for the creation.

The link a ReplicaSet has to its Pods is via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
Link is not rendering correctly. Use () instead of [] for the path.

@steveperry-53 steveperry-53 Jan 30, 2019

Suggestion: "is via the Pod's metadata.ownerReferences field." That way the reader won't jump to the incorrect conclusion that we're talking about the ReplicaSet's metadata.ownerReferences field.

Also, I think we need a small example of a Pod spec that shows the ownerReferences field.

kind: Pod
metadata:
  name: my-pod
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: my-repset

field, which specifies one resource as dependent on another. All Pods acquired by a ReplicaSet have their owning
ReplicaSet's identifying information within their ownerReferences field. It's through this link that the ReplicaSet
knows the state of the Pods it is maintaining and can plan accordingly.
Either
"and plans accordingly."
or
"and can plan accordingly."


A ReplicaSet identifies new Pods to acquire by using its selector. If there is a Pod that has no OwnerReference or the
Either
"A ReplicaSet ... using its selector."
or
"ReplicaSets ... using their selectors."

OwnerReference is not a controller and it matches a ReplicaSet's selector, it will be immediately acquired by said
Lower case for controller.

ReplicaSet.
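
For illustration, a bare Pod like the following sketch (the name and image are placeholders, not from this page) would be acquired on creation by a running ReplicaSet whose selector matches `tier: frontend`:

```yaml
# A bare Pod with no ownerReferences field. If a ReplicaSet whose
# selector matches tier=frontend is already running, the ReplicaSet
# will acquire this Pod as soon as it is created.
apiVersion: v1
kind: Pod
metadata:
  name: orphan-pod          # hypothetical name
  labels:
    tier: frontend          # matches the ReplicaSet's selector
spec:
  containers:
  - name: hello
    image: gcr.io/google-samples/hello-app:1.0
```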

## How to use a ReplicaSet

Most [`kubectl`](/docs/user-guide/kubectl/) commands that support
Expand All @@ -32,11 +45,11 @@ instead. Also, the
imperative whereas Deployments are declarative, so we recommend using Deployments
through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command.

While ReplicaSets can be used independently, today it's mainly used by
[Deployments](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod
creation, deletion and updates. When you use Deployments you don't have to worry
about managing the ReplicaSets that they create. Deployments own and manage
their ReplicaSets.
While ReplicaSets can be used independently, they can also be used by other resources. For example,
I think rather than saying ReplicaSets can be used by Deployments, we should say that creating a Deployment is the recommended way of using a ReplicaSet.

done

[Deployments](/docs/concepts/workloads/controllers/deployment/) are the recommended way of using ReplicaSets.
When you use Deployments you don't have to worry about managing the ReplicaSets that they create. Deployments own
and manage their ReplicaSets.
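
For instance, a minimal Deployment sketch (names and label values are placeholders, not from this page); applying it causes the Deployment controller to create and manage a ReplicaSet on your behalf:

```yaml
# Applying this Deployment creates a ReplicaSet behind the scenes;
# the ReplicaSet in turn creates and maintains three Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment   # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend        # must match the selector above
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:1.0
```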


## When to use a ReplicaSet

Expand All @@ -53,13 +66,20 @@ use a Deployment instead, and define your application in the spec section.

{{< codenew file="controllers/frontend.yaml" >}}

Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster should
Saving this manifest into `frontend.yaml` and submitting it to a Kubernetes cluster will
create the defined ReplicaSet and the pods that it manages.

```shell
$ kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
replicaset.apps/frontend created
$ kubectl describe rs/frontend
kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
```

You can then check on the state of the ReplicaSet:
```shell
kubectl describe rs/frontend
```

And you will see output similar to:
```shell
Name: frontend
Namespace: default
Selector: tier=frontend,tier in (frontend)
Expand Down Expand Up @@ -88,14 +108,80 @@ Events:
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-qhloh
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-dnjpy
1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-9si5l
$ kubectl get pods
```

And lastly, checking on the Pods brought up:
```shell
kubectl get pods
```

Will yield Pod information similar to:
```shell
NAME READY STATUS RESTARTS AGE
frontend-9si5l 1/1 Running 0 1m
frontend-dnjpy 1/1 Running 0 1m
frontend-qhloh 1/1 Running 0 1m
```

## Writing a ReplicaSet Spec
## Non-Template Acquisitions

A ReplicaSet is not limited to owning pods specified by its template. Take the previous frontend ReplicaSet example,
Before you show the example of non-template acquisitions, I think you should make a statement that recommends against manually creating Pods that match some ReplicaSet's selector. Then the example will serve as a "here's why".

and the Pods specified in the following manifest:

{{< codenew file="controllers/pod-rs.yaml" >}}

As those Pods do not have a controller (or any object) as their owner reference and match the selector of the frontend
Lower case for controller.

ReplicaSet, they will immediately be acquired by it.

Creating the Pods after the frontend ReplicaSet has been deployed and set up its initial Pod replicas to fulfill its
@steveperry-53 steveperry-53 Jan 30, 2019

Splitting this sentence with a command in the middle doesn't work for me. I got lost trying to parse the first portion of the sentence.

Suggestion: Suppose the frontend ReplicaSet is already deployed, and then you enter this command to create the Pods:
kubectl create ...
The ReplicaSet will immediately acquire the Pods and terminate them, because the ReplicaSet already has its three Pods.

Fetch the Pods:
kubectl get pods

The output shows ...
...
Suppose you create the Pods first:
kubectl create ...
Then create the ReplicaSet
kubectl create ...
You can see that ...
kubectl get pods
The output shows...:
...

replica count requirement:

```shell
kubectl create -f http://k8s.io/examples/controllers/pod-rs.yaml
```

Will cause the new Pods to be acquired by the ReplicaSet, then immediately terminated, as the ReplicaSet would be over
its desired count. Fetching the Pods:
```shell
kubectl get pods
```

Will show that the new Pods are either already terminated, or in the process of being terminated:
```shell
NAME READY STATUS RESTARTS AGE
frontend-9si5l 1/1 Running 0 1m
frontend-dnjpy 1/1 Running 0 1m
frontend-qhloh 1/1 Running 0 1m
pod2 0/1 Terminating 0 4s
```

If you create the Pods first:
```shell
kubectl create -f http://k8s.io/examples/controllers/pod-rs.yaml
```

And then create the ReplicaSet:
```shell
kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
```

You can see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
Use "You" instead of "We".

number of its new Pods and the original Pods matches its desired count. Fetching the Pods:
```shell
kubectl get pods
```

Will reveal output similar to:
```shell
NAME READY STATUS RESTARTS AGE
frontend-pxj4r 1/1 Running 0 5s
pod1 1/1 Running 0 13s
pod2 1/1 Running 0 13s
```

In this manner, a ReplicaSet can own a non-homogeneous set of Pods.
At this point, reinforce the idea that you don't want to create Pods that match the selector of a current or future ReplicaSet. Be specific, instead of saying "non-homogeneous", I suggest saying something like, " a set of Pods that are running different images."


## Writing a ReplicaSet Manifest

As with all other Kubernetes API objects, a ReplicaSet needs the `apiVersion`, `kind`, and `metadata` fields. For
general information about working with manifests, see [object management using kubectl](/docs/concepts/overview/object-management-kubectl/overview/).
Expand All @@ -104,8 +190,8 @@ A ReplicaSet also needs a [`.spec` section](https://git.k8s.io/community/contrib

### Pod Template

The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a
[pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a
[pod](/docs/concepts/workloads/pods/pod/), except that it is nested and does not have an `apiVersion` or `kind`.
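
As a sketch of this nesting (the name and label values are placeholders, not from this page):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset       # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:                 # a pod template: same schema as a Pod,
    metadata:               # but with no apiVersion or kind of its own
      labels:
        tier: frontend      # must satisfy the selector above
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:1.0
```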

In addition to required fields of a pod, a pod template in a ReplicaSet must specify appropriate
Expand All @@ -130,8 +216,8 @@ be rejected by the API.

In Kubernetes 1.9 the API version `apps/v1` on the ReplicaSet kind is the current version and is enabled by default. The API version `apps/v1beta2` is deprecated.

Also you should not normally create any pods whose labels match this selector, either directly, with
another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it
created the other pods. Kubernetes does not stop you from doing this.

If you do end up with multiple controllers that have overlapping selectors, you
Expand Down Expand Up @@ -183,7 +269,7 @@ To update pods to a new spec in a controlled way, use a [rolling update](#rollin

### Isolating pods from a ReplicaSet

Pods may be removed from a ReplicaSet's target set by changing their labels. This technique may be used to remove pods
from service for debugging, data recovery, etc. Pods that are removed in this way will be replaced automatically (
assuming that the number of replicas is not also changed).

Expand Down Expand Up @@ -241,6 +327,10 @@ machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied
to a machine lifetime: the pod needs to be running on the machine before other pods start, and are
safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

### ReplicationController
ReplicaSets are the successors to [_ReplicationControllers_](/docs/concepts/workloads/controllers/replicationcontroller/).
The two serve the same purpose, and behave similarly, except that a ReplicationController does not support set-based
selector requirements as described in the [labels user guide](/docs/concepts/overview/working-with-objects/labels/#label-selectors).
As such, ReplicaSets are preferred over ReplicationControllers.
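
For example, a ReplicaSet can use set-based `matchExpressions` requirements that a ReplicationController cannot express (the label keys and values here are placeholders for illustration):

```yaml
selector:
  matchExpressions:
  - {key: tier, operator: In, values: [frontend, cache]}   # set-based: matches any listed value
  - {key: environment, operator: NotIn, values: [dev]}     # set-based: excludes listed values
```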

{{% /capture %}}
2 changes: 0 additions & 2 deletions content/en/examples/controllers/frontend.yaml
Expand Up @@ -11,8 +11,6 @@ spec:
selector:
matchLabels:
tier: frontend
matchExpressions:
- {key: tier, operator: In, values: [frontend]}
template:
metadata:
labels:
@steveperry-53 steveperry-53 Jan 30, 2019
Strong suggestion: Remove all the extraneous fields from this ReplicaSet manifest. Remove the entire resources section and the entire env section. Also remove the ports section. The containerPort field is problematic, because people get the wrong impression about what it does. It actually does nothing; it's just informational. But people tend to think that it causes the container to listen on a particular port. It doesn't.

Also, unless you can think of a reason to have the app: guestbook label, I would remove it. That label is not part of the selector, so I think it just distracts.

23 changes: 23 additions & 0 deletions content/en/examples/controllers/pod-rs.yaml
@@ -0,0 +1,23 @@
apiVersion: v1
I think this file needs to go in content/en/examples/pods. Otherwise the example verification test fails with this: examples_test.go:576: controllers/pod-rs.yaml: pod-rs does not have a test case defined

kind: Pod
metadata:
name: pod1
labels:
tier: frontend
spec:
containers:
- name: hello1
image: gcr.io/google-samples/hello-app:2.0

---

apiVersion: v1
kind: Pod
metadata:
name: pod2
labels:
tier: frontend
spec:
containers:
- name: hello2
image: gcr.io/google-samples/hello-app:1.0