Trying to update ReplicaSet Docs as per issue 12081 #12409
Conversation
To start with, I tried to clean up the docs to adhere to the style guide: https://kubernetes.io/docs/contribute/style/style-guide/. I then added a description of the ReplicaSet-Pod link via the owner reference field and the behavior it drives. I also clarified the ReplicaSet demo, which was redundant for the sake of demonstrating the different forms of usage. I consider this draft incomplete, as I still lack knowledge of how the Pod labels affect the behavior.
@juandiegopalomino Thanks for getting this going. I'll review and comment today.
Here's an experiment I would do if I were investigating ReplicaSets. Create a ReplicaSet that has 3 Pods that run gcr.io/google-samples/hello-app:2.0. Verify that each Pod has an ownerReference equal to the ReplicaSet. Next create an individual Pod that has the same labels, but runs a different image, say gcr.io/google-samples/node-hello:1.0. Verify that all 4 Pods continue to run. Now delete one of the Pods from the ReplicaSet. Now you have 3 Pods, all with labels that match the selector in the ReplicaSet.

What happens? Does the ReplicaSet controller create a new Pod? Or does the ReplicaSet controller recruit the singleton Pod by adding an ownerReference to the singleton Pod? If the answer is to recruit the singleton Pod, then we have a weird situation where the Pods in a ReplicaSet are not all running the same image.

OK, so I tried that experiment, and it didn't go the way I expected. Because the singleton Pod had no ownerReference, the ReplicaSet adopted the singleton Pod. At that point, the ReplicaSet had too many Pods, so it killed the singleton Pod.
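For concreteness, here is a sketch of the manifests that experiment implies. The names (hello-rs, hello-singleton) and the label key are invented for illustration; only the images and replica count come from the comment above.

```yaml
# Hypothetical ReplicaSet for the experiment above; names and labels
# are illustrative, not taken from the thread.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:2.0
---
# The singleton Pod: same label, different image. It has no controller
# ownerReference, so the ReplicaSet controller is free to adopt it on a
# later sync; once adopted, the ReplicaSet is over its replica count and
# deletes a Pod to get back down to 3.
apiVersion: v1
kind: Pod
metadata:
  name: hello-singleton
  labels:
    app: hello
spec:
  containers:
  - name: node-hello
    image: gcr.io/google-samples/node-hello:1.0
```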
@@ -10,17 +10,27 @@ weight: 10

{{% capture overview %}}

-ReplicaSet is the next-generation Replication Controller. The only difference
+ReplicaSet is a type of Replication Controller. The only difference
ReplicaSet is a replacement for the old ReplicationController. So it's not really a type of replication controller. We can probably remove all mention of ReplicationController from this topic, except for one statement that mentions ReplicationController as a thing of the past.
done
## How a ReplicaSet Works

Just like a ReplicationController, a ReplicaSet's purpose is to maintain a stable set of replica Pods available at any
given time. It does so by creating and deleting pods as needed to reach the desired number. The link a ReplicaSet has
Our style is to capitalize the names of Kubernetes objects. So Pod should be capitalized throughout the topic.
fixed
-While ReplicaSets can be used independently, today it's mainly used by
-[Deployments](/docs/concepts/workloads/controllers/deployment/) as a mechanism to orchestrate pod
-creation, deletion and updates. When you use Deployments you don't have to worry
+While ReplicaSets can be used independently, it could also be used by other resources. For example,
I think rather than saying ReplicaSets can be used by Deployments, we should say that creating a Deployment is the recommended way of using a ReplicaSet.
done
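For reference, here is a minimal sketch of that recommended path; the names are illustrative. Creating this Deployment causes the Deployment controller to create and own a ReplicaSet, which in turn manages the Pods:

```yaml
# Minimal Deployment sketch; hello-deployment and the labels are invented.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:2.0
```

Updating `.spec.template` here rolls out a new ReplicaSet rather than mutating the existing one, which is a big part of why Deployments are the recommended way to use ReplicaSets.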
Here's an experiment that illustrates how things can go wrong. Create a Pod that has label `steve: perry`, and runs gcr.io/google-samples/hello-app:2.0. Create a second Pod that has label `steve: perry`, and runs gcr.io/google-samples/node-hello:1.0. Now create this ReplicaSet.

What happens? Now delete the ReplicaSet. All three Pods get deleted. The first two Pods never signed up to be part of the ReplicaSet. But they got adopted, and now die along with the ReplicaSet. What can we learn from this pathological example?
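The ReplicaSet manifest attached to that comment didn't survive extraction; below is a hedged reconstruction of what the experiment implies. Pod and ReplicaSet names are invented, and `replicas: 3` is inferred from "All three Pods get deleted."

```yaml
# Two bare Pods carrying the label the ReplicaSet will select on.
apiVersion: v1
kind: Pod
metadata:
  name: bare-pod-1
  labels:
    steve: perry
spec:
  containers:
  - name: hello-app
    image: gcr.io/google-samples/hello-app:2.0
---
apiVersion: v1
kind: Pod
metadata:
  name: bare-pod-2
  labels:
    steve: perry
spec:
  containers:
  - name: node-hello
    image: gcr.io/google-samples/node-hello:1.0
---
# A ReplicaSet whose selector matches the bare Pods. On creation it adopts
# both and creates only one new Pod to reach replicas: 3; deleting the
# ReplicaSet then cascades to all three Pods, adopted or not.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: perry-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      steve: perry
  template:
    metadata:
      labels:
        steve: perry
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:2.0
```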
@steveperry-53 Attempting the experiment you recommended, I found that the ReplicaSet only brings up a lone additional Pod, and sets the owner reference of the first two to itself. This is not desired, as the ReplicaSet is now maintaining Pods which do not behave as specified in the template. I'm still reading over the email thread with Dominik about the initial experiment and its unexpected conclusion.
So based on the two experiments, it appears that the ReplicaSet bases its behavior on the Pods it currently has and those it can immediately inherit. When brought up, it will immediately inherit those it can based on its label selectors prior to creating new Pods; and if it is first brought up and Pods it can inherit appear later, it will treat them as new Pods in its set and tolerate/destroy them accordingly.
I would not use the term "inherit". Instead, I would use the terms "acquire" and "release". The ReplicaSet controller acquires a Pod if the Pod's labels match the ReplicaSet's label selector and the Pod does not have an OwnerReference where .Controller = True. The ReplicaSet releases a Pod if the Pod's labels do not match the ReplicaSet's label selector and the Pod has an OwnerReference to the ReplicaSet where .Controller = True.
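Concretely, an acquired Pod ends up with a controller ownerReference like the following in its metadata; the owning ReplicaSet name and the uid here are placeholders:

```yaml
# Excerpt from an acquired Pod's metadata; values are placeholders.
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend                               # owning ReplicaSet
    uid: d9607e19-f88f-11e6-a518-42010a800195    # placeholder uid
    controller: true        # the .Controller flag discussed above
    blockOwnerDeletion: true
```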
"This is not desired as the replicaset is now maintaining pods which do not behave as specified in the template." This is by design - although counter intuitive in the beginning. |
Here's my big batch of recommendations for this topic. These are just my opinions, so use them, or not, as you see fit.

- Don't talk about ReplicationController. Make one statement, at the end of the topic, about ReplicationController being the old thing.
- Don't talk so much about Deployment. Make one statement that says Deployment is recommended.
- Make the YAML examples more concise. Don't show both matchLabels and matchExpressions in the same YAML file.
- If you do show commands and output, our style is to separate the command from its output: "Enter this command to create the ReplicaSet:" ... "Describe the ReplicaSet:" ... "The output shows blah blah:"
- Don't show output just for the sake of it. Only show the portion of the output that illustrates the ideas you are talking about.
- In a concept topic, if you find yourself showing a bunch of commands and output, consider splitting those off into a Task topic. Example Task topic
- Some of the text in the current topic is not needed. For example:
- Change "Writing a ReplicaSet Spec" to "Writing a ReplicaSet Manifest". The manifest has a spec field, but it's confusing to call the entire manifest a spec.
- The current topic has several sections that talk about the fields of a ReplicaSet manifest. It would be better to shorten all that text, and have the shortened text refer to a nearby example manifest. For example, "In the manifest you can see that every member Pod must have the
- As an example of text that wanders, look at the Pod Template section. The section gets sidetracked by talking about overlapping labels and local container restarts.
- Here's a false statement:
- Here's a statement I don't understand:
- Under Working with ReplicaSets, I think we should improve the explanation of propagationPolicy and --cascade. We might get some ideas from the Garbage Collection topic.
- The current topic has a lot of passive voice that we can switch to active voice.
- This statement makes me uncomfortable:
- In general, the text drifts. Example:
This statement makes me uncomfortable:
@dtornow how about calling it an abstraction layer? |
I'm beginning to address the CR by cleaning up references to the ReplicationController and making it clear that RCs are discouraged/old. I then expanded on the behavior of a ReplicaSet in the presence of Pods that it can acquire but that were not created directly by it.
I am generally hesitant to use the terms "abstraction" or "layer". Question: What information do you want to convey when you say "abstraction layer"?
Note to self: double check curl commands
@@ -0,0 +1,23 @@
+apiVersion: v1
I think this file needs to go in content/en/examples/pods. Otherwise the example verification test fails with this: examples_test.go:576: controllers/pod-rs.yaml: pod-rs does not have a test case defined
"As with all other Kubernetes API objects," etc... is present in the sibling concepts/workloads/controllers/ files, so I am hesitant to change that w/o changing the others, but I did abbreviate it. "The `.spec.template` is the only required field of the `.spec`." is false, we also need the selector Trying to address passive voice Cleaned up Writing a ReplicaSet Manifest section removed How to use a ReplicaSet section as it has redundant info from the examples and Alternatives section Expanded examples a bit Cleared up passive voice
@steveperry-53 @chenopis ok, just tried to address all the remaining points we discussed. Please let me know if I missed one or made an error.
Deploy preview for kubernetes-io-master-staging ready! Built with commit 12d901c. https://deploy-preview-12409--kubernetes-io-master-staging.netlify.com
- Capitalize Pod throughout.
- Link is not rendering correctly. Use () instead of [] for the path.
- Ending with "for the creation" seems vague to me. Maybe this: "...reach the desired number. When a ReplicaSet needs to create new Pods, it uses its Pod template."
- Suggestion: "is via the Pod's metadata.ownerReferences field." That way the reader won't jump to the incorrect conclusion that we're talking about the ReplicaSet's metadata.ownerReferences field.
- "with fields, including a selector that"
- "and plans accordingly"
- Our style for headings is sentence case. So this heading would be "How a ReplicaSet works". Several headings in this topic need to be converted to sentence case.

cleaned up frontend.yaml example
added example checking the Pod's owner reference being set to its parent ReplicaSet
@steveperry-53 my pleasure. I believe my latest commit covers your new comments. As always, please let me know of any flaws. |
Suggestion: In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or ...
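A minimal sketch of what that constraint looks like in practice; if the template labels don't satisfy the selector, the API server rejects the manifest. The frontend name and image follow the frontend.yaml example mentioned in this thread, but the exact file contents here are assumed:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend      # .spec.selector ...
  template:
    metadata:
      labels:
        tier: frontend    # ... must be satisfied by .spec.template.metadata.labels
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
```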
imperative whereas Deployments are declarative, so we recommend using Deployments
through the [`rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout) command.
The link a ReplicaSet has to its Pods is via the Pods' [metadata.ownerReferences](/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents)
field, which specifies what resource the current object is owned by. All _Pods_ acquired by a ReplicaSet have their owning
No italics for Pods. So change _Pods_ to Pods.
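One way to check that ownership link on a live cluster is to read the ownerReferences back; the Pod name below is a placeholder for one listed by `kubectl get pods`:

```shell
# Print the kind/name of the Pod's controlling owner.
# "frontend-9si5l" is a placeholder Pod name.
kubectl get pod frontend-9si5l -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
```

For a Pod managed by the frontend ReplicaSet, this should print something like `ReplicaSet/frontend`.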
kubectl get Pods
```

You should see Pod information similar to
Colon after "similar to"
kubectl get Pods
```

The output shows that the new Pods are either already terminated, or in the process of being terminated
Colon after "being terminated"
kubectl create -f http://k8s.io/examples/controllers/frontend.yaml
```

We shall see that the ReplicaSet has acquired the Pods and has only created new ones according to its spec until the
Use "You" instead of "We".
@@ -157,7 +252,8 @@ If you do not specify `.spec.replicas`, then it defaults to 1.

To delete a ReplicaSet and all of its Pods, use [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands#delete). The [Garbage collector](/docs/concepts/workloads/controllers/garbage-collection/) automatically deletes all of the dependent Pods by default.

-When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option. e.g. :
+When using the REST API or the `client-go` library, you must set `propagationPolicy` to `Background` or `Foreground` in delete option.
Change "in delete option" to "in the -d
option".
### Pod Template

While you can create bare Pods with no problems, it is strongly recommended to make sure that the bare Pods do not have
labels which match the selector of one of you ReplicaSets. The reason for this is because a ReplicaSet is not limited
Change "one of you" to "one of your".
For [restart policy](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy), the only allowed value for `.spec.template.spec.restartPolicy` is `Always`, which is the default.

Suppose you create the Pods after the frontend ReplicaSet has been deployed and had set up its initial Pod replicas to
Change "and had set" to "and has set".
@juandiegopalomino, Looks good. Just a few more nitpicky comments.
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: steveperry-53. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
-in a similar fashion as `kubectl rolling-update`. Deployments are recommended if you want this rolling update functionality,
-because unlike `kubectl rolling-update`, they are declarative, server-side, and have additional features. For more information on running a stateless
-application using a Deployment, please read [Run a Stateless Application Using a Deployment](/docs/tasks/run-application/run-stateless-application-deployment/).
+[`Deployment`](/docs/concepts/workloads/controllers/deployment/) is an object which can own ReplicaSets and update
BTW, @dtornow found out that Deployments do ensure that the Pods in the resulting ReplicaSets match the image spec listed in the Deployment manifest. That's one big difference between ReplicaSets by themselves vs being used via a Deployment.
/lgtm
I still have to investigate the actual mechanics of this: does the Deployment controller actually compare the Pods' specs, or does it rely on the hash in the label?
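The label in question is presumably pod-template-hash, which the Deployment controller adds to every ReplicaSet (and its Pods) that it creates. You can see it with:

```shell
# ReplicaSets created by a Deployment carry a pod-template-hash label.
kubectl get rs --show-labels
```

That hash is computed from the Pod template, which is why editing a Deployment's template results in a new ReplicaSet rather than in-place changes to the old one.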
fixes #12081 |
Preview: https://deploy-preview-12409--kubernetes-io-master-staging.netlify.com/docs/concepts/workloads/controllers/replicaset/
cc @steveperry-53 @chenopis