diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index cb08f4b8b8228..2a5bd33039cb0 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -35,8 +35,10 @@ specific Pods:
 ## Node labels {#built-in-node-labels}
 
 Like many other Kubernetes objects, nodes have
-[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
-Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) on all nodes in a cluster.
+[labels](/docs/concepts/overview/working-with-objects/labels/). You can
+[attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
+Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/)
+on all nodes in a cluster.
 
 {{< note >}}
 The value of these labels is cloud provider specific and is not guaranteed to be reliable.
@@ -294,17 +296,23 @@ Pod affinity rule uses the "hard"
 `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 
-The affinity rule says that the scheduler can only schedule a Pod onto a node if
-the node is in the same zone as one or more existing Pods with the label
-`security=S1`. More precisely, the scheduler must place the Pod on a node that has the
-`topology.kubernetes.io/zone=V` label, as long as there is at least one node in
-that zone that currently has one or more Pods with the Pod label `security=S1`.
-
-The anti-affinity rule says that the scheduler should try to avoid scheduling
-the Pod onto a node that is in the same zone as one or more Pods with the label
-`security=S2`. More precisely, the scheduler should try to avoid placing the Pod on a node that has the
-`topology.kubernetes.io/zone=R` label if there are other nodes in the
-same zone currently running Pods with the `Security=S2` Pod label.
+The affinity rule specifies that the scheduler is allowed to place the example Pod
+on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+where other Pods have been labeled with `security=S1`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
+assign the Pod to any node within Zone V, as long as there is at least one Pod within
+Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
+labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
+
+The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
+on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+where other Pods have been labeled with `security=S2`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
+assigning the Pod to any node within Zone R, as long as there is at least one Pod within
+Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
+scheduling into Zone R if there are no Pods with `security=S2` labels.
 
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
 refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
@@ -318,7 +326,8 @@ to learn more about how these work.
 In principle, the `topologyKey` can be any allowed label key with the following
 exceptions for performance and security reasons:
 
-- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
+- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both
+  `requiredDuringSchedulingIgnoredDuringExecution`
   and `preferredDuringSchedulingIgnoredDuringExecution`.
 - For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules,
   the admission controller `LimitPodHardAntiAffinityTopology` limits
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 0029b6c20b44d..06d3c5bb67d99 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -244,9 +244,8 @@ The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By
 default `emptyDir` volumes are stored on whatever medium that backs the node
 such as disk, SSD, or network storage, depending on your environment. If you
 set the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
-filesystem) for you instead. While tmpfs is very fast, be aware that unlike
-disks, tmpfs is cleared on node reboot and any files you write count against
-your container's memory limit.
+filesystem) for you instead. While tmpfs is very fast, be aware that, unlike
+disks, files you write count against the memory limit of the container that wrote them.
 
 A size limit can be specified for the default medium, which limits the capacity
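For reference on the `volumes.md` hunk above, a minimal manifest along these lines exercises the `emptyDir.medium: Memory` behavior the new wording describes; the Pod name, image, mount path, and size are illustrative assumptions, not taken from the page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo                     # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory     # mount a tmpfs (RAM-backed filesystem) instead of node-backed storage
      sizeLimit: 500Mi   # optional cap on the volume's capacity
```

With `medium: Memory`, anything written under `/cache` counts against the memory limit of the container that wrote it, which is the point the reworded sentence makes.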
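Similarly, the rewritten affinity/anti-affinity paragraphs in `assign-pod-node.md` describe a Pod that combines a "hard" `podAffinity` term on `security=S1` with a "soft" `podAntiAffinity` term on `security=S2`, both scoped by `topology.kubernetes.io/zone`. A sketch of such a manifest follows; the Pod name, image, and weight are assumptions, and the page's own example file may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity                               # illustrative name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # the "hard" rule
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # the "soft" rule
      - weight: 100                                     # assumed weight
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:3.9                    # placeholder image
```

The `topologyKey` is what makes both rules zone-scoped: candidate nodes are grouped by the value of their `topology.kubernetes.io/zone` label, matching the "Zone V" / "Zone R" walkthrough in the new text.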