From a56576be0a8cc9aa7748cddd91d0cbe9a8d08030 Mon Sep 17 00:00:00 2001
From: oculushut
Date: Tue, 29 Aug 2023 18:53:56 +0100
Subject: [PATCH 1/7] removes "tmpfs is cleared on node reboot"

I believe the statement is confusing since we are in the emptyDir section of
the documentation. If a Node is restarted, all Pods that resided on that node
will be rescheduled onto another Node. Rescheduled Pods will have an empty
volume whether you choose emptyDir.medium "Memory" or not.
---
 content/en/docs/concepts/storage/volumes.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 0029b6c20b44d..7ca289d2c53c0 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -244,9 +244,8 @@ The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By
 default `emptyDir` volumes are stored on whatever medium that backs the node
 such as disk, SSD, or network storage, depending on your environment. If you set
 the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
-filesystem) for you instead. While tmpfs is very fast, be aware that unlike
-disks, tmpfs is cleared on node reboot and any files you write count against
-your container's memory limit.
+filesystem) for you instead. While tmpfs is very fast, be aware that, unlike
+disks, files you write count against your container's memory limit.
 
 A size limit can be specified for the default medium, which limits the capacity

From 0f86275ead656accdb3b6500d17a537c107db38b Mon Sep 17 00:00:00 2001
From: oculushut
Date: Tue, 29 Aug 2023 19:00:45 +0100
Subject: [PATCH 2/7] removes bad comma

---
 content/en/docs/concepts/storage/volumes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 7ca289d2c53c0..6d9f4a9560521 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -244,7 +244,7 @@ The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By
 default `emptyDir` volumes are stored on whatever medium that backs the node
 such as disk, SSD, or network storage, depending on your environment. If you set
 the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
-filesystem) for you instead. While tmpfs is very fast, be aware that, unlike
+filesystem) for you instead. While tmpfs is very fast be aware that, unlike
 disks, files you write count against your container's memory limit.

From abb1fc27c6f48f494ab3520af5f94845fd35793f Mon Sep 17 00:00:00 2001
From: oculushut
Date: Wed, 30 Aug 2023 10:01:49 +0100
Subject: [PATCH 3/7] improves memory limit description

Co-authored-by: Tim Bannister
---
 content/en/docs/concepts/storage/volumes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 6d9f4a9560521..06d3c5bb67d99 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -245,7 +245,7 @@ default `emptyDir` volumes are stored on whatever medium that backs the node
 such as disk, SSD, or network storage, depending on your environment. If you set
 the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
 filesystem) for you instead. While tmpfs is very fast be aware that, unlike
-disks, files you write count against your container's memory limit.
+disks, files you write count against the memory limit of the container that wrote them.
 
 A size limit can be specified for the default medium, which limits the capacity
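For context, the paragraph these three patches revise applies to a volume declared like the one below. This is a minimal illustrative sketch, not part of the patches; the Pod, container, and volume names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo                # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      limits:
        memory: 256Mi             # files written to the tmpfs count against this limit
    volumeMounts:
    - name: scratch
      mountPath: /cache
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # mount a tmpfs (RAM-backed filesystem)
      sizeLimit: 128Mi            # optional cap on the volume's capacity
```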
From 88ea5f582e9d76cca4244556735f926cd69ad8a6 Mon Sep 17 00:00:00 2001
From: oculushut
Date: Wed, 30 Aug 2023 12:23:11 +0100
Subject: [PATCH 4/7] removes incorrect explanation for topologyKey in an
 affinity or anti-affinity rule (#1)

The existing text does not make sense to me. There is no zone "V" or "R" in
the example. I have changed the text to be consistent with the top answer
here, which seems to make more sense:
https://stackoverflow.com/questions/72240224/what-is-topologykey-in-pod-affinity
---
 .../concepts/scheduling-eviction/assign-pod-node.md | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index cb08f4b8b8228..4caa14411cf6a 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -295,16 +295,12 @@ Pod affinity rule uses the "hard"
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 
 The affinity rule says that the scheduler can only schedule a Pod onto a node if
-the node is in the same zone as one or more existing Pods with the label
-`security=S1`. More precisely, the scheduler must place the Pod on a node that has the
-`topology.kubernetes.io/zone=V` label, as long as there is at least one node in
-that zone that currently has one or more Pods with the Pod label `security=S1`.
+the node is in the same zone (`topologyKey: topology.kubernetes.io/zone`) as one or more existing Pods with the label
+`security=S1`.
 
 The anti-affinity rule says that the scheduler should try to avoid scheduling
-the Pod onto a node that is in the same zone as one or more Pods with the label
-`security=S2`. More precisely, the scheduler should try to avoid placing the Pod on a node that has the
-`topology.kubernetes.io/zone=R` label if there are other nodes in the
-same zone currently running Pods with the `Security=S2` Pod label.
+the Pod onto a node that is in the same zone (`topologyKey: topology.kubernetes.io/zone`) as one or more Pods with the label
+`security=S2`.
 
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
 refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
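To make the `topologyKey` reference concrete, an affinity stanza of the kind the revised prose describes looks like the following sketch. The Pod name and image are illustrative, not taken from the patch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity             # illustrative name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # the "hard" rule
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1                      # match existing Pods labeled security=S1
        topologyKey: topology.kubernetes.io/zone        # "same zone" is defined by this label key
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```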
From dec1d83d7c82dcf64b5e62291ab802bc0afb300c Mon Sep 17 00:00:00 2001
From: oculushut
Date: Thu, 21 Sep 2023 16:40:41 +0100
Subject: [PATCH 5/7] gives more context to node label configuration

---
 .../docs/concepts/scheduling-eviction/assign-pod-node.md | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 4caa14411cf6a..9de952c1826f6 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -294,13 +294,9 @@ Pod affinity rule uses the "hard"
 `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 
-The affinity rule says that the scheduler can only schedule a Pod onto a node if
-the node is in the same zone (`topologyKey: topology.kubernetes.io/zone`) as one or more existing Pods with the label
-`security=S1`.
+The affinity rule specifies that the scheduler is allowed to place the example Pod on a node only if that node belongs to a specific [zone](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S1`. For instance, if we have a cluster with a designated zone, let's call it "Zone V," consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can assign the Pod to any node within Zone V, as long as there is at least one Pod within Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1` labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
 
-The anti-affinity rule says that the scheduler should try to avoid scheduling
-the Pod onto a node that is in the same zone (`topologyKey: topology.kubernetes.io/zone`) as one or more Pods with the label
-`security=S2`.
+The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod on a node if that node belongs to a specific [zone](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S2`. For instance, if we have a cluster with a designated zone, let's call it "Zone R," consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid assigning the Pod to any node within Zone R, as long as there is at least one Pod within Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact scheduling into Zone R if there are no Pods with `security=S2` labels.
 
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
 refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
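The "Zone V" and "Zone R" scenarios introduced here assume node and Pod labels along these lines. This is an illustrative sketch: the node and Pod names are hypothetical, and in practice nodes are usually labeled by a cloud provider or with `kubectl label nodes` rather than from a manifest:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: node-a                        # hypothetical node in "Zone V"
  labels:
    topology.kubernetes.io/zone: V    # the label key that `topologyKey` matches on
---
apiVersion: v1
kind: Pod
metadata:
  name: existing-pod                  # hypothetical Pod already running in Zone V
  labels:
    security: S1                      # the label the affinity rule selects on
spec:
  nodeName: node-a                    # placed on a node in Zone V
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```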
From 5102afca35f8a0a73ebba4f1435429cec4e2f03e Mon Sep 17 00:00:00 2001
From: oculushut
Date: Tue, 3 Oct 2023 12:17:59 +0100
Subject: [PATCH 6/7] manually wraps text and replaces absolute links with
 relative ones

---
 .../scheduling-eviction/assign-pod-node.md | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 9de952c1826f6..3b9cb78b9d354 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -294,9 +294,21 @@ Pod affinity rule uses the "hard"
 `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 
-The affinity rule specifies that the scheduler is allowed to place the example Pod on a node only if that node belongs to a specific [zone](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S1`. For instance, if we have a cluster with a designated zone, let's call it "Zone V," consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can assign the Pod to any node within Zone V, as long as there is at least one Pod within Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1` labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
-
-The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod on a node if that node belongs to a specific [zone](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S2`. For instance, if we have a cluster with a designated zone, let's call it "Zone R," consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid assigning the Pod to any node within Zone R, as long as there is at least one Pod within Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact scheduling into Zone R if there are no Pods with `security=S2` labels.
+The affinity rule specifies that the scheduler is allowed to place the example Pod
+on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S1`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
+assign the Pod to any node within Zone V, as long as there is at least one Pod within
+Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1`
+labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
+
+The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
+on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S2`.
+For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
+consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
+assigning the Pod to any node within Zone R, as long as there is at least one Pod within
+Zone R already labeled with `security=S2`. Conversely, the anti-affinity rule does not impact
+scheduling into Zone R if there are no Pods with `security=S2` labels.
 
 To get yourself more familiar with the examples of Pod affinity and anti-affinity,
 refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md).
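The "soft" side of the example, the anti-affinity rule these wrapped paragraphs describe, corresponds to a stanza like the sketch below (Pod name and image are illustrative). Because the term is `preferredDuringSchedulingIgnoredDuringExecution`, its `weight` influences node scoring; the scheduler prefers nodes that satisfy the term but can still place the Pod elsewhere:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-anti-affinity        # illustrative name
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # the "soft" rule
      - weight: 100                   # scoring preference, not a hard filter
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2                    # avoid zones running Pods labeled security=S2
          topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```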
From d3057c10737e39f210b614c9c13543510eef0013 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Mon, 9 Oct 2023 21:49:37 +0800
Subject: [PATCH 7/7] Update assign-pod-node.md

---
 .../scheduling-eviction/assign-pod-node.md | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 3b9cb78b9d354..2a5bd33039cb0 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -35,8 +35,10 @@ specific Pods:
 ## Node labels {#built-in-node-labels}
 
 Like many other Kubernetes objects, nodes have
-[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
-Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) on all nodes in a cluster.
+[labels](/docs/concepts/overview/working-with-objects/labels/). You can
+[attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
+Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/)
+on all nodes in a cluster.
 
 {{< note >}}
 The value of these labels is cloud provider specific and is not guaranteed to be reliable.
@@ -295,7 +297,8 @@ Pod affinity rule uses the "hard"
 uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
 
 The affinity rule specifies that the scheduler is allowed to place the example Pod
-on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S1`.
+on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+where other Pods have been labeled with `security=S1`.
 For instance, if we have a cluster with a designated zone, let's call it "Zone V,"
 consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can
 assign the Pod to any node within Zone V, as long as there is at least one Pod within
@@ -303,7 +306,8 @@ Zone V already labeled with `security=S1`. Conversely, if there are no Pods with
 labels in Zone V, the scheduler will not assign the example Pod to any node in that zone.
 
 The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod
-on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/) where other Pods have been labeled with `security=S2`.
+on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/)
+where other Pods have been labeled with `security=S2`.
 For instance, if we have a cluster with a designated zone, let's call it "Zone R,"
 consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid
 assigning the Pod to any node within Zone R, as long as there is at least one Pod within
@@ -322,7 +326,8 @@ to learn more about how these work.
 In principle, the `topologyKey` can be any allowed label key with the following
 exceptions for performance and security reasons:
 
-- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
+- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both
+  `requiredDuringSchedulingIgnoredDuringExecution`
   and `preferredDuringSchedulingIgnoredDuringExecution`.
 - For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules, the
   admission controller `LimitPodHardAntiAffinityTopology` limits