diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index 6c8efc4bd80c6..7f8470e8616ae 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -24,6 +24,7 @@ aliases:
- jimangel
- jlbutler
- kbhawkey
+ - natalisucks
- nate-double-u # RT 1.24 Docs Lead
- onlydole
- pi-victor
@@ -40,6 +41,7 @@ aliases:
- jimangel
- kbhawkey
- mehabhalodiya
+ - natalisucks
- onlydole
- rajeshdeshpande02
- sftim
@@ -147,8 +149,11 @@ aliases:
- divya-mohan0209
- jimangel
- kbhawkey
+ - natalisucks
- onlydole
+ - reylejano
- sftim
+ - tengqm
sig-docs-zh-owners: # Admins for Chinese content
# chenopis
- chenrui333
@@ -241,7 +246,6 @@ aliases:
# authoritative source: https://git.k8s.io/sig-release/OWNERS_ALIASES
sig-release-leads:
- cpanato # SIG Technical Lead
- - hasheddan # SIG Technical Lead
- jeremyrickard # SIG Technical Lead
- justaugustus # SIG Chair
- LappleApple # SIG Program Manager
diff --git a/assets/scss/_custom.scss b/assets/scss/_custom.scss
index d46bac0924a1b..1ebe8c81faed2 100644
--- a/assets/scss/_custom.scss
+++ b/assets/scss/_custom.scss
@@ -566,7 +566,8 @@ main.content {
}
}
-/* COMMUNITY */
+/* COMMUNITY legacy styles */
+/* Leave these in place until localizations are caught up */
.newcommunitywrapper {
.news {
diff --git a/content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md b/content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md
new file mode 100644
index 0000000000000..63d0cc5be5f3d
--- /dev/null
+++ b/content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md
@@ -0,0 +1,34 @@
+---
+layout: blog
+title: "Is Your Cluster Ready for v1.24?"
+date: 2022-03-31
+slug: ready-for-dockershim-removal
+---
+
+**Author:** Kat Cosgrove
+
+
+Way back in December of 2020, Kubernetes announced the [deprecation of Dockershim](/blog/2020/12/02/dont-panic-kubernetes-and-docker/). In Kubernetes, dockershim is a software shim that allows you to use the entire Docker engine as your container runtime within Kubernetes. In the upcoming v1.24 release, we are removing Dockershim; the delay between deprecation and removal is in line with the [project’s policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) of supporting features for at least one year after deprecation. If you are a cluster operator, this guide covers the practical realities of what you need to know going into this release, and what you need to do to ensure your cluster doesn’t fall over.
+
+## First, does this even affect you?
+
+If you are rolling your own cluster or are otherwise unsure whether or not this removal affects you, stay on the safe side and [check to see if you have any dependencies on Docker Engine](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/). Please note that using Docker Desktop to build your application containers is not a Docker dependency for your cluster. Container images created by Docker are compliant with the [Open Container Initiative (OCI)](https://opencontainers.org/), a Linux Foundation governance structure that defines industry standards around container formats and runtimes. They will work just fine on any container runtime supported by Kubernetes.
+
+If you are using a managed Kubernetes service from a cloud provider, and you haven’t explicitly changed the container runtime, there may be nothing else for you to do. Amazon EKS, Azure AKS, and Google GKE all default to containerd now, though you should make sure they do not need updating if you have any node customizations. To check the runtime of your nodes, follow [Find Out What Container Runtime is Used on a Node](/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/).
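+
+For a quick check (assuming you have `kubectl` access to the cluster), the wide node listing includes each node's container runtime:
+
+```shell
+# the CONTAINER-RUNTIME column shows the runtime and version, for example containerd:// or docker://
+kubectl get nodes -o wide
+```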
+
+Regardless of whether you are rolling your own cluster or using a managed Kubernetes service from a cloud provider, you may need to [migrate telemetry or security agents that rely on Docker Engine](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/).
+
+## I have a Docker dependency. What now?
+
+If your Kubernetes cluster depends on Docker Engine and you intend to upgrade to Kubernetes v1.24 (which you should eventually do for security and similar reasons), you will need to change your container runtime from Docker Engine to something else or use [cri-dockerd](https://github.com/Mirantis/cri-dockerd). Since [containerd](https://containerd.io/) is a graduated CNCF project and the runtime within Docker itself, it’s a safe bet as an alternative container runtime. Fortunately, the Kubernetes project has already documented the process of [changing a node’s container runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/), using containerd as an example. Instructions are similar for switching to one of the other supported runtimes.
+
+## I want to upgrade Kubernetes, and I need to maintain compatibility with Docker as a runtime. What are my options?
+
+Fear not, you aren’t being left out in the cold and you don’t have to take the security risk of staying on an old version of Kubernetes. Mirantis and Docker have jointly released, and are maintaining, a replacement for dockershim. That replacement is called [cri-dockerd](https://github.com/Mirantis/cri-dockerd). If you do need to maintain compatibility with Docker as a runtime, install cri-dockerd following the instructions in the project’s documentation.
+
+## Is that it?
+
+
+Yes. As long as you go into this release aware of the changes being made and the details of your own clusters, and you make sure to communicate clearly with your development teams, it will be minimally dramatic. You may have some changes to make to your cluster, application code, or scripts, but all of these requirements are documented. Switching from using Docker Engine as your runtime to using [one of the other supported container runtimes](/docs/setup/production-environment/container-runtimes/) effectively means removing the middleman, since the purpose of dockershim is to access the container runtime used by Docker itself. From a practical perspective, this removal is better both for you and for Kubernetes maintainers in the long run.
+
+If you still have questions, please first check the [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/).
diff --git a/content/en/community/_index.html b/content/en/community/_index.html
index c08aa25ae0149..de7fcee9a45ee 100644
--- a/content/en/community/_index.html
+++ b/content/en/community/_index.html
@@ -1,257 +1,183 @@
----
-title: Community
-layout: basic
-cid: community
----
-
-
-
-
-
-
-
-
-
-
The Kubernetes community -- users, contributors, and the culture we've built together -- is one of the biggest reasons for the meteoric rise of this open source project. Our culture and values continue to grow and change as the project itself grows and changes. We all work together toward constant improvement of the project and the ways we work on it.
-
We are the people who file issues and pull requests, attend SIG meetings, Kubernetes meetups, and KubeCon, advocate for its adoption and innovation, run kubectl get pods, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community.
-The Kubernetes Community values are the keystone to the ongoing success of the project.
-These principles guide every aspect of the Kubernetes project.
-
-
-
-The Kubernetes community values respect and inclusiveness, and enforces a Code of Conduct in all interactions. If you notice a violation of the Code of Conduct at an event or meeting, in Slack, or in another communication mechanism, reach out to the Kubernetes Code of Conduct Committee at conduct@kubernetes.io. All reports are kept confidential. You can read about the committee here.
-
-
-
+---
+title: Community
+layout: basic
+cid: community
+community_styles_migrated: true
+---
+
+
+
+
The Kubernetes community — users, contributors, and the culture we've
+ built together — is one of the biggest reasons for the meteoric rise of
+ this open source project. Our culture and values continue to grow and change
+ as the project itself grows and changes. We all work together toward constant
+ improvement of the project and the ways we work on it.
+
We are the people who file issues and pull requests, attend SIG meetings,
+ Kubernetes meetups, and KubeCon, advocate for its adoption and innovation,
+ run kubectl get pods, and contribute in a thousand other vital
+ ways. Read on to learn how you can get involved and become part of this amazing
+ community.
The Kubernetes Community values are the keystone to the ongoing success of the project.
+ These principles guide every aspect of the Kubernetes project.
The Kubernetes community values respect and inclusiveness, and enforces a Code of Conduct in all interactions.
+
If you notice a violation of the Code of Conduct at an event or meeting, in Slack, or in another communication mechanism, reach out to the Kubernetes Code of Conduct Committee at conduct@kubernetes.io. All reports are kept confidential. You can read about the committee in the Kubernetes community repository on GitHub.
If you notice a violation of the Code of Conduct at an event or meeting, in
Slack, or in another communication mechanism, reach out to
the Kubernetes Code of Conduct Committee.
You can reach us by email at conduct@kubernetes.io.
Your anonymity will be protected.
+
+
-
+
{{< include "/static/cncf-code-of-conduct.md" >}}
-
diff --git a/content/en/community/static/OWNERS b/content/en/community/static/OWNERS
new file mode 100644
index 0000000000000..3db354af1468a
--- /dev/null
+++ b/content/en/community/static/OWNERS
@@ -0,0 +1,7 @@
+# See the OWNERS docs at https://go.k8s.io/owners
+
+# Disable inheritance to encourage careful review of any changes here.
+options:
+ no_parent_owners: true
+approvers:
+- sig-docs-leads
diff --git a/content/en/community/static/README.md b/content/en/community/static/README.md
index ef8e8d5a3e6bc..bc44990c077f8 100644
--- a/content/en/community/static/README.md
+++ b/content/en/community/static/README.md
@@ -1,2 +1,5 @@
The files in this directory have been imported from other sources. Do not
-edit them directly, except by replacing them with new versions.
\ No newline at end of file
+edit them directly, except by replacing them with new versions.
+
+Localization note: you do not need to create localized versions of any of
+the files in this directory.
\ No newline at end of file
diff --git a/content/en/community/values.md b/content/en/community/values.md
index 4ae1fe30b6d55..675e93c865b71 100644
--- a/content/en/community/values.md
+++ b/content/en/community/values.md
@@ -1,13 +1,18 @@
---
-title: Community
+title: Kubernetes Community Values
layout: basic
cid: community
-css: /css/community.css
----
-
-
+community_styles_migrated: true
-
+# this page is deprecated
+# canonical page is https://www.kubernetes.dev/community/values/
+sitemap:
+ priority: 0.1
+---
+
{{< include "/static/community-values.md" >}}
-
+
+
+
diff --git a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md
index b27fcdee6153d..713592cf989ec 100644
--- a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md
+++ b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md
@@ -148,7 +148,28 @@ File references on the command line are relative to the current working director
In `$HOME/.kube/config`, relative paths are stored relatively, and absolute paths
are stored absolutely.
+## Proxy
+You can configure `kubectl` to use a proxy for a cluster by setting the `proxy-url` field on that cluster's entry in the kubeconfig file, for example:
+
+```yaml
+apiVersion: v1
+kind: Config
+
+clusters:
+- cluster:
+    proxy-url: https://proxy.host:3128
+  name: development
+
+users:
+- name: developer
+
+contexts:
+- context:
+  name: development
+
+```
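+
+To confirm that `kubectl` picked up the setting, you can print the merged configuration (a quick sanity check, assuming the file above is your active kubeconfig):
+
+```shell
+kubectl config view
+```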
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index 3512ed2098515..c07880a4580d2 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -29,8 +29,7 @@ and possibly a port number as well; for example: `fictional.registry.example:104
If you don't specify a registry hostname, Kubernetes assumes that you mean the Docker public registry.
-After the image name part you can add a _tag_ (as also using with commands such
-as `docker` and `podman`).
+After the image name part you can add a _tag_ (in the same way you would when using commands like `docker` or `podman`).
Tags let you identify different versions of the same series of images.
Image tags consist of lowercase and uppercase letters, digits, underscores (`_`),
@@ -91,7 +90,7 @@ the image's digest;
replace `:` with `@`
(for example, `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`).
-When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image fixes the code that you run so that a change at the registry cannot lead to that mix of versions.
+When using image tags, if the image registry were to change the code that the tag on that image represents, you might end up with a mix of Pods running the old and new code. An image digest uniquely identifies a specific version of the image, so Kubernetes runs the same code every time it starts a container with that image name and digest specified. Specifying an image by digest fixes the code that you run so that a change at the registry cannot lead to that mix of versions.
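+
+As an illustration (the image name here is a placeholder, and the digest is only an example value), a Pod pinned to a digest looks like:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pinned-by-digest
+spec:
+  containers:
+  - name: app
+    # pulling by digest means a later change to any tag in the registry cannot affect this Pod
+    image: registry.example/app@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
+```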
There are third-party [admission controllers](/docs/reference/access-authn-authz/admission-controllers/)
that mutate Pods (and pod templates) when they are created, so that the
@@ -175,95 +174,11 @@ These options are explained in more detail below.
### Configuring nodes to authenticate to a private registry
-If you run Docker on your nodes, you can configure the Docker container
-runtime to authenticate to a private container registry.
+Specific instructions for setting credentials depend on the container runtime and registry you choose to use. Refer to your solution's documentation for the most accurate information.
-This approach is suitable if you can control node configuration.
-
-{{< note >}}
-Default Kubernetes only supports the `auths` and `HttpHeaders` section in Docker configuration.
-Docker credential helpers (`credHelpers` or `credsStore`) are not supported.
-{{< /note >}}
-
-
-Docker stores keys for private registries in the `$HOME/.dockercfg` or `$HOME/.docker/config.json` file. If you put the same file
-in the search paths list below, kubelet uses it as the credential provider when pulling images.
-
-* `{--root-dir:-/var/lib/kubelet}/config.json`
-* `{cwd of kubelet}/config.json`
-* `${HOME}/.docker/config.json`
-* `/.docker/config.json`
-* `{--root-dir:-/var/lib/kubelet}/.dockercfg`
-* `{cwd of kubelet}/.dockercfg`
-* `${HOME}/.dockercfg`
-* `/.dockercfg`
-
-{{< note >}}
-You may have to set `HOME=/root` explicitly in the environment of the kubelet process.
-{{< /note >}}
-
-Here are the recommended steps to configuring your nodes to use a private registry. In this
-example, run these on your desktop/laptop:
-
- 1. Run `docker login [server]` for each set of credentials you want to use. This updates `$HOME/.docker/config.json` on your PC.
- 1. View `$HOME/.docker/config.json` in an editor to ensure it contains only the credentials you want to use.
- 1. Get a list of your nodes; for example:
- - if you want the names: `nodes=$( kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}' )`
- - if you want to get the IP addresses: `nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )`
- 1. Copy your local `.docker/config.json` to one of the search paths list above.
- - for example, to test this out: `for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done`
-
-{{< note >}}
-For production clusters, use a configuration management tool so that you can apply this
-setting to all the nodes where you need it.
-{{< /note >}}
-
-Verify by creating a Pod that uses a private image; for example:
-
-```shell
-kubectl apply -f - <}}
@@ -362,6 +278,8 @@ Kubernetes supports specifying container image registry keys on a Pod.
#### Creating a Secret with a Docker config
+You need to know the username, registry password and client email address for authenticating
+to the registry, as well as its hostname.
Run the following command, substituting the appropriate uppercase values:
```shell
@@ -426,14 +344,13 @@ There are a number of solutions for configuring private registries. Here are so
common use cases and suggested solutions.
1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
- - Use public images on the Docker hub.
+ - Use public images from a public registry.
- No configuration required.
- Some cloud providers automatically cache or mirror public images, which improves availability and reduces the time to pull images.
1. Cluster running some proprietary images which should be hidden to those outside the company, but
visible to all cluster users.
- - Use a hosted private [Docker registry](https://docs.docker.com/registry/).
- - It may be hosted on the [Docker Hub](https://hub.docker.com/signup), or elsewhere.
- - Manually configure .docker/config.json on each node as described above.
+ - Use a hosted private registry.
+ - Manual configuration may be required on the nodes that need to access the private registry.
- Or, run an internal private registry behind your firewall with open read access.
- No Kubernetes configuration is required.
- Use a hosted container image registry service that controls image access
@@ -450,8 +367,6 @@ common use cases and suggested solutions.
If you need access to multiple registries, you can create one secret for each registry.
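+
+For example, a sketch of creating one such Secret (every value below is a placeholder):
+
+```shell
+kubectl create secret docker-registry my-first-registry-key \
+  --docker-server=registry-1.example.com \
+  --docker-username=DUMMY_USERNAME \
+  --docker-password=DUMMY_PASSWORD \
+  --docker-email=DUMMY_EMAIL
+```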
-Kubelet will merge any `imagePullSecrets` into a single virtual `.docker/config.json`
-
## {{% heading "whatsnext" %}}
diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 88f9f1f7828ab..910e1ecc222f8 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -12,158 +12,181 @@ weight: 20
You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can only run on a particular set of
-{{< glossary_tooltip text="Node(s)" term_id="node" >}}.
+{{< glossary_tooltip text="node(s)" term_id="node" >}}.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
-(e.g. spread your pods across nodes so as not place the pod on a node with insufficient free resources, etc.)
-but there are some circumstances where you may want to control which node the pod deploys to - for example to ensure
-that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
+(for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
+However, there are some circumstances where you may want to control which node
+the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
services that communicate a lot into the same availability zone.
-
-## nodeSelector
+You can use any of the following methods to choose where Kubernetes schedules
+specific Pods:
-`nodeSelector` is the simplest recommended form of node selection constraint.
-`nodeSelector` is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible
-to run on a node, the node must have each of the indicated key-value pairs as labels (it can have
-additional labels as well). The most common usage is one key-value pair.
+ * [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
+ * [Affinity and anti-affinity](#affinity-and-anti-affinity)
+ * [nodeName](#nodename) field
-Let's walk through an example of how to use `nodeSelector`.
+## Node labels {#built-in-node-labels}
-### Step Zero: Prerequisites
+Like many other Kubernetes objects, nodes have
+[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
+Kubernetes also populates a standard set of labels on all nodes in a cluster. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/)
+for a list of common node labels.
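+
+For example, you can list nodes along with their labels, and add a label of your own (the label key and value here are only illustrative):
+
+```shell
+kubectl get nodes --show-labels
+kubectl label nodes <your-node-name> disktype=ssd
+```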
-This example assumes that you have a basic understanding of Kubernetes pods and that you have [set up a Kubernetes cluster](/docs/setup/).
+{{< note >}}
+The value of these labels is cloud provider specific and is not guaranteed to be reliable.
+For example, the value of `kubernetes.io/hostname` may be the same as the node name in some environments
+and a different value in other environments.
+{{< /note >}}
-### Step One: Attach label to the node
+### Node isolation/restriction
-Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to, and then run `kubectl label nodes =` to add a label to the node you've chosen. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
+Adding labels to nodes allows you to target Pods for scheduling on specific
+nodes or groups of nodes. You can use this functionality to ensure that specific
+Pods only run on nodes with certain isolation, security, or regulatory
+properties.
-You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label. You can also use `kubectl describe node "nodename"` to see the full list of labels of the given node.
+If you use labels for node isolation, choose label keys that the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
+cannot modify. This prevents a compromised node from setting those labels on
+itself so that the scheduler schedules workloads onto the compromised node.
-### Step Two: Add a nodeSelector field to your pod configuration
+The [`NodeRestriction` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
+prevents the kubelet from setting or modifying labels with a
+`node-restriction.kubernetes.io/` prefix.
-Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
+To make use of that label prefix for node isolation:
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: nginx
- labels:
- env: test
-spec:
- containers:
- - name: nginx
- image: nginx
-```
+1. Ensure you are using the [Node authorizer](/docs/reference/access-authn-authz/node/) and have _enabled_ the `NodeRestriction` admission plugin.
+2. Add labels with the `node-restriction.kubernetes.io/` prefix to your nodes, and use those labels in your [node selectors](#nodeselector).
+ For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`.
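+
+A hypothetical `kubectl` invocation that adds one of these labels to a node named `worker-1` could look like:
+
+```shell
+kubectl label nodes worker-1 example.com.node-restriction.kubernetes.io/fips=true
+```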
-Then add a nodeSelector like so:
+## nodeSelector
-{{< codenew file="pods/pod-nginx.yaml" >}}
+`nodeSelector` is the simplest recommended form of node selection constraint.
+You can add the `nodeSelector` field to your Pod specification and specify the
+[node labels](#built-in-node-labels) you want the target node to have.
+Kubernetes only schedules the Pod onto nodes that have each of the labels you
+specify.
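+
+As a minimal sketch (the `disktype: ssd` label is an assumption for illustration), a Pod that uses `nodeSelector` looks like:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+  # schedule only onto nodes that carry this label
+  nodeSelector:
+    disktype: ssd
+```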
-When you then run `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`,
-the Pod will get scheduled on the node that you attached the label to. You can
-verify that it worked by running `kubectl get pods -o wide` and looking at the
-"NODE" that the Pod was assigned to.
+See [Assign Pods to Nodes](/docs/tasks/configure-pod-container/assign-pods-nodes) for more
+information.
-## Interlude: built-in node labels {#built-in-node-labels}
+## Affinity and anti-affinity
-In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
-with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) for a list of these.
+`nodeSelector` is the simplest way to constrain Pods to nodes with specific
+labels. Affinity and anti-affinity expand the types of constraints you can
+define. Some of the benefits of affinity and anti-affinity include:
-{{< note >}}
-The value of these labels is cloud provider specific and is not guaranteed to be reliable.
-For example, the value of `kubernetes.io/hostname` may be the same as the Node name in some environments
-and a different value in other environments.
-{{< /note >}}
+* The affinity/anti-affinity language is more expressive. `nodeSelector` only
+ selects nodes with all the specified labels. Affinity/anti-affinity gives you
+ more control over the selection logic.
+* You can indicate that a rule is *soft* or *preferred*, so that the scheduler
+ still schedules the Pod even if it can't find a matching node.
+* You can constrain a Pod using labels on other Pods running on the node (or other topological domain),
+ instead of just node labels, which allows you to define rules for which Pods
+ can be co-located on a node.
-## Node isolation/restriction
+The affinity feature consists of two types of affinity:
-Adding labels to Node objects allows targeting pods to specific nodes or groups of nodes.
-This can be used to ensure specific pods only run on nodes with certain isolation, security, or regulatory properties.
-When using labels for this purpose, choosing label keys that cannot be modified by the kubelet process on the node is strongly recommended.
-This prevents a compromised node from using its kubelet credential to set those labels on its own Node object,
-and influencing the scheduler to schedule workloads to the compromised node.
+* *Node affinity* functions like the `nodeSelector` field but is more expressive and
+ allows you to specify soft rules.
+* *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
+ on other Pods.
-The `NodeRestriction` admission plugin prevents kubelets from setting or modifying labels with a `node-restriction.kubernetes.io/` prefix.
-To make use of that label prefix for node isolation:
+### Node affinity
-1. Ensure you are using the [Node authorizer](/docs/reference/access-authn-authz/node/) and have _enabled_ the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction).
-2. Add labels under the `node-restriction.kubernetes.io/` prefix to your Node objects, and use those labels in your node selectors.
-For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`.
+Node affinity is conceptually similar to `nodeSelector`, allowing you to constrain which nodes your
+Pod can be scheduled on based on node labels. There are two types of node
+affinity:
-## Affinity and anti-affinity
+ * `requiredDuringSchedulingIgnoredDuringExecution`: The scheduler can't
+ schedule the Pod unless the rule is met. This functions like `nodeSelector`,
+ but with a more expressive syntax.
+ * `preferredDuringSchedulingIgnoredDuringExecution`: The scheduler tries to
+ find a node that meets the rule. If a matching node is not available, the
+ scheduler still schedules the Pod.
-`nodeSelector` provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity
-feature, greatly expands the types of constraints you can express. The key enhancements are
+{{< note >}}
+In the preceding types, `IgnoredDuringExecution` means that if the node labels
+change after Kubernetes schedules the Pod, the Pod continues to run.
+{{< /note >}}
-1. The affinity/anti-affinity language is more expressive. The language offers more matching rules
- besides exact matches created with a logical AND operation;
-2. you can indicate that the rule is "soft"/"preference" rather than a hard requirement, so if the scheduler
- can't satisfy it, the pod will still be scheduled;
-3. you can constrain against labels on other pods running on the node (or other topological domain),
- rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located
+You can specify node affinities using the `.spec.affinity.nodeAffinity` field in
+your Pod spec.
-The affinity feature consists of two types of affinity, "node affinity" and "inter-pod affinity/anti-affinity".
-Node affinity is like the existing `nodeSelector` (but with the first two benefits listed above),
-while inter-pod affinity/anti-affinity constrains against pod labels rather than node labels, as
-described in the third item listed above, in addition to having the first and second properties listed above.
+For example, consider the following Pod spec:
-### Node affinity
+{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
-Node affinity is conceptually similar to `nodeSelector` -- it allows you to constrain which nodes your
-pod is eligible to be scheduled on, based on labels on the node.
+In this example, the following rules apply:
-There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
-`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
-in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to
-`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
-will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
-to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
-met, the pod continues to run on the node. In the future we plan to offer
-`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution`
-except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
+ * The node *must* have a label with the key `kubernetes.io/e2e-az-name` and
+ the value is either `e2e-az1` or `e2e-az2`.
+ * The node *preferably* has a label with the key `another-node-label-key` and
+ the value `another-node-label-value`.
-Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
-and an example `preferredDuringSchedulingIgnoredDuringExecution` would be "try to run this set of pods in failure
-zone XYZ, but if it's not possible, then allow some to run elsewhere".
+You can use the `operator` field to specify a logical operator for Kubernetes to use when
+interpreting the rules. You can use `In`, `NotIn`, `Exists`, `DoesNotExist`,
+`Gt` and `Lt`.
-Node affinity is specified as field `nodeAffinity` of field `affinity` in the PodSpec.
+`NotIn` and `DoesNotExist` allow you to define node anti-affinity behavior.
+Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
+to repel Pods from specific nodes.
-Here's an example of a pod that uses node affinity:
+{{< note >}}
+If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied
+for the Pod to be scheduled onto a node.
-{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
+If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity`
+types, then the Pod can be scheduled onto a node if one of the specified `nodeSelectorTerms` can be
+satisfied.
-This node affinity rule says the pod can only be placed on a node with a label whose key is
-`kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
-among nodes that meet that criteria, nodes with a label whose key is `another-node-label-key` and whose
-value is `another-node-label-value` should be preferred.
+If you specify multiple `matchExpressions` associated with a single `nodeSelectorTerms`,
+then the Pod can be scheduled onto a node only if all the `matchExpressions` are
+satisfied.
+{{< /note >}}
-You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
-You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use
-[node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel pods from specific nodes.
+See [Assign Pods to Nodes using Node Affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)
+for more information.
-If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
-to be scheduled onto a candidate node.
+#### Node affinity weight
-If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity` types, then the pod can be scheduled onto a node **if one of the** `nodeSelectorTerms` can be satisfied.
+You can specify a `weight` between 1 and 100 for each instance of the
+`preferredDuringSchedulingIgnoredDuringExecution` affinity type. When the
+scheduler finds nodes that meet all the other scheduling requirements of the Pod, the
+scheduler iterates through every preferred rule that the node satisfies and adds the
+value of the `weight` for that expression to a sum.
-If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node **only if all** `matchExpressions` is satisfied.
+The final sum is added to the score of other priority functions for the node.
+Nodes with the highest total score are prioritized when the scheduler makes a
+scheduling decision for the Pod.
-If you remove or change the label of the node where the pod is scheduled, the pod won't be removed. In other words, the affinity selection works only at the time of scheduling the pod.
+For example, consider the following Pod spec:
-The `weight` field in `preferredDuringSchedulingIgnoredDuringExecution` is in the range 1-100. For each node that meets all of the scheduling requirements (resource request, RequiredDuringScheduling affinity expressions, etc.), the scheduler will compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding MatchExpressions. This score is then combined with the scores of other priority functions for the node. The node(s) with the highest total score are the most preferred.
+{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
+
+If there are two possible nodes that match the
+`requiredDuringSchedulingIgnoredDuringExecution` rule, one with the
+`label-1:key-1` label and another with the `label-2:key-2` label, the scheduler
+considers the `weight` of each node and adds the weight to the other scores for
+that node, and schedules the Pod onto the node with the highest final score.
+
+{{< note >}}
+If you want Kubernetes to successfully schedule the Pods in this example, you
+must have existing nodes with the `kubernetes.io/os=linux` label.
+{{< /note >}}
#### Node affinity per scheduling profile
{{< feature-state for_k8s_version="v1.20" state="beta" >}}
When configuring multiple [scheduling profiles](/docs/reference/scheduling/config/#multiple-profiles), you can associate
-a profile with a Node affinity, which is useful if a profile only applies to a specific set of Nodes.
-To do so, add an `addedAffinity` to the args of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
+a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes.
+To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
in the [scheduler configuration](/docs/reference/scheduling/config/). For example:
```yaml
@@ -188,29 +211,41 @@ profiles:
The `addedAffinity` is applied to all Pods that set `.spec.schedulerName` to `foo-scheduler`, in addition to the
NodeAffinity specified in the PodSpec.
-That is, in order to match the Pod, Nodes need to satisfy `addedAffinity` and the Pod's `.spec.NodeAffinity`.
+That is, in order to match the Pod, nodes need to satisfy `addedAffinity` and
+the Pod's `.spec.NodeAffinity`.
-Since the `addedAffinity` is not visible to end users, its behavior might be unexpected to them. We
-recommend to use node labels that have clear correlation with the profile's scheduler name.
+Since the `addedAffinity` is not visible to end users, its behavior might be
+unexpected to them. Use node labels that have a clear correlation to the
+scheduler profile name.
{{< note >}}
-The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler)
-is not aware of scheduling profiles. For this reason, it is recommended that you keep a scheduler profile, such as the
-`default-scheduler`, without any `addedAffinity`. Then, the Daemonset's Pod template should use this scheduler name.
-Otherwise, some Pods created by the Daemonset controller might remain unschedulable.
+The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler),
+does not support scheduling profiles. When the DaemonSet controller creates
+Pods, the default Kubernetes scheduler places those Pods and honors any
+`nodeAffinity` rules in the DaemonSet controller.
{{< /note >}}
### Inter-pod affinity and anti-affinity
-Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled *based on
-labels on pods that are already running on the node* rather than based on labels on nodes. The rules are of the form
-"this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y".
-Y is expressed as a LabelSelector with an optional associated list of namespaces; unlike nodes, because pods are namespaced
-(and therefore the labels on pods are implicitly namespaced),
-a label selector over pod labels must specify which namespaces the selector should apply to. Conceptually X is a topology domain
-like node, rack, cloud provider zone, cloud provider region, etc. You express it using a `topologyKey` which is the
-key for the node label that the system uses to denote such a topology domain; for example, see the label keys listed above
-in the section [Interlude: built-in node labels](#built-in-node-labels).
+Inter-pod affinity and anti-affinity allow you to constrain which nodes your
+Pods can be scheduled on based on the labels of **Pods** already running on that
+node, instead of the node labels.
+
+Inter-pod affinity and anti-affinity rules take the form "this
+Pod should (or, in the case of anti-affinity, should not) run in an X if that X
+is already running one or more Pods that meet rule Y", where X is a topology
+domain like node, rack, cloud provider zone or region, or similar and Y is the
+rule Kubernetes tries to satisfy.
+
+You express these rules (Y) as [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors)
+with an optional associated list of namespaces. Pods are namespaced objects in
+Kubernetes, so Pod labels also implicitly have namespaces. Any label selectors
+for Pod labels should specify the namespaces in which Kubernetes should look for those
+labels.
+
+You express the topology domain (X) using a `topologyKey`, which is the key for
+the node label that the system uses to denote the domain. For examples, see
+[Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/).
{{< note >}}
Inter-pod affinity and anti-affinity require substantial amount of
@@ -219,76 +254,100 @@ not recommend using them in clusters larger than several hundred nodes.
{{< /note >}}
{{< note >}}
-Pod anti-affinity requires nodes to be consistently labelled, in other words every node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes are missing the specified `topologyKey` label, it can lead to unintended behavior.
+Pod anti-affinity requires nodes to be consistently labelled, in other words,
+every node in the cluster must have an appropriate label matching `topologyKey`.
+If some or all nodes are missing the specified `topologyKey` label, it can lead
+to unintended behavior.
{{< /note >}}
-As with node affinity, there are currently two types of pod affinity and anti-affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
-`preferredDuringSchedulingIgnoredDuringExecution` which denote "hard" vs. "soft" requirements.
-See the description in the node affinity section earlier.
-An example of `requiredDuringSchedulingIgnoredDuringExecution` affinity would be "co-locate the pods of service A and service B
-in the same zone, since they communicate a lot with each other"
-and an example `preferredDuringSchedulingIgnoredDuringExecution` anti-affinity would be "spread the pods from this service across zones"
-(a hard requirement wouldn't make sense, since you probably have more pods than zones).
+#### Types of inter-pod affinity and anti-affinity
+
+Similar to [node affinity](#node-affinity), there are two types of Pod affinity
+and anti-affinity:
+
+ * `requiredDuringSchedulingIgnoredDuringExecution`
+ * `preferredDuringSchedulingIgnoredDuringExecution`
-Inter-pod affinity is specified as field `podAffinity` of field `affinity` in the PodSpec.
-And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity` in the PodSpec.
+For example, you could use
+`requiredDuringSchedulingIgnoredDuringExecution` affinity to tell the scheduler to
+co-locate Pods of two services in the same cloud provider zone because they
+communicate with each other a lot. Similarly, you could use
+`preferredDuringSchedulingIgnoredDuringExecution` anti-affinity to spread Pods
+from a service across multiple cloud provider zones.
-#### An example of a pod that uses pod affinity:
+To use inter-pod affinity, use the `affinity.podAffinity` field in the Pod spec.
+For inter-pod anti-affinity, use the `affinity.podAntiAffinity` field in the Pod
+spec.
+
+#### Pod affinity example {#an-example-of-a-pod-that-uses-pod-affinity}
+
+Consider the following Pod spec:
{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
-The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this example, the
-`podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution`
-while the `podAntiAffinity` is `preferredDuringSchedulingIgnoredDuringExecution`. The
-pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone
-as at least one already-running pod that has a label with key "security" and value "S1". (More precisely, the pod is eligible to run
-on node N if node N has a label with key `topology.kubernetes.io/zone` and some value V
-such that there is at least one node in the cluster with key `topology.kubernetes.io/zone` and
-value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
-rule says that the pod should not be scheduled onto a node if that node is in the same zone as a pod with
-label having key "security" and value "S2". See the
-[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
-for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
-flavor and the `preferredDuringSchedulingIgnoredDuringExecution` flavor.
+This example defines one Pod affinity rule and one Pod anti-affinity rule. The
+Pod affinity rule uses the "hard"
+`requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
+uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.
-The legal operators for pod affinity and anti-affinity are `In`, `NotIn`, `Exists`, `DoesNotExist`.
+The affinity rule says that the scheduler can only schedule a Pod onto a node if
+the node is in the same zone as one or more existing Pods with the label
+`security=S1`. More precisely, the scheduler must place the Pod on a node that has the
+`topology.kubernetes.io/zone=V` label, as long as there is at least one node in
+that zone that currently has one or more Pods with the Pod label `security=S1`.
+
+The anti-affinity rule says that the scheduler should try to avoid scheduling
+the Pod onto a node that is in the same zone as one or more Pods with the label
+`security=S2`. More precisely, the scheduler should try to avoid placing the Pod on a node that has the
+`topology.kubernetes.io/zone=R` label if there are other nodes in the
+same zone currently running Pods with the `security=S2` Pod label.
+
+See the
+[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
+for many more examples of Pod affinity and anti-affinity.
-In principle, the `topologyKey` can be any legal label-key. However,
-for performance and security reasons, there are some constraints on topologyKey:
+You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the
+`operator` field for Pod affinity and anti-affinity.
-1. For pod affinity, empty `topologyKey` is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
-and `preferredDuringSchedulingIgnoredDuringExecution`.
-2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
-and `preferredDuringSchedulingIgnoredDuringExecution`.
-3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
-4. Except for the above cases, the `topologyKey` can be any legal label-key.
+In principle, the `topologyKey` can be any allowed label key with the following
+exceptions for performance and security reasons:
-In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
-of namespaces which the `labelSelector` should match against (this goes at the same level of the definition as `labelSelector` and `topologyKey`).
-If omitted or empty, it defaults to the namespace of the pod where the affinity/anti-affinity definition appears.
+* For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
+ and `preferredDuringSchedulingIgnoredDuringExecution`.
+* For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules,
+ the admission controller `LimitPodHardAntiAffinityTopology` limits
+ `topologyKey` to `kubernetes.io/hostname`. You can modify or disable the
+ admission controller if you want to allow custom topologies.
-All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity and anti-affinity
-must be satisfied for the pod to be scheduled onto a node.
+In addition to `labelSelector` and `topologyKey`, you can optionally specify a list
+of namespaces which the `labelSelector` should match against using the
+`namespaces` field at the same level as `labelSelector` and `topologyKey`.
+If omitted or empty, `namespaces` defaults to the namespace of the Pod where the
+affinity/anti-affinity definition appears.
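+
+A sketch of a Pod whose affinity term restricts the label search to two hypothetical namespaces (`team-a` and `team-b`); the `app=web` label and the container image are also illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: with-namespaced-pod-affinity
+spec:
+  affinity:
+    podAffinity:
+      requiredDuringSchedulingIgnoredDuringExecution:
+      - labelSelector:
+          matchLabels:
+            app: web                         # match Pods carrying this label
+        namespaces: ["team-a", "team-b"]     # only look for matching Pods in these namespaces
+        topologyKey: topology.kubernetes.io/zone
+  containers:
+  - name: nginx
+    image: nginx
+```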
#### Namespace selector
{{< feature-state for_k8s_version="v1.24" state="stable" >}}
-Users can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces.
-The affinity term is applied to the union of the namespaces selected by `namespaceSelector` and the ones listed in the `namespaces` field.
+You can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces.
+The affinity term is applied to the union of the namespaces selected by `namespaceSelector` and the ones listed in the `namespaces` field.
Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and
-null `namespaceSelector` means "this pod's namespace".
+null `namespaceSelector` matches the namespace of the Pod where the rule is defined.
-#### More Practical Use-cases
+#### More practical use-cases
-Interpod Affinity and AntiAffinity can be even more useful when they are used with higher
-level collections such as ReplicaSets, StatefulSets, Deployments, etc. One can easily configure that a set of workloads should
+Inter-pod affinity and anti-affinity can be even more useful when they are used with higher
+level collections such as ReplicaSets, StatefulSets, and Deployments. These
+rules allow you to configure that a set of workloads should
be co-located in the same defined topology; for example, on the same node.
-##### Always co-located in the same node
+Take, for example, a three-node cluster running a web application with an
+in-memory cache like redis. You could use inter-pod affinity and anti-affinity
+to co-locate the web servers with the cache as much as possible.
-In a three node cluster, a web application has in-memory cache such as redis. We want the web-servers to be co-located with the cache as much as possible.
-
-Here is the yaml snippet of a simple redis deployment with three replicas and selector label `app=store`. The deployment has `PodAntiAffinity` configured to ensure the scheduler does not co-locate replicas on a single node.
+In the following example Deployment for the redis cache, the replicas get the label `app=store`. The
+`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas
+with the `app=store` label on a single node. This places each cache replica on a
+separate node.
```yaml
apiVersion: apps/v1
@@ -320,7 +379,10 @@ spec:
image: redis:3.2-alpine
```
-The below yaml snippet of the webserver deployment has `podAntiAffinity` and `podAffinity` configured. This informs the scheduler that all its replicas are to be co-located with pods that have selector label `app=store`. This will also ensure that each web-server replica does not co-locate on a single node.
+The following Deployment for the web servers creates replicas with the label `app=web-store`. The
+Pod affinity rule tells the scheduler to place each replica on a node that has a
+Pod with the label `app=store`. The Pod anti-affinity rule tells the scheduler
+to avoid placing multiple `app=web-store` servers on a single node.
```yaml
apiVersion: apps/v1
@@ -361,56 +423,37 @@ spec:
image: nginx:1.16-alpine
```
-If we create the above two deployments, our three node cluster should look like below.
+Creating the two preceding Deployments results in the following cluster layout,
+where each web server is co-located with a cache, on three separate nodes.
| node-1 | node-2 | node-3 |
|:--------------------:|:-------------------:|:------------------:|
| *webserver-1* | *webserver-2* | *webserver-3* |
| *cache-1* | *cache-2* | *cache-3* |
-As you can see, all the 3 replicas of the `web-server` are automatically co-located with the cache as expected.
-
-```
-kubectl get pods -o wide
-```
-The output is similar to this:
-```
-NAME READY STATUS RESTARTS AGE IP NODE
-redis-cache-1450370735-6dzlj 1/1 Running 0 8m 10.192.4.2 kube-node-3
-redis-cache-1450370735-j2j96 1/1 Running 0 8m 10.192.2.2 kube-node-1
-redis-cache-1450370735-z73mh 1/1 Running 0 8m 10.192.3.1 kube-node-2
-web-server-1287567482-5d4dz 1/1 Running 0 7m 10.192.2.3 kube-node-1
-web-server-1287567482-6f7v5 1/1 Running 0 7m 10.192.4.3 kube-node-3
-web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3.2 kube-node-2
-```
-
-##### Never co-located in the same node
-
-The above example uses `PodAntiAffinity` rule with `topologyKey: "kubernetes.io/hostname"` to deploy the redis cluster so that
-no two instances are located on the same host.
-See [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
-for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique.
+See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure)
+for an example of a StatefulSet configured with anti-affinity for high
+availability, using the same technique as this example.
## nodeName
-`nodeName` is the simplest form of node selection constraint, but due
-to its limitations it is typically not used. `nodeName` is a field of
-PodSpec. If it is non-empty, the scheduler ignores the pod and the
-kubelet running on the named node tries to run the pod. Thus, if
-`nodeName` is provided in the PodSpec, it takes precedence over the
-above methods for node selection.
+`nodeName` is a more direct form of node selection than affinity or
+`nodeSelector`. `nodeName` is a field in the Pod spec. If the `nodeName` field
+is not empty, the scheduler ignores the Pod and the kubelet on the named node
+tries to place the Pod on that node. Using `nodeName` overrules using
+`nodeSelector` or affinity and anti-affinity rules.
Some of the limitations of using `nodeName` to select nodes are:
-- If the named node does not exist, the pod will not be run, and in
+- If the named node does not exist, the Pod will not run, and in
some cases may be automatically deleted.
- If the named node does not have the resources to accommodate the
- pod, the pod will fail and its reason will indicate why,
+ Pod, the Pod will fail and its reason will indicate why,
for example OutOfmemory or OutOfcpu.
- Node names in cloud environments are not always predictable or
stable.
-Here is an example of a pod config file using the `nodeName` field:
+Here is an example of a Pod spec using the `nodeName` field:
```yaml
apiVersion: v1
@@ -424,21 +467,16 @@ spec:
nodeName: kube-01
```
-The above pod will run on the node kube-01.
-
-
+The above Pod will only run on the node `kube-01`.
## {{% heading "whatsnext" %}}
-
-[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods.
-
-The design documents for
-[node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
-and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) contain extra background information about these features.
-
-Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-local resources.
-The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can take part in node-level
-resource allocation decisions.
+* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
+* Read the design docs for [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md)
+ and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md).
+* Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level
+ resource allocation decisions.
+* Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/).
+* Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/).
diff --git a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md
index eebc235084ec4..d5db85dadf761 100644
--- a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md
+++ b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md
@@ -72,7 +72,7 @@ spec:
runtimeClassName: kata-fc
containers:
- name: busybox-ctr
- image: busybox
+ image: busybox:1.28
stdin: true
tty: true
resources:
diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md
index 08b715ac7bc8c..5516306ffa766 100644
--- a/content/en/docs/concepts/services-networking/ingress-controllers.md
+++ b/content/en/docs/concepts/services-networking/ingress-controllers.md
@@ -23,7 +23,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
{{% thirdparty-content %}}
-* [AKS Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
+* [AKS Application Gateway Ingress Controller](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io)-based ingress
controller.
* [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller.
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 0298854137f46..af56668579f89 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -109,12 +109,45 @@ field.
{{< /note >}}
Port definitions in Pods have names, and you can reference these names in the
-`targetPort` attribute of a Service. This works even if there is a mixture
-of Pods in the Service using a single configured name, with the same network
-protocol available via different port numbers.
-This offers a lot of flexibility for deploying and evolving your Services.
-For example, you can change the port numbers that Pods expose in the next
-version of your backend software, without breaking clients.
+`targetPort` attribute of a Service. For example, you can bind the `targetPort`
+of the Service to the Pod port in the following way:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx
+ labels:
+ app.kubernetes.io/name: proxy
+spec:
+ containers:
+ - name: nginx
+ image: nginx:1.14.2
+ ports:
+ - containerPort: 80
+ name: http-web-svc
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-service
+spec:
+ selector:
+ app.kubernetes.io/name: proxy
+ ports:
+ - name: name-of-service-port
+ protocol: TCP
+ port: 80
+ targetPort: http-web-svc
+```
+
+
+This works even if there is a mixture of Pods in the Service using a single
+configured name, with the same network protocol available via different
+port numbers. This offers a lot of flexibility for deploying and evolving
+your Services. For example, you can change the port numbers that Pods expose
+in the next version of your backend software, without breaking clients.
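+
+As a sketch of that scenario (this additional Pod and its image tag are illustrative,
+not part of the manifest above), a newer backend Pod could expose the same named port
+on a different number, and the Service above keeps working unchanged:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-v2                    # hypothetical newer backend Pod
+  labels:
+    app.kubernetes.io/name: proxy   # still selected by nginx-service
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.16.1
+    ports:
+    - containerPort: 8080           # new port number in this version
+      name: http-web-svc            # same port name the Service targets
+```
+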
The default protocol for Services is TCP; you can also use any other
[supported protocol](#protocol-support).
diff --git a/content/en/docs/concepts/services-networking/topology-aware-hints.md b/content/en/docs/concepts/services-networking/topology-aware-hints.md
index 4cc4f4aa5e9af..cd02b4015ca17 100644
--- a/content/en/docs/concepts/services-networking/topology-aware-hints.md
+++ b/content/en/docs/concepts/services-networking/topology-aware-hints.md
@@ -19,6 +19,12 @@ those network endpoints can be routed closer to where it originated.
For example, you can route traffic within a locality to reduce
costs, or to improve network performance.
+{{< note >}}
+The "topology-aware hints" feature is at Beta stage and it is **NOT** enabled
+by default. To try out this feature, you have to enable the `TopologyAwareHints`
+[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
+{{< /note >}}
+
## Motivation
diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md
index 0ee07c7c991be..fb86967114afb 100644
--- a/content/en/docs/concepts/storage/ephemeral-volumes.md
+++ b/content/en/docs/concepts/storage/ephemeral-volumes.md
@@ -107,7 +107,7 @@ metadata:
spec:
containers:
- name: my-frontend
- image: busybox
+ image: busybox:1.28
volumeMounts:
- mountPath: "/data"
name: my-csi-inline-vol
@@ -125,9 +125,17 @@ driver. These attributes are specific to each driver and not
standardized. See the documentation of each CSI driver for further
instructions.
+### CSI driver restrictions
+
+{{< feature-state for_k8s_version="v1.21" state="deprecated" >}}
+
As a cluster administrator, you can use a [PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/) to control which CSI drivers can be used in a Pod, specified with the
[`allowedCSIDrivers` field](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podsecuritypolicyspec-v1beta1-policy).
+{{< note >}}
+PodSecurityPolicy is deprecated and will be removed in the Kubernetes v1.25 release.
+{{< /note >}}
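+
+As a rough sketch of that control (the policy name and CSI driver name below are
+hypothetical, and PodSecurityPolicy itself is deprecated as noted above), such a
+policy could look like:
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: csi-inline-restricted            # hypothetical policy name
+spec:
+  seLinux:
+    rule: RunAsAny
+  runAsUser:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: RunAsAny
+  fsGroup:
+    rule: RunAsAny
+  volumes:
+  - csi                                  # allow inline CSI volumes
+  allowedCSIDrivers:
+  - name: inline.storage.kubernetes.io   # hypothetical CSI driver name
+```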
+
### Generic ephemeral volumes
{{< feature-state for_k8s_version="v1.23" state="stable" >}}
@@ -158,7 +166,7 @@ metadata:
spec:
containers:
- name: my-frontend
- image: busybox
+ image: busybox:1.28
volumeMounts:
- mountPath: "/scratch"
name: scratch-volume
diff --git a/content/en/docs/concepts/storage/projected-volumes.md b/content/en/docs/concepts/storage/projected-volumes.md
index 3b98810f61c23..a498e3b237b54 100644
--- a/content/en/docs/concepts/storage/projected-volumes.md
+++ b/content/en/docs/concepts/storage/projected-volumes.md
@@ -23,7 +23,7 @@ Currently, the following types of volume sources can be projected:
* [`secret`](/docs/concepts/storage/volumes/#secret)
* [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
* [`configMap`](/docs/concepts/storage/volumes/#configmap)
-* `serviceAccountToken`
+* [`serviceAccountToken`](#serviceaccounttoken)
All sources are required to be in the same namespace as the Pod. For more details,
see the [all-in-one volume](https://github.com/kubernetes/design-proposals-archive/blob/main/node/all-in-one-volume.md) design document.
@@ -45,6 +45,7 @@ parameters are nearly the same with two exceptions:
volume source. However, as illustrated above, you can explicitly set the `mode`
for each individual projection.
+## serviceAccountToken projected volumes {#serviceaccounttoken}
When the `TokenRequestProjection` feature is enabled, you can inject the token
for the current [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
into a Pod at a specified path. For example:
@@ -52,8 +53,10 @@ into a Pod at a specified path. For example:
{{< codenew file="pods/storage/projected-service-account-token.yaml" >}}
The example Pod has a projected volume containing the injected service account
-token. This token can be used by a Pod's containers to access the Kubernetes API
-server. The `audience` field contains the intended audience of the
+token. Containers in this Pod can use that token to access the Kubernetes API
+server, authenticating with the identity of
+[the Pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/).
+The `audience` field contains the intended audience of the
token. A recipient of the token must identify itself with an identifier specified
in the audience of the token, and otherwise should reject the token. This field
is optional and it defaults to the identifier of the API server.
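+
+As a minimal sketch (the Pod name, audience value, and paths here are hypothetical
+and not taken from the manifest referenced above), a projected `serviceAccountToken`
+source with an explicit `audience` could look like:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-token-demo                      # hypothetical name
+spec:
+  serviceAccountName: default
+  containers:
+  - name: app
+    image: busybox:1.28
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - name: token-vol
+      mountPath: /var/run/secrets/tokens   # hypothetical mount path
+  volumes:
+  - name: token-vol
+    projected:
+      sources:
+      - serviceAccountToken:
+          audience: vault                  # hypothetical audience identifier
+          expirationSeconds: 3600
+          path: token
+```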
diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md
index 421a293737954..788f592abe331 100644
--- a/content/en/docs/concepts/storage/storage-classes.md
+++ b/content/en/docs/concepts/storage/storage-classes.md
@@ -87,7 +87,7 @@ for provisioning PVs. This field must be specified.
You are not restricted to specifying the "internal" provisioners
listed here (whose names are prefixed with "kubernetes.io" and shipped
alongside Kubernetes). You can also run and specify external provisioners,
-which are independent programs that follow a [specification](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md)
+which are independent programs that follow a [specification](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/volume-provisioning.md)
defined by Kubernetes. Authors of external provisioners have full discretion
over where their code lives, how the provisioner is shipped, how it needs to be
run, what volume plugin it uses (including Flex), etc. The repository
@@ -241,8 +241,8 @@ allowedTopologies:
- matchLabelExpressions:
- key: failure-domain.beta.kubernetes.io/zone
values:
- - us-central1-a
- - us-central1-b
+ - us-central-1a
+ - us-central-1b
```
## Parameters
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 2fdc61c735207..5c290336d7f80 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -251,7 +251,7 @@ metadata:
spec:
containers:
- name: test
- image: busybox
+ image: busybox:1.28
volumeMounts:
- name: config-vol
mountPath: /etc/config
@@ -1128,7 +1128,7 @@ spec:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- image: busybox
+ image: busybox:1.28
command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
volumeMounts:
- name: workdir1
diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
index 62cac0f001f8e..cafe51102bb39 100644
--- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
@@ -69,7 +69,7 @@ takes you through this example in more detail).
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
-# │ │ │ │ │
+# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat
# │ │ │ │ │
# * * * * *
```
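+
+For instance (this CronJob is a hypothetical illustration, not the example
+referenced on this page), the named day-of-week form could be used like this:
+
+```yaml
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: weekly-report              # hypothetical name
+spec:
+  schedule: "0 3 * * sun"          # 03:00 every Sunday, using a named day
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: report
+            image: busybox:1.28
+            command: ["sh", "-c", "echo weekly report"]
+          restartPolicy: OnFailure
+```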
diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md
index 7eec771d7d260..ffb1fbd6146c7 100644
--- a/content/en/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/en/docs/concepts/workloads/controllers/daemonset.md
@@ -107,7 +107,7 @@ If you do not specify either, then the DaemonSet controller will create Pods on
### Scheduled by default scheduler
-{{< feature-state state="stable" for-kubernetes-version="1.17" >}}
+{{< feature-state for_k8s_version="v1.17" state="stable" >}}
A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the
node that a Pod runs on is selected by the Kubernetes scheduler. However,
diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md
index fa91723c5ad08..3aca6e094b7be 100644
--- a/content/en/docs/concepts/workloads/pods/_index.md
+++ b/content/en/docs/concepts/workloads/pods/_index.md
@@ -180,7 +180,7 @@ spec:
spec:
containers:
- name: hello
- image: busybox
+ image: busybox:1.28
command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
restartPolicy: OnFailure
# The pod template ends here
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index 127351a079e99..7b89d9b29d58c 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -63,7 +63,8 @@ different Kubernetes components.
| `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | |
| `AnyVolumeDataSource` | `false` | Alpha | 1.18 | |
| `AppArmor` | `true` | Beta | 1.4 | |
-| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | |
+| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 |
+| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | |
| `CPUManager` | `false` | Alpha | 1.8 | 1.9 |
| `CPUManager` | `true` | Beta | 1.10 | |
| `CPUManagerPolicyAlphaOptions` | `false` | Alpha | 1.23 | |
diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
index 10c1ff80dd125..730973fd82fb3 100644
--- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
+++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
@@ -156,15 +156,15 @@ configuration types to be used during a kubeadm init run.
effect:"NoSchedule"kubeletExtraArgs:v:4
-ignorePreflightErrors:
-- IsPrivilegedUser
-imagePullPolicy:"IfNotPresent"
+ignorePreflightErrors:
+- IsPrivilegedUser
+imagePullPolicy:"IfNotPresent"localAPIEndpoint:advertiseAddress:"10.100.0.1"bindPort:6443certificateKey:"e6a2eb8581237ab72a4f494f30285ec12a9694d750b9785706a83bfcbbbd2204"
-skipPhases:
-- addon/kube-proxy
+skipPhases:
+- addon/kube-proxy---apiVersion:kubeadm.k8s.io/v1beta3kind:ClusterConfiguration
@@ -264,6 +264,109 @@ node only (e.g. the node ip).
+## `BootstrapToken` {#BootstrapToken}
+
+
+**Appears in:**
+
+- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
+
+
+
BootstrapToken describes one bootstrap token, stored as a Secret in the cluster
(The rest of this hunk is a generated HTML field table that lost its markup in extraction: the added rows describe the expires, usages, and groups fields and note that BootstrapTokenString uses the format abcdef.abcdef0123456789, while the matching rows are removed from their previous location on the page.)
diff --git a/content/en/docs/reference/glossary/namespace.md b/content/en/docs/reference/glossary/namespace.md
index 33a97d90a17b7..02381c4ee615e 100644
--- a/content/en/docs/reference/glossary/namespace.md
+++ b/content/en/docs/reference/glossary/namespace.md
@@ -4,7 +4,7 @@ id: namespace
date: 2018-04-12
full_link: /docs/concepts/overview/working-with-objects/namespaces
short_description: >
- An abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster.
+ An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster.
aka:
tags:
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index 2da2fc47be81a..254fded8fd22b 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -39,6 +39,11 @@ complete -F __start_kubectl k
source <(kubectl completion zsh) # setup autocomplete in zsh into the current shell
echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell
```
+### A note on --all-namespaces
+
+Appending `--all-namespaces` happens frequently enough that you should be aware
+of its shorthand:
+
+```shell
+kubectl -A
+```
## Kubectl context and configuration
@@ -97,10 +102,10 @@ kubectl apply -f https://git.io/vPieo # create resource(s) from url
kubectl create deployment nginx --image=nginx # start a single instance of nginx
# create a Job which prints "Hello World"
-kubectl create job hello --image=busybox -- echo "Hello World"
+kubectl create job hello --image=busybox:1.28 -- echo "Hello World"
# create a CronJob that prints "Hello World" every minute
-kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World"
+kubectl create cronjob hello --image=busybox:1.28 --schedule="*/1 * * * *" -- echo "Hello World"
kubectl explain pods # get the documentation for pod manifests
@@ -113,7 +118,7 @@ metadata:
spec:
containers:
- name: busybox
- image: busybox
+ image: busybox:1.28
args:
- sleep
- "1000000"
@@ -125,7 +130,7 @@ metadata:
spec:
containers:
- name: busybox
- image: busybox
+ image: busybox:1.28
args:
- sleep
- "1000"
@@ -315,7 +320,7 @@ kubectl logs my-pod -c my-container --previous # dump pod container logs (s
kubectl logs -f my-pod # stream pod logs (stdout)
kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case)
kubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout)
-kubectl run -i --tty busybox --image=busybox -- sh # Run pod as interactive shell
+kubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell
kubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace
kubectl run nginx --image=nginx # Run pod nginx and write its spec into a file called pod.yaml
--dry-run=client -o yaml > pod.yaml
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index 4ca365258f953..4c3029c1429e4 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -495,9 +495,11 @@ based on setting `securityContext` within the Pod's `.spec`.
## Annotations used for audit
-- [`pod-security.kubernetes.io/exempt`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
-- [`pod-security.kubernetes.io/enforce-policy`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
+- [`authorization.k8s.io/decision`](/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-decision)
+- [`authorization.k8s.io/reason`](/docs/reference/labels-annotations-taints/audit-annotations/#authorization-k8s-io-reason)
- [`pod-security.kubernetes.io/audit-violations`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)
+- [`pod-security.kubernetes.io/enforce-policy`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
+- [`pod-security.kubernetes.io/exempt`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
See more details on the [Audit Annotations](/docs/reference/labels-annotations-taints/audit-annotations/) page.
diff --git a/content/en/docs/reference/labels-annotations-taints/audit-annotations.md b/content/en/docs/reference/labels-annotations-taints/audit-annotations.md
index 5dabbcbdb16ce..a0ef3a1531932 100644
--- a/content/en/docs/reference/labels-annotations-taints/audit-annotations.md
+++ b/content/en/docs/reference/labels-annotations-taints/audit-annotations.md
@@ -56,4 +56,20 @@ that was transgressed as well as the specific policies on the fields that were
violated from the PodSecurity enforcement.
See [Pod Security Standards](/docs/concepts/security/pod-security-standards/)
-for more information.
\ No newline at end of file
+for more information.
+
+## authorization.k8s.io/decision
+
+Example: `authorization.k8s.io/decision: "forbid"`
+
+This annotation indicates whether or not a request was authorized in Kubernetes audit logs.
+
+See [Auditing](/docs/tasks/debug-application-cluster/audit/) for more information.
+
+## authorization.k8s.io/reason
+
+Example: `authorization.k8s.io/decision: "Human-readable reason for the decision"`
+
+This annotation gives reason for the [decision](#authorization-k8s-io-decision) in Kubernetes audit logs.
+
+See [Auditing](/docs/tasks/debug-application-cluster/audit/) for more information.
diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md
index 56ecb7f544846..52696ade4be81 100644
--- a/content/en/docs/setup/best-practices/certificates.md
+++ b/content/en/docs/setup/best-practices/certificates.md
@@ -22,6 +22,8 @@ This page explains the certificates that your cluster requires.
Kubernetes requires PKI for the following operations:
* Client certificates for the kubelet to authenticate to the API server
+* Kubelet [server certificates](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates)
+ for the API server to talk to the kubelets
* Server certificate for the API server endpoint
* Client certificates for administrators of the cluster to authenticate to the API server
* Client certificates for the API server to talk to the kubelets
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md
index 7e375b9a6d4ac..9c389bd865e61 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md
@@ -203,6 +203,7 @@ The DEB and RPM packages shipped with the Kubernetes releases are:
| Package name | Description |
|--------------|-------------|
| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
-| `kubelet` | Installs the kubelet binary in `/usr/bin` and CNI binaries in `/opt/cni/bin`. |
+| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). |
+| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins). |
diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster.md b/content/en/docs/tasks/access-application-cluster/access-cluster.md
index 8e89e12a59f7b..aae96d3e96bcc 100644
--- a/content/en/docs/tasks/access-application-cluster/access-cluster.md
+++ b/content/en/docs/tasks/access-application-cluster/access-cluster.md
@@ -205,35 +205,10 @@ See documentation for other libraries for how they authenticate.
## Accessing the API from a Pod
When accessing the API from a pod, locating and authenticating
-to the apiserver are somewhat different.
+to the API server are somewhat different.
-The recommended way to locate the apiserver within the pod is with
-the `kubernetes.default.svc` DNS name, which resolves to a Service IP which in turn
-will be routed to an apiserver.
-
-The recommended way to authenticate to the apiserver is with a
-[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By kube-system, a pod
-is associated with a service account, and a credential (token) for that
-service account is placed into the filesystem tree of each container in that pod,
-at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
-
-If available, a certificate bundle is placed into the filesystem tree of each
-container at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`, and should be
-used to verify the serving certificate of the apiserver.
-
-Finally, the default namespace to be used for namespaced API operations is placed in a file
-at `/var/run/secrets/kubernetes.io/serviceaccount/namespace` in each container.
-
-From within a pod the recommended ways to connect to API are:
-
- - Run `kubectl proxy` in a sidecar container in the pod, or as a background
- process within the container. This proxies the
- Kubernetes API to the localhost interface of the pod, so that other processes
- in any container of the pod can access it.
- - Use the Go client library, and create a client using the `rest.InClusterConfig()` and `kubernetes.NewForConfig()` functions.
- They handle locating and authenticating to the apiserver. [example](https://git.k8s.io/client-go/examples/in-cluster-client-configuration/main.go)
-
-In each case, the credentials of the pod are used to communicate securely with the apiserver.
+See [Accessing the API from within a Pod](/docs/tasks/run-application/access-api-from-pod/)
+for more details.
## Accessing services running on the cluster
diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
index ca723f73cf881..251bebbaeff4e 100644
--- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
+++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
@@ -240,13 +240,13 @@ The following manifest defines an Ingress that sends traffic to your Service via
following lines at the end:
```yaml
- - path: /v2
- pathType: Prefix
- backend:
- service:
- name: web2
- port:
- number: 8080
+ - path: /v2
+ pathType: Prefix
+ backend:
+ service:
+ name: web2
+ port:
+ number: 8080
```
1. Apply the changes:
diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md
index 408438e1deddb..457fbd6332b43 100644
--- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md
+++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md
@@ -7,14 +7,10 @@ content_type: task
This page shows how to change the reclaim policy of a Kubernetes
PersistentVolume.
-
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
## Why change reclaim policy of a PersistentVolume
@@ -33,55 +29,58 @@ Released phase, where all of its data can be manually recovered.
1. List the PersistentVolumes in your cluster:
- ```shell
- kubectl get pv
- ```
+ ```shell
+ kubectl get pv
+ ```
- The output is similar to this:
+ The output is similar to this:
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s
- pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s
- pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s
+ ```none
+ NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s
+ pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s
+ pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s
+ ```
- This list also includes the name of the claims that are bound to each volume
+ This list also includes the name of the claims that are bound to each volume
for easier identification of dynamically provisioned volumes.
1. Choose one of your PersistentVolumes and change its reclaim policy:
- ```shell
- kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
- ```
-
- where `<your-pv-name>` is the name of your chosen PersistentVolume.
+ ```shell
+ kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ ```
- {{< note >}}
- On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example:
+ where `<your-pv-name>` is the name of your chosen PersistentVolume.
-```cmd
-kubectl patch pv <your-pv-name> -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
-```
+ {{< note >}}
+ On Windows, you must _double_ quote any JSONPath template that contains spaces (not single
+ quote as shown above for bash). This in turn means that you must use a single quote or escaped
+ double quote around any literals in the template. For example:
- {{< /note >}}
+ ```cmd
+ kubectl patch pv <your-pv-name> -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}"
+ ```
+ {{< /note >}}
1. Verify that your chosen PersistentVolume has the right policy:
- ```shell
- kubectl get pv
- ```
+ ```shell
+ kubectl get pv
+ ```
- The output is similar to this:
-
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s
- pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s
- pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s
-
- In the preceding output, you can see that the volume bound to claim
- `default/claim3` has reclaim policy `Retain`. It will not be automatically
- deleted when a user deletes claim `default/claim3`.
+ The output is similar to this:
+ ```none
+ NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
+ pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s
+ pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s
+ pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s
+ ```
+ In the preceding output, you can see that the volume bound to claim
+ `default/claim3` has reclaim policy `Retain`. It will not be automatically
+ deleted when a user deletes claim `default/claim3`.
## {{% heading "whatsnext" %}}
@@ -91,8 +90,8 @@ kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\"
### References {#reference}
* {{< api-reference page="config-and-storage-resources/persistent-volume-v1" >}}
- * Pay attention to the `.spec.persistentVolumeReclaimPolicy` [field](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) of PersistentVolume.
+ * Pay attention to the `.spec.persistentVolumeReclaimPolicy`
+ [field](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec)
+ of PersistentVolume.
* {{< api-reference page="config-and-storage-resources/persistent-volume-claim-v1" >}}
-
-
diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
index 7acbaa9e7d50b..ac9715b9092ca 100644
--- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
@@ -68,7 +68,7 @@ pod/nginx-701339712-e0qfq 1/1 Running 0 35s
You should be able to access the new `nginx` service from other Pods. To access the `nginx` Service from another Pod in the `default` namespace, start a busybox container:
```console
-kubectl run busybox --rm -ti --image=busybox -- /bin/sh
+kubectl run busybox --rm -ti --image=busybox:1.28 -- /bin/sh
```
In your shell, run the following command:
@@ -111,7 +111,7 @@ networkpolicy.networking.k8s.io/access-nginx created
When you attempt to access the `nginx` Service from a Pod without the correct labels, the request times out:
```console
-kubectl run busybox --rm -ti --image=busybox -- /bin/sh
+kubectl run busybox --rm -ti --image=busybox:1.28 -- /bin/sh
```
In your shell, run the command:
@@ -130,7 +130,7 @@ wget: download timed out
You can create a Pod with the correct labels to see that the request is allowed:
```console
-kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh
+kubectl run busybox --rm -ti --labels="access=true" --image=busybox:1.28 -- /bin/sh
```
In your shell, run the command:
diff --git a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md
index 2997508a156f0..e923345d1ab17 100644
--- a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md
+++ b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md
@@ -4,31 +4,27 @@ content_type: task
---
-This page shows how to configure and enable the ip-masq-agent.
-
+This page shows how to configure and enable the `ip-masq-agent`.
## {{% heading "prerequisites" %}}
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
-
-
## IP Masquerade Agent User Guide
-The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
+The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range.
### **Key Terms**
-* **NAT (Network Address Translation)**
- Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
-* **Masquerading**
- A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
-* **CIDR (Classless Inter-Domain Routing)**
- Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
-* **Link Local**
- A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
+* **NAT (Network Address Translation)**
+ Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing.
+* **Masquerading**
+ A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address.
+* **CIDR (Classless Inter-Domain Routing)**
+ Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24.
+* **Link Local**
+ A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation.
The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable.
@@ -36,14 +32,20 @@ The ip-masq-agent configures iptables rules to handle masquerading node/pod IP a
The agent configuration file must be written in YAML or JSON syntax, and may contain three optional keys:
-* **nonMasqueradeCIDRs:** A list of strings in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
-* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default.
-* **resyncInterval:** A time interval at which the agent attempts to reload config from disk. For example: '30s', where 's' means seconds, 'ms' means milliseconds, etc...
+* `nonMasqueradeCIDRs`: A list of strings in
+ [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges.
+* `masqLinkLocal`: A Boolean (true/false) which indicates whether to masquerade traffic to the
+ link local prefix `169.254.0.0/16`. False by default.
+* `resyncInterval`: A time interval at which the agent attempts to reload config from disk.
+ For example: '30s', where 's' means seconds, 'ms' means milliseconds.
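+
+For example, a configuration that sets all three keys could look like this
+(the CIDR values shown are illustrative):
+
+```yaml
+nonMasqueradeCIDRs:
+  - 10.0.0.0/8
+  - 192.168.0.0/16
+masqLinkLocal: false
+resyncInterval: 60s
+```
+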
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent:
-```
+```shell
iptables -t nat -L IP-MASQ-AGENT
+```
+
+```none
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
@@ -52,33 +54,35 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent:
```
-By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) to your cluster:
-
-
+By default, in GCE/Google Kubernetes Engine, if network policy is enabled or
+you are using a cluster CIDR not in the 10.0.0.0/8 range, the `ip-masq-agent`
+will run in your cluster. If you are running in another environment,
+you can add the `ip-masq-agent` [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
+to your cluster.
## Create an ip-masq-agent
To create an ip-masq-agent, run the following kubectl command:
-`
+```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml
-`
+```
You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on.
-`
-kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true
-`
+```shell
+kubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true
+```
More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-sigs/ip-masq-agent)
In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config".
{{< note >}}
-It is important that the file is called config since, by default, that will be used as the key for lookup by the ip-masq-agent:
+It is important that the file is called config since, by default, that will be used as the key for lookup by the `ip-masq-agent`:
-```
+```yaml
nonMasqueradeCIDRs:
- 10.0.0.0/8
resyncInterval: 60s
@@ -87,15 +91,18 @@ resyncInterval: 60s
Run the following command to add the config map to your cluster:
-```
+```shell
kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system
```
-This will update a file located at */etc/config/ip-masq-agent* which is periodically checked every *resyncInterval* and applied to the cluster node.
+This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked every `resyncInterval` and applied to the cluster node.
After the resync interval has expired, you should see the iptables rules reflect your changes:
-```
+```shell
iptables -t nat -L IP-MASQ-AGENT
+```
+
+```none
Chain IP-MASQ-AGENT (1 references)
target prot opt source destination
RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL
@@ -103,9 +110,9 @@ RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent:
MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL
```
-By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set *masqLinkLocal* to true in the config map.
+By default, the link local range (169.254.0.0/16) is also handled by the `ip-masq-agent`, which sets up the appropriate iptables rules. To have the `ip-masq-agent` ignore link local, you can set `masqLinkLocal` to true in the ConfigMap.
-```
+```yaml
nonMasqueradeCIDRs:
- 10.0.0.0/8
resyncInterval: 60s
diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index dc25cc60276e1..152e59cb5dc1e 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -81,13 +81,13 @@ kubectl describe pod liveness-exec
The output indicates that no liveness probes have failed yet:
```
-FirstSeen LastSeen Count From SubobjectPath Type Reason Message
---------- -------- ----- ---- ------------- -------- ------ -------
-24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
-23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
-23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
-23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
-23s 23s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
+Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Scheduled 11s default-scheduler Successfully assigned default/liveness-exec to node01
+ Normal Pulling 9s kubelet, node01 Pulling image "k8s.gcr.io/busybox"
+ Normal Pulled 7s kubelet, node01 Successfully pulled image "k8s.gcr.io/busybox"
+ Normal Created 7s kubelet, node01 Created container liveness
+ Normal Started 7s kubelet, node01 Started container liveness
```
After 35 seconds, view the Pod events again:
@@ -100,14 +100,15 @@ At the bottom of the output, there are messages indicating that the liveness
probes have failed, and the containers have been killed and recreated.
```
-FirstSeen LastSeen Count From SubobjectPath Type Reason Message
---------- -------- ----- ---- ------------- -------- ------ -------
-37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0
-36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulling pulling image "k8s.gcr.io/busybox"
-36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Pulled Successfully pulled image "k8s.gcr.io/busybox"
-36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Created Created container with docker id 86849c15382e; Security:[seccomp=unconfined]
-36s 36s 1 {kubelet worker0} spec.containers{liveness} Normal Started Started container with docker id 86849c15382e
-2s 2s 1 {kubelet worker0} spec.containers{liveness} Warning Unhealthy Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal Scheduled 57s default-scheduler Successfully assigned default/liveness-exec to node01
+ Normal Pulling 55s kubelet, node01 Pulling image "k8s.gcr.io/busybox"
+ Normal Pulled 53s kubelet, node01 Successfully pulled image "k8s.gcr.io/busybox"
+ Normal Created 53s kubelet, node01 Created container liveness
+ Normal Started 53s kubelet, node01 Started container liveness
+ Warning Unhealthy 10s (x3 over 20s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
+ Normal Killing 10s kubelet, node01 Container liveness failed liveness probe, will be restarted
```
Wait another 30 seconds, and verify that the container has been restarted:
diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
index 205628b525609..a463982422656 100644
--- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
+++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
@@ -208,7 +208,6 @@ you need is an existing `docker-compose.yml` file.
- CLI
- [`kompose convert`](#kompose-convert)
- Documentation
- - [Build and Push Docker Images](#build-and-push-docker-images)
- [Alternative Conversions](#alternative-conversions)
- [Labels](#labels)
- [Restart](#restart)
@@ -326,55 +325,6 @@ INFO OpenShift file "foo-buildconfig.yaml" created
If you are manually pushing the OpenShift artifacts using ``oc create -f``, you need to ensure that you push the imagestream artifact before the buildconfig artifact, to workaround this OpenShift issue: https://github.com/openshift/origin/issues/4518 .
{{< /note >}}
-
-
-## Build and Push Docker Images
-
-Kompose supports both building and pushing Docker images. When using the `build` key within your Docker Compose file, your image will:
-
-- Automatically be built with Docker using the `image` key specified within your file
-- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`)
-
-Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml):
-
-```yaml
-version: "2"
-
-services:
- foo:
- build: "./build"
- image: docker.io/foo/bar
-```
-
-Using `kompose up` with a `build` key:
-
-```none
-kompose up
-INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar'
-INFO Building image 'docker.io/foo/bar' from directory 'build'
-INFO Image 'docker.io/foo/bar' from directory 'build' built successfully
-INFO Pushing image 'foo/bar:latest' to registry 'docker.io'
-INFO Attempting authentication credentials 'https://index.docker.io/v1/
-INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io'
-INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
-
-INFO Deploying application in "default" namespace
-INFO Successfully created Service: foo
-INFO Successfully created Deployment: foo
-
-Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
-```
-
-In order to disable the functionality, or choose to use BuildConfig generation (with OpenShift) `--build (local|build-config|none)` can be passed.
-
-```sh
-# Disable building/pushing Docker images
-kompose up --build none
-
-# Generate Build Config artifacts for OpenShift
-kompose up --provider openshift --build build-config
-```
-
## Alternative Conversions
The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts.
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md
index 9653ff05ef1ff..c3922ed0a145b 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md
@@ -110,7 +110,7 @@ specify the `-i`/`--interactive` argument, `kubectl` will automatically attach
to the console of the Ephemeral Container.
```shell
-kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
+kubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo
```
```
@@ -182,7 +182,7 @@ but you need debugging utilities not included in `busybox`. You can simulate
this scenario using `kubectl run`:
```shell
-kubectl run myapp --image=busybox --restart=Never -- sleep 1d
+kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d
```
Run this command to create a copy of `myapp` named `myapp-debug` that adds a
@@ -225,7 +225,7 @@ To simulate a crashing application, use `kubectl run` to create a container
that immediately exits:
```
-kubectl run --image=busybox myapp -- false
+kubectl run --image=busybox:1.28 myapp -- false
```
You can see using `kubectl describe pod myapp` that this container is crashing:
@@ -283,7 +283,7 @@ additional utilities.
As an example, create a Pod using `kubectl run`:
```
-kubectl run myapp --image=busybox --restart=Never -- sleep 1d
+kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d
```
Now use `kubectl debug` to make a copy and change its container image
diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
index 14afc52c24764..c2818940e2b6c 100644
--- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
+++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
@@ -8,14 +8,19 @@ content_type: concept
-For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and similar use cases.
-This API makes information available about resource usage for node and pod, including metrics for CPU and memory.
-If you deploy the Metrics API into your cluster, clients of the Kubernetes API can then query for this information, and
-you can use Kubernetes' access control mechanisms to manage permissions to do so.
+For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and
+similar use cases. This API makes information available about resource usage for node and pod,
+including metrics for CPU and memory. If you deploy the Metrics API into your cluster, clients of
+the Kubernetes API can then query for this information, and you can use Kubernetes' access control
+mechanisms to manage permissions to do so.
-The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and [VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA) use data from the metrics API to adjust workload replicas and resources to meet customer demand.
+The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and
+[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA)
+use data from the metrics API to adjust workload replicas and resources to meet customer demand.
-You can also view the resource metrics using the [`kubectl top`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top) command.
+You can also view the resource metrics using the
+[`kubectl top`](/docs/reference/generated/kubectl/kubectl-commands#top)
+command.
{{< note >}}
The Metrics API, and the metrics pipeline that it enables, only offers the minimum
@@ -59,34 +64,51 @@ Figure 1. Resource Metrics Pipeline
The architecture components, from right to left in the figure, consist of the following:
-* [cAdvisor](https://github.com/google/cadvisor): Daemon for collecting, aggregating and exposing container metrics included in Kubelet.
-* [kubelet](/docs/concepts/overview/components/#kubelet): Node agent for managing container resources. Resource metrics are accessible using the `/metrics/resource` and `/stats` kubelet API endpoints.
-* [Summary API](#summary-api-source): API provided by the kubelet for discovering and retrieving per-node summarized stats available through the `/stats` endpoint.
-* [metrics-server](#metrics-server): Cluster addon component that collects and aggregates resource metrics pulled from each kubelet. The API server serves Metrics API for use by HPA, VPA, and by the `kubectl top` command. Metrics Server is a reference implementation of the Metrics API.
-* [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for workload autoscaling. To make this work in your cluster, you need an API extension server that provides the Metrics API.
+* [cAdvisor](https://github.com/google/cadvisor): Daemon for collecting, aggregating and exposing
+ container metrics included in Kubelet.
+* [kubelet](/docs/concepts/overview/components/#kubelet): Node agent for managing container
+ resources. Resource metrics are accessible using the `/metrics/resource` and `/stats` kubelet
+ API endpoints.
+* [Summary API](#summary-api-source): API provided by the kubelet for discovering and retrieving
+ per-node summarized stats available through the `/stats` endpoint.
+* [metrics-server](#metrics-server): Cluster addon component that collects and aggregates resource
+ metrics pulled from each kubelet. The API server serves Metrics API for use by HPA, VPA, and by
+ the `kubectl top` command. Metrics Server is a reference implementation of the Metrics API.
+* [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for
+ workload autoscaling. To make this work in your cluster, you need an API extension server that
+ provides the Metrics API.
{{< note >}}
cAdvisor supports reading metrics from cgroups, which works with typical container runtimes on Linux.
- If you use a container runtime that uses another resource isolation mechanism, for example virtualization, then that container runtime must support [CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md) in order for metrics to be available to the kubelet.
+ If you use a container runtime that uses another resource isolation mechanism, for example
+ virtualization, then that container runtime must support
+ [CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md)
+ in order for metrics to be available to the kubelet.
{{< /note >}}
-
## Metrics API
-The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to K8s autoscaler components.
+The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage
+for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to
+Kubernetes autoscaler components.
+
+Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier
+reading:
-Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier reading:
```shell
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '.'
```
Here is the same API call using `curl`:
+
```shell
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube
```
-Sample reply:
+
+Sample response:
+
```json
{
"kind": "NodeMetrics",
@@ -104,16 +126,22 @@ Sample reply:
}
}
```
-Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the `kube-system` namespace and piped through `jq` for easier reading:
+
+Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the
+`kube-system` namespace and piped through `jq` for easier reading:
```shell
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube" | jq '.'
```
+
Here is the same API call using `curl`:
+
```shell
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube
```
-Sample reply:
+
+Sample response:
+
```json
{
"kind": "PodMetrics",
@@ -138,47 +166,72 @@ Sample reply:
}
```
-The Metrics API is defined in the [k8s.io/metrics](https://github.com/kubernetes/metrics) repository. You must enable the [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) and register an [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) for the `metrics.k8s.io` API.
+The Metrics API is defined in the [k8s.io/metrics](https://github.com/kubernetes/metrics)
+repository. You must enable the [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/)
+and register an [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/)
+for the `metrics.k8s.io` API.
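+
+As a quick check that this is wired up, you can query the registered APIService; with
+metrics-server the APIService is typically named `v1beta1.metrics.k8s.io`:
+
+```shell
+# Confirm that the Metrics API is registered and reports as Available
+kubectl get apiservices v1beta1.metrics.k8s.io
+```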
-To learn more about the Metrics API, see [resource metrics API design](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md), the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) and the [resource metrics API](https://github.com/kubernetes/metrics#resource-metrics-api).
+To learn more about the Metrics API, see [resource metrics API design](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md),
+the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) and the
+[resource metrics API](https://github.com/kubernetes/metrics#resource-metrics-api).
-{{< note >}} You must deploy the metrics-server or alternative adapter that serves the Metrics API to be able to access it. {{< /note >}}
+{{< note >}}
+You must deploy the metrics-server or an alternative adapter that serves the Metrics API to be
+able to access it.
+{{< /note >}}
## Measuring resource usage
### CPU
-CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers, and 1 hyper-thread on bare-metal Intel processors.
+CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is
+equivalent to 1 vCPU/Core for cloud providers, and 1 hyper-thread on bare-metal Intel processors.
-This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The time window used to calculate CPU is shown under window field in Metrics API.
+This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in
+both Linux and Windows kernels). The time window used to calculate CPU is shown under the
+`window` field in the Metrics API.
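+
+For example, assuming the Metrics API is available and you have `jq` installed, you can read a
+node's CPU usage together with that window (replace `minikube` with the name of one of your nodes):
+
+```shell
+kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '{cpu: .usage.cpu, window: .window}'
+```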
-To learn more about how Kubernetes allocates and measures CPU resources, see [meaning of CPU](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu).
+To learn more about how Kubernetes allocates and measures CPU resources, see
+[meaning of CPU](/docs/concepts/configuration/manage-resources-container/#meaning-of-cpu).
### Memory
Memory is reported as the working set, measured in bytes, at the instant the metric was collected.
-In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate.
+In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under
+memory pressure. However, calculation of the working set varies by host OS, and generally makes
+heavy use of heuristics to produce an estimate.
-The Kubernetes model for a container's working set expects that the container runtime counts anonymous memory associated with the container in question. The working set metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim pages.
+The Kubernetes model for a container's working set expects that the container runtime counts
+anonymous memory associated with the container in question. The working set metric typically also
+includes some cached (file-backed) memory, because the host OS cannot always reclaim pages.
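+
+For example, assuming metrics-server is running, `kubectl top` can show the working set for each
+container in a Pod (the Pod and namespace below are placeholders; substitute your own):
+
+```shell
+kubectl top pod kube-scheduler-minikube --namespace=kube-system --containers
+```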
-To learn more about how Kubernetes allocates and measures memory resources, see [meaning of memory](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory).
+To learn more about how Kubernetes allocates and measures memory resources, see
+[meaning of memory](/docs/concepts/configuration/manage-resources-container/#meaning-of-memory).
## Metrics Server
-The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the HPA and VPA. You can also view these metrics using the `kubectl top` command.
+The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes
+API server through the Metrics API for use by the HPA and VPA. You can also view these metrics
+using the `kubectl top` command.
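+
+If your cluster does not run the metrics-server yet, one common way to install it is to apply the
+manifest published with its releases; check the
+[metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) for the current
+instructions and configuration options:
+
+```shell
+# Install metrics-server from its published manifest, then verify the Deployment
+kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
+kubectl get deployment metrics-server --namespace=kube-system
+```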
+
+The metrics-server uses the Kubernetes API to track nodes and pods in your cluster. The
+metrics-server queries each node over HTTP to fetch metrics. The metrics-server also builds an
+internal view of pod metadata, and keeps a cache of pod health. That cached pod health information
+is available via the extension API that the metrics-server provides.
-The metrics-server uses the Kubernetes API to track nodes and pods in your cluster. The metrics-server queries each node over HTTP to fetch metrics. The metrics-server also builds an internal view of pod metadata, and keeps a cache of pod health. That cached pod health information is available via the extension API that the metrics-server makes available.
+For example, with an HPA query, the metrics-server needs to identify which pods fulfill the label
+selectors in the deployment.
-For example with an HPA query, the metrics-server needs to identify which pods fulfill the label selectors in the deployment.
+The metrics-server calls the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API
+to collect metrics from each node. Depending on the metrics-server version, it uses:
-The metrics-server calls the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API to collect metrics from each node. Depending on the metrics-server version it uses:
* Metrics resource endpoint `/metrics/resource` in version v0.6.0+ or
* Summary API endpoint `/stats/summary` in older versions
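+
+For example, you can read a node's `/metrics/resource` endpoint through the Kubernetes API server
+proxy (replace `minikube` with the name of one of your nodes):
+
+```shell
+kubectl get --raw "/api/v1/nodes/minikube/proxy/metrics/resource"
+```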
-
-To learn more about the metrics-server, see the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server).
+To learn more about the metrics-server, see the
+[metrics-server repository](https://github.com/kubernetes-sigs/metrics-server).
You can also check out the following:
@@ -190,20 +243,25 @@ You can also check out the following:
### Summary API source
-The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at the node, volume, pod and container level, and emits this information in
+The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at the node,
+volume, pod and container level, and emits this information in
the [Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go)
for consumers to read.
Here is an example of a Summary API request for a `minikube` node:
-
```shell
kubectl get --raw "/api/v1/nodes/minikube/proxy/stats/summary"
```
+
Here is the same API call using `curl`:
+
```shell
curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
```
+
{{< note >}}
-The summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint beginning with metrics-server 0.6.x.
-{{< /note >}}
\ No newline at end of file
+The Summary API `/stats/summary` endpoint is replaced by the `/metrics/resource` endpoint
+beginning with metrics-server 0.6.x.
+{{< /note >}}
+
diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md
index 30942b3edecfa..f62515fda0505 100644
--- a/content/en/docs/tasks/job/parallel-processing-expansion.md
+++ b/content/en/docs/tasks/job/parallel-processing-expansion.md
@@ -201,7 +201,7 @@ spec:
spec:
containers:
- name: c
- image: busybox
+ image: busybox:1.28
command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"]
restartPolicy: Never
{% endfor %}
diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
index d36f487794701..e435f392fe49e 100644
--- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md
+++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
@@ -34,11 +34,11 @@ DaemonSet has two update strategy types:
To enable the rolling update feature of a DaemonSet, you must set its
`.spec.updateStrategy.type` to `RollingUpdate`.
-You may want to set
-[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable)
+You may want to set
+[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)
(default to 1),
-[`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds)
-(default to 0) and
+[`.spec.minReadySeconds`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)
+(default to 0) and
[`.spec.updateStrategy.rollingUpdate.maxSurge`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec)
(a beta feature and defaults to 0) as well.
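+
+As a rough sketch, assuming a DaemonSet named `fluentd-elasticsearch` in the `kube-system`
+namespace, you could set these fields with a patch similar to the following (adjust the values to
+suit your rollout):
+
+```shell
+kubectl patch daemonset fluentd-elasticsearch --namespace=kube-system --type=merge \
+  -p '{"spec": {"minReadySeconds": 30, "updateStrategy": {"type": "RollingUpdate", "rollingUpdate": {"maxUnavailable": 1}}}}'
+```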
diff --git a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
index e6e33efe753ef..7b38703c741d0 100644
--- a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
+++ b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md
@@ -189,11 +189,11 @@ kubectl get deployment patch-demo --output yaml
The output shows that the PodSpec in the Deployment has only one Toleration:
-```shell
+```yaml
tolerations:
- - effect: NoSchedule
- key: disktype
- value: ssd
+- effect: NoSchedule
+ key: disktype
+ value: ssd
```
Notice that the `tolerations` list in the PodSpec was replaced, not merged. This is because
diff --git a/content/en/docs/tasks/run-application/access-api-from-pod.md b/content/en/docs/tasks/run-application/access-api-from-pod.md
index 9eb2521f7f43d..d56f624cd561b 100644
--- a/content/en/docs/tasks/run-application/access-api-from-pod.md
+++ b/content/en/docs/tasks/run-application/access-api-from-pod.md
@@ -48,7 +48,8 @@ While running in a Pod, the Kubernetes apiserver is accessible via a Service nam
do this automatically.
The recommended way to authenticate to the API server is with a
-[service account](/docs/tasks/configure-pod-container/configure-service-account/) credential. By default, a Pod
+[service account](/docs/tasks/configure-pod-container/configure-service-account/)
+credential. By default, a Pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that Pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
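+
+As a minimal sketch, from a shell inside such a Pod you could authenticate to the API server with
+that token roughly like this:
+
+```shell
+# Run from inside a Pod: use the mounted service account credentials
+TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
+CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+curl --cacert "$CACERT" --header "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api
+```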
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 04d3e3daaaf01..3e66272e4850e 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -152,7 +152,7 @@ runs in an infinite loop, sending queries to the php-apache service.
```shell
# Run this in a separate terminal
# so that the load generation continues and you can carry on with the rest of the steps
-kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
+kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
```
Now run:
diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md
index 987f3a4116310..cf2386dfc7148 100644
--- a/content/en/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/en/docs/tasks/tools/install-kubectl-linux.md
@@ -52,7 +52,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type:
Validate the kubectl binary against the checksum file:
```bash
- echo "$(}}/bin/windows/amd64/kubectl.exe.sha256"
```
- Validate the kubectl binary against the checksum file:
+ Validate the `kubectl` binary against the checksum file:
- Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded:
@@ -59,7 +59,7 @@ The following methods exist for installing kubectl on Windows:
$($(CertUtil -hashfile .\kubectl.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl.exe.sha256)
```
-1. Append or prepend the kubectl binary folder to your `PATH` environment variable.
+1. Append or prepend the `kubectl` binary folder to your `PATH` environment variable.
1. Test to ensure the version of `kubectl` is the same as downloaded:
@@ -156,13 +156,13 @@ Below are the procedures to set up autocompletion for PowerShell.
1. Validate the binary (optional)
- Download the kubectl-convert checksum file:
+ Download the `kubectl-convert` checksum file:
```powershell
curl -LO "https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe.sha256"
```
- Validate the kubectl-convert binary against the checksum file:
+ Validate the `kubectl-convert` binary against the checksum file:
- Using Command Prompt to manually compare `CertUtil`'s output to the checksum file downloaded:
@@ -177,7 +177,7 @@ Below are the procedures to set up autocompletion for PowerShell.
$($(CertUtil -hashfile .\kubectl-convert.exe SHA256)[1] -replace " ", "") -eq $(type .\kubectl-convert.exe.sha256)
```
-1. Append or prepend the kubectl binary folder to your `PATH` environment variable.
+1. Append or prepend the `kubectl-convert` binary folder to your `PATH` environment variable.
1. Verify plugin is successfully installed
diff --git a/content/en/docs/tutorials/security/apparmor.md b/content/en/docs/tutorials/security/apparmor.md
index 727b26760822b..1e8ead93101ff 100644
--- a/content/en/docs/tutorials/security/apparmor.md
+++ b/content/en/docs/tutorials/security/apparmor.md
@@ -264,7 +264,7 @@ metadata:
spec:
containers:
- name: hello
- image: busybox
+ image: busybox:1.28
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
pod/hello-apparmor-2 created
diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md
index a5b0d5a1d2291..bb9a622c98d7f 100644
--- a/content/en/docs/tutorials/services/source-ip.md
+++ b/content/en/docs/tutorials/services/source-ip.md
@@ -119,7 +119,7 @@ clusterip ClusterIP 10.0.170.92 80/TCP 51s
And hitting the `ClusterIP` from a pod in the same cluster:
```shell
-kubectl run busybox -it --image=busybox --restart=Never --rm
+kubectl run busybox -it --image=busybox:1.28 --restart=Never --rm
```
The output is similar to this:
```
diff --git a/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml
index b37b616e6f7c7..ddfb8104cb946 100644
--- a/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml
+++ b/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: count
- image: busybox
+ image: busybox:1.28
args:
- /bin/sh
- -c
diff --git a/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml b/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml
index 87bd198cfdab7..6b7d1f120106d 100644
--- a/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml
+++ b/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: count
- image: busybox
+ image: busybox:1.28
args:
- /bin/sh
- -c
@@ -22,13 +22,13 @@ spec:
- name: varlog
mountPath: /var/log
- name: count-log-1
- image: busybox
+ image: busybox:1.28
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-2
- image: busybox
+ image: busybox:1.28
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
volumeMounts:
- name: varlog
diff --git a/content/en/examples/admin/logging/two-files-counter-pod.yaml b/content/en/examples/admin/logging/two-files-counter-pod.yaml
index 6ebeb717a1892..31bbed3cf8683 100644
--- a/content/en/examples/admin/logging/two-files-counter-pod.yaml
+++ b/content/en/examples/admin/logging/two-files-counter-pod.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: count
- image: busybox
+ image: busybox:1.28
args:
- /bin/sh
- -c
diff --git a/content/en/examples/admin/resource/limit-range-pod-1.yaml b/content/en/examples/admin/resource/limit-range-pod-1.yaml
index 0457792af94c4..b9bd20d06a2c7 100644
--- a/content/en/examples/admin/resource/limit-range-pod-1.yaml
+++ b/content/en/examples/admin/resource/limit-range-pod-1.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: busybox-cnt01
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
resources:
@@ -16,7 +16,7 @@ spec:
memory: "200Mi"
cpu: "500m"
- name: busybox-cnt02
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
resources:
@@ -24,7 +24,7 @@ spec:
memory: "100Mi"
cpu: "100m"
- name: busybox-cnt03
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"]
resources:
@@ -32,6 +32,6 @@ spec:
memory: "200Mi"
cpu: "500m"
- name: busybox-cnt04
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"]
diff --git a/content/en/examples/admin/resource/limit-range-pod-2.yaml b/content/en/examples/admin/resource/limit-range-pod-2.yaml
index efac440269c6f..40da19c1aee05 100644
--- a/content/en/examples/admin/resource/limit-range-pod-2.yaml
+++ b/content/en/examples/admin/resource/limit-range-pod-2.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: busybox-cnt01
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"]
resources:
@@ -16,7 +16,7 @@ spec:
memory: "200Mi"
cpu: "500m"
- name: busybox-cnt02
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
resources:
@@ -24,7 +24,7 @@ spec:
memory: "100Mi"
cpu: "100m"
- name: busybox-cnt03
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"]
resources:
@@ -32,6 +32,6 @@ spec:
memory: "200Mi"
cpu: "500m"
- name: busybox-cnt04
- image: busybox
+ image: busybox:1.28
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"]
diff --git a/content/en/examples/admin/resource/limit-range-pod-3.yaml b/content/en/examples/admin/resource/limit-range-pod-3.yaml
index 8afdb6379cf61..503200a9662fc 100644
--- a/content/en/examples/admin/resource/limit-range-pod-3.yaml
+++ b/content/en/examples/admin/resource/limit-range-pod-3.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: busybox-cnt01
- image: busybox
+ image: busybox:1.28
resources:
limits:
memory: "300Mi"
diff --git a/content/en/examples/application/job/cronjob.yaml b/content/en/examples/application/job/cronjob.yaml
index 9f06ca7bd6758..78d0e2d314792 100644
--- a/content/en/examples/application/job/cronjob.yaml
+++ b/content/en/examples/application/job/cronjob.yaml
@@ -10,7 +10,7 @@ spec:
spec:
containers:
- name: hello
- image: busybox
+ image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- /bin/sh
diff --git a/content/en/examples/application/job/job-tmpl.yaml b/content/en/examples/application/job/job-tmpl.yaml
index 790025b38b886..d7dbbafd62bc5 100644
--- a/content/en/examples/application/job/job-tmpl.yaml
+++ b/content/en/examples/application/job/job-tmpl.yaml
@@ -13,6 +13,6 @@ spec:
spec:
containers:
- name: c
- image: busybox
+ image: busybox:1.28
command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"]
restartPolicy: Never
diff --git a/content/en/examples/debug/counter-pod.yaml b/content/en/examples/debug/counter-pod.yaml
index f997886386258..a91b2f8915830 100644
--- a/content/en/examples/debug/counter-pod.yaml
+++ b/content/en/examples/debug/counter-pod.yaml
@@ -5,6 +5,6 @@ metadata:
spec:
containers:
- name: count
- image: busybox
+ image: busybox:1.28
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go
index ac50cdfe4c641..eaf6e5808e2c3 100644
--- a/content/en/examples/examples_test.go
+++ b/content/en/examples/examples_test.go
@@ -556,6 +556,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"pod-projected-svc-token": {&api.Pod{}},
"pod-rs": {&api.Pod{}, &api.Pod{}},
"pod-single-configmap-env-variable": {&api.Pod{}},
+ "pod-with-affinity-anti-affinity": {&api.Pod{}},
"pod-with-node-affinity": {&api.Pod{}},
"pod-with-pod-affinity": {&api.Pod{}},
"pod-with-toleration": {&api.Pod{}},
diff --git a/content/en/examples/pods/init-containers.yaml b/content/en/examples/pods/init-containers.yaml
index 667b03eccd2b0..e55895d673f38 100644
--- a/content/en/examples/pods/init-containers.yaml
+++ b/content/en/examples/pods/init-containers.yaml
@@ -14,7 +14,7 @@ spec:
# These containers are run during pod initialization
initContainers:
- name: install
- image: busybox
+ image: busybox:1.28
command:
- wget
- "-O"
diff --git a/content/en/examples/pods/inject/dependent-envars.yaml b/content/en/examples/pods/inject/dependent-envars.yaml
index 2509c6f47b56d..67d07098baec6 100644
--- a/content/en/examples/pods/inject/dependent-envars.yaml
+++ b/content/en/examples/pods/inject/dependent-envars.yaml
@@ -10,7 +10,7 @@ spec:
command:
- sh
- -c
- image: busybox
+ image: busybox:1.28
env:
- name: SERVICE_PORT
value: "80"
diff --git a/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml
new file mode 100644
index 0000000000000..a7d14b2d6f755
--- /dev/null
+++ b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml
@@ -0,0 +1,32 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: with-affinity-anti-affinity
+spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kubernetes.io/os
+ operator: In
+ values:
+ - linux
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - weight: 1
+ preference:
+ matchExpressions:
+ - key: label-1
+ operator: In
+ values:
+ - key-1
+ - weight: 50
+ preference:
+ matchExpressions:
+ - key: label-2
+ operator: In
+ values:
+ - key-2
+ containers:
+ - name: with-node-affinity
+ image: k8s.gcr.io/pause:2.0
\ No newline at end of file
diff --git a/content/en/examples/pods/pod-with-node-affinity.yaml b/content/en/examples/pods/pod-with-node-affinity.yaml
index 253d2b21ea917..e077f79883eff 100644
--- a/content/en/examples/pods/pod-with-node-affinity.yaml
+++ b/content/en/examples/pods/pod-with-node-affinity.yaml
@@ -8,11 +8,10 @@ spec:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- - key: kubernetes.io/e2e-az-name
+ - key: kubernetes.io/os
operator: In
values:
- - e2e-az1
- - e2e-az2
+ - linux
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
diff --git a/content/en/examples/pods/security/hello-apparmor.yaml b/content/en/examples/pods/security/hello-apparmor.yaml
index 3e9b3b2a9c6be..000645f1c72c9 100644
--- a/content/en/examples/pods/security/hello-apparmor.yaml
+++ b/content/en/examples/pods/security/hello-apparmor.yaml
@@ -9,5 +9,5 @@ metadata:
spec:
containers:
- name: hello
- image: busybox
+ image: busybox:1.28
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
diff --git a/content/en/examples/pods/security/security-context.yaml b/content/en/examples/pods/security/security-context.yaml
index 35cb1eeebe60a..7903c39c6467c 100644
--- a/content/en/examples/pods/security/security-context.yaml
+++ b/content/en/examples/pods/security/security-context.yaml
@@ -12,7 +12,7 @@ spec:
emptyDir: {}
containers:
- name: sec-ctx-demo
- image: busybox
+ image: busybox:1.28
command: [ "sh", "-c", "sleep 1h" ]
volumeMounts:
- name: sec-ctx-vol
diff --git a/content/en/examples/pods/share-process-namespace.yaml b/content/en/examples/pods/share-process-namespace.yaml
index af812732a247a..bd48bf0ff6e18 100644
--- a/content/en/examples/pods/share-process-namespace.yaml
+++ b/content/en/examples/pods/share-process-namespace.yaml
@@ -8,7 +8,7 @@ spec:
- name: nginx
image: nginx
- name: shell
- image: busybox
+ image: busybox:1.28
securityContext:
capabilities:
add:
diff --git a/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml b/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml
index 270db99dcd76b..453dc08c0c7d9 100644
--- a/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml
+++ b/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: container-test
- image: busybox
+ image: busybox:1.28
volumeMounts:
- name: all-in-one
mountPath: "/projected-volume"
diff --git a/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml b/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml
index f69b43161ebf6..b921fd93c5833 100644
--- a/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml
+++ b/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: container-test
- image: busybox
+ image: busybox:1.28
volumeMounts:
- name: all-in-one
mountPath: "/projected-volume"
diff --git a/content/en/examples/pods/storage/projected-service-account-token.yaml b/content/en/examples/pods/storage/projected-service-account-token.yaml
index 3ad06b5dc7d6e..cc307659a78ef 100644
--- a/content/en/examples/pods/storage/projected-service-account-token.yaml
+++ b/content/en/examples/pods/storage/projected-service-account-token.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: container-test
- image: busybox
+ image: busybox:1.28
volumeMounts:
- name: token-vol
mountPath: "/service-account"
diff --git a/content/en/examples/pods/storage/projected.yaml b/content/en/examples/pods/storage/projected.yaml
index 172ca0dee52de..4244048eb7558 100644
--- a/content/en/examples/pods/storage/projected.yaml
+++ b/content/en/examples/pods/storage/projected.yaml
@@ -5,7 +5,7 @@ metadata:
spec:
containers:
- name: test-projected-volume
- image: busybox
+ image: busybox:1.28
args:
- sleep
- "86400"
diff --git a/content/en/examples/service/networking/hostaliases-pod.yaml b/content/en/examples/service/networking/hostaliases-pod.yaml
index 643813b34a13d..268bffbbf5894 100644
--- a/content/en/examples/service/networking/hostaliases-pod.yaml
+++ b/content/en/examples/service/networking/hostaliases-pod.yaml
@@ -15,7 +15,7 @@ spec:
- "bar.remote"
containers:
- name: cat-hosts
- image: busybox
+ image: busybox:1.28
command:
- cat
args:
diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md
index e956012593694..3358da7171368 100644
--- a/content/en/releases/patch-releases.md
+++ b/content/en/releases/patch-releases.md
@@ -78,10 +78,10 @@ releases may also occur in between these.
| Monthly Patch Release | Cherry Pick Deadline | Target date |
| --------------------- | -------------------- | ----------- |
-| March 2022 | 2022-03-11 | 2022-03-16 |
| April 2022 | 2022-04-08 | 2022-04-13 |
| May 2022 | 2022-05-13 | 2022-05-18 |
| June 2022 | 2022-06-10 | 2022-06-15 |
+| July 2022 | 2022-07-08 | 2022-07-13 |
## Detailed Release History for Active Branches
@@ -93,6 +93,7 @@ End of Life for **1.23** is **2023-02-28**.
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
+| 1.23.6 | 2022-04-08 | 2022-04-13 | |
| 1.23.5 | 2022-03-11 | 2022-03-16 | |
| 1.23.4 | 2022-02-11 | 2022-02-16 | |
| 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) |
@@ -107,6 +108,7 @@ End of Life for **1.22** is **2022-10-28**
| Patch Release | Cherry Pick Deadline | Target Date | Note |
|---------------|----------------------|-------------|------|
+| 1.22.9 | 2022-04-08 | 2022-04-13 | |
| 1.22.8 | 2022-03-11 | 2022-03-16 | |
| 1.22.7 | 2022-02-11 | 2022-02-16 | |
| 1.22.6 | 2022-01-14 | 2022-01-19 | |
@@ -124,6 +126,7 @@ End of Life for **1.21** is **2022-06-28**
| Patch Release | Cherry Pick Deadline | Target Date | Note |
| ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- |
+| 1.21.12 | 2022-04-08 | 2022-04-13 | |
| 1.21.11 | 2022-03-11 | 2022-03-16 | |
| 1.21.10 | 2022-02-11 | 2022-02-16 | |
| 1.21.9 | 2022-01-14 | 2022-01-19 | |
diff --git a/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
index f856847e857b8..0ade44801bd82 100644
--- a/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
+++ b/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md
@@ -121,22 +121,7 @@ En quelques étapes, nous vous emmenons de Docker Compose à Kubernetes. Tous do
kompose.service.type: LoadBalancer
```
-2. Lancez la commande `kompose up` pour déployer directement sur Kubernetes, ou passez plutôt à l'étape suivante pour générer un fichier à utiliser avec `kubectl`.
-
- ```bash
- $ kompose up
- We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application.
- If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead.
-
- INFO Successfully created Service: redis
- INFO Successfully created Service: web
- INFO Successfully created Deployment: redis
- INFO Successfully created Deployment: web
-
- Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
- ```
-
-3. Pour convertir le fichier `docker-compose.yml` en fichiers que vous pouvez utiliser avec `kubectl`, lancez `kompose convert` et ensuite `kubectl apply -f