diff --git a/.well-known/security.txt b/.well-known/security.txt
new file mode 100644
index 0000000000000..40371f929208b
--- /dev/null
+++ b/.well-known/security.txt
@@ -0,0 +1,5 @@
+Contact: mailto:security@kubernetes.io
+Expires: 2031-01-11T06:30:00.000Z
+Preferred-Languages: en
+Canonical: https://kubernetes.io/.well-known/security.txt
+Policy: https://github.com/kubernetes/website/blob/main/SECURITY.md
diff --git a/content/de/docs/reference/glossary/cadvisor.md b/content/de/docs/reference/glossary/cadvisor.md
new file mode 100644
index 0000000000000..f04a5c8370688
--- /dev/null
+++ b/content/de/docs/reference/glossary/cadvisor.md
@@ -0,0 +1,16 @@
+---
+title: cAdvisor
+id: cadvisor
+date: 2021-12-09
+full_link: https://github.com/google/cadvisor/
+short_description: >
+ Werkzeug, um Ressourcenverbrauch und Performance-Charakteristiken von Containern besser zu verstehen.
+aka:
+tags:
+- tool
+---
+cAdvisor (Container Advisor) ermöglicht Benutzern von Containern ein besseres Verständnis des Ressourcenverbrauchs und der Performance-Charakteristiken ihrer laufenden {{< glossary_tooltip text="Container" term_id="container" >}}.
+
+
+
+Es ist ein laufender Daemon, der Informationen über laufende Container sammelt, aggregiert, verarbeitet und exportiert. Genauer gesagt speichert er für jeden Container die Ressourcenisolationsparameter, den historischen Ressourcenverbrauch, die Histogramme des kompletten historischen Ressourcenverbrauchs und die Netzwerkstatistiken. Diese Daten werden pro Container und maschinenweit exportiert.
diff --git a/content/de/docs/reference/glossary/certificate.md b/content/de/docs/reference/glossary/certificate.md
new file mode 100644
index 0000000000000..e9eea0db659a4
--- /dev/null
+++ b/content/de/docs/reference/glossary/certificate.md
@@ -0,0 +1,18 @@
+---
+title: Zertifikat
+id: certificate
+date: 2018-04-12
+full_link: /docs/tasks/tls/managing-tls-in-a-cluster/
+short_description: >
+ Eine kryptographisch sichere Datei, die verwendet wird, um den Zugriff auf das Kubernetes-Cluster zu validieren.
+
+aka:
+tags:
+- security
+---
+ Eine kryptographisch sichere Datei, die verwendet wird, um den Zugriff auf das Kubernetes-Cluster zu bestätigen.
+
+
+
+Zertifikate ermöglichen es Anwendungen in einem Kubernetes-Cluster, sicher auf die Kubernetes-API zuzugreifen. Zertifikate bestätigen, dass Clients die Erlaubnis haben, auf die API zuzugreifen.
+
diff --git a/content/de/docs/reference/glossary/cidr.md b/content/de/docs/reference/glossary/cidr.md
new file mode 100644
index 0000000000000..245ccd9b19d0c
--- /dev/null
+++ b/content/de/docs/reference/glossary/cidr.md
@@ -0,0 +1,18 @@
+---
+title: CIDR
+id: cidr
+date: 2019-11-12
+full_link:
+short_description: >
+ CIDR ist eine Notation, um Blöcke von IP-Adressen zu beschreiben, und wird häufig in verschiedenen Netzwerkkonfigurationen verwendet.
+
+aka:
+tags:
+- networking
+---
+CIDR (Classless Inter-Domain Routing) ist eine Notation, um Blöcke von IP-Adressen zu beschreiben, und wird häufig in verschiedenen Netzwerkkonfigurationen verwendet.
+
+
+
+Im Kubernetes-Kontext erhält jeder {{< glossary_tooltip text="Knoten" term_id="node" >}} über eine Startadresse und eine Subnetzmaske in CIDR-Schreibweise einen Bereich von IP-Adressen. Dies erlaubt es Knoten, jedem {{< glossary_tooltip text="Pod" term_id="pod" >}} eine eigene IP-Adresse zuzuweisen. Obwohl CIDR ursprünglich ein Konzept für IPv4 ist, wurde es erweitert, um auch IPv6 einzubinden.
+
diff --git a/content/de/docs/reference/glossary/cla.md b/content/de/docs/reference/glossary/cla.md
new file mode 100644
index 0000000000000..b6163a995cc30
--- /dev/null
+++ b/content/de/docs/reference/glossary/cla.md
@@ -0,0 +1,18 @@
+---
+title: CLA (Contributor License Agreement)
+id: cla
+date: 2018-04-12
+full_link: https://github.com/kubernetes/community/blob/master/CLA.md
+short_description: >
+ Bedingungen, unter denen ein Mitwirkender einem Open-Source-Projekt für seine Beiträge eine Lizenz erteilt.
+
+aka:
+tags:
+- community
+---
+ Bedingungen, unter denen ein {{< glossary_tooltip text="Mitwirkender" term_id="contributor" >}} einem Open-Source-Projekt für seine Beiträge eine Lizenz erteilt.
+
+
+
+CLAs helfen dabei, rechtliche Streitigkeiten rund um Beiträge und geistiges Eigentum (IP) zu lösen.
+
diff --git a/content/en/docs/concepts/architecture/garbage-collection.md b/content/en/docs/concepts/architecture/garbage-collection.md
index 4b36d850b55bb..4c61c968ba054 100644
--- a/content/en/docs/concepts/architecture/garbage-collection.md
+++ b/content/en/docs/concepts/architecture/garbage-collection.md
@@ -139,7 +139,7 @@ until disk usage reaches the `LowThresholdPercent` value.
#### Garbage collection for unused container images {#image-maximum-age-gc}
-{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+{{< feature-state feature_gate_name="ImageMaximumGCAge" >}}
As an alpha feature, you can specify the maximum time a local image can be unused for,
regardless of disk usage. This is a kubelet setting that you configure for each node.
diff --git a/content/en/docs/concepts/architecture/leases.md b/content/en/docs/concepts/architecture/leases.md
index b07f4db3de256..8d74f81a91d76 100644
--- a/content/en/docs/concepts/architecture/leases.md
+++ b/content/en/docs/concepts/architecture/leases.md
@@ -33,7 +33,7 @@ instances are on stand-by.
## API server identity
-{{< feature-state for_k8s_version="v1.26" state="beta" >}}
+{{< feature-state feature_gate_name="APIServerIdentity" >}}
Starting in Kubernetes v1.26, each `kube-apiserver` uses the Lease API to publish its identity to the
rest of the system. While not particularly useful on its own, this provides a mechanism for clients to
diff --git a/content/en/docs/concepts/architecture/mixed-version-proxy.md b/content/en/docs/concepts/architecture/mixed-version-proxy.md
index 36588430c1f32..1045c83119ec6 100644
--- a/content/en/docs/concepts/architecture/mixed-version-proxy.md
+++ b/content/en/docs/concepts/architecture/mixed-version-proxy.md
@@ -8,7 +8,7 @@ weight: 220
-{{< feature-state state="alpha" for_k8s_version="v1.28" >}}
+{{< feature-state feature_gate_name="UnknownVersionInteroperabilityProxy" >}}
Kubernetes {{< skew currentVersion >}} includes an alpha feature that lets an
{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index 6473d35a17e25..c0bcecd3df4ac 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -280,7 +280,7 @@ If you want to explicitly reserve resources for non-Pod processes, see
## Node topology
-{{< feature-state state="stable" for_k8s_version="v1.27" >}}
+{{< feature-state feature_gate_name="TopologyManager" >}}
If you have enabled the `TopologyManager`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/), then
@@ -290,7 +290,7 @@ for more information.
## Graceful node shutdown {#graceful-node-shutdown}
-{{< feature-state state="beta" for_k8s_version="v1.21" >}}
+{{< feature-state feature_gate_name="GracefulNodeShutdown" >}}
The kubelet attempts to detect node system shutdown and terminates pods running on the node.
@@ -374,7 +374,7 @@ Message: Pod was terminated in response to imminent node shutdown.
### Pod Priority based graceful node shutdown {#pod-priority-graceful-node-shutdown}
-{{< feature-state state="beta" for_k8s_version="v1.24" >}}
+{{< feature-state feature_gate_name="GracefulNodeShutdownBasedOnPodPriority" >}}
To provide more flexibility during graceful node shutdown around the ordering
of pods during shutdown, graceful node shutdown honors the PriorityClass for
@@ -471,7 +471,7 @@ are emitted under the kubelet subsystem to monitor node shutdowns.
## Non-graceful node shutdown handling {#non-graceful-node-shutdown}
-{{< feature-state state="stable" for_k8s_version="v1.28" >}}
+{{< feature-state feature_gate_name="NodeOutOfServiceVolumeDetach" >}}
A node shutdown action may not be detected by kubelet's Node Shutdown Manager,
either because the command does not trigger the inhibitor locks mechanism used by
@@ -515,7 +515,7 @@ During a non-graceful shutdown, Pods are terminated in the two phases:
## Swap memory management {#swap-memory}
-{{< feature-state state="beta" for_k8s_version="v1.28" >}}
+{{< feature-state feature_gate_name="NodeSwap" >}}
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
diff --git a/content/en/docs/concepts/cluster-administration/system-logs.md b/content/en/docs/concepts/cluster-administration/system-logs.md
index 1feeecd3db7e5..9fed93fc75dca 100644
--- a/content/en/docs/concepts/cluster-administration/system-logs.md
+++ b/content/en/docs/concepts/cluster-administration/system-logs.md
@@ -238,7 +238,7 @@ The `logrotate` tool rotates logs daily, or once the log size is greater than 10
## Log query
-{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
+{{< feature-state feature_gate_name="NodeLogQuery" >}}
To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows viewing logs of services
running on the node. To use the feature, ensure that the `NodeLogQuery`
diff --git a/content/en/docs/concepts/cluster-administration/system-traces.md b/content/en/docs/concepts/cluster-administration/system-traces.md
index aaaf342b57042..9213dd4b48016 100644
--- a/content/en/docs/concepts/cluster-administration/system-traces.md
+++ b/content/en/docs/concepts/cluster-administration/system-traces.md
@@ -76,7 +76,7 @@ For more information about the `TracingConfiguration` struct, see
### kubelet traces
-{{< feature-state for_k8s_version="v1.27" state="beta" >}}
+{{< feature-state feature_gate_name="KubeletTracing" >}}
The kubelet CRI interface and authenticated http servers are instrumented to generate
trace spans. As with the apiserver, the endpoint and sampling rate are configurable.
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index 9b36a6b72803d..8602d6c98d15b 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -161,7 +161,7 @@ which is 300 seconds (5 minutes).
### Image pull per runtime class
-{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+{{< feature-state feature_gate_name="RuntimeClassInImageCriApi" >}}
Kubernetes includes alpha support for performing image pulls based on the RuntimeClass of a Pod.
If you enable the `RuntimeClassInImageCriApi` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/),
diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md
index 28b633c141a29..177354fa47812 100644
--- a/content/en/docs/concepts/overview/components.md
+++ b/content/en/docs/concepts/overview/components.md
@@ -31,7 +31,7 @@ as well as detecting and responding to cluster events (for example, starting up
`{{< glossary_tooltip text="replicas" term_id="replica" >}}` field is unsatisfied).
Control plane components can be run on any machine in the cluster. However,
-for simplicity, set up scripts typically start all control plane components on
+for simplicity, setup scripts typically start all control plane components on
the same machine, and do not run user containers on this machine. See
[Creating Highly Available clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/)
for an example control plane setup that runs across multiple machines.
@@ -150,4 +150,4 @@ Learn more about the following:
* Etcd's official [documentation](https://etcd.io/docs/).
* Several [container runtimes](/docs/setup/production-environment/container-runtimes/) in Kubernetes.
* Integrating with cloud providers using [cloud-controller-manager](/docs/concepts/architecture/cloud-controller/).
- * [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands.
\ No newline at end of file
+ * [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands.
diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md
index ceec7e1eacd3d..f7e6da3d866aa 100644
--- a/content/en/docs/concepts/overview/kubernetes-api.md
+++ b/content/en/docs/concepts/overview/kubernetes-api.md
@@ -82,7 +82,7 @@ packages that define the API objects.
### OpenAPI V3
-{{< feature-state state="stable" for_k8s_version="v1.27" >}}
+{{< feature-state feature_gate_name="OpenAPIV3" >}}
Kubernetes supports publishing a description of its APIs as OpenAPI v3.
@@ -167,7 +167,7 @@ cluster.
### Aggregated Discovery
-{{< feature-state state="beta" for_k8s_version="v1.27" >}}
+{{< feature-state feature_gate_name="AggregatedDiscoveryEndpoint" >}}
Kubernetes offers beta support for aggregated discovery, publishing
all resources supported by a cluster through two endpoints (`/api` and
diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 8aa1e97200891..c976faa97878f 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -360,7 +360,7 @@ null `namespaceSelector` matches the namespace of the Pod where the rule is defi
#### matchLabelKeys
-{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+{{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}
{{< note >}}
@@ -391,26 +391,27 @@ metadata:
...
spec:
template:
- affinity:
- podAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: app
- operator: In
- values:
- - database
- topologyKey: topology.kubernetes.io/zone
- # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
- # If you update the Deployment, the replacement Pods follow their own affinity rules
- # (if there are any defined in the new Pod template)
- matchLabelKeys:
- - pod-template-hash
+ spec:
+ affinity:
+ podAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - database
+ topologyKey: topology.kubernetes.io/zone
+ # Only Pods from a given rollout are taken into consideration when calculating pod affinity.
+ # If you update the Deployment, the replacement Pods follow their own affinity rules
+ # (if there are any defined in the new Pod template)
+ matchLabelKeys:
+ - pod-template-hash
```
#### mismatchLabelKeys
-{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+{{< feature-state feature_gate_name="MatchLabelKeysInPodAffinity" >}}
{{< note >}}
diff --git a/content/en/docs/concepts/security/_index.md b/content/en/docs/concepts/security/_index.md
index 50edcda94a3f6..47d4ef8d365f6 100644
--- a/content/en/docs/concepts/security/_index.md
+++ b/content/en/docs/concepts/security/_index.md
@@ -3,4 +3,127 @@ title: "Security"
weight: 85
description: >
Concepts for keeping your cloud-native workload secure.
+simple_list: true
---
+
+This section of the Kubernetes documentation aims to help you learn to run
+workloads more securely, and to understand the essential aspects of keeping a
+Kubernetes cluster secure.
+
+Kubernetes is based on a cloud-native architecture, and draws on advice from the
+{{< glossary_tooltip text="CNCF" term_id="cncf" >}} about good practice for
+cloud native information security.
+
+Read [Cloud Native Security and Kubernetes](/docs/concepts/security/cloud-native-security/)
+for the broader context about how to secure your cluster and the applications that
+you're running on it.
+
+## Kubernetes security mechanisms {#security-mechanisms}
+
+Kubernetes includes several APIs and security controls, as well as ways to
+define [policies](#policies) that can form part of how you manage information security.
+
+### Control plane protection
+
+A key security mechanism for any Kubernetes cluster is to
+[control access to the Kubernetes API](/docs/concepts/security/controlling-access).
+
+Kubernetes expects you to configure and use TLS to provide
+[data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/)
+within the control plane, and between the control plane and its clients.
+You can also enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
+for the data stored within Kubernetes control plane; this is separate from using
+encryption at rest for your own workloads' data, which might also be a good idea.
+
+### Secrets
+
+The [Secret](/docs/concepts/configuration/secret/) API provides basic protection for
+configuration values that require confidentiality.
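+
+As a minimal illustration, a Secret holding a single credential might look like the
+following sketch (the object name, key, and value are placeholders for the example):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: db-credentials    # hypothetical name
+type: Opaque
+stringData:
+  password: change-me     # stored base64-encoded in the API; enable encryption at rest for stronger protection
+```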
+
+### Workload protection
+
+Enforce [Pod security standards](/docs/concepts/security/pod-security-standards/) to
+ensure that Pods and their containers are isolated appropriately. You can also use
+[RuntimeClasses](/docs/concepts/containers/runtime-class) to define custom isolation
+if you need it.
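+
+One common way to enforce those standards is to label a namespace so that the built-in
+Pod Security admission controller checks Pods created there. A minimal sketch, assuming
+a hypothetical namespace name:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: example-apps    # hypothetical namespace
+  labels:
+    # enforce the baseline standard; warn when a Pod would fail the stricter restricted standard
+    pod-security.kubernetes.io/enforce: baseline
+    pod-security.kubernetes.io/warn: restricted
+```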
+
+[Network policies](/docs/concepts/services-networking/network-policies/) let you control
+network traffic between Pods, or between Pods and the network outside your cluster.
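+
+For example, a minimal NetworkPolicy sketch that only allows ingress to Pods labelled
+`app: db` from Pods labelled `app: api` in the same namespace might look like this
+(the object name and labels are assumptions for the example):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-api-to-db    # hypothetical name
+spec:
+  podSelector:
+    matchLabels:
+      app: db
+  policyTypes:
+    - Ingress
+  ingress:
+    - from:
+        - podSelector:
+            matchLabels:
+              app: api
+```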
+
+You can deploy security controls from the wider ecosystem to implement preventative
+or detective controls around Pods, their containers, and the images that run in them.
+
+### Auditing
+
+Kubernetes [audit logging](/docs/tasks/debug/debug-cluster/audit/) provides a
+security-relevant, chronological set of records documenting the sequence of actions
+in a cluster. The cluster audits the activities generated by users, by applications
+that use the Kubernetes API, and by the control plane itself.
+
+## Cloud provider security
+
+{{% thirdparty-content vendor="true" %}}
+
+If you are running a Kubernetes cluster on your own hardware or a different cloud provider,
+consult your documentation for security best practices.
+Here are links to some of the popular cloud providers' security documentation:
+
+{{< table caption="Cloud provider security" >}}
+
+IaaS Provider | Link |
+-------------------- | ------------ |
+Alibaba Cloud | https://www.alibabacloud.com/trust-center |
+Amazon Web Services | https://aws.amazon.com/security |
+Google Cloud Platform | https://cloud.google.com/security |
+Huawei Cloud | https://www.huaweicloud.com/intl/en-us/securecenter/overallsafety |
+IBM Cloud | https://www.ibm.com/cloud/security |
+Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
+Oracle Cloud Infrastructure | https://www.oracle.com/security |
+VMware vSphere | https://www.vmware.com/security/hardening-guides |
+
+{{< /table >}}
+
+## Policies
+
+You can define security policies using Kubernetes-native mechanisms,
+such as [NetworkPolicy](/docs/concepts/services-networking/network-policies/)
+(declarative control over network packet filtering) or
+[ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/) (declarative restrictions on what changes
+someone can make using the Kubernetes API).
+
+However, you can also rely on policy implementations from the wider
+ecosystem around Kubernetes. Kubernetes provides extension mechanisms
+to let those ecosystem projects implement their own policy controls
+on source code review, container image approval, API access controls,
+networking, and more.
+
+For more information about policy mechanisms and Kubernetes,
+read [Policies](/docs/concepts/policy/).
+
+## {{% heading "whatsnext" %}}
+
+Learn about related Kubernetes security topics:
+
+* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
+* [Known vulnerabilities](/docs/reference/issues-security/official-cve-feed/)
+ in Kubernetes (and links to further information)
+* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
+* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
+* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
+* [Network policies](/docs/concepts/services-networking/network-policies/) for Pods
+* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
+* [Pod security standards](/docs/concepts/security/pod-security-standards/)
+* [RuntimeClasses](/docs/concepts/containers/runtime-class)
+
+Learn the context:
+
+
+* [Cloud Native Security and Kubernetes](/docs/concepts/security/cloud-native-security/)
+
+Get certified:
+
+* [Certified Kubernetes Security Specialist](https://training.linuxfoundation.org/certification/certified-kubernetes-security-specialist/)
+ certification and official training course.
+
+Read more in this section:
+
diff --git a/content/en/docs/concepts/security/cloud-native-security.md b/content/en/docs/concepts/security/cloud-native-security.md
new file mode 100644
index 0000000000000..778dba0c3836e
--- /dev/null
+++ b/content/en/docs/concepts/security/cloud-native-security.md
@@ -0,0 +1,226 @@
+---
+title: "Cloud Native Security and Kubernetes"
+linkTitle: "Cloud Native Security"
+weight: 10
+
+# The section index lists this explicitly
+hide_summary: true
+
+description: >
+ Concepts for keeping your cloud-native workload secure.
+---
+
+Kubernetes is based on a cloud-native architecture, and draws on advice from the
+{{< glossary_tooltip text="CNCF" term_id="cncf" >}} about good practice for
+cloud native information security.
+
+Read on through this page for an overview of how Kubernetes is designed to
+help you deploy a secure cloud native platform.
+
+## Cloud native information security
+
+{{< comment >}}
+There are localized versions available of this whitepaper; if you can link to one of those
+when localizing, that's even better.
+{{< /comment >}}
+
+The CNCF [white paper](https://github.com/cncf/tag-security/tree/main/security-whitepaper)
+on cloud native security defines security controls and practices that are
+appropriate to different _lifecycle phases_.
+
+## _Develop_ lifecycle phase {#lifecycle-phase-develop}
+
+- Ensure the integrity of development environments.
+- Design applications following good practice for information security,
+ appropriate for your context.
+- Consider end user security as part of solution design.
+
+To achieve this, you can:
+
+1. Adopt an architecture, such as [zero trust](https://glossary.cncf.io/zero-trust-architecture/),
+ that minimizes attack surfaces, even for internal threats.
+1. Define a code review process that considers security concerns.
+1. Build a _threat model_ of your system or application that identifies
+   trust boundaries. Use that model to identify risks and to help find
+   ways to treat those risks.
+1. Incorporate advanced security automation, such as _fuzzing_ and
+ [security chaos engineering](https://glossary.cncf.io/security-chaos-engineering/),
+ where it's justified.
+
+## _Distribute_ lifecycle phase {#lifecycle-phase-distribute}
+
+- Ensure the security of the supply chain for container images you execute.
+- Ensure the security of the supply chain for the cluster and other components
+ that execute your application. An example of another component might be an
+ external database that your cloud-native application uses for persistence.
+
+To achieve this, you can:
+
+1. Scan container images and other artifacts for known vulnerabilities.
+1. Ensure that software distribution uses encryption in transit, with
+ a chain of trust for the software source.
+1. Adopt and follow processes to update dependencies when updates are
+ available, especially in response to security announcements.
+1. Use validation mechanisms such as digital certificates for supply
+ chain assurance.
+1. Subscribe to feeds and other mechanisms to alert you to security
+ risks.
+1. Restrict access to artifacts. Place container images in a
+ [private registry](/docs/concepts/containers/images/#using-a-private-registry)
+ that only allows authorized clients to pull images.
+
+## _Deploy_ lifecycle phase {#lifecycle-phase-deploy}
+
+Ensure appropriate restrictions on what can be deployed, who can deploy it,
+and where it can be deployed to.
+You can enforce measures from the _distribute_ phase, such as verifying the
+cryptographic identity of container image artifacts.
+
+When you deploy Kubernetes, you also set the foundation for your
+applications' runtime environment: a Kubernetes cluster (or
+multiple clusters).
+That IT infrastructure must provide the security guarantees that higher
+layers expect.
+
+## _Runtime_ lifecycle phase {#lifecycle-phase-runtime}
+
+The Runtime phase comprises three critical areas: [compute](#protection-runtime-compute),
+[access](#protection-runtime-access), and [storage](#protection-runtime-storage).
+
+
+### Runtime protection: access {#protection-runtime-access}
+
+The Kubernetes API is what makes your cluster work. Protecting this API is key
+to providing effective cluster security.
+
+Other pages in the Kubernetes documentation have more detail about how to set up
+specific aspects of access control. The [security checklist](/docs/concepts/security/security-checklist/)
+has a set of suggested basic checks for your cluster.
+
+Beyond that, securing your cluster means implementing effective
+[authentication](/docs/concepts/security/controlling-access/#authentication) and
+[authorization](/docs/concepts/security/controlling-access/#authorization) for API access. Use [ServiceAccounts](/docs/concepts/security/service-accounts/) to
+provide and manage security identities for workloads and cluster
+components.
+
+Kubernetes uses TLS to protect API traffic; make sure to deploy the cluster using
+TLS (including for traffic between nodes and the control plane), and protect the
+encryption keys. If you use Kubernetes' own API for
+[CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/#certificate-signing-requests),
+pay special attention to restricting misuse there.
+
+### Runtime protection: compute {#protection-runtime-compute}
+
+{{< glossary_tooltip text="Containers" term_id="container" >}} provide two
+things: isolation between different applications, and a mechanism to combine
+those isolated applications to run on the same host computer. Those two
+aspects, isolation and aggregation, mean that runtime security involves
+trade-offs and finding an appropriate balance.
+
+Kubernetes relies on a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
+to actually set up and run containers. The Kubernetes project does
+not recommend a specific container runtime and you should make sure that
+the runtime(s) that you choose meet your information security needs.
+
+To protect your compute at runtime, you can:
+
+1. Enforce [Pod security standards](/docs/concepts/security/pod-security-standards/)
+ for applications, to help ensure they run with only the necessary privileges.
+1. Run a specialized operating system on your nodes that is designed specifically
+ for running containerized workloads. This is typically based on a read-only
+ operating system (_immutable image_) that provides only the services
+ essential for running containers.
+
+ Container-specific operating systems help to isolate system components and
+ present a reduced attack surface in case of a container escape.
+1. Define [ResourceQuotas](/docs/concepts/policy/resource-quotas/) to
+ fairly allocate shared resources, and use
+ mechanisms such as [LimitRanges](/docs/concepts/policy/limit-range/)
+ to ensure that Pods specify their resource requirements.
+1. Partition workloads across different nodes.
+ Use [node isolation](/docs/concepts/scheduling-eviction/assign-pod-node/#node-isolation-restriction)
+ mechanisms, either from Kubernetes itself or from the ecosystem, to ensure that
+ Pods with different trust contexts are run on separate sets of nodes.
+1. Use a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
+ that provides security restrictions.
+1. On Linux nodes, use a Linux security module such as [AppArmor](/docs/tutorials/security/apparmor/) (beta)
+ or [seccomp](/docs/tutorials/security/seccomp/).
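+
+For instance, the last point can be combined with other Pod-level settings. A hedged sketch of
+a Pod that opts into the runtime's default seccomp profile and drops extra privileges (the Pod
+name and image are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hardened-example    # hypothetical name
+spec:
+  securityContext:
+    seccompProfile:
+      type: RuntimeDefault            # use the container runtime's default seccomp profile
+    runAsNonRoot: true
+  containers:
+    - name: app
+      image: registry.example/app:1.0 # placeholder image
+      securityContext:
+        allowPrivilegeEscalation: false
+        capabilities:
+          drop: ["ALL"]
+```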
+
+### Runtime protection: storage {#protection-runtime-storage}
+
+To protect storage for your cluster and the applications that run there, you can:
+
+1. Integrate your cluster with an external storage plugin that provides encryption at
+ rest for volumes.
+1. Enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) for
+   API objects (see the sketch after this list).
+1. Protect data durability using backups. Verify that you can restore these, whenever you need to.
+1. Authenticate connections between cluster nodes and any network storage they rely
+ upon.
+1. Implement data encryption within your own application.
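+
+As a sketch of the second item in the list above, an `EncryptionConfiguration` for the
+kube-apiserver that encrypts Secrets at rest could look roughly like this (the key name
+and value are placeholders; generate and protect your own key):
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1                      # placeholder key name
+              secret: <BASE64 ENCODED SECRET> # placeholder; for example `head -c 32 /dev/urandom | base64`
+      - identity: {}                          # fallback so data not yet encrypted can still be read
+```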
+
+For encryption keys, generating these within specialized hardware provides
+the best protection against disclosure risks. A _hardware security module_
+can let you perform cryptographic operations without allowing the security
+key to be copied elsewhere.
+
+### Networking and security
+
+You should also consider network security measures, such as
+[NetworkPolicy](/docs/concepts/services-networking/network-policies/) or a
+[service mesh](https://glossary.cncf.io/service-mesh/).
+Some network plugins for Kubernetes provide encryption for your
+cluster network, using technologies such as a virtual
+private network (VPN) overlay.
+By design, Kubernetes lets you use your own networking plugin for your
+cluster (if you use managed Kubernetes, the person or organization
+managing your cluster may have chosen a network plugin for you).
+
+The network plugin you choose and the way you integrate it can have a
+strong impact on the security of information in transit.
+
+### Observability and runtime security
+
+Kubernetes lets you extend your cluster with extra tooling. You can set up third
+party solutions to help you monitor or troubleshoot your applications and the
+clusters they are running. You also get some basic observability features built
+in to Kubernetes itself. Your code running in containers can generate logs,
+publish metrics or provide other observability data; at deploy time, you need to
+make sure your cluster provides an appropriate level of protection there.
+
+If you set up a metrics dashboard or something similar, review the chain of components
+that populate data into that dashboard, as well as the dashboard itself. Make sure
+that the whole chain is designed with enough resilience and enough integrity protection
+that you can rely on it even during an incident where your cluster might be degraded.
+
+Where appropriate, deploy security measures below the level of Kubernetes
+itself, such as cryptographically measured boot, or authenticated distribution
+of time (which helps ensure the fidelity of logs and audit records).
+
+For a high assurance environment, deploy cryptographic protections to ensure that
+logs are both tamper-proof and confidential.
+
+## {{% heading "whatsnext" %}}
+
+### Cloud native security {#further-reading-cloud-native}
+
+* CNCF [white paper](https://github.com/cncf/tag-security/tree/main/security-whitepaper)
+ on cloud native security.
+* CNCF [white paper](https://github.com/cncf/tag-security/blob/f80844baaea22a358f5b20dca52cd6f72a32b066/supply-chain-security/supply-chain-security-paper/CNCF_SSCP_v1.pdf)
+ on good practices for securing a software supply chain.
+* [Fixing the Kubernetes clusterf\*\*k: Understanding security from the kernel up](https://archive.fosdem.org/2020/schedule/event/kubernetes/) (FOSDEM 2020)
+* [Kubernetes Security Best Practices](https://www.youtube.com/watch?v=wqsUfvRyYpw) (Kubernetes Forum Seoul 2019)
+* [Towards Measured Boot Out of the Box](https://www.youtube.com/watch?v=EzSkU3Oecuw) (Linux Security Summit 2016)
+
+### Kubernetes and information security {#further-reading-k8s}
+
+* [Kubernetes security](/docs/concepts/security/)
+* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
+* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
+* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
+* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
+* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
+* [Network policies](/docs/concepts/services-networking/network-policies/) for Pods
+* [Pod security standards](/docs/concepts/security/pod-security-standards/)
+* [RuntimeClasses](/docs/concepts/containers/runtime-class)
+
diff --git a/content/en/docs/concepts/security/overview.md b/content/en/docs/concepts/security/overview.md
deleted file mode 100644
index 29b4d8d55ba7f..0000000000000
--- a/content/en/docs/concepts/security/overview.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-reviewers:
-- zparnold
-title: Overview of Cloud Native Security
-description: >
- A model for thinking about Kubernetes security in the context of Cloud Native security.
-content_type: concept
-weight: 1
----
-
-
-
-This overview defines a model for thinking about Kubernetes security in the context of Cloud Native security.
-
-{{< warning >}}
-This container security model provides suggestions, not proven information security policies.
-{{< /warning >}}
-
-
-
-## The 4C's of Cloud Native security
-
-You can think about security in layers. The 4C's of Cloud Native security are Cloud,
-Clusters, Containers, and Code.
-
-{{< note >}}
-This layered approach augments the [defense in depth](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))
-computing approach to security, which is widely regarded as a best practice for securing
-software systems.
-{{< /note >}}
-
-{{< figure src="/images/docs/4c.png" title="The 4C's of Cloud Native Security" class="diagram-large" >}}
-
-Each layer of the Cloud Native security model builds upon the next outermost layer.
-The Code layer benefits from strong base (Cloud, Cluster, Container) security layers.
-You cannot safeguard against poor security standards in the base layers by addressing
-security at the Code level.
-
-## Cloud
-
-In many ways, the Cloud (or co-located servers, or the corporate datacenter) is the
-[trusted computing base](https://en.wikipedia.org/wiki/Trusted_computing_base)
-of a Kubernetes cluster. If the Cloud layer is vulnerable (or
-configured in a vulnerable way) then there is no guarantee that the components built
-on top of this base are secure. Each cloud provider makes security recommendations
-for running workloads securely in their environment.
-
-### Cloud provider security
-
-If you are running a Kubernetes cluster on your own hardware or a different cloud provider,
-consult your documentation for security best practices.
-Here are links to some of the popular cloud providers' security documentation:
-
-{{< table caption="Cloud provider security" >}}
-
-IaaS Provider | Link |
--------------------- | ------------ |
-Alibaba Cloud | https://www.alibabacloud.com/trust-center |
-Amazon Web Services | https://aws.amazon.com/security |
-Google Cloud Platform | https://cloud.google.com/security |
-Huawei Cloud | https://www.huaweicloud.com/intl/en-us/securecenter/overallsafety |
-IBM Cloud | https://www.ibm.com/cloud/security |
-Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
-Oracle Cloud Infrastructure | https://www.oracle.com/security |
-VMware vSphere | https://www.vmware.com/security/hardening-guides |
-
-{{< /table >}}
-
-### Infrastructure security {#infrastructure-security}
-
-Suggestions for securing your infrastructure in a Kubernetes cluster:
-
-{{< table caption="Infrastructure security" >}}
-
-Area of Concern for Kubernetes Infrastructure | Recommendation |
---------------------------------------------- | -------------- |
-Network access to API Server (Control plane) | All access to the Kubernetes control plane is not allowed publicly on the internet and is controlled by network access control lists restricted to the set of IP addresses needed to administer the cluster.|
-Network access to Nodes (nodes) | Nodes should be configured to _only_ accept connections (via network access control lists) from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely.
-Kubernetes access to Cloud Provider API | Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is best to provide the cluster with cloud provider access that follows the [principle of least privilege](https://en.wikipedia.org/wiki/Principle_of_least_privilege) for the resources it needs to administer. The [Kops documentation](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles) provides information about IAM policies and roles.
-Access to etcd | Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your configuration, you should attempt to use etcd over TLS. More information can be found in the [etcd documentation](https://github.com/etcd-io/etcd/tree/master/Documentation).
-etcd Encryption | Wherever possible it's a good practice to encrypt all storage at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest.
-
-{{< /table >}}
-
-## Cluster
-
-There are two areas of concern for securing Kubernetes:
-
-* Securing the cluster components that are configurable
-* Securing the applications which run in the cluster
-
-### Components of the Cluster {#cluster-components}
-
-If you want to protect your cluster from accidental or malicious access and adopt
-good information practices, read and follow the advice about
-[securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/).
-
-### Components in the cluster (your application) {#cluster-applications}
-
-Depending on the attack surface of your application, you may want to focus on specific
-aspects of security. For example: If you are running a service (Service A) that is critical
-in a chain of other resources and a separate workload (Service B) which is
-vulnerable to a resource exhaustion attack, then the risk of compromising Service A
-is high if you do not limit the resources of Service B. The following table lists
-areas of security concerns and recommendations for securing workloads running in Kubernetes:
-
-Area of Concern for Workload Security | Recommendation |
------------------------------- | --------------------- |
-RBAC Authorization (Access to the Kubernetes API) | https://kubernetes.io/docs/reference/access-authn-authz/rbac/
-Authentication | https://kubernetes.io/docs/concepts/security/controlling-access/
-Application secrets management (and encrypting them in etcd at rest) | https://kubernetes.io/docs/concepts/configuration/secret/ https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
-Ensuring that pods meet defined Pod Security Standards | https://kubernetes.io/docs/concepts/security/pod-security-standards/#policy-instantiation
-Quality of Service (and Cluster resource management) | https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
-Network Policies | https://kubernetes.io/docs/concepts/services-networking/network-policies/
-TLS for Kubernetes Ingress | https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
-
-## Container
-
-Container security is outside the scope of this guide. Here are general recommendations and
-links to explore this topic:
-
-Area of Concern for Containers | Recommendation |
------------------------------- | -------------- |
-Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
-Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
-Disallow privileged users | When constructing containers, consult your documentation for how to create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.
-Use container runtime with stronger isolation | Select [container runtime classes](/docs/concepts/containers/runtime-class/) that provide stronger isolation.
-
-## Code
-
-Application code is one of the primary attack surfaces over which you have the most control.
-While securing application code is outside of the Kubernetes security topic, here
-are recommendations to protect application code:
-
-### Code security
-
-{{< table caption="Code security" >}}
-
-Area of Concern for Code | Recommendation |
--------------------------| -------------- |
-Access over TLS only | If your code needs to communicate by TCP, perform a TLS handshake with the client ahead of time. With the exception of a few cases, encrypt everything in transit. Going one step further, it's a good idea to encrypt network traffic between services. This can be done through a process known as mutual TLS authentication or [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication) which performs a two sided verification of communication between two certificate holding services. |
-Limiting port ranges of communication | This recommendation may be a bit self-explanatory, but wherever possible you should only expose the ports on your service that are absolutely essential for communication or metric gathering. |
-3rd Party Dependency Security | It is a good practice to regularly scan your application's third party libraries for known security vulnerabilities. Each programming language has a tool for performing this check automatically. |
-Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found at: https://owasp.org/www-community/Source_Code_Analysis_Tools |
-Dynamic probing attacks | There are a few automated tools that you can run against your service to try some of the well known service attacks. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the [OWASP Zed Attack proxy](https://www.zaproxy.org/) tool. |
-
-{{< /table >}}
-
-## {{% heading "whatsnext" %}}
-
-Learn about related Kubernetes security topics:
-
-* [Pod security standards](/docs/concepts/security/pod-security-standards/)
-* [Network policies for Pods](/docs/concepts/services-networking/network-policies/)
-* [Controlling Access to the Kubernetes API](/docs/concepts/security/controlling-access)
-* [Securing your cluster](/docs/tasks/administer-cluster/securing-a-cluster/)
-* [Data encryption in transit](/docs/tasks/tls/managing-tls-in-a-cluster/) for the control plane
-* [Data encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
-* [Secrets in Kubernetes](/docs/concepts/configuration/secret/)
-* [Runtime class](/docs/concepts/containers/runtime-class)
diff --git a/content/en/docs/concepts/security/pod-security-standards.md b/content/en/docs/concepts/security/pod-security-standards.md
index 886137f0f75d9..9757e581598a2 100644
--- a/content/en/docs/concepts/security/pod-security-standards.md
+++ b/content/en/docs/concepts/security/pod-security-standards.md
@@ -5,7 +5,7 @@ title: Pod Security Standards
description: >
A detailed look at the different policy levels defined in the Pod Security Standards.
content_type: concept
-weight: 10
+weight: 15
---
diff --git a/content/en/docs/concepts/security/service-accounts.md b/content/en/docs/concepts/security/service-accounts.md
index a7b3d54d76d33..4ae41f8008e3a 100644
--- a/content/en/docs/concepts/security/service-accounts.md
+++ b/content/en/docs/concepts/security/service-accounts.md
@@ -3,7 +3,7 @@ title: Service Accounts
description: >
Learn about ServiceAccount objects in Kubernetes.
content_type: concept
-weight: 10
+weight: 25
---
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index fd992995288da..2fbbf1673831a 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -621,7 +621,7 @@ can define your own (provider specific) annotations on the Service that specify
#### Load balancers with mixed protocol types
-{{< feature-state for_k8s_version="v1.26" state="stable" >}}
+{{< feature-state feature_gate_name="MixedProtocolLBService" >}}
By default, for LoadBalancer type of Services, when there is more than one port defined, all
ports must have the same protocol, and the protocol must be one which is supported
@@ -670,7 +670,7 @@ Unprefixed names are reserved for end-users.
#### Specifying IPMode of load balancer status {#load-balancer-ip-mode}
-{{< feature-state for_k8s_version="v1.29" state="alpha" >}}
+{{< feature-state feature_gate_name="LoadBalancerIPMode" >}}
Starting as Alpha in Kubernetes 1.29,
a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md
index 54ab391d80636..903945f12b6e5 100644
--- a/content/en/docs/concepts/storage/dynamic-provisioning.md
+++ b/content/en/docs/concepts/storage/dynamic-provisioning.md
@@ -119,9 +119,10 @@ When a default `StorageClass` exists in a cluster and a user creates a
`DefaultStorageClass` admission controller automatically adds the
`storageClassName` field pointing to the default storage class.
-Note that there can be at most one *default* storage class on a cluster, or
-a `PersistentVolumeClaim` without `storageClassName` explicitly specified cannot
-be created.
+Note that if you set the `storageclass.kubernetes.io/is-default-class`
+annotation to true on more than one StorageClass in your cluster, and you then
+create a `PersistentVolumeClaim` with no `storageClassName` set, Kubernetes
+uses the most recently created default StorageClass.
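+
+For reference, a StorageClass is marked as the cluster default using that annotation; a minimal
+sketch, assuming a hypothetical name and provisioner:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: fast-ssd        # hypothetical name
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: example.com/provisioner   # placeholder provisioner
+```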
## Topology Awareness
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index b7dbf54651a07..13b83596f4de6 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -194,7 +194,7 @@ keyed with `log_level`.
{{< note >}}
-* You must create a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/)
+* You must [create a ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap)
before you can use it.
* A ConfigMap is always mounted as `readOnly`.
diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md
index 3e22aa7d624f0..5bb8c60fe6d95 100644
--- a/content/en/docs/concepts/windows/intro.md
+++ b/content/en/docs/concepts/windows/intro.md
@@ -352,6 +352,40 @@ Windows Server SAC release
The Kubernetes [version-skew policy](/docs/setup/release/version-skew-policy/) also applies.
+## Hardware recommendations and considerations {#windows-hardware-recommendations}
+
+{{% thirdparty-content %}}
+
+{{< note >}}
+The hardware specifications outlined here should be regarded as sensible default values.
+They are not intended to represent minimum requirements or specific recommendations for production environments.
+Depending on the requirements of your workload, these values may need to be adjusted.
+{{< /note >}}
+
+- 64-bit processor with 4 CPU cores or more, capable of supporting virtualization
+- 8GB or more of RAM
+- 50GB or more of free disk space
+
+Refer to the Microsoft documentation on
+[Hardware requirements for Windows Server](https://learn.microsoft.com/en-us/windows-server/get-started/hardware-requirements)
+for the most up-to-date information on minimum hardware requirements. For guidance on deciding on resources for
+production worker nodes, refer to [Production worker nodes](https://kubernetes.io/docs/setup/production-environment/#production-worker-nodes) in the Kubernetes documentation.
+
+To optimize system resources, if a graphical user interface is not required,
+it may be preferable to use a Windows Server OS installation that excludes
+the [Windows Desktop Experience](https://learn.microsoft.com/en-us/windows-server/get-started/install-options-server-core-desktop-experience)
+installation option, as this configuration typically frees up more system
+resources.
+
+When assessing disk space for Windows worker nodes, note that Windows container images are typically larger than
+Linux container images, with container image sizes ranging
+from [300MB to over 10GB](https://techcommunity.microsoft.com/t5/containers/nano-server-x-server-core-x-server-which-base-image-is-the-right/ba-p/2835785)
+for a single image. Additionally, note that the `C:` drive in Windows containers reports a virtual free size of
+20GB by default; this is not the space actually consumed, but rather the disk size that a single container can grow
+to occupy when using local storage on the host.
+See [Containers on Windows - Container Storage Documentation](https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-storage#storage-limits)
+for more detail.
+
## Getting help and troubleshooting {#troubleshooting}
Your main source of help for troubleshooting your Kubernetes cluster should start
diff --git a/content/en/docs/concepts/workloads/autoscaling.md b/content/en/docs/concepts/workloads/autoscaling.md
new file mode 100644
index 0000000000000..5ecd2755e23cd
--- /dev/null
+++ b/content/en/docs/concepts/workloads/autoscaling.md
@@ -0,0 +1,146 @@
+---
+title: Autoscaling Workloads
+description: >-
+ With autoscaling, you can automatically update your workloads in one way or another. This allows your cluster to react to changes in resource demand more elastically and efficiently.
+content_type: concept
+weight: 40
+---
+
+
+
+In Kubernetes, you can _scale_ a workload depending on the current demand of resources.
+This allows your cluster to react to changes in resource demand more elastically and efficiently.
+
+When you scale a workload, you can either increase or decrease the number of replicas managed by
+the workload, or adjust the resources available to the replicas in-place.
+
+The first approach is referred to as _horizontal scaling_, while the second is referred to as
+_vertical scaling_.
+
+There are manual and automatic ways to scale your workloads, depending on your use case.
+
+
+
+## Scaling workloads manually
+
+Kubernetes supports _manual scaling_ of workloads. Horizontal scaling can be done
+using the `kubectl` CLI.
+For vertical scaling, you need to _patch_ the resource definition of your workload.
+
+See below for examples of both strategies.
+
+- **Horizontal scaling**: [Running multiple instances of your app](/docs/tutorials/kubernetes-basics/scale/scale-intro/)
+- **Vertical scaling**: [Resizing CPU and memory resources assigned to containers](/docs/tasks/configure-pod-container/resize-container-resources)
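+
+As a quick illustration of manual scaling (the Deployment name `nginx-deployment` is an
+assumption for this example):
+
+```shell
+# Horizontal: change the number of replicas
+kubectl scale deployment/nginx-deployment --replicas=5
+
+# Vertical: patch the CPU request of the first container
+# (assumes a CPU request is already set on that container)
+kubectl patch deployment nginx-deployment --type='json' \
+  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value": "500m"}]'
+```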
+
+## Scaling workloads automatically
+
+Kubernetes also supports _automatic scaling_ of workloads, which is the focus of this page.
+
+The concept of _Autoscaling_ in Kubernetes refers to the ability to automatically update an
+object that manages a set of Pods (for example a
+{{< glossary_tooltip text="Deployment" term_id="deployment" >}}).
+
+### Scaling workloads horizontally
+
+In Kubernetes, you can automatically scale a workload horizontally using a _HorizontalPodAutoscaler_ (HPA).
+
+It is implemented as a Kubernetes API resource and a {{< glossary_tooltip text="controller" term_id="controller" >}}
+and periodically adjusts the number of {{< glossary_tooltip text="replicas" term_id="replica" >}}
+in a workload to match observed resource utilization such as CPU or memory usage.
+
+There is a [walkthrough tutorial](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough) of configuring a HorizontalPodAutoscaler for a Deployment.
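+
+As an illustration, a minimal HorizontalPodAutoscaler that scales a hypothetical Deployment
+named `nginx-deployment` on average CPU utilization might look like this sketch:
+
+```yaml
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: nginx-hpa            # hypothetical name
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx-deployment   # hypothetical target
+  minReplicas: 2
+  maxReplicas: 10
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 60
+```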
+
+### Scaling workloads vertically
+
+{{< feature-state for_k8s_version="v1.25" state="stable" >}}
+
+You can automatically scale a workload vertically using a _VerticalPodAutoscaler_ (VPA).
+Unlike the HPA, the VPA doesn't come with Kubernetes by default, but is a separate project
+that can be found [on GitHub](https://github.com/kubernetes/autoscaler/tree/9f87b78df0f1d6e142234bb32e8acbd71295585a/vertical-pod-autoscaler).
+
+Once installed, it allows you to create {{< glossary_tooltip text="CustomResourceDefinitions" term_id="customresourcedefinition" >}}
+(CRDs) for your workloads which define _how_ and _when_ to scale the resources of the managed replicas.
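+
+Once installed, such an object might look like the following sketch (the API group and fields
+follow the upstream autoscaler project; the target Deployment name is a placeholder):
+
+```yaml
+apiVersion: autoscaling.k8s.io/v1
+kind: VerticalPodAutoscaler
+metadata:
+  name: nginx-vpa            # hypothetical name
+spec:
+  targetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx-deployment   # hypothetical target
+  updatePolicy:
+    updateMode: "Auto"       # one of the modes listed in the table below
+```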
+
+{{< note >}}
+You will need to have the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server)
+installed in your cluster for the HPA to work.
+{{< /note >}}
+
+At the moment, the VPA can operate in four different modes:
+
+{{< table caption="Different modes of the VPA" >}}
+Mode | Description
+:----|:-----------
+`Auto` | Currently `Recreate`, might change to in-place updates in the future
+`Recreate` | The VPA assigns resource requests on pod creation as well as updates them on existing pods by evicting them when the requested resources differ significantly from the new recommendation
+`Initial` | The VPA only assigns resource requests on pod creation and never changes them later.
+`Off` | The VPA does not automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object.
+{{< /table >}}
+
+#### Requirements for in-place resizing
+
+{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
+
+Resizing a workload in-place **without** restarting the {{< glossary_tooltip text="Pods" term_id="pod" >}}
+or their {{< glossary_tooltip text="Containers" term_id="container" >}} requires Kubernetes version 1.27 or later.
+Additionally, the `InPlacePodVerticalScaling` feature gate needs to be enabled.
+
+{{< feature-gate-description name="InPlacePodVerticalScaling" >}}
+
+### Autoscaling based on cluster size
+
+For workloads that need to be scaled based on the size of the cluster (for example
+`cluster-dns` or other system components), you can use the
+[_Cluster Proportional Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler).
+Just like the VPA, it is not part of the Kubernetes core, but hosted as its
+own project on GitHub.
+
+The Cluster Proportional Autoscaler watches the number of schedulable {{< glossary_tooltip text="nodes" term_id="node" >}}
+and cores and scales the number of replicas of the target workload accordingly.
+
+If you want the number of replicas to stay the same, you can scale your workloads vertically according to the cluster size using
+the [_Cluster Proportional Vertical Autoscaler_](https://github.com/kubernetes-sigs/cluster-proportional-vertical-autoscaler).
+The project is **currently in beta** and can be found on GitHub.
+
+While the Cluster Proportional Autoscaler scales the number of replicas of a workload, the Cluster Proportional Vertical Autoscaler
+adjusts the resource requests for a workload (for example a Deployment or DaemonSet) based on the number of nodes and/or cores
+in the cluster.
+
+### Event driven Autoscaling
+
+It is also possible to scale workloads based on events, for example using the
+[_Kubernetes Event Driven Autoscaler_ (**KEDA**)](https://keda.sh/).
+
+KEDA is a CNCF graduated project that enables you to scale your workloads based on the number
+of events to be processed, for example the number of messages in a queue. There is
+a wide range of adapters for different event sources to choose from.
+
+### Autoscaling based on schedules
+
+Another strategy for scaling your workloads is to **schedule** the scaling operations, for example in order to
+reduce resource consumption during off-peak hours.
+
+Similar to event driven autoscaling, such behavior can be achieved using KEDA in conjunction with
+its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler allows you to define schedules
+(and time zones) for scaling your workloads in or out.
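+
+For orientation, a KEDA `ScaledObject` using the `Cron` scaler could be sketched roughly as
+follows (the target Deployment name, time zone, schedule, and replica count are assumptions;
+check the KEDA documentation for the authoritative field reference):
+
+```yaml
+apiVersion: keda.sh/v1alpha1
+kind: ScaledObject
+metadata:
+  name: business-hours-scaler   # hypothetical name
+spec:
+  scaleTargetRef:
+    name: nginx-deployment      # hypothetical target
+  triggers:
+    - type: cron
+      metadata:
+        timezone: Europe/Berlin # example time zone
+        start: 0 8 * * *        # scale out at 08:00
+        end: 0 18 * * *         # scale back in at 18:00
+        desiredReplicas: "10"
+```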
+
+## Scaling cluster infrastructure
+
+If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself.
+
+Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text="nodes" term_id="node" >}}.
+This can be done using one of two available autoscalers:
+
+- [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
+- [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file)
+
+Both scalers work by watching for pods marked as _unschedulable_ or for nodes that are _underutilized_,
+and then adding or removing nodes as needed.
+
+## {{% heading "whatsnext" %}}
+
+- Learn more about scaling horizontally
+ - [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/)
+ - [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
+- [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/)
+- [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md
index 1132c38793c5a..1ae8a7c7febfb 100644
--- a/content/en/docs/concepts/workloads/pods/_index.md
+++ b/content/en/docs/concepts/workloads/pods/_index.md
@@ -19,10 +19,10 @@ containers which are relatively tightly coupled.
In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
As well as application containers, a Pod can contain
-[init containers](/docs/concepts/workloads/pods/init-containers/) that run
+{{< glossary_tooltip text="init containers" term_id="init-container" >}} that run
during Pod startup. You can also inject
-[ephemeral containers](/docs/concepts/workloads/pods/ephemeral-containers/)
-for debugging if your cluster offers this.
+{{< glossary_tooltip text="ephemeral containers" term_id="ephemeral-container" >}}
+for debugging a running Pod.
@@ -39,6 +39,26 @@ further sub-isolations applied.
A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.
+Pods in a Kubernetes cluster are used in two main ways:
+
+* **Pods that run a single container**. The "one-container-per-Pod" model is the
+ most common Kubernetes use case; in this case, you can think of a Pod as a
+ wrapper around a single container; Kubernetes manages Pods rather than managing
+ the containers directly.
+* **Pods that run multiple containers that need to work together**. A Pod can
+ encapsulate an application composed of
+ [multiple co-located containers](#how-pods-manage-multiple-containers) that are
+ tightly coupled and need to share resources. These co-located containers
+ form a single cohesive unit.
+
+ Grouping multiple co-located and co-managed containers in a single Pod is a
+ relatively advanced use case. You should use this pattern only in specific
+ instances in which your containers are tightly coupled.
+
+ You don't need to run multiple containers to provide replication (for resilience
+ or capacity); if you need multiple replicas, see
+ [Workload management](/docs/concepts/workloads/controllers/).
+
## Using Pods
The following is an example of a Pod which consists of a container running the image `nginx:1.14.2`.
@@ -61,26 +81,6 @@ term_id="deployment" >}} or {{< glossary_tooltip text="Job" term_id="job" >}}.
If your Pods need to track state, consider the
{{< glossary_tooltip text="StatefulSet" term_id="statefulset" >}} resource.
-Pods in a Kubernetes cluster are used in two main ways:
-
-* **Pods that run a single container**. The "one-container-per-Pod" model is the
- most common Kubernetes use case; in this case, you can think of a Pod as a
- wrapper around a single container; Kubernetes manages Pods rather than managing
- the containers directly.
-* **Pods that run multiple containers that need to work together**. A Pod can
- encapsulate an application composed of multiple co-located containers that are
- tightly coupled and need to share resources. These co-located containers
- form a single cohesive unit of service—for example, one container serving data
- stored in a shared volume to the public, while a separate _sidecar_ container
- refreshes or updates those files.
- The Pod wraps these containers, storage resources, and an ephemeral network
- identity together as a single unit.
-
- {{< note >}}
- Grouping multiple co-located and co-managed containers in a single Pod is a
- relatively advanced use case. You should use this pattern only in specific
- instances in which your containers are tightly coupled.
- {{< /note >}}
Each Pod is meant to run a single instance of a given application. If you want to
scale your application horizontally (to provide more overall resources by running
@@ -93,36 +93,10 @@ See [Pods and controllers](#pods-and-controllers) for more information on how
Kubernetes uses workload resources, and their controllers, to implement application
scaling and auto-healing.
-### How Pods manage multiple containers
-
-Pods are designed to support multiple cooperating processes (as containers) that form
-a cohesive unit of service. The containers in a Pod are automatically co-located and
-co-scheduled on the same physical or virtual machine in the cluster. The containers
-can share resources and dependencies, communicate with one another, and coordinate
-when and how they are terminated.
-
-For example, you might have a container that
-acts as a web server for files in a shared volume, and a separate "sidecar" container
-that updates those files from a remote source, as in the following diagram:
-
-{{< figure src="/images/docs/pod.svg" alt="Pod creation diagram" class="diagram-medium" >}}
-
-Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}}
-as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
-By default, init containers run and complete before the app containers are started.
-
-{{< feature-state for_k8s_version="v1.29" state="beta" >}}
-
-Enabled by default, the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
-allows you to specify `restartPolicy: Always` for init containers.
-Setting the `Always` restart policy ensures that the init containers where you set it are
-kept running during the entire lifetime of the Pod.
-See [Sidecar containers and restartPolicy](/docs/concepts/workloads/pods/init-containers/#sidecar-containers-and-restartpolicy)
-for more details.
-
Pods natively provide two kinds of shared resources for their constituent containers:
[networking](#pod-networking) and [storage](#pod-storage).
+
## Working with Pods
You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This
@@ -343,6 +317,57 @@ The `spec` of a static Pod cannot refer to other API objects
{{< glossary_tooltip text="Secret" term_id="secret" >}}, etc).
{{< /note >}}
+## Pods with multiple containers {#how-pods-manage-multiple-containers}
+
+Pods are designed to support multiple cooperating processes (as containers) that form
+a cohesive unit of service. The containers in a Pod are automatically co-located and
+co-scheduled on the same physical or virtual machine in the cluster. The containers
+can share resources and dependencies, communicate with one another, and coordinate
+when and how they are terminated.
+
+
+Pods in a Kubernetes cluster are used in two main ways:
+
+* **Pods that run a single container**. The "one-container-per-Pod" model is the
+ most common Kubernetes use case; in this case, you can think of a Pod as a
+ wrapper around a single container; Kubernetes manages Pods rather than managing
+ the containers directly.
+* **Pods that run multiple containers that need to work together**. A Pod can
+ encapsulate an application composed of
+ multiple co-located containers that are
+ tightly coupled and need to share resources. These co-located containers
+ form a single cohesive unit of service—for example, one container serving data
+ stored in a shared volume to the public, while a separate
+ {{< glossary_tooltip text="sidecar container" term_id="sidecar-container" >}}
+ refreshes or updates those files.
+ The Pod wraps these containers, storage resources, and an ephemeral network
+ identity together as a single unit.
+
+For example, you might have a container that
+acts as a web server for files in a shared volume, and a separate
+[sidecar container](/docs/concepts/workloads/pods/sidecar-containers/)
+that updates those files from a remote source, as in the following diagram:
+
+{{< figure src="/images/docs/pod.svg" alt="Pod creation diagram" class="diagram-medium" >}}
+
+Some Pods have {{< glossary_tooltip text="init containers" term_id="init-container" >}}
+as well as {{< glossary_tooltip text="app containers" term_id="app-container" >}}.
+By default, init containers run and complete before the app containers are started.
+
+You can also have [sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/)
+that provide auxiliary services to the main application Pod (for example: a service mesh).
+
+{{< feature-state for_k8s_version="v1.29" state="beta" >}}
+
+Enabled by default, the `SidecarContainers` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
+allows you to specify `restartPolicy: Always` for init containers.
+Setting the `Always` restart policy ensures that the containers where you set it are
+treated as _sidecars_ that are kept running during the entire lifetime of the Pod.
+Containers that you explicitly define as sidecar containers
+start up before the main application containers and remain running until the Pod is
+shut down.
+
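+For illustration, a Pod that declares a sidecar this way could look like the following sketch
+(the container names and the sidecar image are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: web-with-sidecar
+spec:
+  initContainers:
+  - name: log-shipper                    # placeholder sidecar
+    image: example.com/log-shipper:1.0   # placeholder image
+    restartPolicy: Always                # keeps this init container running as a sidecar
+  containers:
+  - name: web
+    image: nginx:1.14.2
+```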
+
## Container probes
A _probe_ is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:
diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md
index 1d2b33d55f5d7..83befbe71d931 100644
--- a/content/en/docs/concepts/workloads/pods/disruptions.md
+++ b/content/en/docs/concepts/workloads/pods/disruptions.md
@@ -5,7 +5,7 @@ reviewers:
- davidopp
title: Disruptions
content_type: concept
-weight: 60
+weight: 70
---
diff --git a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
index dfd7c366c114d..efdf0e1a0c771 100644
--- a/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
+++ b/content/en/docs/concepts/workloads/pods/ephemeral-containers.md
@@ -4,7 +4,7 @@ reviewers:
- yujuhong
title: Ephemeral Containers
content_type: concept
-weight: 80
+weight: 60
---
diff --git a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
index ff73d7bc2310a..c07ed9bb3824c 100644
--- a/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/content/en/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -161,7 +161,7 @@ the Pod level `restartPolicy` is either `OnFailure` or `Always`.
When the kubelet is handling container restarts according to the configured restart
policy, that only applies to restarts that make replacement containers inside the
same Pod and running on the same node. After containers in a Pod exit, the kubelet
-restarts them with an exponential back-off delay (10s, 20s,40s, …), that is capped at
+restarts them with an exponential back-off delay (10s, 20s, 40s, …), that is capped at
five minutes. Once a container has executed for 10 minutes without any problems, the
kubelet resets the restart backoff timer for that container.
[Sidecar containers and Pod lifecycle](/docs/concepts/workloads/pods/sidecar-containers/#sidecar-containers-and-pod-lifecycle)
diff --git a/content/en/docs/concepts/workloads/pods/pod-qos.md b/content/en/docs/concepts/workloads/pods/pod-qos.md
index 491a4af2effb0..e656208958c83 100644
--- a/content/en/docs/concepts/workloads/pods/pod-qos.md
+++ b/content/en/docs/concepts/workloads/pods/pod-qos.md
@@ -87,7 +87,7 @@ Containers in a Pod can request other resources (not CPU or memory) and still be
## Memory QoS with cgroup v2
-{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
+{{< feature-state feature-gate-name="MemoryQoS" >}}
Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes.
Memory requests and limits of containers in pod are used to set specific interfaces `memory.min`
diff --git a/content/en/docs/contribute/style/hugo-shortcodes/index.md b/content/en/docs/contribute/style/hugo-shortcodes/index.md
index 6112080eb8552..18ff7822406bb 100644
--- a/content/en/docs/contribute/style/hugo-shortcodes/index.md
+++ b/content/en/docs/contribute/style/hugo-shortcodes/index.md
@@ -49,6 +49,21 @@ Renders to:
{{< feature-state for_k8s_version="v1.10" state="beta" >}}
+### Feature state retrieval from description file
+
+To dynamically determine the state of the feature, make use of the `feature_gate_name`
+shortcode parameter. The feature state details will be extracted from the corresponding feature gate
+description file located in `content/en/docs/reference/command-line-tools-reference/feature-gates/`.
+For example:
+
+```
+{{</* feature-state feature_gate_name="NodeSwap" */>}}
+```
+
+Renders to:
+
+{{< feature-state feature_gate_name="NodeSwap" >}}
+
## Feature gate description
In a Markdown page (`.md` file) on this site, you can add a shortcode to
diff --git a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
index 1f1fa64f60b46..92e631fc20897 100644
--- a/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
+++ b/content/en/docs/reference/access-authn-authz/service-accounts-admin.md
@@ -62,7 +62,7 @@ for a number of reasons:
## Bound service account token volume mechanism {#bound-service-account-token-volume}
-{{< feature-state for_k8s_version="v1.22" state="stable" >}}
+{{< feature-state feature_gate_name="BoundServiceAccountTokenVolume" >}}
By default, the Kubernetes control plane (specifically, the
[ServiceAccount admission controller](#serviceaccount-admission-controller))
@@ -249,7 +249,7 @@ it does the following when a Pod is created:
### Legacy ServiceAccount token tracking controller
-{{< feature-state for_k8s_version="v1.28" state="stable" >}}
+{{< feature-state feature_gate_name="LegacyServiceAccountTokenTracking" >}}
This controller generates a ConfigMap called
`kube-system/kube-apiserver-legacy-service-account-token-tracking` in the
@@ -258,7 +258,7 @@ account tokens began to be monitored by the system.
### Legacy ServiceAccount token cleaner
-{{< feature-state for_k8s_version="v1.29" state="beta" >}}
+{{< feature-state feature_gate_name="LegacyServiceAccountTokenCleanUp" >}}
The legacy ServiceAccount token cleaner runs as part of the
`kube-controller-manager` and checks every 24 hours to see if any auto-generated
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates/readonly-apidata-volumes.md b/content/en/docs/reference/command-line-tools-reference/feature-gates/readonly-apidata-volumes.md
new file mode 100644
index 0000000000000..6e2e37ed67b2d
--- /dev/null
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates/readonly-apidata-volumes.md
@@ -0,0 +1,27 @@
+---
+# Removed from Kubernetes
+title: ReadOnlyAPIDataVolumes
+content_type: feature_gate
+
+_build:
+ list: never
+ render: false
+
+stages:
+ - stage: beta
+ defaultValue: true
+ fromVersion: "1.8"
+ toVersion: "1.9"
+ - stage: stable
+ fromVersion: "1.10"
+ toVersion: "1.10"
+
+removed: true
+---
+Set [`configMap`](/docs/concepts/storage/volumes/#configmap),
+[`secret`](/docs/concepts/storage/volumes/#secret),
+[`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi) and
+[`projected`](/docs/concepts/storage/volumes/#projected)
+{{< glossary_tooltip term_id="volume" text="volumes" >}} to be mounted read-only.
+
+Since Kubernetes v1.10, these volume types are always read-only and you cannot opt out.
diff --git a/content/en/docs/reference/glossary/init-container.md b/content/en/docs/reference/glossary/init-container.md
index a999042e3056f..dfb29b5b4bfec 100644
--- a/content/en/docs/reference/glossary/init-container.md
+++ b/content/en/docs/reference/glossary/init-container.md
@@ -5,7 +5,7 @@ date: 2018-04-12
full_link:
short_description: >
One or more initialization containers that must run to completion before any app containers run.
-
+full_link: /docs/concepts/workloads/pods/init-containers/
aka:
tags:
- fundamental
@@ -15,3 +15,7 @@ tags:
Initialization (init) containers are like regular app containers, with one difference: init containers must run to completion before any app containers can start. Init containers run in series: each init container must run to completion before the next init container begins.
+
+Unlike {{< glossary_tooltip text="sidecar containers" term_id="sidecar-container" >}}, init containers do not remain running after Pod startup.
+
+For more information, read [init containers](/docs/concepts/workloads/pods/init-containers/).
diff --git a/content/en/docs/reference/glossary/sidecar-container.md b/content/en/docs/reference/glossary/sidecar-container.md
new file mode 100644
index 0000000000000..97faa10153216
--- /dev/null
+++ b/content/en/docs/reference/glossary/sidecar-container.md
@@ -0,0 +1,20 @@
+---
+title: Sidecar Container
+id: sidecar-container
+date: 2018-04-12
+full_link: /docs/concepts/workloads/pods/sidecar-containers/
+short_description: >
+ An auxiliary container that stays running throughout the lifecycle of a Pod.
+tags:
+- fundamental
+---
+ One or more {{< glossary_tooltip text="containers" term_id="container" >}} that are typically started before any app containers run.
+
+
+
+Sidecar containers are like regular app containers, but with a different purpose: the sidecar provides a Pod-local service to the main app container.
+Unlike {{< glossary_tooltip text="init containers" term_id="init-container" >}}, sidecar containers
+continue running after Pod startup.
+
+Read [Sidecar containers](/docs/concepts/workloads/pods/sidecar-containers/) for more information.
diff --git a/content/en/docs/reference/kubectl/quick-reference.md b/content/en/docs/reference/kubectl/quick-reference.md
index b88cff5b183c0..a4e2df27d6ee5 100644
--- a/content/en/docs/reference/kubectl/quick-reference.md
+++ b/content/en/docs/reference/kubectl/quick-reference.md
@@ -287,7 +287,7 @@ kubectl label pods my-pod new-label=awesome # Add a Label
kubectl label pods my-pod new-label- # Remove a label
kubectl label pods my-pod new-label=new-value --overwrite # Overwrite an existing value
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Add an annotation
-kubectl annotate pods my-pod icon- # Remove annotation
+kubectl annotate pods my-pod icon-url- # Remove annotation
kubectl autoscale deployment foo --min=2 --max=10 # Auto scale a deployment "foo"
```
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index 7c12db03205aa..4849f60f9aabb 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -2255,7 +2255,8 @@ Starting in v1.16, this annotation was removed in favor of
- [`pod-security.kubernetes.io/audit-violations`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)
- [`pod-security.kubernetes.io/enforce-policy`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
- [`pod-security.kubernetes.io/exempt`](/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
-
+- [`validation.policy.admission.k8s.io/validation_failure`](/docs/reference/labels-annotations-taints/audit-annotations/#validation-policy-admission-k8s-io-validation-failure)
+
See more details on [Audit Annotations](/docs/reference/labels-annotations-taints/audit-annotations/).
## kubeadm
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md
index 9a5f9b29fde69..944d0accaa880 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-reset.md
@@ -34,6 +34,17 @@ etcdctl del "" --prefix
See the [etcd documentation](https://github.com/coreos/etcd/tree/master/etcdctl) for more information.
+### Graceful kube-apiserver shutdown
+
+If you have your `kube-apiserver` configured with the `--shutdown-delay-duration` flag,
+you can run the following commands to attempt a graceful shutdown for the running API server Pod,
+before you run `kubeadm reset`:
+
+```bash
+yq eval -i '.spec.containers[0].command = []' /etc/kubernetes/manifests/kube-apiserver.yaml
+timeout 60 sh -c 'while pgrep kube-apiserver >/dev/null; do sleep 1; done' || true
+```
+
## {{% heading "whatsnext" %}}
* [kubeadm init](/docs/reference/setup-tools/kubeadm/kubeadm-init/) to bootstrap a Kubernetes control-plane node
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index dd30bc2fee3e0..a2078c77fc4b6 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -568,37 +568,6 @@ reference documentation for more information about this subcommand and its
options.
-
-
-
-## What's next {#whats-next}
-
-* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
-* See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- for details about upgrading your cluster using `kubeadm`.
-* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/)
-* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
-* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
- of Pod network add-ons.
-* See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
- explore other add-ons, including tools for logging, monitoring, network policy, visualization &
- control of your Kubernetes cluster.
-* Configure how your cluster handles logs for cluster events and from
- applications running in Pods.
- See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
- an overview of what is involved.
-
-### Feedback {#feedback}
-
-* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
-* For support, visit the
- [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
-* General SIG Cluster Lifecycle development Slack channel:
- [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
-* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
-* SIG Cluster Lifecycle mailing list:
- [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
-
## Version skew policy {#version-skew-policy}
While kubeadm allows version skew against some components that it manages, it is recommended that you
@@ -619,8 +588,8 @@ Example:
### kubeadm's skew against the kubelet
-Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is the same
-version as kubeadm or one version older.
+Similarly to the Kubernetes version, kubeadm can be used with a kubelet version that is
+the same version as kubeadm or three versions older.
Example:
* kubeadm is at {{< skew currentVersion >}}
@@ -686,3 +655,33 @@ supports your chosen platform.
If you are running into difficulties with kubeadm, please consult our
[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
+
+
+
+## What's next {#whats-next}
+
+* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
+* See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
+ for details about upgrading your cluster using `kubeadm`.
+* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/)
+* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/).
+* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
+ of Pod network add-ons.
+* See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
+ explore other add-ons, including tools for logging, monitoring, network policy, visualization &
+ control of your Kubernetes cluster.
+* Configure how your cluster handles logs for cluster events and from
+ applications running in Pods.
+ See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
+ an overview of what is involved.
+
+### Feedback {#feedback}
+
+* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
+* For support, visit the
+ [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
+* General SIG Cluster Lifecycle development Slack channel:
+ [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
+* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
+* SIG Cluster Lifecycle mailing list:
+ [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
index abd3f3e0e4968..64a4ce2286897 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
@@ -213,7 +213,7 @@ in kube-apiserver logs. To fix the issue you must follow these steps:
`kubeadm kubeconfig user --org system:nodes --client-name system:node:$NODE > kubelet.conf`.
`$NODE` must be set to the name of the existing failed node in the cluster.
Modify the resulted `kubelet.conf` manually to adjust the cluster name and server endpoint,
- or pass `kubeconfig user --config` (it accepts `InitConfiguration`). If your cluster does not have
+ or pass `kubeconfig user --config` (see [Generating kubeconfig files for additional users](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubeconfig-additional-users)). If your cluster does not have
the `ca.key` you must sign the embedded certificates in the `kubelet.conf` externally.
1. Copy this resulted `kubelet.conf` to `/etc/kubernetes/kubelet.conf` on the failed node.
1. Restart the kubelet (`systemctl restart kubelet`) on the failed node and wait for
diff --git a/content/en/docs/tasks/administer-cluster/encrypt-data.md b/content/en/docs/tasks/administer-cluster/encrypt-data.md
index d113cdd836a28..6e7a05ef09c16 100644
--- a/content/en/docs/tasks/administer-cluster/encrypt-data.md
+++ b/content/en/docs/tasks/administer-cluster/encrypt-data.md
@@ -168,19 +168,31 @@ encrypt all resources, even custom resources that are added after API server sta
since part of the configuration would be ineffective. The `resources` list's processing order and precedence
are determined by the order it's listed in the configuration. {{< /note >}}
-Opting out of encryption for specific resources while wildcard is enabled can be achieved by adding a new
-`resources` array item with the resource name, followed by the `providers` array item with the `identity` provider.
-For example, if '`*.*`' is enabled and you want to opt-out encryption for the `events` resource, add a new item
-to the `resources` array with `events` as the resource name, followed by the providers array item with `identity`.
-The new item should look like this:
+If you have a wildcard covering resources and want to opt out of at-rest encryption for a particular kind
+of resource, you can do so by adding a separate `resources` array item with the name of the resource that
+you want to exempt, followed by a `providers` array item where you specify the `identity` provider. You add
+this item to the list so that it appears earlier than the entry where you do configure encryption
+(a provider other than `identity`).
+
+For example, if '`*.*`' is enabled and you want to opt out of encryption for Events and ConfigMaps, add a
+new **earlier** item to the `resources`, followed by the providers array item with `identity` as the
+provider. The more specific entry must come before the wildcard entry.
+
+The new item would look similar to:
```yaml
-- resources:
- - events
- providers:
- - identity: {}
+ ...
+ - resources:
+ - configmaps. # specifically from the core API group,
+ # because of trailing "."
+ - events
+ providers:
+ - identity: {}
+ # and then other entries in resources
```
-Ensure that the new item is listed before the wildcard '`*.*`' item in the resources array to give it precedence.
+
+Ensure that the exemption is listed _before_ the wildcard '`*.*`' item in the resources array
+to give it precedence.
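+
+Putting this together, a sketch of a complete configuration in which the exemption precedes the
+wildcard could look like this (the key name and secret are placeholders):
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - configmaps.   # core API group only, because of the trailing "."
+      - events
+    providers:
+      - identity: {}  # listed first, so these resources stay unencrypted
+  - resources:
+      - '*.*'
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: <BASE64-ENCODED 32-BYTE KEY>   # placeholder
+      - identity: {}
+```
+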
For more detailed information about the `EncryptionConfiguration` struct, please refer to the
[encryption configuration API](/docs/reference/config-api/apiserver-encryption.v1/).
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index df93d79c4687c..623e528aa80d6 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -46,8 +46,46 @@ CA key on disk.
Instead, run the controller-manager standalone with `--controllers=csrsigner` and
point to the CA certificate and key.
-[PKI certificates and requirements](/docs/setup/best-practices/certificates/) includes guidance on
-setting up a cluster to use an external CA.
+There are various ways to prepare the component credentials when using external CA mode.
+
+### Manual preparation of component credentials
+
+[PKI certificates and requirements](/docs/setup/best-practices/certificates/) includes information
+on how to manually prepare all of the component credentials required by kubeadm.
+
+### Preparation of credentials by signing CSRs generated by kubeadm
+
+kubeadm can [generate CSR files](#signing-csr) that you can sign manually with tools like
+`openssl` and your external CA. These CSR files include the full specification for the credentials
+that the components deployed by kubeadm require.
+
+### Automated preparation of component credentials by using kubeadm phases
+
+Alternatively, it is possible to use kubeadm phase commands to automate this process.
+
+- Go to a host that you want to prepare as a kubeadm control plane node with an external CA.
+- Copy the external CA files `ca.crt` and `ca.key` that you have into `/etc/kubernetes/pki` on the node.
+- Prepare a temporary [kubeadm configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file)
+called `config.yaml` that can be used with `kubeadm init`. Make sure that this file includes
+any relevant cluster-wide or host-specific information that could be included in certificates, such as
+`ClusterConfiguration.controlPlaneEndpoint`, `ClusterConfiguration.certSANs` and `InitConfiguration.APIEndpoint`.
+- On the same host, execute the commands `kubeadm init phase kubeconfig all --config config.yaml` and
+`kubeadm init phase certs all --config config.yaml`. This generates all of the required kubeconfig
+files and certificates under `/etc/kubernetes/` and its `pki` subdirectory.
+- Inspect the generated files. Delete `/etc/kubernetes/pki/ca.key`, and delete or move the file
+`/etc/kubernetes/super-admin.conf` to a safe location.
+- On nodes where `kubeadm join` will be called, also delete `/etc/kubernetes/kubelet.conf`.
+This file is only required on the first node where `kubeadm init` will be called.
+- Note that some files, such as `pki/sa.*`, `pki/front-proxy-ca.*` and `pki/etcd/ca.*`, are
+shared between control plane nodes. You can generate them once and
+[distribute them manually](/docs/setup/production-environment/tools/kubeadm/high-availability/#manual-certs)
+to nodes where `kubeadm join` will be called, or you can use the
+[`--upload-certs`](/docs/setup/production-environment/tools/kubeadm/high-availability/#stacked-control-plane-and-etcd-nodes)
+functionality of `kubeadm init` and `--certificate-key` of `kubeadm join` to automate this distribution.
+
+Once the credentials are prepared on all nodes, call `kubeadm init` and `kubeadm join` for these nodes to
+join the cluster. kubeadm will use the existing kubeconfig and certificate files under `/etc/kubernetes/`
+and its `pki` subdirectory.
## Check certificate expiration
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index 09f43dd6341dc..9cb522c416e1d 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -43,7 +43,7 @@ The upgrade workflow at high level is the following:
they could be running CoreDNS Pods or other critical workloads. For more information see
[Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
- The Kubernetes project recommends that you match your kubelet and kubeadm versions.
- You can instead use an a version of kubelet that is older than kubeadm, provided it is within the
+ You can instead use a version of kubelet that is older than kubeadm, provided it is within the
range of supported versions.
For more details, please visit [kubeadm's skew against the kubelet](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#kubeadm-s-skew-against-the-kubelet).
- All containers are restarted after upgrade, because the container spec hash value is changed.
@@ -75,8 +75,8 @@ Find the latest patch release for Kubernetes {{< skew currentVersion >}} using t
```shell
# Find the latest {{< skew currentVersion >}} version in the list.
# It should look like {{< skew currentVersion >}}.x-*, where x is the latest patch.
-apt update
-apt-cache madison kubeadm
+sudo apt update
+sudo apt-cache madison kubeadm
```
{{% /tab %}}
@@ -85,7 +85,7 @@ apt-cache madison kubeadm
```shell
# Find the latest {{< skew currentVersion >}} version in the list.
# It should look like {{< skew currentVersion >}}.x-*, where x is the latest patch.
-yum list --showduplicates kubeadm --disableexcludes=kubernetes
+sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes
```
{{% /tab %}}
@@ -107,9 +107,9 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
- apt-mark unhold kubeadm && \
- apt-get update && apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
- apt-mark hold kubeadm
+ sudo apt-mark unhold kubeadm && \
+ sudo apt-get update && sudo apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
+ sudo apt-mark hold kubeadm
```
{{% /tab %}}
@@ -117,7 +117,7 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
- yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+ sudo yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
@@ -132,7 +132,7 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
1. Verify the upgrade plan:
```shell
- kubeadm upgrade plan
+ sudo kubeadm upgrade plan
```
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to.
@@ -221,9 +221,9 @@ kubectl drain --ignore-daemonsets
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
- apt-mark unhold kubelet kubectl && \
- apt-get update && apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
- apt-mark hold kubelet kubectl
+ sudo apt-mark unhold kubelet kubectl && \
+ sudo apt-get update && sudo apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
+ sudo apt-mark hold kubelet kubectl
```
{{% /tab %}}
@@ -231,7 +231,7 @@ kubectl drain --ignore-daemonsets
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
- yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+ sudo yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
@@ -279,7 +279,7 @@ The `STATUS` column should show `Ready` for all your nodes, and the version numb
If `kubeadm upgrade` fails and does not roll back, for example because of an unexpected shutdown during execution, you can run `kubeadm upgrade` again.
This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
-To recover from a bad state, you can also run `kubeadm upgrade apply --force` without changing the version that your cluster is running.
+To recover from a bad state, you can also run `sudo kubeadm upgrade apply --force` without changing the version that your cluster is running.
During upgrade kubeadm writes the following backup folders under `/etc/kubernetes/tmp`:
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md
index e61c6f3d2b134..70b63288533e7 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md
@@ -36,15 +36,15 @@ Upgrade kubeadm:
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
-apt-mark unhold kubeadm && \
-apt-get update && apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
-apt-mark hold kubeadm
+sudo apt-mark unhold kubeadm && \
+sudo apt-get update && sudo apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
+sudo apt-mark hold kubeadm
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
-yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+sudo yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@@ -75,15 +75,15 @@ kubectl drain --ignore-daemonsets
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
- apt-mark unhold kubelet kubectl && \
- apt-get update && apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
- apt-mark hold kubelet kubectl
+ sudo apt-mark unhold kubelet kubectl && \
+ sudo apt-get update && sudo apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
+ sudo apt-mark hold kubelet kubectl
```
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
```shell
# replace x in {{< skew currentVersion >}}.x-* with the latest patch version
- yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+ sudo yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
diff --git a/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md b/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md
index 3e607a9cbaa38..549753f54bb33 100644
--- a/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md
+++ b/content/en/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md
@@ -39,7 +39,7 @@ Figure 1 represents what you're going to achieve in this task.
graph LR;
subgraph local[Local client machine]
- client([client])-- local traffic .-> local_ssh[Local SSH SOCKS5 proxy];
+ client([client])-. local traffic .-> local_ssh[Local SSH SOCKS5 proxy];
end
local_ssh[SSH SOCKS5 proxy]-- SSH Tunnel -->sshd
diff --git a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
index 3f94e249eb2dc..7afaf2fc4cae4 100644
--- a/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
+++ b/content/en/docs/tasks/job/coarse-parallel-processing-work-queue.md
@@ -48,14 +48,14 @@ Start RabbitMQ as follows:
```shell
# make a Service for the StatefulSet to use
-kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq-service.yaml
+kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-service.yaml
```
```
service "rabbitmq-service" created
```
```shell
-kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq-statefulset.yaml
+kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-statefulset.yaml
```
```
statefulset "rabbitmq" created
diff --git a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html
index 6a15b53c9eab6..04dadeed81637 100644
--- a/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/scale/scale-intro.html
@@ -17,23 +17,19 @@
-
-
Objectives
-
-
Scale an app using kubectl.
-
-
-
+
+
Objectives
+
+
Scale an app using kubectl.
+
+
Scaling an application
Previously we created a Deployment, and then exposed it publicly via a Service. The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.
Scaling is accomplished by changing the number of replicas in a Deployment
- {{< note >}}
-
If you are trying this after the previous section, you may have deleted the Service exposing the Deployment. In that case, please expose the Deployment again using the following command:
Scaling is accomplished by changing the number of replicas in a Deployment.
@@ -47,7 +43,14 @@
Summary:
-
+
+
+
+ {{< note >}}
+
If you are trying this after the previous section, you may have deleted the Service exposing the Deployment. In that case, please expose the Deployment again using the following command:
@@ -573,7 +573,7 @@ kubectl label pods my-pod new-label=awesome # 添加标签
kubectl label pods my-pod new-label- # 移除标签
kubectl label pods my-pod new-label=new-value --overwrite # 覆盖现有的值
kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # 添加注解
-kubectl annotate pods my-pod icon- # 移除注解
+kubectl annotate pods my-pod icon-url- # 移除注解
kubectl autoscale deployment foo --min=2 --max=10 # 对 "foo" Deployment 自动扩缩容
```
diff --git a/content/zh-cn/docs/reference/kubernetes-api/other-resources/validating-admission-policy-binding-list-v1alpha1.md b/content/zh-cn/docs/reference/kubernetes-api/other-resources/validating-admission-policy-binding-list-v1beta1.md
similarity index 55%
rename from content/zh-cn/docs/reference/kubernetes-api/other-resources/validating-admission-policy-binding-list-v1alpha1.md
rename to content/zh-cn/docs/reference/kubernetes-api/other-resources/validating-admission-policy-binding-list-v1beta1.md
index dffd13e539838..1c9700de6b271 100644
--- a/content/zh-cn/docs/reference/kubernetes-api/other-resources/validating-admission-policy-binding-list-v1alpha1.md
+++ b/content/zh-cn/docs/reference/kubernetes-api/other-resources/validating-admission-policy-binding-list-v1beta1.md
@@ -8,18 +8,9 @@ description: ""
title: "ValidatingAdmissionPolicyBindingList v1beta1"
weight: 1
---
-
`apiVersion: admissionregistration.k8s.io/v1beta1`
`import "k8s.io/api/admissionregistration/v1beta1"`
+
+
diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
index e02fa6c56f125..bbac127f41312 100644
--- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
+++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm.md
@@ -620,9 +620,15 @@ See a list of add-ons that implement the
[Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model)的附加组件列表。
+请参阅[安装插件](/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy)页面,
+了解 Kubernetes 支持的网络插件的非详尽列表。
+
你可以使用以下命令在控制平面节点或具有 kubeconfig 凭据的节点上安装 Pod 网络附加组件:
```bash
@@ -707,6 +713,19 @@ scheduler will then be able to schedule Pods everywhere.
污点的节点(包括控制平面节点)上移除该污点。
这意味着调度程序将能够在任何地方调度 Pod。
+
+此外,你可以执行以下命令从控制平面节点中删除
+[`node.kubernetes.io/exclude-from-external-load-balancers`](/zh-cn/docs/reference/labels-annotations-taints/#node-kubernetes-io-exclude-from-external-load-balancers)
+标签,这会将其从后端服务器列表中排除:
+
+```bash
+kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers-
+```
+
@@ -1006,64 +1025,6 @@ options.
有关此子命令及其选项的更多信息,请参见
[`kubeadm reset`](/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-reset/) 参考文档。
-
-
-
-## 下一步 {#whats-next}
-
-
-* 使用 [Sonobuoy](https://github.com/heptio/sonobuoy) 验证集群是否正常运行。
-* 有关使用 kubeadm 升级集群的详细信息,
- 请参阅[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。
-* 在 [kubeadm 参考文档](/zh-cn/docs/reference/setup-tools/kubeadm/)中了解有关 `kubeadm` 进阶用法的信息。
-* 了解有关 Kubernetes [概念](/zh-cn/docs/concepts/)和 [`kubectl`](/zh-cn/docs/reference/kubectl/)的更多信息。
-* 有关 Pod 网络附加组件的更多列表,请参见[集群网络](/zh-cn/docs/concepts/cluster-administration/networking/)页面。
-* 请参阅[附加组件列表](/zh-cn/docs/concepts/cluster-administration/addons/)以探索其他附加组件,
- 包括用于 Kubernetes 集群的日志记录、监视、网络策略、可视化和控制的工具。
-* 配置集群如何处理集群事件的日志以及在 Pod 中运行的应用程序。
- 有关所涉及内容的概述,请参见[日志架构](/zh-cn/docs/concepts/cluster-administration/logging/)。
-
-
-### 反馈 {#feedback}
-
-
-* 有关漏洞,访问 [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
-* 有关支持,访问
- [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack 频道
-* 常规的 SIG Cluster Lifecycle 开发 Slack 频道:
- [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
-* SIG Cluster Lifecycle 的 [SIG 资料](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
-* SIG Cluster Lifecycle 邮件列表:
- [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
-
@@ -1260,3 +1221,61 @@ If you are running into difficulties with kubeadm, please consult our
-->
如果你在使用 kubeadm 时遇到困难,
请查阅我们的[故障排除文档](/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)。
+
+
+
+
+## 下一步 {#whats-next}
+
+
+* 使用 [Sonobuoy](https://github.com/heptio/sonobuoy) 验证集群是否正常运行。
+* 有关使用 kubeadm 升级集群的详细信息,
+ 请参阅[升级 kubeadm 集群](/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)。
+* 在 [kubeadm 参考文档](/zh-cn/docs/reference/setup-tools/kubeadm/)中了解有关 `kubeadm` 进阶用法的信息。
+* 了解有关 Kubernetes [概念](/zh-cn/docs/concepts/)和 [`kubectl`](/zh-cn/docs/reference/kubectl/)的更多信息。
+* 有关 Pod 网络附加组件的更多列表,请参见[集群网络](/zh-cn/docs/concepts/cluster-administration/networking/)页面。
+* 请参阅[附加组件列表](/zh-cn/docs/concepts/cluster-administration/addons/)以探索其他附加组件,
+ 包括用于 Kubernetes 集群的日志记录、监视、网络策略、可视化和控制的工具。
+* 配置集群如何处理集群事件的日志以及在 Pod 中运行的应用程序。
+ 有关所涉及内容的概述,请参见[日志架构](/zh-cn/docs/concepts/cluster-administration/logging/)。
+
+
+### 反馈 {#feedback}
+
+
+* 有关漏洞,访问 [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
+* 有关支持,访问
+ [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack 频道
+* 常规的 SIG Cluster Lifecycle 开发 Slack 频道:
+ [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
+* SIG Cluster Lifecycle 的 [SIG 资料](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
+* SIG Cluster Lifecycle 邮件列表:
+ [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
diff --git a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index 254ba6914690b..c066100703dcd 100644
--- a/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -82,7 +82,7 @@ The upgrade workflow at high level is the following:
they could be running CoreDNS Pods or other critical workloads. For more information see
[Draining nodes](/docs/tasks/administer-cluster/safely-drain-node/).
- The Kubernetes project recommends that you match your kubelet and kubeadm versions.
- You can instead use an a version of kubelet that is older than kubeadm, provided it is within the
+ You can instead use a version of kubelet that is older than kubeadm, provided it is within the
range of supported versions.
For more details, please visit [kubeadm's skew against the kubelet](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#kubeadm-s-skew-against-the-kubelet).
- All containers are restarted after upgrade, because the container spec hash value is changed.
@@ -139,28 +139,37 @@ Find the latest patch release for Kubernetes {{< skew currentVersion >}} using t
使用操作系统的包管理器找到最新的补丁版本 Kubernetes {{< skew currentVersion >}}:
{{< tabs name="k8s_install_versions" >}}
-{{% tab name="Ubuntu, Debian or HypriotOS" %}}
+{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
+
```shell
# 在列表中查找最新的 {{< skew currentVersion >}} 版本
# 它看起来应该是 {{< skew currentVersion >}}.x-*,其中 x 是最新的补丁版本
-apt update
-apt-cache madison kubeadm
+sudo apt update
+sudo apt-cache madison kubeadm
```
{{% /tab %}}
-{{% tab name="CentOS, RHEL or Fedora" %}}
+{{% tab name="CentOS、RHEL 或 Fedora" %}}
+
```shell
# 在列表中查找最新的 {{< skew currentVersion >}} 版本
# 它看起来应该是 {{< skew currentVersion >}}.x-*,其中 x 是最新的补丁版本
-yum list --showduplicates kubeadm --disableexcludes=kubernetes
+sudo yum list --showduplicates kubeadm --disableexcludes=kubernetes
```
{{% /tab %}}
@@ -195,27 +204,35 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
1. 升级 kubeadm:
{{< tabs name="k8s_install_kubeadm_first_cp" >}}
- {{% tab name="Ubuntu, Debian or HypriotOS" %}}
+ {{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 用最新的补丁版本号替换 {{< skew currentVersion >}}.x-* 中的 x
- apt-mark unhold kubeadm && \
- apt-get update && apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
- apt-mark hold kubeadm
+ sudo apt-mark unhold kubeadm && \
+ sudo apt-get update && sudo apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
+ sudo apt-mark hold kubeadm
```
{{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
+ {{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 用最新的补丁版本号替换 {{< skew currentVersion >}}.x-* 中的 x
- yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+ sudo yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
@@ -236,7 +253,7 @@ Pick a control plane node that you wish to upgrade first. It must have the `/etc
3. 验证升级计划:
```shell
- kubeadm upgrade plan
+ sudo kubeadm upgrade plan
```
```shell
# 用最新的补丁版本替换 {{< skew currentVersion >}}.x-* 中的 x
- apt-mark unhold kubelet kubectl && \
- apt-get update && apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
- apt-mark hold kubelet kubectl
+ sudo apt-mark unhold kubelet kubectl && \
+ sudo apt-get update && sudo apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
+ sudo apt-mark hold kubelet kubectl
```
{{% /tab %}}
- {{% tab name="CentOS, RHEL or Fedora" %}}
+ {{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 用最新的补丁版本号替换 {{< skew currentVersion >}}.x-* 中的 x
- yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+ sudo yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
@@ -517,9 +542,9 @@ This command is idempotent and eventually makes sure that the actual state is th
此命令是幂等的,并最终确保实际状态是你声明的期望状态。
-要从故障状态恢复,你还可以运行 `kubeadm upgrade apply --force` 而无需更改集群正在运行的版本。
+要从故障状态恢复,你还可以运行 `sudo kubeadm upgrade apply --force` 而无需更改集群正在运行的版本。
```shell
# 将 {{< skew currentVersion >}}.x-* 中的 x 替换为最新的补丁版本
-apt-mark unhold kubeadm && \
-apt-get update && apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
-apt-mark hold kubeadm
+sudo apt-mark unhold kubeadm && \
+sudo apt-get update && sudo apt-get install -y kubeadm='{{< skew currentVersion >}}.x-*' && \
+sudo apt-mark hold kubeadm
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 将 {{< skew currentVersion >}}.x-* 中的 x 替换为最新的补丁版本
-yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+sudo yum install -y kubeadm-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@@ -105,8 +113,11 @@ Prepare the node for maintenance by marking it unschedulable and evicting the wo
将节点标记为不可调度并驱逐所有负载,准备节点的维护:
```shell
# 在控制平面节点上执行此命令
@@ -126,22 +137,30 @@ kubectl drain --ignore-daemonsets
{{< tabs name="k8s_kubelet_and_kubectl" >}}
{{% tab name="Ubuntu、Debian 或 HypriotOS" %}}
```shell
# 将 {{< skew currentVersion >}}.x-* 中的 x 替换为最新的补丁版本
- apt-mark unhold kubelet kubectl && \
- apt-get update && apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
- apt-mark hold kubelet kubectl
+ sudo apt-mark unhold kubelet kubectl && \
+ sudo apt-get update && sudo apt-get install -y kubelet='{{< skew currentVersion >}}.x-*' kubectl='{{< skew currentVersion >}}.x-*' && \
+ sudo apt-mark hold kubelet kubectl
```
{{% /tab %}}
{{% tab name="CentOS、RHEL 或 Fedora" %}}
```shell
# 将 {{< skew currentVersion >}}.x-* 中的 x 替换为最新的补丁版本
- yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
+ sudo yum install -y kubelet-'{{< skew currentVersion >}}.x-*' kubectl-'{{< skew currentVersion >}}.x-*' --disableexcludes=kubernetes
```
{{% /tab %}}
{{< /tabs >}}
@@ -166,8 +185,11 @@ Bring the node back online by marking it schedulable:
通过将节点标记为可调度,让节点重新上线:
```shell
# 在控制平面节点上执行此命令
diff --git a/content/zh-cn/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md b/content/zh-cn/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md
index 9d25d1f87fb01..872b391e621cf 100644
--- a/content/zh-cn/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md
+++ b/content/zh-cn/docs/tasks/extend-kubernetes/socks5-proxy-access-api.md
@@ -64,7 +64,7 @@ Figure 1 represents what you're going to achieve in this task.
graph LR;
subgraph local[Local client machine]
- client([client])-- local traffic .-> local_ssh[Local SSH SOCKS5 proxy];
+ client([client])-. local traffic .-> local_ssh[Local SSH SOCKS5 proxy];
end
local_ssh[SSH SOCKS5 proxy]-- SSH Tunnel --\>sshd
@@ -86,9 +86,9 @@ graph LR;
graph LR;
subgraph local[本地客户端机器]
- client([客户端])-- 本地 流量.-> local_ssh[本地 SSH SOCKS5 代理];
+ client([客户端])-. 本地 流量.-> local_ssh[本地 SSH SOCKS5 代理];
end
- ocal_ssh[SSH SOCKS5 代理]-- SSH 隧道 -->sshd
+ local_ssh[SSH SOCKS5 代理]-- SSH 隧道 -->sshd
subgraph remote[远程服务器]
sshd[SSH 服务器]-- 本地流量 -->service1;
diff --git a/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md b/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md
index 80a193750d5cd..f7fa4c95c47bb 100644
--- a/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md
+++ b/content/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue.md
@@ -92,14 +92,14 @@ Start RabbitMQ as follows:
-->
```shell
# 为 StatefulSet 创建一个 Service 来使用
-kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq-service.yaml
+kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-service.yaml
```
```
service "rabbitmq-service" created
```
```shell
-kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq-statefulset.yaml
+kubectl create -f https://kubernetes.io/examples/application/job/rabbitmq/rabbitmq-statefulset.yaml
```
```
diff --git a/data/i18n/en/en.toml b/data/i18n/en/en.toml
index 4ada028a4737a..31a1bc5bd335f 100644
--- a/data/i18n/en/en.toml
+++ b/data/i18n/en/en.toml
@@ -151,6 +151,12 @@ other = "Until"
[feature_state]
other = "FEATURE STATE:"
+[feature_state_kubernetes_label]
+other = "Kubernetes"
+
+[feature_state_feature_gate_tooltip]
+other = "Feature Gate:"
+
[feedback_heading]
other = "Feedback"
@@ -541,10 +547,13 @@ other="""Third party content advice"""
[thirdparty_message_single_item]
other = """🛇 This item links to a third party project or product that is not part of Kubernetes itself. More information"""
-
[thirdparty_message_disclaimer]
other = """
Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details.
You should read the content guide before proposing a change that adds an extra third-party link.
+{{ $feature_gate_name := .Get "feature_gate_name" }}
+
+
+{{ if not $feature_gate_name }}
+ {{ if not $is_valid }}
+ {{ errorf "%q is not a valid feature-state, use one of %q" $state $valid_states }}
+ {{ else }}
+