diff --git a/.editorconfig b/.editorconfig index bc1dfe40c8faa..1a235f9c90853 100644 --- a/.editorconfig +++ b/.editorconfig @@ -16,5 +16,8 @@ indent_size = 2 indent_style = space indent_size = 4 +[*.{yaml}] +insert_final_newline = true + [Makefile] indent_style = tab diff --git a/README-pl.md b/README-pl.md index 7544de45835a6..62dc2d0ee22f3 100644 --- a/README-pl.md +++ b/README-pl.md @@ -43,7 +43,7 @@ make container-image make container-serve ``` -Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) i [Windows](https://docs.docker.com/docker-for-windows/#resources)). +Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOS](https://docs.docker.com/desktop/settings/mac/) i [Windows](https://docs.docker.com/desktop/settings/windows/)). Aby obejrzeć zawartość serwisu, otwórz w przeglądarce adres . Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce. diff --git a/README-pt.md b/README-pt.md index 3de16340509e3..ae2f644ed869d 100644 --- a/README-pt.md +++ b/README-pt.md @@ -49,7 +49,7 @@ Para executar o build do website em um contêiner, execute o comando abaixo: make container-serve ``` -Caso ocorram erros, é provável que o contêiner que está executando o Hugo não tenha recursos suficientes. A solução é aumentar a quantidade de CPU e memória disponível para o Docker ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) e [Windows](https://docs.docker.com/docker-for-windows/#resources)). +Caso ocorram erros, é provável que o contêiner que está executando o Hugo não tenha recursos suficientes. A solução é aumentar a quantidade de CPU e memória disponível para o Docker ([MacOS](https://docs.docker.com/desktop/settings/mac/) e [Windows](https://docs.docker.com/desktop/settings/windows/)). Abra seu navegador em http://localhost:1313 para visualizar o website. Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força a atualização do navegador. 
diff --git a/assets/scss/_base.scss b/assets/scss/_base.scss index 71983485eda66..c432b9d26addc 100644 --- a/assets/scss/_base.scss +++ b/assets/scss/_base.scss @@ -902,9 +902,16 @@ section#cncf { margin: 0; } +//Table Content +.tab-content table{ + border-collapse: separate; + border-spacing: 6px; +} + .tab-pane { border-radius: 0.25rem; padding: 0 16px 16px; + overflow: auto; border: 1px solid #dee2e6; &:first-of-type.active { diff --git a/assets/scss/_case-studies.scss b/assets/scss/_case-studies.scss index 4f44864127525..5c907d1c08809 100644 --- a/assets/scss/_case-studies.scss +++ b/assets/scss/_case-studies.scss @@ -1,7 +1,7 @@ // SASS for Case Studies pages go here: hr { - background-color: #999999; + background-color: #303030; margin-top: 0; } diff --git a/content/de/docs/tasks/tools/install-kubectl-linux.md b/content/de/docs/tasks/tools/install-kubectl-linux.md index 93126b9e4a7d3..78d31379f87ae 100644 --- a/content/de/docs/tasks/tools/install-kubectl-linux.md +++ b/content/de/docs/tasks/tools/install-kubectl-linux.md @@ -51,7 +51,7 @@ Um kubectl auf Linux zu installieren, gibt es die folgenden Möglichkeiten: Download der kubectl Checksum-Datei: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" ``` Kubectl Binary mit der Checksum-Datei validieren: @@ -236,7 +236,7 @@ Untenstehend ist beschrieben, wie die Autovervollständigungen für Fish und Zsh Download der kubectl-convert Checksum-Datei: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" ``` Kubectl-convert Binary mit der Checksum-Datei validieren: diff --git a/content/en/_index.html b/content/en/_index.html index 7ad3ba6752237..5df38bb42073a 100644 --- a/content/en/_index.html +++ b/content/en/_index.html @@ -47,12 +47,12 @@

The Challenges of Migrating 150+ Microservices to Kubernetes



- Attend KubeCon + CloudNativeCon Europe on April 18-21, 2023 + Attend KubeCon + CloudNativeCon North America on November 6-9, 2023



- Attend KubeCon + CloudNativeCon North America on November 6-9, 2023 + Attend KubeCon + CloudNativeCon Europe on March 19-22, 2024
diff --git a/content/en/blog/_posts/2020-09-03-warnings/index.md b/content/en/blog/_posts/2020-09-03-warnings/index.md index a5cfb9f710db7..5d31aedef2f41 100644 --- a/content/en/blog/_posts/2020-09-03-warnings/index.md +++ b/content/en/blog/_posts/2020-09-03-warnings/index.md @@ -63,7 +63,7 @@ This metric has labels for the API `group`, `version`, `resource`, and `subresou and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served. This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json), -and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested +and [jq](https://jqlang.github.io/jq/) to determine which deprecated APIs have been requested from the current instance of the API server: ```sh diff --git a/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md b/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md index 2c60c12f3f079..1e1b32b265ce3 100644 --- a/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md +++ b/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md @@ -210,7 +210,7 @@ podip=$(cat /tmp/out | jq -r '.Endpoints[]|select(.Local == true)|select(.IPs.V6 ip6tables -t nat -A PREROUTING -d $xip/128 -j DNAT --to-destination $podip ``` -Assuming the JSON output above is stored in `/tmp/out` ([jq](https://stedolan.github.io/jq/) is an *awesome* program!). +Assuming the JSON output above is stored in `/tmp/out` ([jq](https://jqlang.github.io/jq/) is an *awesome* program!). As this is an example we make it really simple for ourselves by using diff --git a/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md b/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md index d4345195746b8..4071d4458160d 100644 --- a/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md +++ b/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md @@ -118,8 +118,8 @@ Scenarios in which you might need to update to cgroup v2 include the following: DaemonSet for monitoring pods and containers, update it to v0.43.0 or later. 
* If you deploy Java applications, prefer to use versions which fully support cgroup v2: * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later - * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later - * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later + * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later + * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later ## Learn more
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg index cf9283885855e..e35b2f39509bb 100644 @@ -1,87 +1,395 @@ [SVG diagram regenerated; markup not shown]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg index 3a545f20dd85f..a2ba00c58fd4e 100644 @@ -1,226 +1,1072 @@ [SVG diagram regenerated; markup not shown]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg index 845f5d0d07bb2..57b207b80a0be 100644 @@ -1,203 +1,951 @@ [SVG diagram regenerated; markup not shown]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg index 02357ef901582..4ba0b15957a28 100644 @@ -1,252 +1,1195 @@ [SVG diagram regenerated; markup not shown]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md index 26b1d626fd171..ff72afe083322 100644 --- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md +++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md @@ -128,18 +128,14 @@ enforces the limit to prevent the container from using more than the configured resource limit. If a process in a container tries to consume more than the specified limit, kernel terminates a process(es) with an Out of Memory (OOM) error. -```formula -memory.max = pod.spec.containers[i].resources.limits[memory] -``` +{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-max.svg" title="memory.max maps to limits.memory" alt="memory.max maps to limits.memory" >}} `memory.min` is mapped to `requests.memory`, which results in reservation of memory resources that should never be reclaimed by the kernel. This is how Memory QoS ensures the availability of memory for Kubernetes pods. If there's no unprotected reclaimable memory available, the OOM killer is invoked to make more memory available.
-```formula -memory.min = pod.spec.containers[i].resources.requests[memory] -``` +{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-min.svg" title="memory.min maps to requests.memory" alt="memory.min maps to requests.memory" >}} For memory protection, in addition to the original way of limiting memory usage, Memory QoS throttles workload approaching its memory limit, ensuring that the system is not overwhelmed @@ -149,10 +145,7 @@ the KubeletConfiguration when you enable MemoryQoS feature. It is set to 0.9 by `requests.memory` and `limits.memory` as in the formula below, and rounding down the value to the nearest page size: -```formula -memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor * -{(pod.spec.containers[i].resources.limits[memory] or NodeAllocatableMemory) - pod.spec.containers[i].resources.requests[memory]} -``` +{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high.svg" title="memory.high formula" alt="memory.high formula" >}} {{< note >}} If a container has no memory limits specified, `limits.memory` is substituted for node allocatable memory. @@ -256,26 +249,18 @@ as per QOS classes: * When requests.memory and limits.memory are set, the formula is used as-is: - ```formula - memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor * - {(pod.spec.containers[i].resources.limits[memory]) - pod.spec.containers[i].resources.requests[memory]} - ``` + {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-limit.svg" title="memory.high when requests and limits are set" alt="memory.high when requests and limits are set" >}} * When requests.memory is set and limits.memory is not set, limits.memory is substituted for node allocatable memory in the formula: - ```formula - memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor * - {(NodeAllocatableMemory) - pod.spec.containers[i].resources.requests[memory]} - ``` + {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-no-limits.svg" title="memory.high when requests and limits are not set" alt="memory.high when requests and limits are not set" >}} 1. **BestEffort** by their QoS definition do not require any memory or CPU limits or requests. For this case, kubernetes sets requests.memory = 0 and substitute limits.memory for node allocatable memory in the formula: - ```formula - memory.high = MemoryThrottlingFactor * NodeAllocatableMemory - ``` + {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-best-effort.svg" title="memory.high for BestEffort Pod" alt="memory.high for BestEffort Pod" >}} **Summary**: Only Pods in Burstable and BestEffort QoS classes will set `memory.high`. Guaranteed QoS pods do not set `memory.high` as their memory is guaranteed. diff --git a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md index eebdcf6110a1d..f10b4eeec7472 100644 --- a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md +++ b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md @@ -22,7 +22,7 @@ A real air-gapped network can take some effort to set up, so for this post, I wi ### Local topology -This VM will have its network connectivity disabled but in a way that doesn't shut down the VM's virtual NIC. 
Instead, its network will be downed by injecting a default route to a dummy interface, making anything internet-hosted unreachable. However, the VM still has a connected route to the bridge interface on the host, which means that network connectivity to the host is still working. This posture means that data can be transferred from the host/laptop to the VM via scp, even with the default route on the VM black-holing all traffic that isn't destined for the local bridge subnet. This type of transfer is analogous to carrying data across the air gap and will be used throughout this post. +This VM will have its network connectivity disabled but in a way that doesn't shut down the VM's virtual NIC. Instead, its network will be downed by injecting a default route to a dummy interface, making anything internet-hosted unreachable. However, the VM still has a connected route to the bridge interface on the host, which means that network connectivity to the host is still working. This posture means that data can be transferred from the host/laptop to the VM via `scp`, even with the default route on the VM black-holing all traffic that isn't destined for the local bridge subnet. This type of transfer is analogous to carrying data across the air gap and will be used throughout this post. Other details about the lab setup: @@ -35,7 +35,7 @@ While this single VM lab is a simplified example, the below diagram more approxi {{< figure src="example_production_topology.svg" alt="Example production topology which shows 3 control plane Kubernetes nodes and 'n' worker nodes along with a Docker registry in an air-gapped environment. Additionally shows two workstations, one on each side of the air gap and an IT admin which physically carries the artifacts across." >}} -Note, there is still intentional isolation between the envirnment and the internet. There are also some things that are not shown in order to keep the diagram simple, for example malware scanning on the secure side of the air gap. +Note, there is still intentional isolation between the environment and the internet. There are also some things that are not shown in order to keep the diagram simple, for example malware scanning on the secure side of the air gap. Back to the single VM lab environment. @@ -144,7 +144,7 @@ reboot On the laptop/host machine, download all of the artifacts enumerated in the previous section. Since the air gapped VM is running Fedora 37, all of the dependencies shown in this part are for Fedora 37. Note, this procedure will only work on AArch64 or AMD64 CPU architectures as they are the most popular and widely available.. You can execute this procedure anywhere you have write permissions; your home directory is a perfectly suitable choice. -Note, operating system packages for the Kubernetes artifacts that need to be carried across can now be found at [pkgs.k8s.io](https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/). This blog post will use a combination of Fedora repositories and GitHub in order to download all of the required artifacts. When you’re doing this on your own cluster, you should decide whether to use the official Kubernetes packages, or the official packages from your operating system distribution - both are valid choices. +Note, operating system packages for the Kubernetes artifacts that need to be carried across can now be found at [pkgs.k8s.io](/blog/2023/08/15/pkgs-k8s-io-introduction/). 
This blog post will use a combination of Fedora repositories and GitHub in order to download all of the required artifacts. When you’re doing this on your own cluster, you should decide whether to use the official Kubernetes packages, or the official packages from your operating system distribution - both are valid choices. @@ -612,7 +612,7 @@ export ZARF_VERSION=v0.28.3 curl -LO "https://github.com/defenseunicorns/zarf/releases/download/${ZARF_VERSION}/zarf_${ZARF_VERSION}_Linux_${K8s_ARCH}" ``` Zarf needs to bootstrap itself into a Kubernetes cluster through the use of an init package. That also needs to be transported across the air gap so let's download it onto the host/laptop: -```bash +```bash curl -LO "https://github.com/defenseunicorns/zarf/releases/download/${ZARF_VERSION}/zarf-init-${K8s_ARCH}-${ZARF_VERSION}.tar.zst" ``` The way that Zarf is declarative is through the use of a zarf.yaml file. Here is the zarf.yaml file that will be used for this Podinfo installation. Write it to whatever directory you you have write access to on your host/laptop; your home directory is fine: diff --git a/content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md b/content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md new file mode 100644 index 0000000000000..e32bdcb0df615 --- /dev/null +++ b/content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md @@ -0,0 +1,114 @@ +--- +layout: blog +title: "A Quick Recap of 2023 China Kubernetes Contributor Summit" +slug: kcs-shanghai +date: 2023-10-20 +canonicalUrl: https://www.kubernetes.dev/blog/2023/10/20/kcs-shanghai/ +--- + +**Author:** Paco Xu and Michael Yao (DaoCloud) + +On September 26, 2023, the first day of +[KubeCon + CloudNativeCon + Open Source Summit China 2023](https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/), +nearly 50 contributors gathered in Shanghai for the Kubernetes Contributor Summit. + +{{< figure src="/blog/2023/10/20/kcs-shanghai/kcs04.jpeg" alt="All participants in the 2023 Kubernetes Contributor Summit" caption="All participants in the 2023 Kubernetes Contributor Summit" >}} + +This marked the first in-person offline gathering held in China after three years of the pandemic. + +## A joyful meetup + +The event began with welcome speeches from [Kevin Wang](https://github.com/kevin-wangzefeng) from Huawei Cloud, +one of the co-chairs of KubeCon, and [Puja](https://github.com/puja108) from Giant Swarm. + +Following the opening remarks, the contributors introduced themselves briefly. Most attendees were from China, +while some contributors had made the journey from Europe and the United States specifically for the conference. +Technical experts from companies such as Microsoft, Intel, Huawei, as well as emerging forces like DaoCloud, +were present. Laughter and cheerful voices filled the room, regardless of whether English was spoken with +European or American accents or if conversations were carried out in authentic Chinese language. This created +an atmosphere of comfort, joy, respect, and anticipation. Past contributions brought everyone closer, and +mutual recognition and accomplishments made this offline gathering possible. + +{{< figure src="/blog/2023/10/20/kcs-shanghai/kcs06.jpeg" alt="Face to face meeting in Shanghai" caption="Face to face meeting in Shanghai" >}} + +The attending contributors were no longer just GitHub IDs; they transformed into vivid faces. +From sitting together and capturing group photos to attempting to identify "Who is who," +a loosely connected collective emerged. 
This team structure, although loosely knit and free-spirited, +was established to pursue shared dreams. + +As the saying goes, "You reap what you sow." Each effort has been diligently documented within +the Kubernetes community contributions. Regardless of the passage of time, the community will +not erase those shining traces. Brilliance can be found in your PRs, issues, or comments. +It can also be seen in the smiling faces captured in meetup photos or heard through stories +passed down among contributors. + +## Technical sharing and discussions + +Next, there were three technical sharing sessions: + +- [sig-multi-cluster](https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md): + [Hongcai Ren](https://github.com/RainbowMango), a maintainer of Karmada, provided an introduction to + the responsibilities and roles of this SIG. Their focus is on designing, discussing, implementing, + and maintaining APIs, tools, and documentation related to multi-cluster management. + Cluster Federation, one of Karmada's core concepts, is also part of their work. + +- [helmfile](https://github.com/helmfile/helmfile): [yxxhero](https://github.com/yxxhero) + from [GitLab](https://gitlab.cn/) presented how to deploy Kubernetes manifests declaratively, + customize configurations, and leverage the latest features of Helm, including Helmfile. + +- [sig-scheduling](https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md): + [william-wang](https://github.com/william-wang) from Huawei Cloud shared the recent updates and + future plans of SIG Scheduling. This SIG is responsible for designing, developing, and testing + components related to Pod scheduling. + +{{< figure src="/blog/2023/10/20/kcs-shanghai/kcs03.jpeg" alt="A technical session about sig-multi-cluster" caption="A technical session about sig-multi-cluster" >}} + +Following the sessions, a video featuring a call for contributors by [Sergey Kanzhelev](https://github.com/SergeyKanzhelev), +the SIG-Node Chair, was played. The purpose was to encourage more contributors to join the Kubernetes community, +with a special emphasis on the popular SIG-Node. + +Lastly, Kevin hosted an Unconference collective discussion session covering topics such as +multi-cluster management, scheduling, elasticity, AI, and more. For detailed minutes of +the Unconference meeting, please refer to . + +## China's contributor statistics + +The contributor summit took place in Shanghai, with 90% of the attendees being Chinese. +Within the Cloud Native Computing Foundation (CNCF) ecosystem, contributions from China have been steadily increasing. Currently: + +- Chinese contributors account for 9% of the total. +- Contributions from China make up 11.7% of the overall volume. +- China ranks second globally in terms of contributions. + +{{< note >}} +The data is from KubeCon keynotes by Chris Aniszczyk, CTO of Cloud Native Computing Foundation, +on September 26, 2023. This probably understates Chinese contributions. A lot of Chinese contributors +use VPNs and may not show up as being from China in the stats accurately. 
+{{< /note >}} + +The Kubernetes Contributor Summit is an inclusive meetup that welcomes all community contributors, including: + +- New Contributors +- Current Contributors + - docs + - code + - community management +- Subproject members +- Members of Special Interest Group (SIG) / Working Group (WG) +- Active Contributors +- Casual Contributors + +## Acknowledgments + +We would like to express our gratitude to the organizers of this event: + +- [Kevin Wang](https://github.com/kevin-wangzefeng), the co-chair of KubeCon and the lead of the kubernetes contributor summit. +- [Paco Xu](https://github.com/pacoxu), who actively coordinated the venue, meals, invited contributors from both China and + international sources, and established WeChat groups to collect agenda topics. They also shared details of the event + before and after its occurrence through [pre and post announcements](https://github.com/kubernetes/community/issues/7510). +- [Mengjiao Liu](https://github.com/mengjiao-liu), who was responsible for organizing, coordinating, + and facilitating various matters related to the summit. + +We extend our appreciation to all the contributors who attended the China Kubernetes Contributor Summit in Shanghai. +Your dedication and commitment to the Kubernetes community are invaluable. +Together, we continue to push the boundaries of cloud native technology and shape the future of this ecosystem. diff --git a/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs03.jpeg b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs03.jpeg new file mode 100644 index 0000000000000..c6131bfc911f2 Binary files /dev/null and b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs03.jpeg differ diff --git a/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs04.jpeg b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs04.jpeg new file mode 100644 index 0000000000000..61cb7ef8526fe Binary files /dev/null and b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs04.jpeg differ diff --git a/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs06.jpeg b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs06.jpeg new file mode 100644 index 0000000000000..f66c505e7a8d4 Binary files /dev/null and b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs06.jpeg differ diff --git a/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md b/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md new file mode 100644 index 0000000000000..819e8f76353b7 --- /dev/null +++ b/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md @@ -0,0 +1,105 @@ +--- +layout: blog +title: PersistentVolume Last Phase Transition Time in Kubernetes +date: 2023-10-23 +slug: persistent-volume-last-phase-transition-time +--- + +**Author:** Roman Bednář (Red Hat) + +In the recent Kubernetes v1.28 release, we (SIG Storage) introduced a new alpha feature that aims to improve PersistentVolume (PV) +storage management and help cluster administrators gain better insights into the lifecycle of PVs. +With the addition of the `lastPhaseTransitionTime` field into the status of a PV, +cluster administrators are now able to track the last time a PV transitioned to a different +[phase](/docs/concepts/storage/persistent-volumes/#phase), allowing for more efficient +and informed resource management. + +## Why do we need new PV field? {#why-new-field} + +PersistentVolumes in Kubernetes play a crucial role in providing storage resources to workloads running in the cluster. 
+However, managing these PVs effectively can be challenging, especially when it comes +to determining the last time a PV transitioned between different phases, such as +`Pending`, `Bound` or `Released`. +Administrators often need to know when a PV was last used or transitioned to certain +phases; for instance, to implement retention policies, perform cleanup, or monitor storage health. + +In the past, Kubernetes users have faced data loss issues when using the `Delete` retain policy and had to resort to the safer `Retain` policy. +When we planned the work to introduce the new `lastPhaseTransitionTime` field, we +wanted to provide a more generic solution that can be used for various use cases, +including manual cleanup based on the time a volume was last used or producing alerts based on phase transition times. + +## How lastPhaseTransitionTime helps + +Provided you've enabled the feature gate (see [How to use it](#how-to-use-it), the new `.status.lastPhaseTransitionTime` field of a PersistentVolume (PV) +is updated every time that PV transitions from one phase to another. +`` +Whether it's transitioning from `Pending` to `Bound`, `Bound` to `Released`, or any other phase transition, the `lastPhaseTransitionTime` will be recorded. +For newly created PVs the phase will be set to `Pending` and the `lastPhaseTransitionTime` will be recorded as well. + +This feature allows cluster administrators to: + +1. Implement Retention Policies + + With the `lastPhaseTransitionTime`, administrators can now track when a PV was last used or transitioned to the `Released` phase. + This information can be crucial for implementing retention policies to clean up resources that have been in the `Released` phase for a specific duration. + For example, it is now trivial to write a script or a policy that deletes all PVs that have been in the `Released` phase for a week. + +2. Monitor Storage Health + + By analyzing the phase transition times of PVs, administrators can monitor storage health more effectively. + For example, they can identify PVs that have been in the `Pending` phase for an unusually long time, which may indicate underlying issues with the storage provisioner. + +## How to use it + +The `lastPhaseTransitionTime` field is alpha starting from Kubernetes v1.28, so it requires +the `PersistentVolumeLastPhaseTransitionTime` feature gate to be enabled. + +If you want to test the feature whilst it's alpha, you need to enable this feature gate on the `kube-controller-manager` and the `kube-apiserver`. + +Use the `--feature-gates` command line argument: + +```shell +--feature-gates="...,PersistentVolumeLastPhaseTransitionTime=true" +``` + +Keep in mind that the feature enablement does not have immediate effect; the new field will be populated whenever a PV is updated and transitions between phases. +Administrators can then access the new field through the PV status, which can be retrieved using standard Kubernetes API calls or through Kubernetes client libraries. + +Here is an example of how to retrieve the `lastPhaseTransitionTime` for a specific PV using the `kubectl` command-line tool: + +```shell +kubectl get pv -o jsonpath='{.status.lastPhaseTransitionTime}' +``` + +## Going forward + +This feature was initially introduced as an alpha feature, behind a feature gate that is disabled by default. +During the alpha phase, we (Kubernetes SIG Storage) will collect feedback from the end user community and address any issues or improvements identified. 
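To make the retention-policy use case described earlier more concrete, a cleanup check might look something like the following minimal sketch. The one-week window, the use of GNU `date`, and the `jq` dependency are illustrative assumptions rather than part of the feature, and the actual deletion is deliberately left as a manual follow-up step:

```shell
# List PVs that have sat in the Released phase for more than 7 days,
# based on the new .status.lastPhaseTransitionTime field (alpha in v1.28).
cutoff=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)   # GNU date syntax

kubectl get pv -o json | jq -r --arg cutoff "$cutoff" '
  .items[]
  | select(.status.phase == "Released")
  | select((.status.lastPhaseTransitionTime // "") != "" and .status.lastPhaseTransitionTime < $cutoff)
  | .metadata.name'

# Review the output before removing anything, for example:
# kubectl delete pv <name>
```
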
+ +Once sufficient feedback has been received, or no complaints are received the feature can move to beta. +The beta phase will allow us to further validate the implementation and ensure its stability. + +At least two Kubernetes releases will happen between the release where this field graduates +to beta and the release that graduates the field to general availability (GA). That means that +the earliest release where this field could be generally available is Kubernetes 1.32, +likely to be scheduled for early 2025. + +## Getting involved + +We always welcome new contributors so if you would like to get involved you can +join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG). + +If you would like to share feedback, you can do so on our +[public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5). +If you're not already part of that Slack workspace, you can visit https://slack.k8s.io/ for an invitation. + +Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order): + +- Han Kang ([logicalhan](https://github.com/logicalhan)) +- Jan Šafránek ([jsafrane](https://github.com/jsafrane)) +- Jordan Liggitt ([liggitt](https://github.com/liggitt)) +- Kiki ([carlory](https://github.com/carlory)) +- Michelle Au ([msau42](https://github.com/msau42)) +- Tim Bannister ([sftim](https://github.com/sftim)) +- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t)) +- Xing Yang ([xing-yang](https://github.com/xing-yang)) diff --git a/content/en/blog/_posts/2023-10-24-kubernetes-1.28-release-interview.md b/content/en/blog/_posts/2023-10-24-kubernetes-1.28-release-interview.md new file mode 100644 index 0000000000000..6311b7808714b --- /dev/null +++ b/content/en/blog/_posts/2023-10-24-kubernetes-1.28-release-interview.md @@ -0,0 +1,223 @@ +--- +layout: blog +title: "Plants, process and parties: the Kubernetes 1.28 release interview" +date: 2023-10-24 +--- + +**Author**: Craig Box + +Since 2018, one of my favourite contributions to the Kubernetes community has been to [share the story of each release](https://www.google.com/search?q=%22release+interview%22+site%3Akubernetes.io%2Fblog). Many of these stories were told on behalf of a past employer; by popular demand, I've brought them back, now under my own name. If you were a fan of the old show, I would be delighted if you would [subscribe](https://craigbox.substack.com/about). + +Back in August, [we welcomed the release of Kubernetes 1.28](/blog/2023/08/15/kubernetes-v1-28-release/). That release was led by [Grace Nguyen](https://twitter.com/gracenng), a CS student at the University of Waterloo. Grace joined me for the traditional release interview, and while you can read her story below, [I encourage you to listen to it if you can](https://craigbox.substack.com/p/the-kubernetes-128-release-interview). + +*This transcript has been lightly edited and condensed for clarity.* + +--- + +**You're a student at the University of Waterloo, so I want to spend the first two minutes of this interview talking about the Greater Kitchener-Waterloo region. It's August, so this is one of the four months of the year when there's no snow visible on the ground?**
+Well, it's not that bad. I think the East Coast has it kind of good. I grew up in Calgary, but I do love summer here in Waterloo. We have a [petting zoo](https://goo.gl/maps/W1nM7LjNZPv) close to our university campus, so I go and see the llamas sometimes. + +**Is that a new thing?**
+I'm not sure, it seems like it's been around five-ish years, the Waterloo Park? + +**I lived there in 2007, for a couple of years, just to set the scene for why we're talking about this. I think they were building a lot of the park then. I do remember, of course, that [Kitchener holds the second largest Oktoberfest in the world](https://www.oktoberfest.ca/). Is that something you've had a chance to check out?**
+I have not. I actually didn't know that was a fact. + +**The local civic organization is going to have to do a bit more work, I feel. Do you like ribs?**
+I have mixed feelings about ribs. It's kind of a hit or miss situation for me so far. + +**Again, that might be something that's changed over the last few years. The Ribfests used to have a lot of trophies with little pigs on top of them, but I feel that the shifting dining habits of the world might mean they have to offer some vegan or vegetarian options, to please the modern palette.**
+[LAUGHS] For sure. Do you recommend the Oktoberfest here? Have you been? + +**I went a couple of times. It was a lot of fun.**
+Okay. + +**It's basically just drinking. I would have recommended it back then; I'm not sure it would be quite what I'd be doing today.**
+All right, good to know. + +**The Ribfest, however, I would go back just for that.**
+Oh, ok. + +**And the great thing about Ribfests as a concept is that they have one in every little town. [The Kitchener Ribfest](https://kitchenerribandbeerfest.com/), I looked it up, it's in July; you've just missed that. But, you could go to the [Waterloo Ribfest](https://northernheatribseries.ca/waterloo/) in September.**
+Oh, it is in September? They have their own Ribfest? + +**They do. I think Guelph has one, and Cambridge has one. That's the advantage of the region — there are lots of little cities. Kitchener and Waterloo are two cities that grew into each other — they do call them the Twin Cities. I hear that they finally built the light rail link between the two of them?**
+It is fantastic, and makes the city so much more walkable. + +**Yes, you can go from one mall to the other. That's Canada for you.**
+Well, Uptown is really nice. I quite like it. It's quite cozy. + +**Do you ever cross the border over into Kitchener? Or only when you've lost a bet?**
+Yeah, not a lot. Only for farmer's market, I say. + +**It's worthwhile. There's a lot of good food there, I remember.**
+Yeah. Quite lovely. + +**Now we've got all that out of the way, let's travel back in time a little bit. You mentioned there that you went to high school in Calgary?**
+I did. I had not been to Ontario before I went to university. Calgary was frankly too cold and not walkable enough for me. + +**I basically say the same thing about Waterloo and that's why I moved to England.**
+Fascinating. Gets better. + +**How did you get into tech?**
+I took a computer science class in high school. I was one of maybe only three women in the class, and I kind of stuck with it since. + +**Was the gender distribution part of your thought process at the time?**
+Yeah, I think I was drawn to it partially because I didn't see a lot of people who looked like me in the class. + +**You followed it through to university. What is it that you're studying?**
+I am studying computer engineering, so a lot of hardware stuff. + +**You're involved in the [UW Cybersecurity Club](https://www.facebook.com/groups/uwcyber/). What can you tell me about that without having to kill me?**
+Oh, we are very nice and friendly people! I told myself I'm going to have a nice and chill summer and then I got chosen to lead the release and also ended up running the Waterloo Cybersecurity Club. The club kind of died out during the pandemic, because we weren't on campus, but we have so many smart and amazing people who are in cybersecurity, so it's great to get them together and I learned so many things. + +**Is that like the modern equivalent of the [LAN party](https://en.wikipedia.org/wiki/LAN_party)? You're all getting into a dark room and trying to hack the Gibson?**
+[LAUGHS] Well, you'll have to explain to me again what a LAN party is. Do you bring your own PC? + +**You used to. Back in the day it was incomprehensible that you could communicate with a different person in a different place at a fast enough speed, so you had to physically sit next to somebody and plug a cable in between you.**
+Okay, well kind of the same, I guess. We bring our own laptop and we go to CTF competitions together. + +**They didn't have laptops back in the days of LAN parties. You'd bring a giant 19-inch square monitor, and everything. It was a badge of honor what you could carry.**
+Okay. Can't relate, but good to know. [LAUGHS] + +**One of the more unique aspects of UW is its [co-op system](https://uwaterloo.ca/future-students/co-op). Tell us a little bit about that?**
+As part of my degree, I am required to do minimum five and maximum six co-ops. I've done all six of them. Two of them were in Kubernetes and that's how I got started. + +**A co-op is a placement, as opposed to something you do on campus?**
+Right, so co-op is basically an internship. My first one was at the Canada Revenue Agency. We didn't have wifi and I had my own cubicle, which is interesting. They don't do that anymore, they have open office space. But my second was at Ericsson, where I learned about Kubernetes. It was during the pandemic. KubeCon offered virtual attendance for students and I signed up and I poked around and I have been around since. + +**What was it like going through university during the COVID years? What did that mean in terms of the fact you would previously have traveled to these internships? Did you do them all from home?**
+I'm not totally sure what I missed out on. For sure, a lot of relationship building, but also that we do have to move a lot as part of the co-op experience. Last fall I was in San Francisco, I was in Palo Alto earlier this year. A lot of that dynamic has already been the case. + +**Definitely different weather systems, Palo Alto versus Waterloo.**
+Oh, for sure. Yes, yes. Really glad I was there over the winter. + +**The first snow would fall in Ontario about the end of October and it would pile up over the next few months. There were still piles that hadn't melted by June. That's why I say, there were only four months of the year, July through September, where there was no snow on the ground.**
+ That's true. Didn't catch any snow in Palo Alto, and honestly, that's great. [CHUCKLES] + +**Thank you, global warming, I guess.**
+Oh no! [LAUGHS] + +**Tell me about the co-op term that you did working with Kubernetes at Ericsson?**
+This was such a long time ago, but we were trying to build some sort of pipeline to deploy testing. It was running inside a cluster, and I learned Helm charts and all that good stuff. And then, for the co-op after that, I worked at a Canadian startup in FinTech. It was 24/7 Kubernetes, [building their secret injection system, using ArgoCD to automatically pull secrets from 1Password](https://medium.com/@nng.grace/automated-kubernetes-secret-injection-with-1password-secret-automation-and-hashicorp-vault-8db826c50c1d). + +**How did that lead you on to involvement with the release team?**
+It was over the pandemic, so I didn't have a lot to do, I went to the conference, saw so many cool talks. One that really stuck out to me was [a Kubernetes hacking talk by Tabitha Sable and V Korbes](https://www.youtube.com/watch?v=-4W3ChRVeLI). I thought it was the most amazing thing and it was so cool. One of my friends was on the release team at the time, and she showed me what she does. I applied and thankfully got in. I didn't have any open source experience. It was fully like one of those things where someone took a chance on me. + +**How would you characterize the experience that you've had to date? You have had involvement with pretty much every release since then.**
+Yeah, I think it was a really formative experience, and the community has been such a big part of it. + +**You started as an enhancement shadow with Kubernetes 1.22, eventually moving up to enhancements lead, then you moved on to be the release lead shadow. Obviously, you are the lead for 1.28, but for 1.27 you did something a bit different. What was that, and why did you do it?**
+For 1.25 and 1.26, I was release lead shadow, so I had an understanding of what that role was like. I wanted to shadow another team, and at that time I thought CI Signal was a big black box to me. I joined the team, but I also had capacity for other things, I joined as a branch manager associate as well. + +**What is the difference between that role and the traditional release team roles we think about?**
+Yeah, that's a great question. So the branch management role is a more constant role. They don't necessarily get swapped out every release. You shadow as an associate, so you do things like cut releases, distribute them, update distros, things like that. It's a really important role, and the folks that are in there are more technical. So if you have been on the release team for a long time and are looking for more permanent role, I recommend looking into that. + +**Congratulations again on [the release of 1.28 today](/blog/2023/08/15/kubernetes-v1-28-release/).**
+Yeah, thank you. + +**What is the best new feature in Kubernetes 1.28, and why is it [sidecar container support](/blog/2023/08/25/native-sidecar-containers/)?**
+Great question. I am as excited as you. In 1.28, we have a new feature in alpha, which is sidecar container support. We introduced a new field called restartPolicy for init containers, that allows the containers to live throughout the life cycle of the pod and not block the pod from terminating. Craig, you know a lot about this, but there are so many use cases for this. It is a very common pattern. You use it for logging, monitoring, metrics; also configs and secrets as well. + +**And the service mesh!**
+And the service mesh. + +**Very popular. I will say that the Sidecar pattern was called out very early on, in [a blog post Brendan Burns wrote](/blog/2015/06/the-distributed-system-toolkit-patterns/), talking about how you can achieve some of the things you just mentioned. Support for it in Kubernetes has been— it's been a while, shall we say. I've been doing these interviews since 2018, and September 2019 was when [I first had a conversation with a release manager](/blog/2019/12/06/when-youre-in-the-release-team-youre-family-the-kubernetes-1.16-release-interview/) who felt they had to apologize for Sidecar containers not shipping in that release.**
+Well, here we are! + +**Thank you for not letting the side down.**
+[LAUGHS] + +**There are a bunch of other features that are going to GA in 1.28. Tell me about what's new with [kubectl events](https://github.com/kubernetes/enhancements/issues/1440)?**
+It got a new CLI and now it is separate from kubectl get. I think that changes in the CLI are always a little bit more apparent because they are user-facing. + +**Are there a lot of other user-facing changes, or are most of the things in the release very much behind the scenes?**
+I would say it's a good mix of both; it depends on what you're interested in. + +**I am interested, of course, in [non-graceful node shutdown support](https://github.com/kubernetes/enhancements/issues/2268). What can you tell us about that?**
+Right, so for situations where you have a hardware failure or a broken OS, we have added additional support for a better graceful shutdown. + +**If someone trips over the power cord at your LAN party and your cluster goes offline as a result?**
+Right, exactly. More availability! That's always good. + +**And if it's not someone tripping over your power cord, it's probably DNS that broke your cluster. What's changed in terms of DNS configuration?**
+Oh, we introduced [a new feature gate to allow more DNS search path](https://github.com/kubernetes/enhancements/issues/2595). + +**Is that all there is to it?**
+That's pretty much it. [LAUGHING] Yeah, you can have more and longer DNS search path. + +**It can never be long enough. Just search everything! If .com doesn't work, try .net and try .io after that.**
+Surely. + +**Those are a few of the big features that are moving to stable. Obviously, over the course of the last few releases, features come in, moving from Alpha to Beta and so on. New features coming in today might not be available to people for a while. As you mentioned, there are feature gates that you can enable to allow people to have access to these. What are some of the newest features that have been introduced that are in Alpha, that are particularly interesting to you personally?**
+I have two. The first one is [`kubectl delete --interactive`](https://github.com/kubernetes/enhancements/issues/3895). I'm always nervous when I delete something, you know, it's going to be a typo or it's going to be on the wrong tab. So we have an `--interactive` flag for that now. + +**So you can get feedback on what you're about to delete before you do it?**
+Right; confirmation is good! + +**You mentioned two there, what was the second one?**
+Right; this one is close to my heart. It is a SIG Release KEP, [publishing on community infrastructure](https://github.com/kubernetes/enhancements/issues/1731). I'm not sure if you know, but as part of my branch management associate role in 1.27, I had the opportunity to cut a few releases. It takes up to 12 hours sometimes. And now, we are hoping that the process only includes release managers, so we don't have to call up the folks at Google and, you know, lengthen that process anymore. + +**Is 12 hours the expected length for software of this size, or is there work in place to try and bring that down?**
+There's so much work in place to bring that down. I think 12 hours is on the shorter end of it. Unfortunately, we have had a situation where we have to, you know, switch the release manager because it's just so late at night for them. + +**They've fallen asleep halfway through?**
+Exactly, yeah. 6 to 12 hours, I think, is our status quo. + +**The theme for this release is "[Planternetes](/blog/2023/08/15/kubernetes-v1-28-release/#release-theme-and-logo)". That's going to need some explanation, I feel.**
+Okay. I had full creative control over this. It is summer in the northern hemisphere, and I am a big house plant fanatic. It's always a little sad when I have to move cities for co-op and can't take my plants with me. + +**Is that a border control thing? They don't let you take them over the border?**
+It's not even that; they're just so clunky and fragile. It's usually not worth the effort. But I think our community is very much like a garden. We have very critical roles in the ecosystem and we all have to work together. + +**Will you be posting seeds out to contributors and growing something together all around the world?**
+That would be so cool if we had merch, like a little card with seeds embedded in it. I don't think we have the budget for that though. [LAUGHS] + +**You say that. There are people who are inspired in many different areas. I love talking to the release managers and hearing the things that they're interested in. You should think about taking some seeds off one of your plants, and just spreading them around the world. People can take pictures, and tag you in them on Instagram.**
+That's cool. You know how we have a SIG Beard? We can have a SIG Plant. + +**You worked for a long time with the release lead for 1.27. Xander Grzywinski. One of the benefits of having [done my interview with him in writing](https://craigbox.substack.com/p/kubernetes-and-chill) and not as a podcast is I didn't have to try and butcher pronouncing his surname. Can you help me out here?**
+I unfortunately cannot. I don't want to butcher it either! + +**Anyway, Xander told me that he suspected that in this release you would have to deal with some very last-minute PRs, as is tradition. Was that the case?**
+I vividly remember the last minute PRs from last release because I was trying to cut the releases, as part of the branch management team. Thankfully, that was not the case this release. We have had other challenges, of course. + +**Can you tell me some of those challenges?**
+I think improvement on documentation is always a big part. The KEP process can be very daunting to new contributors. How do you get people to review your KEPs? How do you opt in? All that stuff. We're improving documentations for that. + +**As someone who has been through a lot of releases, I've been feeling, like you've said, that the last minute nature has slowed down a little. The process is perhaps improving. Do you see that, or do you think there's still a long way to go for the leads to improve it?**
+I think we've come very far. When I started in 1.22, we were using spreadsheets to track a hundred enhancements. It was a monster; I was terrified to touch it. Now, we're on GitHub boards. As a result of that, we are actually merging the bug triage and CI Signal team in 1.29. + +**What's the impact of that?**
+The bug triage team is now using the GitHub board to track issues, which is much more efficient. We are able to merge the two teams together. + +**I have heard a rumor that GitHub boards are powered by spreadsheets underneath.**
+Honestly, even if that's true, the fact that it's on the same platform and it has better version control is just magical. + +**At this time, the next release lead has not yet been announced, but tradition dictates that you write down your feelings, best wishes and instructions to them in an envelope, which you'll leave in their desk drawer. What are you going to put inside that envelope?**
+Our 1.28 release lead is fantastic and they're so capable of handling the release— + +**That's you, isn't it?**
+1.29? [LAUGHS] No, I'm too tired. I need to catch up on my sleep. My advice for them? It's going to be okay. It's all going to be okay. I was going to echo Leo's and Cici's words, to overcommunicate, but I think that has been said enough times already. + +**You've communicated enough. Stop! No more communication!**
+Yeah, no more communication. [LAUGHS] It's going to be okay. And honestly, shout out to my emeritus advisor, Leo, for reminding me that. Sometimes there are a lot of fires and it can be overwhelming, but it will be okay. + +**As we've alluded to a little bit throughout our conversation, there are a lot of people in the Kubernetes community who, for want of a better term, have had "a lot of experience" at running these systems. Then there are, of course, a lot of people who are just at the beginning of their careers; like yourself, at university. How do you see the difference between how those groups interact? Is there one team throughout, or what do you think that each can learn from the other?**
+I think the diversity of the team is one of its strengths and I really enjoy it. I learn so much from folks who have been doing this for 20 years or folks who are new to the industry like I am. + +**I know the CNCF goes to a lot of effort to enable new people to take part. Is there anything that you can say about how people might get involved?**
+Firstly, I think SIG Release has started a wonderful tradition, or system, of [helping new folks join the release team as a shadow](https://github.com/kubernetes/sig-release/blob/master/release-team/shadows.md), and helping them grow into bigger positions, like leads. I think other SIGs are also following that template as well. But a big part of me joining and sticking with the community has been the ability to go to conferences. As I said, my first conference was KubeCon, when I was not involved in the community at all. And so a big shout-out to the CNCF and the companies that sponsor the Dan Kohn and the speaker scholarships. They have been the sole reason that I was able to attend KubeCon, meet people, and feel the power of the community. + +**Last year's KubeCon in North America was in Detroit?**
+Detroit, [I was there, yeah](https://medium.com/@nng.grace/kubecon-in-the-motor-city-4e23e0446751). + +**That's quite a long drive?**
+I was in SF, so I flew over. + +**You live right next door! If only you'd been in Waterloo.**
+Yeah, but who knows? Maybe I'll do a road trip from Waterloo to Chicago this year. + +--- + +_[Grace Nguyen](https://twitter.com/GraceNNG) is a student at the University of Waterloo, and was the release team lead for Kubernetes 1.28. Subscribe to [Let's Get To The News](https://craigbox.substack.com/about#§follow-the-podcast), or search for it wherever you get your podcasts._ \ No newline at end of file diff --git a/content/en/docs/concepts/architecture/_index.md b/content/en/docs/concepts/architecture/_index.md index 61fb48e7142b7..7c9a45c71e294 100644 --- a/content/en/docs/concepts/architecture/_index.md +++ b/content/en/docs/concepts/architecture/_index.md @@ -5,3 +5,4 @@ description: > The architectural concepts behind Kubernetes. --- +{{< figure src="/images/docs/kubernetes-cluster-architecture.svg" alt="Components of Kubernetes" caption="Kubernetes cluster architecture" class="diagram-large" >}} diff --git a/content/en/docs/concepts/architecture/cgroups.md b/content/en/docs/concepts/architecture/cgroups.md index b0a98af6604b0..b96d89e0d6dd4 100644 --- a/content/en/docs/concepts/architecture/cgroups.md +++ b/content/en/docs/concepts/architecture/cgroups.md @@ -104,8 +104,8 @@ updated to newer versions that support cgroup v2. For example: DaemonSet for monitoring pods and containers, update it to v0.43.0 or later. * If you deploy Java applications, prefer to use versions which fully support cgroup v2: * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later - * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later - * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later + * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later + * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later * If you are using the [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs) package, make sure the version you use is v1.5.1 or higher. diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index f01736c10351c..aa36d4c97a846 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -37,7 +37,7 @@ installation instructions. The list does not try to be exhaustive. network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features. - Cilium is a [CNCF project at the Incubation level](https://www.cncf.io/projects/cilium/). + Cilium is a [CNCF project at the Graduated level](https://www.cncf.io/projects/cilium/). * [CNI-Genie](https://github.com/cni-genie/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave. CNI-Genie is a [CNCF project at the Sandbox level](https://www.cncf.io/projects/cni-genie/). 
diff --git a/content/en/docs/concepts/cluster-administration/flow-control.md b/content/en/docs/concepts/cluster-administration/flow-control.md index 7b44cf4e5dc9e..a00c146d5144a 100644 --- a/content/en/docs/concepts/cluster-administration/flow-control.md +++ b/content/en/docs/concepts/cluster-administration/flow-control.md @@ -488,6 +488,8 @@ exports additional metrics. Monitoring these can help you determine whether your configuration is inappropriately throttling important traffic, or find poorly-behaved workloads that may be harming system health. +#### Maturity level BETA + * `apiserver_flowcontrol_rejected_requests_total` is a counter vector (cumulative since server start) of requests that were rejected, broken down by the labels `flow_schema` (indicating the one that @@ -509,6 +511,37 @@ poorly-behaved workloads that may be harming system health. vector (cumulative since server start) of requests that began executing, broken down by `flow_schema` and `priority_level`. +* `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector + holding the instantaneous number of queued (not executing) requests, + broken down by `priority_level` and `flow_schema`. + +* `apiserver_flowcontrol_current_executing_requests` is a gauge vector + holding the instantaneous number of executing (not waiting in a + queue) requests, broken down by `priority_level` and `flow_schema`. + +* `apiserver_flowcontrol_current_executing_seats` is a gauge vector + holding the instantaneous number of occupied seats, broken down by + `priority_level` and `flow_schema`. + +* `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram + vector of how long requests spent queued, broken down by the labels + `flow_schema`, `priority_level`, and `execute`. The `execute` label + indicates whether the request has started executing. + + {{< note >}} + Since each FlowSchema always assigns requests to a single + PriorityLevelConfiguration, you can add the histograms for all the + FlowSchemas for one priority level to get the effective histogram for + requests assigned to that priority level. + {{< /note >}} + +* `apiserver_flowcontrol_nominal_limit_seats` is a gauge vector + holding each priority level's nominal concurrency limit, computed + from the API server's total concurrency limit and the priority + level's configured nominal concurrency shares. + +#### Maturity level ALPHA + * `apiserver_current_inqueue_requests` is a gauge vector of recent high water marks of the number of queued requests, grouped by a label named `request_kind` whose value is `mutating` or `readOnly`. @@ -518,6 +551,10 @@ poorly-behaved workloads that may be harming system health. last window's high water mark of number of requests actively being served. +* `apiserver_current_inqueue_seats` is a gauge vector of the sum over + queued requests of the largest number of seats each will occupy, + grouped by labels named `flow_schema` and `priority_level`. + * `apiserver_flowcontrol_read_vs_write_current_requests` is a histogram vector of observations, made at the end of every nanosecond, of the number of requests broken down by the labels @@ -528,14 +565,6 @@ poorly-behaved workloads that may be harming system health. number of requests (queue volume limit for waiting and concurrency limit for executing). -* `apiserver_flowcontrol_current_inqueue_requests` is a gauge vector - holding the instantaneous number of queued (not executing) requests, - broken down by `priority_level` and `flow_schema`. 
- -* `apiserver_flowcontrol_current_executing_requests` is a gauge vector - holding the instantaneous number of executing (not waiting in a - queue) requests, broken down by `priority_level` and `flow_schema`. - * `apiserver_flowcontrol_request_concurrency_in_use` is a gauge vector holding the instantaneous number of occupied seats, broken down by `priority_level` and `flow_schema`. @@ -584,11 +613,6 @@ poorly-behaved workloads that may be harming system health. was always equal to `apiserver_flowcontrol_current_limit_seats` (which did not exist as a distinct metric). -* `apiserver_flowcontrol_nominal_limit_seats` is a gauge vector - holding each priority level's nominal concurrency limit, computed - from the API server's total concurrency limit and the priority - level's configured nominal concurrency shares. - * `apiserver_flowcontrol_lower_limit_seats` is a gauge vector holding the lower bound on each priority level's dynamic concurrency limit. @@ -631,18 +655,6 @@ poorly-behaved workloads that may be harming system health. holding, for each priority level, the dynamic concurrency limit derived in the last adjustment. -* `apiserver_flowcontrol_request_wait_duration_seconds` is a histogram - vector of how long requests spent queued, broken down by the labels - `flow_schema`, `priority_level`, and `execute`. The `execute` label - indicates whether the request has started executing. - - {{< note >}} - Since each FlowSchema always assigns requests to a single - PriorityLevelConfiguration, you can add the histograms for all the - FlowSchemas for one priority level to get the effective histogram for - requests assigned to that priority level. - {{< /note >}} - * `apiserver_flowcontrol_request_execution_seconds` is a histogram vector of how long requests took to actually execute, broken down by `flow_schema` and `priority_level`. @@ -661,6 +673,11 @@ poorly-behaved workloads that may be harming system health. to a request being dispatched but did not, due to lack of available concurrency, broken down by `flow_schema` and `priority_level`. +* `apiserver_flowcontrol_epoch_advance_total` is a counter vector of + the number of attempts to jump a priority level's progress meter + backward to avoid numeric overflow, grouped by `priority_level` and + `success`. + ## Good practices for using API Priority and Fairness When a given priority level exceeds its permitted concurrency, requests can diff --git a/content/en/docs/concepts/cluster-administration/system-logs.md b/content/en/docs/concepts/cluster-administration/system-logs.md index d2a9d46bbd7e6..1feeecd3db7e5 100644 --- a/content/en/docs/concepts/cluster-administration/system-logs.md +++ b/content/en/docs/concepts/cluster-administration/system-logs.md @@ -17,6 +17,13 @@ scheduler decisions). +{{< warning >}} +In contrast to the command line flags described here, the *log +output* itself does *not* fall under the Kubernetes API stability guarantees: +individual log entries and their formatting may change from one release +to the next! +{{< /warning >}} + ## Klog klog is the Kubernetes logging library. 
[klog](https://github.com/kubernetes/klog) diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index d546ec12e4964..fb3140ca11a51 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -6,8 +6,8 @@ content_type: concept feature: title: Secret and configuration management description: > - Deploy and update secrets and application configuration without rebuilding your image - and without exposing secrets in your stack configuration. + Deploy and update Secrets and application configuration without rebuilding your image + and without exposing Secrets in your stack configuration. weight: 30 --- @@ -24,7 +24,7 @@ Because Secrets can be created independently of the Pods that use them, there is less risk of the Secret (and its data) being exposed during the workflow of creating, viewing, and editing Pods. Kubernetes, and applications that run in your cluster, can also take additional precautions with Secrets, such as avoiding -writing secret data to nonvolatile storage. +writing sensitive data to nonvolatile storage. Secrets are similar to {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}} but are specifically intended to hold confidential data. @@ -68,7 +68,7 @@ help automate node registration. ### Use case: dotfiles in a secret volume You can make your data "hidden" by defining a key that begins with a dot. -This key represents a dotfile or "hidden" file. For example, when the following secret +This key represents a dotfile or "hidden" file. For example, when the following Secret is mounted into a volume, `secret-volume`, the volume will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`. @@ -78,35 +78,7 @@ Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents. {{< /note >}} -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: dotfile-secret -data: - .secret-file: dmFsdWUtMg0KDQo= ---- -apiVersion: v1 -kind: Pod -metadata: - name: secret-dotfiles-pod -spec: - volumes: - - name: secret-volume - secret: - secretName: dotfile-secret - containers: - - name: dotfile-test-container - image: registry.k8s.io/busybox - command: - - ls - - "-l" - - "/etc/secret-volume" - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -``` +{{% code language="yaml" file="secret/dotfile-secret.yaml" %}} ### Use case: Secret visible to one container in a Pod @@ -135,8 +107,8 @@ Here are some of your options: [ServiceAccount](/docs/reference/access-authn-authz/authentication/#service-account-tokens) and its tokens to identify your client. - There are third-party tools that you can run, either within or outside your cluster, - that provide secrets management. For example, a service that Pods access over HTTPS, - that reveals a secret if the client correctly authenticates (for example, with a ServiceAccount + that manage sensitive data. For example, a service that Pods access over HTTPS, + that reveals a Secret if the client correctly authenticates (for example, with a ServiceAccount token). 
- For authentication, you can implement a custom signer for X.509 certificates, and use [CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/) @@ -251,18 +223,7 @@ fills in some other fields such as the `kubernetes.io/service-account.uid` annot The following example configuration declares a ServiceAccount token Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-sa-sample - annotations: - kubernetes.io/service-account.name: "sa-name" -type: kubernetes.io/service-account-token -data: - # You can include additional key value pairs as you do with Opaque Secrets - extra: YmFyCg== -``` +{{% code language="yaml" file="secret/serviceaccount-token-secret.yaml" %}} After creating the Secret, wait for Kubernetes to populate the `token` key in the `data` field. @@ -290,16 +251,7 @@ you must use one of the following `type` values for that Secret: Below is an example for a `kubernetes.io/dockercfg` type of Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-dockercfg -type: kubernetes.io/dockercfg -data: - .dockercfg: | - "" -``` +{{% code language="yaml" file="secret/dockercfg-secret.yaml" %}} {{< note >}} If you do not want to perform the base64 encoding, you can choose to use the @@ -369,16 +321,11 @@ Secret manifest. The following manifest is an example of a basic authentication Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-basic-auth -type: kubernetes.io/basic-auth -stringData: - username: admin # required field for kubernetes.io/basic-auth - password: t0p-Secret # required field for kubernetes.io/basic-auth -``` +{{% code language="yaml" file="secret/basicauth-secret.yaml" %}} + +{{< note >}} +The `stringData` field for a Secret does not work well with server-side apply. +{{< /note >}} The basic authentication Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for basic authentication. @@ -397,17 +344,7 @@ as the SSH credential to use. The following manifest is an example of a Secret used for SSH public/private key authentication: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-ssh-auth -type: kubernetes.io/ssh-auth -data: - # the data is abbreviated in this example - ssh-privatekey: | - MIIEpQIBAAKCAQEAulqb/Y ... -``` +{{% code language="yaml" file="secret/ssh-auth-secret.yaml" %}} The SSH authentication Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for SSH authentication. @@ -440,21 +377,7 @@ the base64 encoded certificate and private key. For details, see The following YAML contains an example config for a TLS Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-tls -type: kubernetes.io/tls -stringData: - # the data is abbreviated in this example - tls.crt: | - --------BEGIN CERTIFICATE----- - MIIC2DCCAcCgAwIBAgIBATANBgkqh ... - tls.key: | - -----BEGIN RSA PRIVATE KEY----- - MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... -``` +{{% code language="yaml" file="secret/tls-auth-secret.yaml" %}} The TLS Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for TLS authentication. @@ -486,26 +409,12 @@ string of the token ID. 
As a Kubernetes manifest, a bootstrap token Secret might look like the following: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: bootstrap-token-5emitj - namespace: kube-system -type: bootstrap.kubernetes.io/token -data: - auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= - expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= - token-id: NWVtaXRq - token-secret: a3E0Z2lodnN6emduMXAwcg== - usage-bootstrap-authentication: dHJ1ZQ== - usage-bootstrap-signing: dHJ1ZQ== -``` +{{% code language="yaml" file="secret/bootstrap-token-secret-base64.yaml" %}} A bootstrap token Secret has the following keys specified under `data`: - `token-id`: A random 6 character string as the token identifier. Required. -- `token-secret`: A random 16 character string as the actual token secret. Required. +- `token-secret`: A random 16 character string as the actual token Secret. Required. - `description`: A human-readable string that describes what the token is used for. Optional. - `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token @@ -518,26 +427,11 @@ A bootstrap token Secret has the following keys specified under `data`: You can alternatively provide the values in the `stringData` field of the Secret without base64 encoding them: -```yaml -apiVersion: v1 -kind: Secret -metadata: - # Note how the Secret is named - name: bootstrap-token-5emitj - # A bootstrap token Secret usually resides in the kube-system namespace - namespace: kube-system -type: bootstrap.kubernetes.io/token -stringData: - auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" - expiration: "2020-09-13T04:39:10Z" - # This token ID is used in the name - token-id: "5emitj" - token-secret: "kq4gihvszzgn1p0r" - # This token can be used for authentication - usage-bootstrap-authentication: "true" - # and it can be used for signing - usage-bootstrap-signing: "true" -``` +{{% code language="yaml" file="secret/bootstrap-token-secret-literal.yaml" %}} + +{{< note >}} +The `stringData` field for a Secret does not work well with server-side apply. +{{< /note >}} ## Working with Secrets @@ -568,9 +462,9 @@ precedence. #### Size limit {#restriction-data-size} -Individual secrets are limited to 1MiB in size. This is to discourage creation -of very large secrets that could exhaust the API server and kubelet memory. -However, creation of many smaller secrets could also exhaust memory. You can +Individual Secrets are limited to 1MiB in size. This is to discourage creation +of very large Secrets that could exhaust the API server and kubelet memory. +However, creation of many smaller Secrets could also exhaust memory. You can use a [resource quota](/docs/concepts/policy/resource-quotas/) to limit the number of Secrets (or other resources) in a namespace. @@ -613,25 +507,7 @@ When you reference a Secret in a Pod, you can mark the Secret as _optional_, such as in the following example. If an optional Secret doesn't exist, Kubernetes ignores it. -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret - optional: true -``` +{{% code language="yaml" file="secret/optional-secret.yaml" %}} By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available. 
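The `optional` marker works the same way when a container consumes a Secret through an environment variable instead of a volume. The following is a minimal, hedged sketch of that pattern (the Pod name, the `mysecret` Secret, and its `username` key are invented for illustration and are not among the manifests referenced above); if the Secret or the key is absent, the environment variable is simply omitted and the container still starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-optional-secret    # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: registry.k8s.io/busybox
    command: ["sh", "-c", "env"]
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret       # hypothetical Secret; it may not exist
          key: username        # hypothetical key within that Secret
          optional: true       # the Pod starts even if the Secret or key is missing
```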
@@ -708,17 +584,17 @@ LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. ``` -### Container image pull secrets {#using-imagepullsecrets} +### Container image pull Secrets {#using-imagepullsecrets} If you want to fetch container images from a private repository, you need a way for the kubelet on each node to authenticate to that repository. You can configure -_image pull secrets_ to make this possible. These secrets are configured at the Pod +_image pull Secrets_ to make this possible. These Secrets are configured at the Pod level. #### Using imagePullSecrets -The `imagePullSecrets` field is a list of references to secrets in the same namespace. -You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry +The `imagePullSecrets` field is a list of references to Secrets in the same namespace. +You can use an `imagePullSecrets` to pass a Secret that contains a Docker (or other) image registry password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field. @@ -787,7 +663,7 @@ Secrets it expects to interact with, other apps within the same namespace can render those assumptions invalid. A Secret is only sent to a node if a Pod on that node requires it. -For mounting secrets into Pods, the kubelet stores a copy of the data into a `tmpfs` +For mounting Secrets into Pods, the kubelet stores a copy of the data into a `tmpfs` so that the confidential data is not written to durable storage. Once the Pod that depends on the Secret is deleted, the kubelet deletes its local copy of the confidential data from the Secret. diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index b01b2fd112eef..b4b837ae32fc6 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -265,38 +265,26 @@ See [Configure a kubelet image credential provider](/docs/tasks/administer-clust The interpretation of `config.json` varies between the original Docker implementation and the Kubernetes interpretation. In Docker, the `auths` keys can only specify root URLs, whereas Kubernetes allows glob URLs as well as -prefix-matched paths. This means that a `config.json` like this is valid: +prefix-matched paths. The only limitation is that glob patterns (`*`) have to +include the dot (`.`) for each subdomain. 
The amount of matched subdomains has +to be equal to the amount of glob patterns (`*.`), for example: + +- `*.kubernetes.io` will *not* match `kubernetes.io`, but `abc.kubernetes.io` +- `*.*.kubernetes.io` will *not* match `abc.kubernetes.io`, but `abc.def.kubernetes.io` +- `prefix.*.io` will match `prefix.kubernetes.io` +- `*-good.kubernetes.io` will match `prefix-good.kubernetes.io` + +This means that a `config.json` like this is valid: ```json { "auths": { - "*my-registry.io/images": { - "auth": "…" - } + "my-registry.io/images": { "auth": "…" }, + "*.my-registry.io/images": { "auth": "…" } } } ``` -The root URL (`*my-registry.io`) is matched by using the following syntax: - -``` -pattern: - { term } - -term: - '*' matches any sequence of non-Separator characters - '?' matches any single non-Separator character - '[' [ '^' ] { character-range } ']' - character class (must be non-empty) - c matches character c (c != '*', '?', '\\', '[') - '\\' c matches character c - -character-range: - c matches character c (c != '\\', '-', ']') - '\\' c matches character c - lo '-' hi matches character c for lo <= c <= hi -``` - Image pull operations would now pass the credentials to the CRI container runtime for every valid pattern. For example the following container image names would match successfully: @@ -305,10 +293,14 @@ would match successfully: - `my-registry.io/images/my-image` - `my-registry.io/images/another-image` - `sub.my-registry.io/images/my-image` + +But not: + - `a.sub.my-registry.io/images/my-image` +- `a.b.sub.my-registry.io/images/my-image` The kubelet performs image pulls sequentially for every found credential. This -means, that multiple entries in `config.json` are possible, too: +means, that multiple entries in `config.json` for different paths are possible, too: ```json { diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 5c6cfa7fc5842..0487ca61ca29d 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -172,3 +172,7 @@ metadata: ## {{% heading "whatsnext" %}} +- Learn more about [Cluster Networking](/docs/concepts/cluster-administration/networking/) +- Learn more about [Network Policies](/docs/concepts/services-networking/network-policies/) +- Learn about the [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/) + diff --git a/content/en/docs/concepts/extend-kubernetes/operator.md b/content/en/docs/concepts/extend-kubernetes/operator.md index 69b13915603a3..16cbf37ecf275 100644 --- a/content/en/docs/concepts/extend-kubernetes/operator.md +++ b/content/en/docs/concepts/extend-kubernetes/operator.md @@ -129,7 +129,7 @@ operator. * Read the {{< glossary_tooltip text="CNCF" term_id="cncf" >}} - [Operator White Paper](https://github.com/cncf/tag-app-delivery/blob/eece8f7307f2970f46f100f51932db106db46968/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md). + [Operator White Paper](https://github.com/cncf/tag-app-delivery/blob/163962c4b1cd70d085107fc579e3e04c2e14d59c/operator-wg/whitepaper/Operator-WhitePaper_v1-0.md). 
* Learn more about [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/) * Find ready-made operators on [OperatorHub.io](https://operatorhub.io/) to suit your use case * [Publish](https://operatorhub.io/) your operator for other people to use diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md index 200b3e2ea337e..12c150c6cafa8 100644 --- a/content/en/docs/concepts/overview/_index.md +++ b/content/en/docs/concepts/overview/_index.md @@ -129,6 +129,14 @@ Kubernetes provides you with: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. +* **Batch execution** + In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired. +* **Horizontal scaling** + Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage. +* **IPv4/IPv6 dual-stack** + Allocation of IPv4 and IPv6 addresses to Pods and Services +* **Designed for extensibility** + Add features to your Kubernetes cluster without changing upstream source code. ## What Kubernetes is not diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 11919ec3ebed3..d3a3a3966b691 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -465,7 +465,7 @@ from getting scheduled in a failure domain. Using this scope operators can prevent certain namespaces (`foo-ns` in the example below) from having pods that use cross-namespace pod affinity by creating a resource quota object in -that namespace with `CrossNamespaceAffinity` scope and hard limit of 0: +that namespace with `CrossNamespacePodAffinity` scope and hard limit of 0: ```yaml apiVersion: v1 @@ -478,11 +478,12 @@ spec: pods: "0" scopeSelector: matchExpressions: - - scopeName: CrossNamespaceAffinity + - scopeName: CrossNamespacePodAffinity + operator: Exists ``` If operators want to disallow using `namespaces` and `namespaceSelector` by default, and -only allow it for specific namespaces, they could configure `CrossNamespaceAffinity` +only allow it for specific namespaces, they could configure `CrossNamespacePodAffinity` as a limited resource by setting the kube-apiserver flag --admission-control-config-file to the path of the following configuration file: @@ -497,12 +498,13 @@ plugins: limitedResources: - resource: pods matchScopes: - - scopeName: CrossNamespaceAffinity + - scopeName: CrossNamespacePodAffinity + operator: Exists ``` With the above configuration, pods can use `namespaces` and `namespaceSelector` in pod affinity only if the namespace where they are created have a resource quota object with -`CrossNamespaceAffinity` scope and a hard limit greater than or equal to the number of pods using those fields. +`CrossNamespacePodAffinity` scope and a hard limit greater than or equal to the number of pods using those fields. 
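To make the restriction concrete, below is a hedged sketch of the kind of Pod spec that the `CrossNamespacePodAffinity` scope counts: a Pod in `foo-ns` whose affinity term references another namespace through the `namespaces` field. The Pod name, the `app: backend` label, and the `bar-ns` namespace are invented for the example. With the hard limit of 0 shown above, creating such a Pod in `foo-ns` is rejected; with a higher limit it is allowed until the quota is exhausted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cross-ns-affinity-demo        # hypothetical name, for illustration
  namespace: foo-ns
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: backend               # example label, not taken from this page
        namespaces: ["bar-ns"]         # cross-namespace reference that the scope counts
        topologyKey: topology.kubernetes.io/zone
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```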
## Requests compared to Limits {#requests-vs-limits} diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index 1f9cd85e9e2a5..c33184b83e08c 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -35,8 +35,10 @@ specific Pods: ## Node labels {#built-in-node-labels} Like many other Kubernetes objects, nodes have -[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node). -Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) on all nodes in a cluster. +[labels](/docs/concepts/overview/working-with-objects/labels/). You can +[attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node). +Kubernetes also populates a [standard set of labels](/docs/reference/node/node-labels/) +on all nodes in a cluster. {{}} The value of these labels is cloud provider specific and is not guaranteed to be reliable. @@ -303,17 +305,23 @@ Pod affinity rule uses the "hard" `requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`. -The affinity rule says that the scheduler can only schedule a Pod onto a node if -the node is in the same zone as one or more existing Pods with the label -`security=S1`. More precisely, the scheduler must place the Pod on a node that has the -`topology.kubernetes.io/zone=V` label, as long as there is at least one node in -that zone that currently has one or more Pods with the Pod label `security=S1`. - -The anti-affinity rule says that the scheduler should try to avoid scheduling -the Pod onto a node that is in the same zone as one or more Pods with the label -`security=S2`. More precisely, the scheduler should try to avoid placing the Pod on a node that has the -`topology.kubernetes.io/zone=R` label if there are other nodes in the -same zone currently running Pods with the `Security=S2` Pod label. +The affinity rule specifies that the scheduler is allowed to place the example Pod +on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/) +where other Pods have been labeled with `security=S1`. +For instance, if we have a cluster with a designated zone, let's call it "Zone V," +consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can +assign the Pod to any node within Zone V, as long as there is at least one Pod within +Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1` +labels in Zone V, the scheduler will not assign the example Pod to any node in that zone. + +The anti-affinity rule specifies that the scheduler should try to avoid scheduling the Pod +on a node if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/) +where other Pods have been labeled with `security=S2`. +For instance, if we have a cluster with a designated zone, let's call it "Zone R," +consisting of nodes labeled with `topology.kubernetes.io/zone=R`, the scheduler should avoid +assigning the Pod to any node within Zone R, as long as there is at least one Pod within +Zone R already labeled with `security=S2`. 
Conversely, the anti-affinity rule does not impact +scheduling into Zone R if there are no Pods with `security=S2` labels. To get yourself more familiar with the examples of Pod affinity and anti-affinity, refer to the [design proposal](https://git.k8s.io/design-proposals-archive/scheduling/podaffinity.md). @@ -327,7 +335,8 @@ to learn more about how these work. In principle, the `topologyKey` can be any allowed label key with the following exceptions for performance and security reasons: -- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution` +- For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both + `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`. - For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules, the admission controller `LimitPodHardAntiAffinityTopology` limits diff --git a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md index 80367800153a6..381e291488fa1 100644 --- a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -105,13 +105,11 @@ does not support other configurations. Some kubelet garbage collection features are deprecated in favor of eviction: -| Existing Flag | New Flag | Rationale | -| ------------- | -------- | --------- | -| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection | -| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior | -| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context | -| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context | -| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context | +| Existing Flag | Rationale | +| ------------- | --------- | +| `--maximum-dead-containers` | deprecated once old logs are stored outside of container's context | +| `--maximum-dead-containers-per-container` | deprecated once old logs are stored outside of container's context | +| `--minimum-container-ttl-duration` | deprecated once old logs are stored outside of container's context | ### Eviction thresholds diff --git a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md index 24b5032c07f5b..0b671ecbfcbe7 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md @@ -6,7 +6,7 @@ weight: 40 -{{< feature-state for_k8s_version="v1.26" state="alpha" >}} +{{< feature-state for_k8s_version="v1.27" state="beta" >}} Pods were considered ready for scheduling once created. Kubernetes scheduler does its due diligence to find nodes to place all pending Pods. 
However, in a diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md index 1b44a1fab4ccb..c9afb795a11c2 100644 --- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md +++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md @@ -85,9 +85,27 @@ An empty `effect` matches all effects with key `key1`. {{< /note >}} The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`. -This is a "preference" or "soft" version of `NoSchedule` -- the system will *try* to avoid placing a -pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is -`NoExecute`, described later. + + +The allowed values for the `effect` field are: + +`NoExecute` +: This affects pods that are already running on the node as follows: + * Pods that do not tolerate the taint are evicted immediately + * Pods that tolerate the taint without specifying `tolerationSeconds` in + their toleration specification remain bound forever + * Pods that tolerate the taint with a specified `tolerationSeconds` remain + bound for the specified amount of time. After that time elapses, the node + lifecycle controller evicts the Pods from the node. + +`NoSchedule` +: No new Pods will be scheduled on the tainted node unless they have a matching + toleration. Pods currently running on the node are **not** evicted. + +`PreferNoSchedule` +: `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`. + The control plane will *try* to avoid placing a Pod that does not tolerate + the taint on the node, but it is not guaranteed. You can put multiple taints on the same node and multiple tolerations on the same pod. The way Kubernetes processes multiple taints and tolerations is like a filter: start @@ -194,14 +212,7 @@ when there are node problems, which is described in the next section. {{< feature-state for_k8s_version="v1.18" state="stable" >}} -The `NoExecute` taint effect, mentioned above, affects pods that are already -running on the node as follows - * pods that do not tolerate the taint are evicted immediately - * pods that tolerate the taint without specifying `tolerationSeconds` in - their toleration specification remain bound forever - * pods that tolerate the taint with a specified `tolerationSeconds` remain - bound for the specified amount of time The node controller automatically taints a Node when certain conditions are true. The following taints are built in: @@ -221,7 +232,9 @@ are true. The following taints are built in: this node, the kubelet removes this taint. In case a node is to be drained, the node controller or the kubelet adds relevant taints -with `NoExecute` effect. If the fault condition returns to normal the kubelet or node +with `NoExecute` effect. This effect is added by default for the +`node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` taints. +If the fault condition returns to normal, the kubelet or node controller can remove the relevant taint(s). 
In some cases when the node is unreachable, the API server is unable to communicate diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 924604e5dd8a7..3b7c50378f63a 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -28,6 +28,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet {{% thirdparty-content %}} * [AKS Application Gateway Ingress Controller](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview). +* [Alibaba Cloud MSE Ingress](https://www.alibabacloud.com/help/en/mse/user-guide/overview-of-mse-ingress-gateways) is an ingress controller that configures the [Alibaba Cloud Native Gateway](https://www.alibabacloud.com/help/en/mse/product-overview/cloud-native-gateway-overview?spm=a2c63.p38356.0.0.20563003HJK9is), which is also the commercial version of [Higress](https://github.com/alibaba/higress). * [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller. * [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provides L4-L7 load-balancing using [VMware NSX Advanced Load Balancer](https://avinetworks.com/). * [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) is a [BFE](https://www.bfe-networks.net)-based ingress controller. @@ -46,6 +47,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet which offers API gateway functionality. * [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for [HAProxy](https://www.haproxy.org/#desc). +* [Higress](https://github.com/alibaba/higress) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller. * The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme) is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc). * [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/) diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index 836373b3b8898..cad103a7d05d0 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -84,7 +84,7 @@ is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/b Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for your choice of Ingress controller to learn which annotations are supported. -The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) +The [Ingress spec](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec) has all the information needed to configure a load balancer or proxy server. 
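To illustrate the `NoExecute` and `tolerationSeconds` behavior described above, here is a minimal, hedged sketch of a Pod that tolerates the built-in `node.kubernetes.io/unreachable` taint for a limited time (the Pod name and the 6000-second value are arbitrary examples, not recommendations). If the node it runs on becomes unreachable, the Pod remains bound for 6000 seconds and is then evicted by the node lifecycle controller:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod                   # hypothetical name, for illustration
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 6000            # example value: stay bound ~100 minutes, then evict
```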
Most importantly, it contains a list of rules matched against all incoming requests. Ingress resource only supports rules for directing HTTP(S) traffic. @@ -94,8 +94,8 @@ should be defined. There are some ingress controllers, that work without the definition of a default `IngressClass`. For example, the Ingress-NGINX controller can be -configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class) -`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the +configured with a [flag](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class) +`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) though, to specify the default `IngressClass` as shown [below](#default-ingress-class). ### Ingress rules diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 9ca2cd7f6d6df..ddacce134cc37 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -175,7 +175,6 @@ spec: targetPort: http-web-svc ``` - This works even if there is a mixture of Pods in the Service using a single configured name, with the same network protocol available via different port numbers. This offers a lot of flexibility for deploying and evolving @@ -269,7 +268,8 @@ as a destination. {{< /note >}} For an EndpointSlice that you create yourself, or in your own code, -you should also pick a value to use for the [`endpointslice.kubernetes.io/managed-by`](/docs/reference/labels-annotations-taints/#endpointslicekubernetesiomanaged-by) label. +you should also pick a value to use for the label +[`endpointslice.kubernetes.io/managed-by`](/docs/reference/labels-annotations-taints/#endpointslicekubernetesiomanaged-by). If you create your own controller code to manage EndpointSlices, consider using a value similar to `"my-domain.example/name-of-controller"`. If you are using a third party tool, use the name of the tool in all-lowercase and change spaces and other @@ -283,7 +283,8 @@ managed by Kubernetes' own control plane. #### Accessing a Service without a selector {#service-no-selector-access} Accessing a Service without a selector works the same as if it had a selector. -In the [example](#services-without-selectors) for a Service without a selector, traffic is routed to one of the two endpoints defined in +In the [example](#services-without-selectors) for a Service without a selector, +traffic is routed to one of the two endpoints defined in the EndpointSlice manifest: a TCP connection to 10.1.2.3 or 10.4.5.6, on port 9376. {{< note >}} @@ -334,8 +335,7 @@ affects the legacy Endpoints API. In that case, Kubernetes selects at most 1000 possible backend endpoints to store into the Endpoints object, and sets an -{{< glossary_tooltip text="annotation" term_id="annotation" >}} on the -Endpoints: +{{< glossary_tooltip text="annotation" term_id="annotation" >}} on the Endpoints: [`endpoints.kubernetes.io/over-capacity: truncated`](/docs/reference/labels-annotations-taints/#endpoints-kubernetes-io-over-capacity). 
The control plane also removes that annotation if the number of backend Pods drops below 1000. @@ -349,7 +349,8 @@ The same API limit means that you cannot manually update an Endpoints to have mo {{< feature-state for_k8s_version="v1.20" state="stable" >}} The `appProtocol` field provides a way to specify an application protocol for -each Service port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. +each Service port. This is used as a hint for implementations to offer +richer behavior for protocols that they understand. The value of this field is mirrored by the corresponding Endpoints and EndpointSlice objects. @@ -365,8 +366,6 @@ This field follows standard Kubernetes label syntax. Valid values are one of: |----------|-------------| | `kubernetes.io/h2c` | HTTP/2 over cleartext as described in [RFC 7540](https://www.rfc-editor.org/rfc/rfc7540) | - - ### Multi-port Services For some Services, you need to expose more than one port. @@ -402,7 +401,6 @@ also start and end with an alphanumeric character. For example, the names `123-abc` and `web` are valid, but `123_abc` and `-web` are not. {{< /note >}} - ## Service type {#publishing-services-service-types} For some parts of your application (for example, frontends) you may want to expose a @@ -417,7 +415,8 @@ The available `type` values and their behaviors are: : Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a `type` for a Service. - You can expose the Service to the public internet using an [Ingress](/docs/concepts/services-networking/ingress/) or a + You can expose the Service to the public internet using an + [Ingress](/docs/concepts/services-networking/ingress/) or a [Gateway](https://gateway-api.sigs.k8s.io/). [`NodePort`](#type-nodeport) @@ -437,8 +436,9 @@ The available `type` values and their behaviors are: No proxying of any kind is set up. The `type` field in the Service API is designed as nested functionality - each level -adds to the previous. This is not strictly required on all cloud providers, but -the Kubernetes API design for Service requires it anyway. +adds to the previous. However there is an exception to this nested design. You can +define a `LoadBalancer` Service by +[disabling the load balancer `NodePort` allocation](/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation). ### `type: ClusterIP` {#type-clusterip} @@ -508,11 +508,13 @@ spec: selector: app.kubernetes.io/name: MyApp ports: - # By default and for convenience, the `targetPort` is set to the same value as the `port` field. - port: 80 + # By default and for convenience, the `targetPort` is set to + # the same value as the `port` field. targetPort: 80 # Optional field - # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) + # By default and for convenience, the Kubernetes control plane + # will allocate a port from a range (default: 30000-32767) nodePort: 30007 ``` @@ -538,8 +540,7 @@ control plane). 
If you want to specify particular IP address(es) to proxy the port, you can set the `--nodeport-addresses` flag for kube-proxy or the equivalent `nodePortAddresses` -field of the -[kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/) +field of the [kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/) to particular IP block(s). This flag takes a comma-delimited list of IP blocks (e.g. `10.0.0.0/8`, `192.0.2.0/25`) @@ -553,7 +554,8 @@ This means that kube-proxy should consider all available network interfaces for {{< note >}} This Service is visible as `:spec.ports[*].nodePort` and `.spec.clusterIP:spec.ports[*].port`. If the `--nodeport-addresses` flag for kube-proxy or the equivalent field -in the kube-proxy configuration file is set, `` would be a filtered node IP address (or possibly IP addresses). +in the kube-proxy configuration file is set, `` would be a filtered +node IP address (or possibly IP addresses). {{< /note >}} ### `type: LoadBalancer` {#loadbalancer} @@ -607,7 +609,8 @@ set is ignored. {{< note >}} The`.spec.loadBalancerIP` field for a Service was deprecated in Kubernetes v1.24. -This field was under-specified and its meaning varies across implementations. It also cannot support dual-stack networking. This field may be removed in a future API version. +This field was under-specified and its meaning varies across implementations. +It also cannot support dual-stack networking. This field may be removed in a future API version. If you're integrating with a provider that supports specifying the load balancer IP address(es) for a Service via a (provider specific) annotation, you should switch to doing that. @@ -703,117 +706,97 @@ depending on the cloud service provider you're using: {{% tab name="Default" %}} Select one of the tabs. {{% /tab %}} + {{% tab name="GCP" %}} ```yaml -[...] metadata: - name: my-service - annotations: - networking.gke.io/load-balancer-type: "Internal" -[...] + name: my-service + annotations: + networking.gke.io/load-balancer-type: "Internal" ``` - {{% /tab %}} {{% tab name="AWS" %}} ```yaml -[...] metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: "true" -[...] ``` {{% /tab %}} {{% tab name="Azure" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" -[...] + name: my-service + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" ``` {{% /tab %}} {{% tab name="IBM Cloud" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private" -[...] + name: my-service + annotations: + service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private" ``` {{% /tab %}} {{% tab name="OpenStack" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/openstack-internal-load-balancer: "true" -[...] + name: my-service + annotations: + service.beta.kubernetes.io/openstack-internal-load-balancer: "true" ``` {{% /tab %}} {{% tab name="Baidu Cloud" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" -[...] + name: my-service + annotations: + service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" ``` {{% /tab %}} {{% tab name="Tencent Cloud" %}} ```yaml -[...] 
metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx -[...] ``` {{% /tab %}} {{% tab name="Alibaba Cloud" %}} ```yaml -[...] metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet" -[...] ``` {{% /tab %}} {{% tab name="OCI" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/oci-load-balancer-internal: true -[...] + name: my-service + annotations: + service.beta.kubernetes.io/oci-load-balancer-internal: true ``` {{% /tab %}} {{< /tabs >}} ### `type: ExternalName` {#externalname} - - Services of type ExternalName map a Service to a DNS name, not to a typical selector such as `my-service` or `cassandra`. You specify these Services with the `spec.externalName` parameter. @@ -832,11 +815,14 @@ spec: ``` {{< note >}} -A Service of `type: ExternalName` accepts an IPv4 address string, but treats that string as a DNS name comprised of digits, -not as an IP address (the internet does not however allow such names in DNS). Services with external names that resemble IPv4 +A Service of `type: ExternalName` accepts an IPv4 address string, +but treats that string as a DNS name comprised of digits, +not as an IP address (the internet does not however allow such names in DNS). +Services with external names that resemble IPv4 addresses are not resolved by DNS servers. -If you want to map a Service directly to a specific IP address, consider using [headless Services](#headless-services). +If you want to map a Service directly to a specific IP address, consider using +[headless Services](#headless-services). {{< /note >}} When looking up the host `my-service.prod.svc.cluster.local`, the cluster DNS Service @@ -902,7 +888,8 @@ finding a Service: environment variables and DNS. When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. -It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) +It also supports variables +(see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature. @@ -1034,7 +1021,9 @@ about the [Service API object](/docs/reference/generated/kubernetes-api/{{< para ## {{% heading "whatsnext" %}} Learn more about Services and how they fit into Kubernetes: -* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) tutorial. + +* Follow the [Connecting Applications with Services](/docs/tutorials/services/connect-applications-service/) + tutorial. * Read about [Ingress](/docs/concepts/services-networking/ingress/), which exposes HTTP and HTTPS routes from outside the cluster to Services within your cluster. @@ -1042,6 +1031,7 @@ Learn more about Services and how they fit into Kubernetes: Kubernetes that provides more flexibility than Ingress. 
For more context, read the following:
+
 * [Virtual IPs and Service Proxies](/docs/reference/networking/virtual-ips/)
 * [EndpointSlices](/docs/concepts/services-networking/endpoint-slices/)
 * [Service API reference](/docs/reference/kubernetes-api/service-resources/service-v1/)
diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md
index 77844874348d7..f92f544768fd0 100644
--- a/content/en/docs/concepts/storage/ephemeral-volumes.md
+++ b/content/en/docs/concepts/storage/ephemeral-volumes.md
@@ -47,8 +47,7 @@ different purposes:
   [secret](/docs/concepts/storage/volumes/#secret): inject different kinds of Kubernetes
   data into a Pod
 - [CSI ephemeral volumes](#csi-ephemeral-volumes):
-  similar to the previous volume kinds, but provided by special
-  [CSI drivers](https://github.com/container-storage-interface/spec/blob/master/spec.md)
+  similar to the previous volume kinds, but provided by special {{< glossary_tooltip text="CSI" term_id="csi" >}} drivers
   which specifically [support this feature](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
 - [generic ephemeral volumes](#generic-ephemeral-volumes), which
   can be provided by all storage drivers that also support persistent volumes
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 59f98cc2c1c9d..2fc96d97778ed 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -245,9 +245,8 @@ The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By
 default `emptyDir` volumes are stored on whatever medium that backs the node
 such as disk, SSD, or network storage, depending on your environment. If you set
 the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed
-filesystem) for you instead. While tmpfs is very fast, be aware that unlike
-disks, tmpfs is cleared on node reboot and any files you write count against
-your container's memory limit.
+filesystem) for you instead. While tmpfs is very fast, be aware that, unlike
+disks, files you write count against the memory limit of the container that wrote them.
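To make the tmpfs trade-off above concrete, here is a minimal Pod sketch (the Pod name, container name, and image are illustrative assumptions, not taken from this page) that mounts a memory-backed `emptyDir`; anything written under `/cache` counts against the container's memory limit:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo                      # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9    # placeholder image for the sketch
    resources:
      limits:
        memory: 256Mi                   # tmpfs usage is charged against this limit
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory                    # back the volume with tmpfs instead of node storage
```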
A size limit can be specified for the default medium, which limits the capacity diff --git a/content/en/docs/concepts/storage/windows-storage.md b/content/en/docs/concepts/storage/windows-storage.md index 1aa3941a1f208..6bfc117029d8e 100644 --- a/content/en/docs/concepts/storage/windows-storage.md +++ b/content/en/docs/concepts/storage/windows-storage.md @@ -41,7 +41,7 @@ As a result, the following storage functionality is not supported on Windows nod * Block device mapping * Memory as the storage medium (for example, `emptyDir.medium` set to `Memory`) * File system features like uid/gid; per-user Linux filesystem permissions -* Setting [secret permissions with DefaultMode](/docs/concepts/configuration/secret/#secret-files-permissions) (due to UID/GID dependency) +* Setting [secret permissions with DefaultMode](/docs/tasks/inject-data-application/distribute-credentials-secure/#set-posix-permissions-for-secret-keys) (due to UID/GID dependency) * NFS based storage/volume support * Expanding the mounted volume (resizefs) diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 17b8b9f221b25..9b1e5f065bd06 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -1197,6 +1197,105 @@ rolling update starts, such that the total number of old and new Pods does not e Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods. +Here are some Rolling Update Deployment examples that use the `maxUnavailable` and `maxSurge`: + +{{< tabs name="tab_with_md" >}} +{{% tab name="Max Unavailable" %}} + + ```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + ``` + +{{% /tab %}} +{{% tab name="Max Surge" %}} + + ```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + ``` + +{{% /tab %}} +{{% tab name="Hybrid" %}} + + ```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 1 + ``` + +{{% /tab %}} +{{< /tabs >}} + ### Progress Deadline Seconds `.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want diff --git a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md index c7199208130be..e33239a5da429 100644 --- a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md +++ 
b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
@@ -5,9 +5,9 @@
 
 - You need to have these tools installed:
 
-  - [Python](https://www.python.org/downloads/) v3.7.x
+  - [Python](https://www.python.org/downloads/) v3.7.x+
   - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
-  - [Golang](https://golang.org/doc/install) version 1.13+
+  - [Golang](https://go.dev/dl/) version 1.13+
   - [Pip](https://pypi.org/project/pip/) used to install PyYAML
   - [PyYAML](https://pyyaml.org/) v5.1.2
   - [make](https://www.gnu.org/software/make/)
@@ -19,4 +19,3 @@
 
 - You need to know how to create a pull request to a GitHub repository.
   This involves creating your own fork of the repository. For more
   information, see [Work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo).
-
diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
new file mode 100644
index 0000000000000..a12c2118a979d
--- /dev/null
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -0,0 +1,78 @@
+---
+title: Issue Wranglers
+content_type: concept
+weight: 20
+---
+
+
+
+Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers), formal approvers, and reviewers, members of SIG Docs take week-long shifts [triaging and categorizing issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for the repository.
+
+
+
+## Duties
+
+Each day in a week-long shift, the Issue Wrangler will be responsible for:
+
+- Triaging and tagging incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/content/en/docs/contribute/review/for-approvers.md#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata.
+- Keeping an eye on stale & rotten issues within the kubernetes/website repository.
+- Maintenance of the [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1).
+
+### Requirements
+
+- Must be an active member of the Kubernetes organization.
+- A minimum of 15 [non-trivial](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits) contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website).
+- Performing the role in an informal capacity already.
+
+### Helpful [Prow commands](https://prow.k8s.io/command-help) for wranglers
+
+```
+# reopen an issue
+/reopen
+
+# transfer issues that don't fit in k/website to another repository
+/transfer[-issue]
+
+# change the state of rotten issues
+/remove-lifecycle rotten
+
+# change the state of stale issues
+/remove-lifecycle stale
+
+# assign sig to an issue
+/sig
+
+# add specific area
+/area
+
+# for beginner friendly issues
+/good-first-issue
+
+# issues that need help
+/help wanted
+
+# tagging issue as support specific
+/kind support
+
+# to accept triaging for an issue
+/triage accepted
+
+# closing an issue we won't be working on and haven't fixed yet
+/close not-planned
+```
+
+### When to close issues
+
+For an open source project to succeed, good issue management is crucial. It is also critical to resolve issues in order to maintain the repository and communicate clearly with contributors and users.
+
+Close issues when:
+
+- A similar issue is reported more than once. You will first need to tag it as `/triage duplicate`; link it to the main issue and then close it. It is also advisable to direct the users to the original issue.
+- It is very difficult to understand and address the issue presented by the author with the information provided.
+  However, encourage the user to provide more details or reopen the issue if they can reproduce it later.
+- The same functionality is implemented elsewhere. One can close this issue and direct the user to the appropriate place.
+- The reported issue is not currently planned or aligned with the project's goals.
+- The issue appears to be spam and is clearly unrelated.
+- The issue is related to an external limitation or dependency and is beyond the control of the project.
+
+To close an issue, leave a `/close` comment on the issue.
diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md
index 7279339f597ba..1290723db5d1e 100644
--- a/content/en/docs/reference/access-authn-authz/admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -24,7 +24,7 @@ Kubernetes API server prior to persistence of the object, but after the request
 is authenticated and authorized.
 
 Admission controllers may be _validating_, _mutating_, or both. Mutating
-controllers may modify related objects to the requests they admit; validating controllers may not.
+controllers may modify objects related to the requests they admit; validating controllers may not.
 
 Admission controllers limit requests to create, delete, modify objects. Admission controllers
 can also block custom verbs, such as a request connect to a Pod via
diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md
index ae33b68c29e84..9da1dd6c1af9a 100644
--- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md
+++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md
@@ -488,7 +488,7 @@ O is the group that this user will belong to. You can refer to
 
 ```shell
 openssl genrsa -out myuser.key 2048
-openssl req -new -key myuser.key -out myuser.csr
+openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser"
 ```
 
 ### Create a CertificateSigningRequest {#create-certificatessigningrequest}
diff --git a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
index c1b33647407c1..c4393b261e205 100644
--- a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
+++ b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
@@ -11,31 +11,35 @@ weight: 120
 
 
 
-In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need to communicate with Kubernetes control plane components, specifically kube-apiserver.
-In order to ensure that communication is kept private, not interfered with, and ensure that each component of the cluster is talking to another trusted component, we strongly
+In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need
+to communicate with Kubernetes control plane components, specifically kube-apiserver.
+In order to ensure that communication is kept private, not interfered with, and ensure that
+each component of the cluster is talking to another trusted component, we strongly
 recommend using client TLS certificates on nodes.
-The normal process of bootstrapping these components, especially worker nodes that need certificates so they can communicate safely with kube-apiserver, -can be a challenging process as it is often outside of the scope of Kubernetes and requires significant additional work. +The normal process of bootstrapping these components, especially worker nodes that need certificates +so they can communicate safely with kube-apiserver, can be a challenging process as it is often outside +of the scope of Kubernetes and requires significant additional work. This in turn, can make it challenging to initialize or scale a cluster. -In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request and signing API. The proposal can be -found [here](https://github.com/kubernetes/kubernetes/pull/20439). +In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request +and signing API. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439). This document describes the process of node initialization, how to set up TLS client certificate bootstrapping for kubelets, and how it works. -## Initialization Process +## Initialization process When a worker node starts up, the kubelet does the following: 1. Look for its `kubeconfig` file -2. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file -3. Attempt to communicate with the API server using the credentials. +1. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file +1. Attempt to communicate with the API server using the credentials. -Assuming that the kube-apiserver successfully validates the kubelet's credentials, it will treat the kubelet as a valid node, and begin to assign pods to it. +Assuming that the kube-apiserver successfully validates the kubelet's credentials, +it will treat the kubelet as a valid node, and begin to assign pods to it. Note that the above process depends upon: @@ -45,35 +49,36 @@ Note that the above process depends upon: All of the following are responsibilities of whoever sets up and manages the cluster: 1. Creating the CA key and certificate -2. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running -3. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet -4. Signing the kubelet certificate using the CA key -5. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running +1. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running +1. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet +1. Signing the kubelet certificate using the CA key +1. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running -The TLS Bootstrapping described in this document is intended to simplify, and partially or even completely automate, steps 3 onwards, as these are the most common when initializing or scaling +The TLS Bootstrapping described in this document is intended to simplify, and partially or even +completely automate, steps 3 onwards, as these are the most common when initializing or scaling a cluster. 
-### Bootstrap Initialization +### Bootstrap initialization In the bootstrap initialization process, the following occurs: 1. kubelet begins -2. kubelet sees that it does _not_ have a `kubeconfig` file -3. kubelet searches for and finds a `bootstrap-kubeconfig` file -4. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token" -5. kubelet connects to the API server, authenticates using the token -6. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR) -7. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet` -8. CSR is approved in one of two ways: +1. kubelet sees that it does _not_ have a `kubeconfig` file +1. kubelet searches for and finds a `bootstrap-kubeconfig` file +1. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token" +1. kubelet connects to the API server, authenticates using the token +1. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR) +1. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet` +1. CSR is approved in one of two ways: * If configured, kube-controller-manager automatically approves the CSR * If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl` -9. Certificate is created for the kubelet -10. Certificate is issued to the kubelet -11. kubelet retrieves the certificate -12. kubelet creates a proper `kubeconfig` with the key and signed certificate -13. kubelet begins normal operation -14. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry -15. The renewed certificate is approved and issued, either automatically or manually, depending on configuration. +1. Certificate is created for the kubelet +1. Certificate is issued to the kubelet +1. kubelet retrieves the certificate +1. kubelet creates a proper `kubeconfig` with the key and signed certificate +1. kubelet begins normal operation +1. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry +1. The renewed certificate is approved and issued, either automatically or manually, depending on configuration. The rest of this document describes the necessary steps to configure TLS Bootstrapping, and its limitations. @@ -90,13 +95,16 @@ In addition, you need your Kubernetes Certificate Authority (CA). ## Certificate Authority -As without bootstrapping, you will need a Certificate Authority (CA) key and certificate. As without bootstrapping, these will be used -to sign the kubelet certificate. As before, it is your responsibility to distribute them to control plane nodes. +As without bootstrapping, you will need a Certificate Authority (CA) key and certificate. +As without bootstrapping, these will be used to sign the kubelet certificate. As before, +it is your responsibility to distribute them to control plane nodes. -For the purposes of this document, we will assume these have been distributed to control plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key). +For the purposes of this document, we will assume these have been distributed to control +plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key). We will refer to these as "Kubernetes CA certificate and key". 
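If you do not already have a CA, one possible way to produce the files assumed above is with `openssl`; this is only a sketch (the subject CN and the 10-year validity are arbitrary choices), and any tooling that yields a PEM-encoded key and certificate works:

```bash
# Generate a 2048-bit RSA private key for the CA
openssl genrsa -out ca-key.pem 2048

# Issue a self-signed CA certificate (CN and validity period are illustrative)
openssl req -x509 -new -nodes -key ca-key.pem -subj "/CN=kubernetes-ca" -days 3650 -out ca.pem
```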
-All Kubernetes components that use these certificates - kubelet, kube-apiserver, kube-controller-manager - assume the key and certificate to be PEM-encoded. +All Kubernetes components that use these certificates - kubelet, kube-apiserver, +kube-controller-manager - assume the key and certificate to be PEM-encoded. ## kube-apiserver configuration @@ -116,24 +124,27 @@ containing the signing certificate, for example ### Initial bootstrap authentication -In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate, it must first authenticate to the server. -You can use any [authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet. +In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate, +it must first authenticate to the server. You can use any +[authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet. While any authentication strategy can be used for the kubelet's initial bootstrap credentials, the following two authenticators are recommended for ease of provisioning. 1. [Bootstrap Tokens](#bootstrap-tokens) -2. [Token authentication file](#token-authentication-file) +1. [Token authentication file](#token-authentication-file) -Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, and does not require any additional flags when starting kube-apiserver. +Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, +and does not require any additional flags when starting kube-apiserver. Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to: 1. create and retrieve CSRs -2. be automatically approved to request node client certificates, if automatic approval is enabled. +1. be automatically approved to request node client certificates, if automatic approval is enabled. -A kubelet authenticating using bootstrap tokens is authenticated as a user in the group `system:bootstrappers`, which is the standard method to use. +A kubelet authenticating using bootstrap tokens is authenticated as a user in the group +`system:bootstrappers`, which is the standard method to use. As this feature matures, you should ensure tokens are bound to a Role Based Access Control (RBAC) policy @@ -144,17 +155,20 @@ particular bootstrap group's access when you are done provisioning the nodes. #### Bootstrap tokens -Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/). These are tokens that are stored as secrets in the Kubernetes cluster, -and then issued to the individual kubelet. You can use a single token for an entire cluster, or issue one per worker node. +Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/). +These are tokens that are stored as secrets in the Kubernetes cluster, and then issued to the individual kubelet. +You can use a single token for an entire cluster, or issue one per worker node. The process is two-fold: 1. Create a Kubernetes secret with the token ID, secret and scope(s). -2. Issue the token to the kubelet +1. Issue the token to the kubelet From the kubelet's perspective, one token is like another and has no special meaning. -From the kube-apiserver's perspective, however, the bootstrap token is special. 
Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token, -and grants anyone authenticating with that token special bootstrap rights, notably treating them as a member of the `system:bootstrappers` group. This fulfills a basic requirement +From the kube-apiserver's perspective, however, the bootstrap token is special. +Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token, +and grants anyone authenticating with that token special bootstrap rights, notably treating +them as a member of the `system:bootstrappers` group. This fulfills a basic requirement for TLS bootstrapping. The details for creating the secret are available [here](/docs/reference/access-authn-authz/bootstrap-tokens/). @@ -198,7 +212,8 @@ certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` +group to the cluster role `system:node-bootstrapper`. ```yaml # enable bootstrapping nodes to create CSR @@ -237,9 +252,10 @@ In order for the controller-manager to sign certificates, it needs the following As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes. These will be used by the controller-manager to sign the kubelet certificates. -Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet to kube-apiserver, it is important that the CA -provided to the controller-manager at this stage also be trusted by kube-apiserver for authentication. This is provided to kube-apiserver -with the flag `--client-ca-file=FILENAME` (for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section. +Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet +to kube-apiserver, it is important that the CA provided to the controller-manager at this stage also be +trusted by kube-apiserver for authentication. This is provided to kube-apiserver with the flag `--client-ca-file=FILENAME` +(for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section. To provide the Kubernetes CA key and certificate to kube-controller-manager, use the following flags: @@ -266,10 +282,14 @@ RBAC permissions to the correct group. There are two distinct sets of permissions: -* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet. It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`. -* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition), which it uses continuously to authenticate as part of the group `system:nodes`. +* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet. + It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`. 
+* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition), + which it uses continuously to authenticate as part of the group `system:nodes`. -To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`: +To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds +the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that +grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`: ```yaml # Approve all CSRs for the group "system:bootstrappers" @@ -287,7 +307,8 @@ roleRef: apiGroup: rbac.authorization.k8s.io ``` -To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that +To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds +the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`: ```yaml @@ -316,10 +337,10 @@ built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized requests. The controller also prunes expired certificates as part of garbage collection. - ## kubelet configuration -Finally, with the control plane nodes properly set up and all of the necessary authentication and authorization in place, we can configure the kubelet. +Finally, with the control plane nodes properly set up and all of the necessary +authentication and authorization in place, we can configure the kubelet. The kubelet requires the following configuration to bootstrap: @@ -385,7 +406,7 @@ referencing the generated key and obtained certificate is written to the path specified by `--kubeconfig`. The certificate and key file will be placed in the directory specified by `--cert-dir`. -### Client and Serving Certificates +### Client and serving certificates All of the above relate to kubelet _client_ certificates, specifically, the certificates a kubelet uses to authenticate to kube-apiserver. @@ -402,7 +423,7 @@ be used as serving certificates, or `server auth`. However, you _can_ enable its server certificate, at least partially, via certificate rotation. -### Certificate Rotation +### Certificate rotation Kubernetes v1.8 and higher kubelet implements features for enabling rotation of its client and/or serving certificates. Note, rotation of serving @@ -420,7 +441,7 @@ or pass the following command line argument to the kubelet (deprecated): Enabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving certificate after bootstrapping its client credentials **and** to rotate that -certificate. To enable this behavior, use the field `serverTLSBootstrap` of +certificate. 
To enable this behavior, use the field `serverTLSBootstrap` of the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) or pass the following command line argument to the kubelet (deprecated): @@ -430,8 +451,8 @@ or pass the following command line argument to the kubelet (deprecated): {{< note >}} The CSR approving controllers implemented in core Kubernetes do not -approve node _serving_ certificates for [security -reasons](https://github.com/kubernetes/community/pull/1982). To use +approve node _serving_ certificates for +[security reasons](https://github.com/kubernetes/community/pull/1982). To use `RotateKubeletServerCertificate` operators need to run a custom approving controller, or manually approve the serving certificate requests. @@ -439,9 +460,9 @@ A deployment-specific approval process for kubelet serving certificates should t 1. are requested by nodes (ensure the `spec.username` field is of the form `system:node:` and `spec.groups` contains `system:nodes`) -2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, +1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, optionally contains `digital signature` and `key encipherment`, and contains no other usages) -3. only have IP and DNS subjectAltNames that belong to the requesting node, +1. only have IP and DNS subjectAltNames that belong to the requesting node, and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request in `spec.request` to verify `subjectAltNames`) @@ -457,8 +478,11 @@ Like the kubelet, these other components also require a method of authenticating You have several options for generating these credentials: * The old way: Create and distribute certificates the same way you did for kubelet before TLS bootstrapping -* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services, you can run kube-proxy and other node-specific services not as a standalone process, but rather as a daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service account with appropriate permissions to perform its activities. This may be the simplest way to configure such services. - +* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services, + you can run kube-proxy and other node-specific services not as a standalone process, but rather as a + daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service + account with appropriate permissions to perform its activities. This may be the simplest way to configure + such services. 
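As a concrete illustration of the client and serving certificate rotation settings described above, a kubelet configuration file that opts in to both might contain the following; this is a minimal sketch to be merged with your other kubelet settings, not a complete configuration:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate the client certificate as it approaches expiry
rotateCertificates: true
# Request (and rotate) a serving certificate through the CSR API;
# the serving CSRs still need an approver, as noted above
serverTLSBootstrap: true
```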
## kubectl approval
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index c6f53a87f426e..1d88ec7b0f39c 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -185,7 +185,7 @@ For a reference to old feature gates that are removed, please refer to
 | `SELinuxMountReadWriteOncePod` | `false` | Alpha | 1.25 | 1.26 |
 | `SELinuxMountReadWriteOncePod` | `false` | Beta | 1.27 | 1.27 |
 | `SELinuxMountReadWriteOncePod` | `true` | Beta | 1.28 | |
-| `SchedulerQueueingHints` | `false` | Alpha | 1.28 | |
+| `SchedulerQueueingHints` | `true` | Beta | 1.28 | |
 | `SecurityContextDeny` | `false` | Alpha | 1.27 | |
 | `SidecarContainers` | `false` | Alpha | 1.28 | |
 | `SizeMemoryBackedVolumes` | `false` | Alpha | 1.20 | 1.21 |
@@ -688,8 +688,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
 - `SELinuxMountReadWriteOncePod`: Speeds up container startup by allowing kubelet to mount volumes
   for a Pod directly with the correct SELinux label instead of changing each file on the volumes
   recursively. The initial implementation focused on ReadWriteOncePod volumes.
-- `SchedulerQueueingHints`: Enables the scheduler's _queueing hints_ enhancement,
+- `SchedulerQueueingHints`: Enables [the scheduler's _queueing hints_ enhancement](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4247-queueinghint/README.md),
   which helps to reduce useless requeueing.
+  The scheduler retries scheduling pods if something changes in the cluster that could make the pod schedulable.
+  Queueing hints are internal signals that allow the scheduler to filter the changes in the cluster
+  that are relevant to the unscheduled pod, based on previous scheduling attempts.
 - `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile
   for all workloads.
   The seccomp profile is specified in the `securityContext` of a Pod and/or a Container.
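Independently of the cluster-wide `SeccompDefault` gate, an individual Pod can request the same behavior explicitly through its `securityContext`; a minimal sketch (the Pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-default-demo           # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault             # use the container runtime's default seccomp profile
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
```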
diff --git a/content/en/docs/reference/config-api/apiserver-admission.v1.md b/content/en/docs/reference/config-api/apiserver-admission.v1.md index 5555e6f5c12b4..0423f38cf2d53 100644 --- a/content/en/docs/reference/config-api/apiserver-admission.v1.md +++ b/content/en/docs/reference/config-api/apiserver-admission.v1.md @@ -11,7 +11,6 @@ auto_generated: true - [AdmissionReview](#admission-k8s-io-v1-AdmissionReview) - ## `AdmissionReview` {#admission-k8s-io-v1-AdmissionReview} diff --git a/content/en/docs/reference/config-api/apiserver-audit.v1.md b/content/en/docs/reference/config-api/apiserver-audit.v1.md index abab04f1bd2e1..b874126a28716 100644 --- a/content/en/docs/reference/config-api/apiserver-audit.v1.md +++ b/content/en/docs/reference/config-api/apiserver-audit.v1.md @@ -14,7 +14,6 @@ auto_generated: true - [Policy](#audit-k8s-io-v1-Policy) - [PolicyList](#audit-k8s-io-v1-PolicyList) - ## `Event` {#audit-k8s-io-v1-Event} diff --git a/content/en/docs/reference/config-api/apiserver-config.v1.md b/content/en/docs/reference/config-api/apiserver-config.v1.md index ec78a45da1a51..c133724ec70bd 100644 --- a/content/en/docs/reference/config-api/apiserver-config.v1.md +++ b/content/en/docs/reference/config-api/apiserver-config.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration) - ## `AdmissionConfiguration` {#apiserver-config-k8s-io-v1-AdmissionConfiguration} diff --git a/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md b/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md index 0c85b397f61f7..47899f794e7fe 100644 --- a/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md @@ -15,6 +15,47 @@ auto_generated: true - [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + + +

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

+ + + + + + + + + + + + + + +
FieldDescription
endpoint
+string +
+

Endpoint of the collector this component will report traces to. +The connection is insecure, and does not currently support TLS. +Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

+
samplingRatePerMillion
+int32 +
+

SamplingRatePerMillion is the number of samples to collect per million spans. +Recommended is unset. If unset, sampler respects its parent span's sampling +rate, but otherwise never samples.

+
+ ## `AdmissionConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration} @@ -360,45 +401,4 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server - - - - -## `TracingConfiguration` {#TracingConfiguration} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) - - -

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

- - - - - - - - - - - - - - -
FieldDescription
endpoint
-string -
-

Endpoint of the collector this component will report traces to. -The connection is insecure, and does not currently support TLS. -Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

-
samplingRatePerMillion
-int32 -
-

SamplingRatePerMillion is the number of samples to collect per million spans. -Recommended is unset. If unset, sampler respects its parent span's sampling -rate, but otherwise never samples.

-
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/apiserver-config.v1beta1.md b/content/en/docs/reference/config-api/apiserver-config.v1beta1.md index 6acb3540cd06f..06dfaab72291e 100644 --- a/content/en/docs/reference/config-api/apiserver-config.v1beta1.md +++ b/content/en/docs/reference/config-api/apiserver-config.v1beta1.md @@ -14,6 +14,49 @@ auto_generated: true - [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) + + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) + + +

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

+ + + + + + + + + + + + + + +
FieldDescription
endpoint
+string +
+

Endpoint of the collector this component will report traces to. +The connection is insecure, and does not currently support TLS. +Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

+
samplingRatePerMillion
+int32 +
+

SamplingRatePerMillion is the number of samples to collect per million spans. +Recommended is unset. If unset, sampler respects its parent span's sampling +rate, but otherwise never samples.

+
+ ## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration} @@ -291,47 +334,4 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server - - - - -## `TracingConfiguration` {#TracingConfiguration} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) - - -

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

- - - - - - - - - - - - - - -
FieldDescription
endpoint
-string -
-

Endpoint of the collector this component will report traces to. -The connection is insecure, and does not currently support TLS. -Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

-
samplingRatePerMillion
-int32 -
-

SamplingRatePerMillion is the number of samples to collect per million spans. -Recommended is unset. If unset, sampler respects its parent span's sampling -rate, but otherwise never samples.

-
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/apiserver-encryption.v1.md b/content/en/docs/reference/config-api/apiserver-encryption.v1.md index 148dc374e8cad..49e7695dc5062 100644 --- a/content/en/docs/reference/config-api/apiserver-encryption.v1.md +++ b/content/en/docs/reference/config-api/apiserver-encryption.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration) - ## `EncryptionConfiguration` {#apiserver-config-k8s-io-v1-EncryptionConfiguration} @@ -20,7 +19,7 @@ auto_generated: true

EncryptionConfiguration stores the complete configuration for encryption providers. It also allows the use of wildcards to specify the resources that should be encrypted. -Use '*.<group>' to encrypt all resources within a group or '*.*' to encrypt all resources. +Use '*<group>o encrypt all resources within a group or '*.*' to encrypt all resources. '*.' can be used to encrypt all resource in the core group. '*.*' will encrypt all resources, even custom resources that are added after API server start. Use of wildcards that overlap within the same resource list or across multiple diff --git a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md index 2189c4910d277..60a5bcbedf9d2 100644 --- a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md +++ b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md @@ -11,7 +11,6 @@ auto_generated: true - [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration) - ## `Configuration` {#eventratelimit-admission-k8s-io-v1alpha1-Configuration} diff --git a/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md b/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md index b806f3b6c6075..9520d2ce53768 100644 --- a/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md +++ b/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [WebhookAdmission](#apiserver-config-k8s-io-v1-WebhookAdmission) - ## `WebhookAdmission` {#apiserver-config-k8s-io-v1-WebhookAdmission} diff --git a/content/en/docs/reference/config-api/client-authentication.v1.md b/content/en/docs/reference/config-api/client-authentication.v1.md index 53e602d0f22a2..33150093d9488 100644 --- a/content/en/docs/reference/config-api/client-authentication.v1.md +++ b/content/en/docs/reference/config-api/client-authentication.v1.md @@ -11,7 +11,6 @@ auto_generated: true - [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential) - ## `ExecCredential` {#client-authentication-k8s-io-v1-ExecCredential} diff --git a/content/en/docs/reference/config-api/client-authentication.v1beta1.md b/content/en/docs/reference/config-api/client-authentication.v1beta1.md index d9e55d0ee2beb..95f65e4bbd597 100644 --- a/content/en/docs/reference/config-api/client-authentication.v1beta1.md +++ b/content/en/docs/reference/config-api/client-authentication.v1beta1.md @@ -11,7 +11,6 @@ auto_generated: true - [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential) - ## `ExecCredential` {#client-authentication-k8s-io-v1beta1-ExecCredential} diff --git a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md index f6eaa915a8b41..e3ffcf0b73e2b 100644 --- a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md +++ b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md @@ -11,7 +11,6 @@ auto_generated: true - [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview) - ## `ImageReview` {#imagepolicy-k8s-io-v1alpha1-ImageReview} diff --git a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md index 348c557807eed..d63e35f68a973 100644 --- a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md +++ 
b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md @@ -9,301 +9,366 @@ auto_generated: true ## Resource Types -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + -## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration} +## `NodeControllerConfiguration` {#NodeControllerConfiguration} +**Appears in:** -

KubeControllerManagerConfiguration contains elements describing kube-controller manager.

+- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + + +

NodeControllerConfiguration contains elements describing NodeController.

- - - - - +
FieldDescription
apiVersion
string
kubecontrollermanager.config.k8s.io/v1alpha1
kind
string
KubeControllerManagerConfiguration
Generic [Required]
-GenericControllerManagerConfiguration +
ConcurrentNodeSyncs [Required]
+int32
-

Generic holds configuration for a generic controller-manager

+

ConcurrentNodeSyncs is the number of workers +concurrently synchronizing nodes

KubeCloudShared [Required]
-KubeCloudSharedConfiguration +
+ +## `ServiceControllerConfiguration` {#ServiceControllerConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ServiceControllerConfiguration contains elements describing ServiceController.

+ + + + + + + + - +
FieldDescription
ConcurrentServiceSyncs [Required]
+int32
-

KubeCloudSharedConfiguration holds configuration for shared related features -both in cloud controller manager and kube-controller manager.

+

concurrentServiceSyncs is the number of services that are +allowed to sync concurrently. Larger number = more responsive service +management, but more CPU (and network) load.

AttachDetachController [Required]
-AttachDetachControllerConfiguration +
+ + +## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration} + + + +

CloudControllerManagerConfiguration contains elements describing cloud-controller manager.

+ + + + + + + + + + + - - - - - - +
FieldDescription
apiVersion
string
cloudcontrollermanager.config.k8s.io/v1alpha1
kind
string
CloudControllerManagerConfiguration
Generic [Required]
+GenericControllerManagerConfiguration
-

AttachDetachControllerConfiguration holds configuration for -AttachDetachController related features.

+

Generic holds configuration for a generic controller-manager

CSRSigningController [Required]
-CSRSigningControllerConfiguration +
KubeCloudShared [Required]
+KubeCloudSharedConfiguration
-

CSRSigningControllerConfiguration holds configuration for -CSRSigningController related features.

+

KubeCloudSharedConfiguration holds configuration for shared related features +both in cloud controller manager and kube-controller manager.

DaemonSetController [Required]
-DaemonSetControllerConfiguration +
NodeController [Required]
+NodeControllerConfiguration
-

DaemonSetControllerConfiguration holds configuration for DaemonSetController +

NodeController holds configuration for node controller related features.

DeploymentController [Required]
-DeploymentControllerConfiguration +
ServiceController [Required]
+ServiceControllerConfiguration
-

DeploymentControllerConfiguration holds configuration for -DeploymentController related features.

+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

StatefulSetController [Required]
-StatefulSetControllerConfiguration +
NodeStatusUpdateFrequency [Required]
+meta/v1.Duration
-

StatefulSetControllerConfiguration holds configuration for -StatefulSetController related features.

+

NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status

DeprecatedController [Required]
-DeprecatedControllerConfiguration +
Webhook [Required]
+WebhookConfiguration
-

DeprecatedControllerConfiguration holds configuration for some deprecated -features.

+

Webhook is the configuration for cloud-controller-manager hosted webhooks

EndpointController [Required]
-EndpointControllerConfiguration +
+ +## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration} + + +**Appears in:** + +- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration) + + +

CloudProviderConfiguration contains basically elements about cloud provider.

+ + + + + + + + - - +
FieldDescription
Name [Required]
+string
-

EndpointControllerConfiguration holds configuration for EndpointController -related features.

+

Name is the provider for cloud services.

EndpointSliceController [Required]
-EndpointSliceControllerConfiguration +
CloudConfigFile [Required]
+string
-

EndpointSliceControllerConfiguration holds configuration for -EndpointSliceController related features.

+

cloudConfigFile is the path to the cloud provider configuration file.

EndpointSliceMirroringController [Required]
-EndpointSliceMirroringControllerConfiguration +
+ +## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

KubeCloudSharedConfiguration contains elements shared by both kube-controller manager +and cloud-controller manager, but not genericconfig.

+ + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
FieldDescription
CloudProvider [Required]
+CloudProviderConfiguration
-

EndpointSliceMirroringControllerConfiguration holds configuration for -EndpointSliceMirroringController related features.

+

CloudProviderConfiguration holds configuration for CloudProvider related features.

EphemeralVolumeController [Required]
-EphemeralVolumeControllerConfiguration +
ExternalCloudVolumePlugin [Required]
+string
-

EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController -related features.

+

externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external". +It is currently used by the in repo cloud providers to handle node and volume control in the KCM.

GarbageCollectorController [Required]
-GarbageCollectorControllerConfiguration +
UseServiceAccountCredentials [Required]
+bool
-

GarbageCollectorControllerConfiguration holds configuration for -GarbageCollectorController related features.

+

useServiceAccountCredentials indicates whether controllers should be run with +individual service account credentials.

HPAController [Required]
-HPAControllerConfiguration +
AllowUntaggedCloud [Required]
+bool
-

HPAControllerConfiguration holds configuration for HPAController related features.

+

run with untagged cloud instances

JobController [Required]
-JobControllerConfiguration +
RouteReconciliationPeriod [Required]
+meta/v1.Duration
-

JobControllerConfiguration holds configuration for JobController related features.

+

routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider..

CronJobController [Required]
-CronJobControllerConfiguration +
NodeMonitorPeriod [Required]
+meta/v1.Duration
-

CronJobControllerConfiguration holds configuration for CronJobController related features.

+

nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.

LegacySATokenCleaner [Required]
-LegacySATokenCleanerConfiguration +
ClusterName [Required]
+string
-

LegacySATokenCleanerConfiguration holds configuration for LegacySATokenCleaner related features.

+

clusterName is the instance prefix for the cluster.

NamespaceController [Required]
-NamespaceControllerConfiguration +
ClusterCIDR [Required]
+string
-

NamespaceControllerConfiguration holds configuration for NamespaceController -related features.

+

clusterCIDR is CIDR Range for Pods in cluster.

NodeIPAMController [Required]
-NodeIPAMControllerConfiguration +
AllocateNodeCIDRs [Required]
+bool
-

NodeIPAMControllerConfiguration holds configuration for NodeIPAMController -related features.

+

AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if +ConfigureCloudRoutes is true, to be set on the cloud provider.

NodeLifecycleController [Required]
-NodeLifecycleControllerConfiguration +
CIDRAllocatorType [Required]
+string
-

NodeLifecycleControllerConfiguration holds configuration for -NodeLifecycleController related features.

+

CIDRAllocatorType determines what kind of pod CIDR allocator will be used.

PersistentVolumeBinderController [Required]
-PersistentVolumeBinderControllerConfiguration +
ConfigureCloudRoutes [Required]
+bool
-

PersistentVolumeBinderControllerConfiguration holds configuration for -PersistentVolumeBinderController related features.

+

configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs +to be configured on the cloud provider.

PodGCController [Required]
-PodGCControllerConfiguration +
NodeSyncPeriod [Required]
+meta/v1.Duration
-

PodGCControllerConfiguration holds configuration for PodGCController -related features.

+

nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer +periods will result in fewer calls to cloud provider, but may delay addition +of new nodes to cluster.

ReplicaSetController [Required]
-ReplicaSetControllerConfiguration -
-

ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.

-
ReplicationController [Required]
-ReplicationControllerConfiguration -
-

ReplicationControllerConfiguration holds configuration for -ReplicationController related features.

-
ResourceQuotaController [Required]
-ResourceQuotaControllerConfiguration -
-

ResourceQuotaControllerConfiguration holds configuration for -ResourceQuotaController related features.

-
SAController [Required]
-SAControllerConfiguration -
-

SAControllerConfiguration holds configuration for ServiceAccountController -related features.

-
ServiceController [Required]
-ServiceControllerConfiguration -
-

ServiceControllerConfiguration holds configuration for ServiceController -related features.

-
TTLAfterFinishedController [Required]
-TTLAfterFinishedControllerConfiguration -
-

TTLAfterFinishedControllerConfiguration holds configuration for -TTLAfterFinishedController related features.

-
ValidatingAdmissionPolicyStatusController [Required]
-ValidatingAdmissionPolicyStatusControllerConfiguration +
+ +## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + + +

WebhookConfiguration contains configuration related to +cloud-controller-manager hosted webhooks

+ + + + + + + +
FieldDescription
Webhooks [Required]
+[]string
-

ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for -ValidatingAdmissionPolicyStatusController related features.

+

Webhooks is the list of webhooks to enable or disable +'*' means "all enabled by default webhooks" +'foo' means "enable 'foo'" +'-foo' means "disable 'foo'" +first item for a particular name wins

+ + -## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration} +## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration} **Appears in:** -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) -

AttachDetachControllerConfiguration contains elements describing AttachDetachController.

+

LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.

+ + + - - + + +
FieldDescription
apiVersion
string
controllermanager.config.k8s.io/v1alpha1
kind
string
LeaderMigrationConfiguration
DisableAttachDetachReconcilerSync [Required]
-bool +
leaderName [Required]
+string
-

Reconciler runs a periodic loop to reconcile the desired state of the with -the actual state of the world by triggering attach detach operations. -This flag enables or disables reconcile. Is false by default, and thus enabled.

+

LeaderName is the name of the leader election resource that protects the migration +E.g. 1-20-KCM-to-1-21-CCM

ReconcilerSyncLoopPeriod [Required]
-meta/v1.Duration +
resourceLock [Required]
+string
-

ReconcilerSyncLoopPeriod is the amount of time the reconciler sync states loop -wait between successive executions. Is set to 5 sec by default.

+

ResourceLock indicates the resource object type that will be used to lock. +Should be "leases" or "endpoints".

+
controllerLeaders [Required]
+[]ControllerLeaderConfiguration +
+

ControllerLeaders contains a list of migrating leader lock configurations

-## `CSRSigningConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration} +## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration} **Appears in:** -- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration) +- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) -

CSRSigningConfiguration holds information about a particular CSR signer

+

ControllerLeaderConfiguration provides the configuration for a migrating leader lock.

@@ -311,34 +376,37 @@ wait between successive executions. Is set to 5 sec by default.

- -
CertFile [Required]
+
name [Required]
string
-

certFile is the filename containing a PEM-encoded -X509 CA certificate used to issue certificates

+

Name is the name of the controller being migrated +E.g. service-controller, route-controller, cloud-node-controller, etc

KeyFile [Required]
+
component [Required]
string
-

keyFile is the filename containing a PEM-encoded -RSA or ECDSA private key used to issue certificates

+

Component is the name of the component in which the controller should be running. +E.g. kube-controller-manager, cloud-controller-manager, etc +Or '*' meaning the controller can be run under any component that participates in the migration

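Putting the two types above together, a leader-migration file might look roughly like the following sketch; the leader name and controller names are illustrative values taken from the examples in the field descriptions, not defaults:

```yaml
# Sketch of a LeaderMigrationConfiguration, assuming the field names listed above.
apiVersion: controllermanager.config.k8s.io/v1alpha1
kind: LeaderMigrationConfiguration
leaderName: 1-20-KCM-to-1-21-CCM   # name of the lease protecting the migration
resourceLock: leases               # "leases" or "endpoints"
controllerLeaders:
  - name: route-controller
    component: "*"                 # may run under any participating component
  - name: service-controller
    component: cloud-controller-manager
```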
-## `CSRSigningControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration} +## `GenericControllerManagerConfiguration` {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration} **Appears in:** +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

CSRSigningControllerConfiguration contains elements describing CSRSigningController.

+

GenericControllerManagerConfiguration holds configuration for a generic controller-manager.

@@ -346,534 +414,332 @@ RSA or ECDSA private key used to issue certificates

- - - - - - - - -
ClusterSigningCertFile [Required]
-string +
Port [Required]
+int32
-

clusterSigningCertFile is the filename containing a PEM-encoded -X509 CA certificate used to issue cluster-scoped certificates

+

port is the port that the controller-manager's http service runs on.

ClusterSigningKeyFile [Required]
+
Address [Required]
string
-

clusterSigningCertFile is the filename containing a PEM-encoded -RSA or ECDSA private key used to issue cluster-scoped certificates

+

address is the IP address to serve on (set to 0.0.0.0 for all interfaces).

KubeletServingSignerConfiguration [Required]
-CSRSigningConfiguration +
MinResyncPeriod [Required]
+meta/v1.Duration
-

kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kubelet-serving signer

+

minResyncPeriod is the resync period in reflectors; will be random between +minResyncPeriod and 2*minResyncPeriod.

KubeletClientSignerConfiguration [Required]
-CSRSigningConfiguration +
ClientConnection [Required]
+ClientConnectionConfiguration
-

kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet

+

ClientConnection specifies the kubeconfig file and client connection +settings for the controller manager to use when communicating with the apiserver.

KubeAPIServerClientSignerConfiguration [Required]
-CSRSigningConfiguration +
ControllerStartInterval [Required]
+meta/v1.Duration
-

kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client

+

How long to wait between starting controller managers

LegacyUnknownSignerConfiguration [Required]
-CSRSigningConfiguration +
LeaderElection [Required]
+LeaderElectionConfiguration
-

legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/legacy-unknown

+

leaderElection defines the configuration of leader election client.

ClusterSigningDuration [Required]
-meta/v1.Duration +
Controllers [Required]
+[]string
-

clusterSigningDuration is the max length of duration signed certificates will be given. -Individual CSRs may request shorter certs by setting spec.expirationSeconds.

+

Controllers is the list of controllers to enable or disable. +'*' means "all enabled-by-default controllers"; +'foo' means "enable 'foo'"; +'-foo' means "disable 'foo'". +The first item for a particular name wins (see the sketch after this table).

- -## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

CronJobControllerConfiguration contains elements describing CrongJob2Controller.

- - - - - - - - + + + + + +
FieldDescription
ConcurrentCronJobSyncs [Required]
-int32 +
Debugging [Required]
+DebuggingConfiguration
-

concurrentCronJobSyncs is the number of job objects that are -allowed to sync concurrently. Larger number = more responsive jobs, -but more CPU (and network) load.

+

DebuggingConfiguration holds configuration for Debugging related features.

+
LeaderMigrationEnabled [Required]
+bool +
+

LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.

+
LeaderMigration [Required]
+LeaderMigrationConfiguration +
+

LeaderMigration holds the configuration for Leader Migration.

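As a rough illustration of the `Address`, `Controllers`, and `LeaderMigrationEnabled` fields above, a generic controller-manager section could be sketched as follows, keeping the capitalized field casing shown in this table; the disabled controller name is illustrative:

```yaml
# Hypothetical Generic section of a controller-manager configuration.
Generic:
  Address: "0.0.0.0"          # serve on all interfaces
  Controllers:
    - "*"                     # all controllers that are enabled by default
    - "-foo"                  # illustrative: disable the controller named "foo"
  LeaderMigrationEnabled: false
```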
+ + -## `DaemonSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration} +## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration} -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - -

DaemonSetControllerConfiguration contains elements describing DaemonSetController.

+

KubeControllerManagerConfiguration contains elements describing kube-controller manager.

+ + + - - -
FieldDescription
apiVersion
string
kubecontrollermanager.config.k8s.io/v1alpha1
kind
string
KubeControllerManagerConfiguration
ConcurrentDaemonSetSyncs [Required]
-int32 +
Generic [Required]
+GenericControllerManagerConfiguration
-

concurrentDaemonSetSyncs is the number of daemonset objects that are -allowed to sync concurrently. Larger number = more responsive daemonset, -but more CPU (and network) load.

+

Generic holds configuration for a generic controller-manager

- -## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

DeploymentControllerConfiguration contains elements describing DeploymentController.

- - - - - - - - - -
FieldDescription
ConcurrentDeploymentSyncs [Required]
-int32 +
KubeCloudShared [Required]
+KubeCloudSharedConfiguration
-

concurrentDeploymentSyncs is the number of deployment objects that are -allowed to sync concurrently. Larger number = more responsive deployments, -but more CPU (and network) load.

+

KubeCloudSharedConfiguration holds configuration for shared related features +both in cloud controller manager and kube-controller manager.

- -## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

DeprecatedControllerConfiguration contains elements be deprecated.

- - - - -## `EndpointControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EndpointControllerConfiguration contains elements describing EndpointController.

- - - - - - - - - - -
FieldDescription
ConcurrentEndpointSyncs [Required]
-int32 +
AttachDetachController [Required]
+AttachDetachControllerConfiguration
-

concurrentEndpointSyncs is the number of endpoint syncing operations -that will be done concurrently. Larger number = faster endpoint updating, -but more CPU (and network) load.

+

AttachDetachControllerConfiguration holds configuration for +AttachDetachController related features.

EndpointUpdatesBatchPeriod [Required]
-meta/v1.Duration +
CSRSigningController [Required]
+CSRSigningControllerConfiguration
-

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. -Processing of pod changes will be delayed by this duration to join them with potential -upcoming updates and reduce the overall number of endpoints updates.

+

CSRSigningControllerConfiguration holds configuration for +CSRSigningController related features.

- -## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EndpointSliceControllerConfiguration contains elements describing -EndpointSliceController.

- - - - - - - - - - - -
FieldDescription
ConcurrentServiceEndpointSyncs [Required]
-int32 +
DaemonSetController [Required]
+DaemonSetControllerConfiguration
-

concurrentServiceEndpointSyncs is the number of service endpoint syncing -operations that will be done concurrently. Larger number = faster -endpoint slice updating, but more CPU (and network) load.

+

DaemonSetControllerConfiguration holds configuration for DaemonSetController +related features.

MaxEndpointsPerSlice [Required]
-int32 +
DeploymentController [Required]
+DeploymentControllerConfiguration
-

maxEndpointsPerSlice is the maximum number of endpoints that will be -added to an EndpointSlice. More endpoints per slice will result in fewer -and larger endpoint slices, but larger resources.

+

DeploymentControllerConfiguration holds configuration for +DeploymentController related features.

EndpointUpdatesBatchPeriod [Required]
-meta/v1.Duration +
StatefulSetController [Required]
+StatefulSetControllerConfiguration
-

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. -Processing of pod changes will be delayed by this duration to join them with potential -upcoming updates and reduce the overall number of endpoints updates.

+

StatefulSetControllerConfiguration holds configuration for +StatefulSetController related features.

- -## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EndpointSliceMirroringControllerConfiguration contains elements describing -EndpointSliceMirroringController.

- - - - - - - - - - - -
FieldDescription
MirroringConcurrentServiceEndpointSyncs [Required]
-int32 +
DeprecatedController [Required]
+DeprecatedControllerConfiguration
-

mirroringConcurrentServiceEndpointSyncs is the number of service endpoint -syncing operations that will be done concurrently. Larger number = faster -endpoint slice updating, but more CPU (and network) load.

+

DeprecatedControllerConfiguration holds configuration for some deprecated +features.

MirroringMaxEndpointsPerSubset [Required]
-int32 +
EndpointController [Required]
+EndpointControllerConfiguration
-

mirroringMaxEndpointsPerSubset is the maximum number of endpoints that -will be mirrored to an EndpointSlice for an EndpointSubset.

+

EndpointControllerConfiguration holds configuration for EndpointController +related features.

MirroringEndpointUpdatesBatchPeriod [Required]
-meta/v1.Duration +
EndpointSliceController [Required]
+EndpointSliceControllerConfiguration
-

mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice -updates. All updates triggered by EndpointSlice changes will be delayed -by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the -same Endpoints resource change in that period, they will be batched to a -single EndpointSlice update. Default 0 value means that each Endpoints -update triggers an EndpointSlice update.

+

EndpointSliceControllerConfiguration holds configuration for +EndpointSliceController related features.

- -## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.

- - - - - - - - - -
FieldDescription
ConcurrentEphemeralVolumeSyncs [Required]
-int32 +
EndpointSliceMirroringController [Required]
+EndpointSliceMirroringControllerConfiguration
-

ConcurrentEphemeralVolumeSyncseSyncs is the number of ephemeral volume syncing operations -that will be done concurrently. Larger number = faster ephemeral volume updating, -but more CPU (and network) load.

+

EndpointSliceMirroringControllerConfiguration holds configuration for +EndpointSliceMirroringController related features.

- -## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.

- - - - - - - - - - - -
FieldDescription
EnableGarbageCollector [Required]
-bool +
EphemeralVolumeController [Required]
+EphemeralVolumeControllerConfiguration
-

enables the generic garbage collector. MUST be synced with the -corresponding flag of the kube-apiserver. WARNING: the generic garbage -collector is an alpha feature.

+

EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController +related features.

ConcurrentGCSyncs [Required]
-int32 +
GarbageCollectorController [Required]
+GarbageCollectorControllerConfiguration
-

concurrentGCSyncs is the number of garbage collector workers that are -allowed to sync concurrently.

+

GarbageCollectorControllerConfiguration holds configuration for +GarbageCollectorController related features.

GCIgnoredResources [Required]
-[]GroupResource +
HPAController [Required]
+HPAControllerConfiguration
-

gcIgnoredResources is the list of GroupResources that garbage collection should ignore.

+

HPAControllerConfiguration holds configuration for HPAController related features.

- -## `GroupResource` {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource} - - -**Appears in:** - -- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration) - - -

GroupResource describes an group resource.

- - - - - - - - - - -
FieldDescription
Group [Required]
-string +
JobController [Required]
+JobControllerConfiguration
-

group is the group portion of the GroupResource.

+

JobControllerConfiguration holds configuration for JobController related features.

Resource [Required]
-string +
CronJobController [Required]
+CronJobControllerConfiguration
-

resource is the resource portion of the GroupResource.

+

CronJobControllerConfiguration holds configuration for CronJobController related features.

- -## `HPAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

HPAControllerConfiguration contains elements describing HPAController.

- - - - - - - - - - - - - - - - -
FieldDescription
ConcurrentHorizontalPodAutoscalerSyncs [Required]
-int32 +
LegacySATokenCleaner [Required]
+LegacySATokenCleanerConfiguration
-

ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently. -Larger number = more responsive HPA processing, but more CPU (and network) load.

+

LegacySATokenCleanerConfiguration holds configuration for LegacySATokenCleaner related features.

HorizontalPodAutoscalerSyncPeriod [Required]
-meta/v1.Duration +
NamespaceController [Required]
+NamespaceControllerConfiguration
-

HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of -pods in horizontal pod autoscaler.

+

NamespaceControllerConfiguration holds configuration for NamespaceController +related features.

HorizontalPodAutoscalerUpscaleForbiddenWindow [Required]
-meta/v1.Duration +
NodeIPAMController [Required]
+NodeIPAMControllerConfiguration
-

HorizontalPodAutoscalerUpscaleForbiddenWindow is a period after which next upscale allowed.

+

NodeIPAMControllerConfiguration holds configuration for NodeIPAMController +related features.

HorizontalPodAutoscalerDownscaleStabilizationWindow [Required]
-meta/v1.Duration +
NodeLifecycleController [Required]
+NodeLifecycleControllerConfiguration
-

HorizontalPodAutoscalerDowncaleStabilizationWindow is a period for which autoscaler will look -backwards and not scale down below any recommendation it made during that period.

+

NodeLifecycleControllerConfiguration holds configuration for +NodeLifecycleController related features.

HorizontalPodAutoscalerDownscaleForbiddenWindow [Required]
-meta/v1.Duration +
PersistentVolumeBinderController [Required]
+PersistentVolumeBinderControllerConfiguration
-

HorizontalPodAutoscalerDownscaleForbiddenWindow is a period after which next downscale allowed.

+

PersistentVolumeBinderControllerConfiguration holds configuration for +PersistentVolumeBinderController related features.

HorizontalPodAutoscalerTolerance [Required]
-float64 +
PodGCController [Required]
+PodGCControllerConfiguration
-

HorizontalPodAutoscalerTolerance is the tolerance for when -resource usage suggests upscaling/downscaling

+

PodGCControllerConfiguration holds configuration for PodGCController +related features.

HorizontalPodAutoscalerCPUInitializationPeriod [Required]
-meta/v1.Duration +
ReplicaSetController [Required]
+ReplicaSetControllerConfiguration
-

HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples -might be skipped.

+

ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.

HorizontalPodAutoscalerInitialReadinessDelay [Required]
-meta/v1.Duration +
ReplicationController [Required]
+ReplicationControllerConfiguration
-

HorizontalPodAutoscalerInitialReadinessDelay is period after pod start during which readiness -changes are treated as readiness being set for the first time. The only effect of this is that -HPA will disregard CPU samples from unready pods that had last readiness change during that -period.

+

ReplicationControllerConfiguration holds configuration for +ReplicationController related features.

- -## `JobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

JobControllerConfiguration contains elements describing JobController.

- - - - - - - - - -
FieldDescription
ConcurrentJobSyncs [Required]
-int32 +
ResourceQuotaController [Required]
+ResourceQuotaControllerConfiguration
-

concurrentJobSyncs is the number of job objects that are -allowed to sync concurrently. Larger number = more responsive jobs, -but more CPU (and network) load.

+

ResourceQuotaControllerConfiguration holds configuration for +ResourceQuotaController related features.

- -## `LegacySATokenCleanerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-LegacySATokenCleanerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

LegacySATokenCleanerConfiguration contains elements describing LegacySATokenCleaner

- - - - - - - - + + + + + + + + +
FieldDescription
CleanUpPeriod [Required]
-meta/v1.Duration +
SAController [Required]
+SAControllerConfiguration +
+

SAControllerConfiguration holds configuration for ServiceAccountController +related features.

+
ServiceController [Required]
+ServiceControllerConfiguration +
+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

+
TTLAfterFinishedController [Required]
+TTLAfterFinishedControllerConfiguration +
+

TTLAfterFinishedControllerConfiguration holds configuration for +TTLAfterFinishedController related features.

+
ValidatingAdmissionPolicyStatusController [Required]
+ValidatingAdmissionPolicyStatusControllerConfiguration
-

CleanUpPeriod is the period of time since the last usage of an -auto-generated service account token before it can be deleted.

+

ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for +ValidatingAdmissionPolicyStatusController related features.

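The overall shape of a kube-controller-manager configuration file built from the type above might look like the following skeleton; only a few of the embedded controller sections are shown, and all values are illustrative:

```yaml
# Skeleton of a KubeControllerManagerConfiguration, field casing as in the table above.
apiVersion: kubecontrollermanager.config.k8s.io/v1alpha1
kind: KubeControllerManagerConfiguration
Generic:
  Controllers: ["*"]
KubeCloudShared:
  AllocateNodeCIDRs: true
NamespaceController:
  ConcurrentNamespaceSyncs: 10
PodGCController:
  TerminatedPodGCThreshold: 12500
```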
-## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration} +## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration} **Appears in:** @@ -881,7 +747,7 @@ auto-generated service account token before it can be deleted.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

NamespaceControllerConfiguration contains elements describing NamespaceController.

+

AttachDetachControllerConfiguration contains elements describing AttachDetachController.

@@ -889,34 +755,35 @@ auto-generated service account token before it can be deleted.

- -
NamespaceSyncPeriod [Required]
-meta/v1.Duration +
DisableAttachDetachReconcilerSync [Required]
+bool
-

namespaceSyncPeriod is the period for syncing namespace life-cycle -updates.

+

Reconciler runs a periodic loop to reconcile the desired state of the world with +the actual state of the world by triggering attach/detach operations. +This flag enables or disables the reconciler. It is false by default, and thus the reconciler is enabled.

ConcurrentNamespaceSyncs [Required]
-int32 +
ReconcilerSyncLoopPeriod [Required]
+meta/v1.Duration
-

concurrentNamespaceSyncs is the number of namespace objects that are -allowed to sync concurrently.

+

ReconcilerSyncLoopPeriod is the amount of time the reconciler sync states loop +waits between successive executions. It is set to 5 seconds by default.

-## `NodeIPAMControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration} +## `CSRSigningConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration} **Appears in:** -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration) -

NodeIPAMControllerConfiguration contains elements describing NodeIpamController.

+

CSRSigningConfiguration holds information about a particular CSR signer

@@ -924,45 +791,26 @@ allowed to sync concurrently.

- - - - - - - - - - -
ServiceCIDR [Required]
+
CertFile [Required]
string
-

serviceCIDR is CIDR Range for Services in cluster.

+

certFile is the filename containing a PEM-encoded +X509 CA certificate used to issue certificates

SecondaryServiceCIDR [Required]
+
KeyFile [Required]
string
-

secondaryServiceCIDR is CIDR Range for Services in cluster. This is used in dual stack clusters. SecondaryServiceCIDR must be of different IP family than ServiceCIDR

-
NodeCIDRMaskSize [Required]
-int32 -
-

NodeCIDRMaskSize is the mask size for node cidr in cluster.

-
NodeCIDRMaskSizeIPv4 [Required]
-int32 -
-

NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual-stack cluster.

-
NodeCIDRMaskSizeIPv6 [Required]
-int32 -
-

NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual-stack cluster.

+

keyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue certificates

-## `NodeLifecycleControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration} +## `CSRSigningControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration} **Appears in:** @@ -970,7 +818,7 @@ allowed to sync concurrently.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.

+

CSRSigningControllerConfiguration contains elements describing CSRSigningController.

@@ -978,64 +826,62 @@ allowed to sync concurrently.

- - - - - - -
NodeEvictionRate [Required]
-float32 +
ClusterSigningCertFile [Required]
+string
-

nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy

+

clusterSigningCertFile is the filename containing a PEM-encoded +X509 CA certificate used to issue cluster-scoped certificates

SecondaryNodeEvictionRate [Required]
-float32 +
ClusterSigningKeyFile [Required]
+string
-

secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy

+

clusterSigningKeyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue cluster-scoped certificates.

NodeStartupGracePeriod [Required]
-meta/v1.Duration +
KubeletServingSignerConfiguration [Required]
+CSRSigningConfiguration
-

nodeStartupGracePeriod is the amount of time which we allow starting a node to -be unresponsive before marking it unhealthy.

+

kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kubelet-serving signer

NodeMonitorGracePeriod [Required]
-meta/v1.Duration +
KubeletClientSignerConfiguration [Required]
+CSRSigningConfiguration
-

nodeMontiorGracePeriod is the amount of time which we allow a running node to be -unresponsive before marking it unhealthy. Must be N times more than kubelet's -nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet -to post node status.

+

kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet

PodEvictionTimeout [Required]
-meta/v1.Duration +
KubeAPIServerClientSignerConfiguration [Required]
+CSRSigningConfiguration
-

podEvictionTimeout is the grace period for deleting pods on failed nodes.

+

kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client

LargeClusterSizeThreshold [Required]
-int32 +
LegacyUnknownSignerConfiguration [Required]
+CSRSigningConfiguration
-

secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold

+

legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/legacy-unknown

UnhealthyZoneThreshold [Required]
-float32 +
ClusterSigningDuration [Required]
+meta/v1.Duration
-

Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least -unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady

+

clusterSigningDuration is the max length of duration signed certificates will be given. +Individual CSRs may request shorter certs by setting spec.expirationSeconds.

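To make the cluster-wide and per-signer fields above concrete, here is a rough sketch of a CSRSigningController section; the file paths and duration are illustrative:

```yaml
# Hypothetical CSRSigningController section of a KubeControllerManagerConfiguration.
CSRSigningController:
  ClusterSigningCertFile: /etc/kubernetes/pki/ca.crt   # illustrative path
  ClusterSigningKeyFile: /etc/kubernetes/pki/ca.key    # illustrative path
  ClusterSigningDuration: 8760h                        # max lifetime of signed certs
  KubeletServingSignerConfiguration:
    CertFile: ""   # empty in this sketch: use the cluster-wide cert/key pair
    KeyFile: ""
```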
-## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration} +## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration} **Appears in:** @@ -1043,8 +889,7 @@ unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

PersistentVolumeBinderControllerConfiguration contains elements describing -PersistentVolumeBinderController.

+

CronJobControllerConfiguration contains elements describing the CronJobController.

@@ -1052,49 +897,27 @@ PersistentVolumeBinderController.

- - - - - - - - - -
PVClaimBinderSyncPeriod [Required]
-meta/v1.Duration -
-

pvClaimBinderSyncPeriod is the period for syncing persistent volumes -and persistent volume claims.

-
VolumeConfiguration [Required]
-VolumeConfiguration -
-

volumeConfiguration holds configuration for volume related features.

-
VolumeHostCIDRDenylist [Required]
-[]string -
-

DEPRECATED: VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the -controller from plugins.

-
VolumeHostAllowLocalLoopback [Required]
-bool +
ConcurrentCronJobSyncs [Required]
+int32
-

DEPRECATED: VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc) -should be allowed from plugins.

+

concurrentCronJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

-## `PersistentVolumeRecyclerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration} +## `DaemonSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration} **Appears in:** -- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.

+

DaemonSetControllerConfiguration contains elements describing DaemonSetController.

@@ -1102,69 +925,19 @@ should be allowed from plugins.

- - - - - - - - - - - - - - - - - - -
MaximumRetry [Required]
-int32 -
-

maximumRetry is number of retries the PV recycler will execute on failure to recycle -PV.

-
MinimumTimeoutNFS [Required]
-int32 -
-

minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler -pod.

-
PodTemplateFilePathNFS [Required]
-string -
-

podTemplateFilePathNFS is the file path to a pod definition used as a template for -NFS persistent volume recycling

-
IncrementTimeoutNFS [Required]
-int32 -
-

incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds -for an NFS scrubber pod.

-
PodTemplateFilePathHostPath [Required]
-string -
-

podTemplateFilePathHostPath is the file path to a pod definition used as a template for -HostPath persistent volume recycling. This is for development and testing only and -will not work in a multi-node cluster.

-
MinimumTimeoutHostPath [Required]
-int32 -
-

minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath -Recycler pod. This is for development and testing only and will not work in a multi-node -cluster.

-
IncrementTimeoutHostPath [Required]
+
ConcurrentDaemonSetSyncs [Required]
int32
-

incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds -for a HostPath scrubber pod. This is for development and testing only and will not work -in a multi-node cluster.

+

concurrentDaemonSetSyncs is the number of daemonset objects that are +allowed to sync concurrently. Larger number = more responsive daemonset, +but more CPU (and network) load.

-## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration} +## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration} **Appears in:** @@ -1172,7 +945,7 @@ in a multi-node cluster.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

PodGCControllerConfiguration contains elements describing PodGCController.

+

DeploymentControllerConfiguration contains elements describing DeploymentController.

@@ -1180,19 +953,19 @@ in a multi-node cluster.

-
TerminatedPodGCThreshold [Required]
+
ConcurrentDeploymentSyncs [Required]
int32
-

terminatedPodGCThreshold is the number of terminated pods that can exist -before the terminated pod garbage collector starts deleting terminated pods. -If <= 0, the terminated pod garbage collector is disabled.

+

concurrentDeploymentSyncs is the number of deployment objects that are +allowed to sync concurrently. Larger number = more responsive deployments, +but more CPU (and network) load.

-## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration} +## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration} **Appears in:** @@ -1200,27 +973,12 @@ If <= 0, the terminated pod garbage collector is disabled.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.

+

DeprecatedControllerConfiguration contains elements that are deprecated.

- - - - - - - - - -
FieldDescription
ConcurrentRSSyncs [Required]
-int32 -
-

concurrentRSSyncs is the number of replica sets that are allowed to sync -concurrently. Larger number = more responsive replica management, but more -CPU (and network) load.

-
-## `ReplicationControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration} + +## `EndpointControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration} **Appears in:** @@ -1228,7 +986,7 @@ CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ReplicationControllerConfiguration contains elements describing ReplicationController.

+

EndpointControllerConfiguration contains elements describing EndpointController.

@@ -1236,19 +994,28 @@ CPU (and network) load.

- + + +
ConcurrentRCSyncs [Required]
+
ConcurrentEndpointSyncs [Required]
int32
-

concurrentRCSyncs is the number of replication controllers that are -allowed to sync concurrently. Larger number = more responsive replica -management, but more CPU (and network) load.

+

concurrentEndpointSyncs is the number of endpoint syncing operations +that will be done concurrently. Larger number = faster endpoint updating, +but more CPU (and network) load.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

-## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration} +## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration} **Appears in:** @@ -1256,7 +1023,8 @@ management, but more CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.

+

EndpointSliceControllerConfiguration contains elements describing +EndpointSliceController.

@@ -1264,27 +1032,37 @@ management, but more CPU (and network) load.

- - + + +
ResourceQuotaSyncPeriod [Required]
-meta/v1.Duration +
ConcurrentServiceEndpointSyncs [Required]
+int32
-

resourceQuotaSyncPeriod is the period for syncing quota usage status -in the system.

+

concurrentServiceEndpointSyncs is the number of service endpoint syncing +operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

ConcurrentResourceQuotaSyncs [Required]
+
MaxEndpointsPerSlice [Required]
int32
-

concurrentResourceQuotaSyncs is the number of resource quotas that are -allowed to sync concurrently. Larger number = more responsive quota -management, but more CPU (and network) load.

+

maxEndpointsPerSlice is the maximum number of endpoints that will be +added to an EndpointSlice. More endpoints per slice will result in fewer +and larger endpoint slices, but larger resources.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

-## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration} +## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration} **Appears in:** @@ -1292,7 +1070,8 @@ management, but more CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

SAControllerConfiguration contains elements describing ServiceAccountController.

+

EndpointSliceMirroringControllerConfiguration contains elements describing +EndpointSliceMirroringController.

@@ -1300,34 +1079,39 @@ management, but more CPU (and network) load.

- - -
ServiceAccountKeyFile [Required]
-string +
MirroringConcurrentServiceEndpointSyncs [Required]
+int32
-

serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key -used to sign service account tokens.

+

mirroringConcurrentServiceEndpointSyncs is the number of service endpoint +syncing operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

ConcurrentSATokenSyncs [Required]
+
MirroringMaxEndpointsPerSubset [Required]
int32
-

concurrentSATokenSyncs is the number of service account token syncing operations -that will be done concurrently.

+

mirroringMaxEndpointsPerSubset is the maximum number of endpoints that +will be mirrored to an EndpointSlice for an EndpointSubset.

RootCAFile [Required]
-string +
MirroringEndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration
-

rootCAFile is the root certificate authority will be included in service -account's token secret. This must be a valid PEM-encoded CA bundle.

+

mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice +updates. All updates triggered by EndpointSlice changes will be delayed +by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the +same Endpoints resource change in that period, they will be batched to a +single EndpointSlice update. Default 0 value means that each Endpoints +update triggers an EndpointSlice update.

-## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration} +## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration} **Appears in:** @@ -1335,7 +1119,7 @@ account's token secret. This must be a valid PEM-encoded CA bundle.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

+

EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.

@@ -1343,19 +1127,19 @@ account's token secret. This must be a valid PEM-encoded CA bundle.

-
ConcurrentStatefulSetSyncs [Required]
+
ConcurrentEphemeralVolumeSyncs [Required]
int32
-

concurrentStatefulSetSyncs is the number of statefulset objects that are -allowed to sync concurrently. Larger number = more responsive statefulsets, +

ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations +that will be done concurrently. Larger number = faster ephemeral volume updating, but more CPU (and network) load.

-## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration} +## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration} **Appears in:** @@ -1363,7 +1147,7 @@ but more CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

+

GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.

@@ -1371,26 +1155,42 @@ but more CPU (and network) load.

- + + + + + +
ConcurrentTTLSyncs [Required]
+
EnableGarbageCollector [Required]
+bool +
+

enables the generic garbage collector. MUST be synced with the +corresponding flag of the kube-apiserver. WARNING: the generic garbage +collector is an alpha feature.

+
ConcurrentGCSyncs [Required]
int32
-

concurrentTTLSyncs is the number of TTL-after-finished collector workers that are +

concurrentGCSyncs is the number of garbage collector workers that are allowed to sync concurrently.

GCIgnoredResources [Required]
+[]GroupResource +
+

gcIgnoredResources is the list of GroupResources that garbage collection should ignore.

+
-## `ValidatingAdmissionPolicyStatusControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration} +## `GroupResource` {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource} **Appears in:** -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration) -

ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.

+

GroupResource describes a group resource.

@@ -1398,32 +1198,32 @@ allowed to sync concurrently.

- + + +
ConcurrentPolicySyncs [Required]
-int32 +
Group [Required]
+string
-

ConcurrentPolicySyncs is the number of policy objects that are -allowed to sync concurrently. Larger number = quicker type checking, -but more CPU (and network) load. -The default value is 5.

+

group is the group portion of the GroupResource.

+
Resource [Required]
+string +
+

resource is the resource portion of the GroupResource.

-## `VolumeConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration} +## `HPAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration} **Appears in:** -- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

VolumeConfiguration contains all enumerated flags meant to configure all volume -plugins. From this config, the controller-manager binary will create many instances of -volume.VolumeConfig, each containing only the configuration needed for that plugin which -are then passed to the appropriate plugin. The ControllerManager binary is the only part -of the code which knows what plugins are supported and which flags correspond to each plugin.

+

HPAControllerConfiguration contains elements describing HPAController.

@@ -1431,54 +1231,82 @@ of the code which knows what plugins are supported and which flags correspond to - - - - + + + + + + + + + + + +
EnableHostPathProvisioning [Required]
-bool +
ConcurrentHorizontalPodAutoscalerSyncs [Required]
+int32
-

enableHostPathProvisioning enables HostPath PV provisioning when running without a -cloud provider. This allows testing and development of provisioning features. HostPath -provisioning is not supported in any way, won't work in a multi-node cluster, and -should not be used for anything other than testing or development.

+

ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently. +Larger number = more responsive HPA processing, but more CPU (and network) load.

EnableDynamicProvisioning [Required]
-bool +
HorizontalPodAutoscalerSyncPeriod [Required]
+meta/v1.Duration
-

enableDynamicProvisioning enables the provisioning of volumes when running within an environment -that supports dynamic provisioning. Defaults to true.

+

HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of +pods in horizontal pod autoscaler.

PersistentVolumeRecyclerConfiguration [Required]
-PersistentVolumeRecyclerConfiguration +
HorizontalPodAutoscalerUpscaleForbiddenWindow [Required]
+meta/v1.Duration
-

persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.

+

HorizontalPodAutoscalerUpscaleForbiddenWindow is a period after which next upscale allowed.

FlexVolumePluginDir [Required]
-string +
HorizontalPodAutoscalerDownscaleStabilizationWindow [Required]
+meta/v1.Duration
-

volumePluginDir is the full path of the directory in which the flex -volume plugin should search for additional third party volume plugins

+

HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which the autoscaler will look +backwards and not scale down below any recommendation it made during that period.

+
HorizontalPodAutoscalerDownscaleForbiddenWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerDownscaleForbiddenWindow is a period after which next downscale allowed.

+
HorizontalPodAutoscalerTolerance [Required]
+float64 +
+

HorizontalPodAutoscalerTolerance is the tolerance for when +resource usage suggests upscaling/downscaling

+
HorizontalPodAutoscalerCPUInitializationPeriod [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples +might be skipped.

+
HorizontalPodAutoscalerInitialReadinessDelay [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerInitialReadinessDelay is the period after pod start during which readiness +changes are treated as readiness being set for the first time. The only effect of this is that +HPA will disregard CPU samples from unready pods that had their last readiness change during that +period.

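As an illustration of the horizontal pod autoscaler knobs documented above, an HPAController section could be sketched roughly as follows; all values are illustrative:

```yaml
# Hypothetical HPAController section of a KubeControllerManagerConfiguration.
HPAController:
  ConcurrentHorizontalPodAutoscalerSyncs: 5
  HorizontalPodAutoscalerSyncPeriod: 15s
  HorizontalPodAutoscalerTolerance: 0.1
  HorizontalPodAutoscalerDownscaleStabilizationWindow: 5m0s
  HorizontalPodAutoscalerCPUInitializationPeriod: 5m0s
  HorizontalPodAutoscalerInitialReadinessDelay: 30s
```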
- - - -## `NodeControllerConfiguration` {#NodeControllerConfiguration} +## `JobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

NodeControllerConfiguration contains elements describing NodeController.

+

JobControllerConfiguration contains elements describing JobController.

@@ -1486,28 +1314,54 @@ volume plugin should search for additional third party volume plugins

- + + +
ConcurrentNodeSyncs [Required]
+
ConcurrentJobSyncs [Required]
int32
-

ConcurrentNodeSyncs is the number of workers -concurrently synchronizing nodes

+

concurrentJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

+
+ +## `LegacySATokenCleanerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-LegacySATokenCleanerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

LegacySATokenCleanerConfiguration contains elements describing LegacySATokenCleaner

+ + + + + + + + +
FieldDescription
CleanUpPeriod [Required]
+meta/v1.Duration +
+

CleanUpPeriod is the period of time since the last usage of an +auto-generated service account token before it can be deleted.

-## `ServiceControllerConfiguration` {#ServiceControllerConfiguration} +## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ServiceControllerConfiguration contains elements describing ServiceController.

+

NamespaceControllerConfiguration contains elements describing NamespaceController.

@@ -1515,92 +1369,88 @@ concurrently synchronizing nodes

- + + +
ConcurrentServiceSyncs [Required]
+
NamespaceSyncPeriod [Required]
+meta/v1.Duration +
+

namespaceSyncPeriod is the period for syncing namespace life-cycle +updates.

+
ConcurrentNamespaceSyncs [Required]
int32
-

concurrentServiceSyncs is the number of services that are -allowed to sync concurrently. Larger number = more responsive service -management, but more CPU (and network) load.

+

concurrentNamespaceSyncs is the number of namespace objects that are +allowed to sync concurrently.

- - -## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration} +## `NodeIPAMControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration} +**Appears in:** -

CloudControllerManagerConfiguration contains elements describing cloud-controller manager.

+- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NodeIPAMControllerConfiguration contains elements describing NodeIpamController.

- - - - - - - - - - -
FieldDescription
apiVersion
string
cloudcontrollermanager.config.k8s.io/v1alpha1
kind
string
CloudControllerManagerConfiguration
Generic [Required]
-GenericControllerManagerConfiguration -
-

Generic holds configuration for a generic controller-manager

-
KubeCloudShared [Required]
-KubeCloudSharedConfiguration +
ServiceCIDR [Required]
+string
-

KubeCloudSharedConfiguration holds configuration for shared related features -both in cloud controller manager and kube-controller manager.

+

serviceCIDR is CIDR Range for Services in cluster.

NodeController [Required]
-NodeControllerConfiguration +
SecondaryServiceCIDR [Required]
+string
-

NodeController holds configuration for node controller -related features.

+

secondaryServiceCIDR is CIDR Range for Services in cluster. This is used in dual stack clusters. SecondaryServiceCIDR must be of different IP family than ServiceCIDR

ServiceController [Required]
-ServiceControllerConfiguration +
NodeCIDRMaskSize [Required]
+int32
-

ServiceControllerConfiguration holds configuration for ServiceController -related features.

+

NodeCIDRMaskSize is the mask size for node cidr in cluster.

NodeStatusUpdateFrequency [Required]
-meta/v1.Duration +
NodeCIDRMaskSizeIPv4 [Required]
+int32
-

NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status

+

NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual-stack cluster.

Webhook [Required]
-WebhookConfiguration +
NodeCIDRMaskSizeIPv6 [Required]
+int32
-

Webhook is the configuration for cloud-controller-manager hosted webhooks

+

NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual-stack cluster.

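A rough sketch of the NodeIPAMController fields above, for a single-stack cluster; the CIDR and mask size are illustrative:

```yaml
# Hypothetical NodeIPAMController section of a KubeControllerManagerConfiguration.
NodeIPAMController:
  ServiceCIDR: "10.96.0.0/12"   # illustrative Service CIDR
  SecondaryServiceCIDR: ""      # only set in dual-stack clusters
  NodeCIDRMaskSize: 24          # mask size for per-node pod CIDRs
```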
-## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration} +## `NodeLifecycleControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration} **Appears in:** -- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

CloudProviderConfiguration contains basically elements about cloud provider.

+

NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.

@@ -1608,35 +1458,73 @@ related features.

- - + + + + + + + + + + + + + + +
Name [Required]
-string +
NodeEvictionRate [Required]
+float32
-

Name is the provider for cloud services.

+

nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy

CloudConfigFile [Required]
-string +
SecondaryNodeEvictionRate [Required]
+float32
-

cloudConfigFile is the path to the cloud provider configuration file.

+

secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy

+
NodeStartupGracePeriod [Required]
+meta/v1.Duration +
+

nodeStartupGracePeriod is the amount of time which we allow starting a node to +be unresponsive before marking it unhealthy.

+
NodeMonitorGracePeriod [Required]
+meta/v1.Duration +
+

nodeMonitorGracePeriod is the amount of time which we allow a running node to be +unresponsive before marking it unhealthy. Must be N times more than kubelet's +nodeStatusUpdateFrequency, where N means the number of retries allowed for the kubelet +to post node status.

+
PodEvictionTimeout [Required]
+meta/v1.Duration +
+

podEvictionTimeout is the grace period for deleting pods on failed nodes.

+
LargeClusterSizeThreshold [Required]
+int32 +
+

secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold

+
UnhealthyZoneThreshold [Required]
+float32 +
+

Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least +unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady

-## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration} +## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

KubeCloudSharedConfiguration contains elements shared by both kube-controller manager -and cloud-controller manager, but not genericconfig.

+

PersistentVolumeBinderControllerConfiguration contains elements describing +PersistentVolumeBinderController.

@@ -1644,109 +1532,155 @@ and cloud-controller manager, but not genericconfig.

- - - - - +
CloudProvider [Required]
-CloudProviderConfiguration +
PVClaimBinderSyncPeriod [Required]
+meta/v1.Duration
-

CloudProviderConfiguration holds configuration for CloudProvider related features.

+

pvClaimBinderSyncPeriod is the period for syncing persistent volumes +and persistent volume claims.

ExternalCloudVolumePlugin [Required]
-string +
VolumeConfiguration [Required]
+VolumeConfiguration
-

externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external". -It is currently used by the in repo cloud providers to handle node and volume control in the KCM.

+

volumeConfiguration holds configuration for volume related features.

UseServiceAccountCredentials [Required]
-bool +
VolumeHostCIDRDenylist [Required]
+[]string
-

useServiceAccountCredentials indicates whether controllers should be run with -individual service account credentials.

+

DEPRECATED: VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the +controller from plugins.

AllowUntaggedCloud [Required]
+
VolumeHostAllowLocalLoopback [Required]
bool
-

run with untagged cloud instances

+

DEPRECATED: VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc) +should be allowed from plugins.

RouteReconciliationPeriod [Required]
-meta/v1.Duration +
+ +## `PersistentVolumeRecyclerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration} + + +**Appears in:** + +- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration) + + +

PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.

+ + + + + + + + - - - - - - - +
FieldDescription
MaximumRetry [Required]
+int32
-

routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider..

+

maximumRetry is number of retries the PV recycler will execute on failure to recycle +PV.

NodeMonitorPeriod [Required]
-meta/v1.Duration +
MinimumTimeoutNFS [Required]
+int32
-

nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.

+

minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler +pod.

ClusterName [Required]
+
PodTemplateFilePathNFS [Required]
string
-

clusterName is the instance prefix for the cluster.

+

podTemplateFilePathNFS is the file path to a pod definition used as a template for +NFS persistent volume recycling

ClusterCIDR [Required]
-string +
IncrementTimeoutNFS [Required]
+int32
-

clusterCIDR is CIDR Range for Pods in cluster.

+

incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds +for an NFS scrubber pod.

AllocateNodeCIDRs [Required]
-bool +
PodTemplateFilePathHostPath [Required]
+string
-

AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if -ConfigureCloudRoutes is true, to be set on the cloud provider.

+

podTemplateFilePathHostPath is the file path to a pod definition used as a template for +HostPath persistent volume recycling. This is for development and testing only and +will not work in a multi-node cluster.

CIDRAllocatorType [Required]
-string +
MinimumTimeoutHostPath [Required]
+int32
-

CIDRAllocatorType determines what kind of pod CIDR allocator will be used.

+

minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath +Recycler pod. This is for development and testing only and will not work in a multi-node +cluster.

ConfigureCloudRoutes [Required]
-bool +
IncrementTimeoutHostPath [Required]
+int32
-

configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs -to be configured on the cloud provider.

+

incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds +for a HostPath scrubber pod. This is for development and testing only and will not work +in a multi-node cluster.

NodeSyncPeriod [Required]
-meta/v1.Duration +
+ +## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

PodGCControllerConfiguration contains elements describing PodGCController.

+ + + + + + + +
FieldDescription
TerminatedPodGCThreshold [Required]
+int32
-

nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer -periods will result in fewer calls to cloud provider, but may delay addition -of new nodes to cluster.

+

terminatedPodGCThreshold is the number of terminated pods that can exist +before the terminated pod garbage collector starts deleting terminated pods. +If <= 0, the terminated pod garbage collector is disabled.

-## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration} +## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

WebhookConfiguration contains configuration related to -cloud-controller-manager hosted webhooks

+

ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.

@@ -1754,77 +1688,55 @@ cloud-controller-manager hosted webhooks

-
Webhooks [Required]
-[]string +
ConcurrentRSSyncs [Required]
+int32
-

Webhooks is the list of webhooks to enable or disable -'*' means "all enabled by default webhooks" -'foo' means "enable 'foo'" -'-foo' means "disable 'foo'" -first item for a particular name wins

+

concurrentRSSyncs is the number of replica sets that are allowed to sync +concurrently. Larger number = more responsive replica management, but more +CPU (and network) load.

- - - -## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration} +## `ReplicationControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration} **Appears in:** -- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.

+

ReplicationControllerConfiguration contains elements describing ReplicationController.

- - - - - - - - - -
FieldDescription
apiVersion
string
controllermanager.config.k8s.io/v1alpha1
kind
string
LeaderMigrationConfiguration
leaderName [Required]
-string -
-

LeaderName is the name of the leader election resource that protects the migration -E.g. 1-20-KCM-to-1-21-CCM

-
resourceLock [Required]
-string -
-

ResourceLock indicates the resource object type that will be used to lock -Should be "leases" or "endpoints"

-
controllerLeaders [Required]
-[]ControllerLeaderConfiguration +
ConcurrentRCSyncs [Required]
+int32
-

ControllerLeaders contains a list of migrating leader lock configurations

+

concurrentRCSyncs is the number of replication controllers that are +allowed to sync concurrently. Larger number = more responsive replica +management, but more CPU (and network) load.

-## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration} +## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration} **Appears in:** -- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ControllerLeaderConfiguration provides the configuration for a migrating leader lock.

+

ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.

@@ -1832,37 +1744,35 @@ Should be "leases" or "endpoints"

- -
name [Required]
-string +
ResourceQuotaSyncPeriod [Required]
+meta/v1.Duration
-

Name is the name of the controller being migrated -E.g. service-controller, route-controller, cloud-node-controller, etc

+

resourceQuotaSyncPeriod is the period for syncing quota usage status +in the system.

component [Required]
-string +
ConcurrentResourceQuotaSyncs [Required]
+int32
-

Component is the name of the component in which the controller should be running. -E.g. kube-controller-manager, cloud-controller-manager, etc -Or '*' meaning the controller can be run under any component that participates in the migration

+

concurrentResourceQuotaSyncs is the number of resource quotas that are +allowed to sync concurrently. Larger number = more responsive quota +management, but more CPU (and network) load.

-## `GenericControllerManagerConfiguration` {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration} +## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

GenericControllerManagerConfiguration holds configuration for a generic controller-manager.

+

SAControllerConfiguration contains elements describing ServiceAccountController.

@@ -1870,80 +1780,168 @@ Or '*' meaning the controller can be run under any component that participates i - - - - +
Port [Required]
-int32 +
ServiceAccountKeyFile [Required]
+string
-

port is the port that the controller-manager's http service runs on.

+

serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key +used to sign service account tokens.

Address [Required]
-string +
ConcurrentSATokenSyncs [Required]
+int32
-

address is the IP address to serve on (set to 0.0.0.0 for all interfaces).

+

concurrentSATokenSyncs is the number of service account token syncing operations +that will be done concurrently.

MinResyncPeriod [Required]
-meta/v1.Duration +
RootCAFile [Required]
+string
-

minResyncPeriod is the resync period in reflectors; will be random between -minResyncPeriod and 2*minResyncPeriod.

+

rootCAFile is the root certificate authority that will be included in the service +account's token secret. This must be a valid PEM-encoded CA bundle.

ClientConnection [Required]
-ClientConnectionConfiguration +
+ +## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

+ + + + + + + + - +
FieldDescription
ConcurrentStatefulSetSyncs [Required]
+int32
-

ClientConnection specifies the kubeconfig file and client connection -settings for the proxy server to use when communicating with the apiserver.

+

concurrentStatefulSetSyncs is the number of statefulset objects that are +allowed to sync concurrently. Larger number = more responsive statefulsets, +but more CPU (and network) load.

ControllerStartInterval [Required]
-meta/v1.Duration +
+ +## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

+ + + + + + + + - +
FieldDescription
ConcurrentTTLSyncs [Required]
+int32
-

How long to wait between starting controller managers

+

concurrentTTLSyncs is the number of TTL-after-finished collector workers that are +allowed to sync concurrently.

LeaderElection [Required]
-LeaderElectionConfiguration +
+ +## `ValidatingAdmissionPolicyStatusControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.

+ + + + + + + + - +
FieldDescription
ConcurrentPolicySyncs [Required]
+int32
-

leaderElection defines the configuration of leader election client.

+

ConcurrentPolicySyncs is the number of policy objects that are +allowed to sync concurrently. Larger number = quicker type checking, +but more CPU (and network) load. +The default value is 5.

Controllers [Required]
-[]string +
+ +## `VolumeConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration} + + +**Appears in:** + +- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration) + + +

VolumeConfiguration contains all enumerated flags meant to configure all volume +plugins. From this config, the controller-manager binary will create many instances of +volume.VolumeConfig, each containing only the configuration needed for that plugin which +are then passed to the appropriate plugin. The ControllerManager binary is the only part +of the code which knows what plugins are supported and which flags correspond to each plugin.

+ + + + + + + + - - - diff --git a/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md index 328655b5d117e..ddc65f29800e7 100644 --- a/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md @@ -12,6 +12,7 @@ auto_generated: true - [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration) + ## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} @@ -80,10 +81,10 @@ client.

**Appears in:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) @@ -201,7 +202,6 @@ during leader election cycles.

FieldDescription
EnableHostPathProvisioning [Required]
+bool
-

Controllers is the list of controllers to enable or disable -'*' means "all enabled by default controllers" -'foo' means "enable 'foo'" -'-foo' means "disable 'foo'" -first item for a particular name wins

+

enableHostPathProvisioning enables HostPath PV provisioning when running without a +cloud provider. This allows testing and development of provisioning features. HostPath +provisioning is not supported in any way, won't work in a multi-node cluster, and +should not be used for anything other than testing or development.

Debugging [Required]
-DebuggingConfiguration +
EnableDynamicProvisioning [Required]
+bool
-

DebuggingConfiguration holds configuration for Debugging related features.

+

enableDynamicProvisioning enables the provisioning of volumes when running within an environment +that supports dynamic provisioning. Defaults to true.

LeaderMigrationEnabled [Required]
-bool +
PersistentVolumeRecyclerConfiguration [Required]
+PersistentVolumeRecyclerConfiguration
-

LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.

+

persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.

LeaderMigration [Required]
-LeaderMigrationConfiguration +
FlexVolumePluginDir [Required]
+string
-

LeaderMigration holds the configuration for Leader Migration.

+

flexVolumePluginDir is the full path of the directory in which the flex +volume plugin should search for additional third-party volume plugins

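For orientation, the sketch below shows one way the binder, volume, and recycler settings documented above could nest inside a kube-controller-manager configuration. The `PersistentVolumeBinderController` key follows the Go field name and every value is an illustrative assumption, not a documented default.

```yaml
# Hypothetical configuration fragment; nesting follows the field names above,
# and all values are placeholders for illustration.
PersistentVolumeBinderController:
  PVClaimBinderSyncPeriod: 15s
  VolumeConfiguration:
    EnableHostPathProvisioning: false
    EnableDynamicProvisioning: true
    FlexVolumePluginDir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
    PersistentVolumeRecyclerConfiguration:
      MaximumRetry: 3
      MinimumTimeoutNFS: 300
      IncrementTimeoutNFS: 30
      MinimumTimeoutHostPath: 60
      IncrementTimeoutHostPath: 30
```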
- ## `KubeProxyConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration} diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md index cb07bc0654b23..d2159b93e1b55 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md @@ -19,6 +19,7 @@ auto_generated: true - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1-VolumeBindingArgs) + ## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} @@ -119,10 +120,10 @@ enableProfiling is true.

**Appears in:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) +

LeaderElectionConfiguration defines the configuration of leader election clients for components that can run with leader election enabled.

@@ -200,7 +201,6 @@ during leader election cycles.

- ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs} diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md index 6fc64f5bba2d4..7060addcd1119 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md @@ -19,6 +19,182 @@ auto_generated: true - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs) + + +## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} + + +**Appears in:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

ClientConnectionConfiguration contains details for constructing a client.

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
kubeconfig [Required]
+string +
+

kubeconfig is the path to a KubeConfig file.

+
acceptContentTypes [Required]
+string +
+

acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the +default value of 'application/json'. This field will control all connections to the server used by a particular +client.

+
contentType [Required]
+string +
+

contentType is the content type used when sending data to the server from this client.

+
qps [Required]
+float32 +
+

qps controls the number of queries per second allowed for this connection.

+
burst [Required]
+int32 +
+

burst allows extra queries to accumulate when a client is exceeding its rate.

+
+ +## `DebuggingConfiguration` {#DebuggingConfiguration} + + +**Appears in:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

DebuggingConfiguration holds configuration for Debugging related features.

+ + + + + + + + + + + + + + +
FieldDescription
enableProfiling [Required]
+bool +
+

enableProfiling enables profiling via web interface host:port/debug/pprof/

+
enableContentionProfiling [Required]
+bool +
+

enableContentionProfiling enables block profiling, if +enableProfiling is true.

+
+ +## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} + + +**Appears in:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

LeaderElectionConfiguration defines the configuration of leader election +clients for components that can run with leader election enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
leaderElect [Required]
+bool +
+

leaderElect enables a leader election client to gain leadership +before executing the main loop. Enable this when running replicated +components for high availability.

+
leaseDuration [Required]
+meta/v1.Duration +
+

leaseDuration is the duration that non-leader candidates will wait +after observing a leadership renewal until attempting to acquire +leadership of a led but unrenewed leader slot. This is effectively the +maximum duration that a leader can be stopped before it is replaced +by another candidate. This is only applicable if leader election is +enabled.

+
renewDeadline [Required]
+meta/v1.Duration +
+

renewDeadline is the interval between attempts by the acting master to +renew a leadership slot before it stops leading. This must be less +than or equal to the lease duration. This is only applicable if leader +election is enabled.

+
retryPeriod [Required]
+meta/v1.Duration +
+

retryPeriod is the duration the clients should wait between attempting +acquisition and renewal of a leadership. This is only applicable if +leader election is enabled.

+
resourceLock [Required]
+string +
+

resourceLock indicates the resource object type that will be used to lock +during leader election cycles.

+
resourceName [Required]
+string +
+

resourceName indicates the name of resource object that will be used to lock +during leader election cycles.

+
resourceNamespace [Required]
+string +
+

resourceNamespace indicates the namespace of the resource object that will be used to lock +during leader election cycles.

+
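To tie the shared types above together, here is a small, hedged example of a KubeSchedulerConfiguration that sets the client connection and leader election options documented in the preceding tables. The kubeconfig path is a placeholder and the values only mirror commonly used defaults.

```yaml
# Illustrative scheduler configuration; path and values are assumptions.
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # placeholder path
  qps: 50
  burst: 100
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s
  retryPeriod: 2s
  resourceLock: leases
  resourceName: kube-scheduler
  resourceNamespace: kube-system
```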
+ ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta3-DefaultPreemptionArgs} @@ -1074,180 +1250,4 @@ Weight defaults to 1 if not specified or explicitly set to 0.

- - - - -## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

ClientConnectionConfiguration contains details for constructing a client.

- - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
kubeconfig [Required]
-string -
-

kubeconfig is the path to a KubeConfig file.

-
acceptContentTypes [Required]
-string -
-

acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the -default value of 'application/json'. This field will control all connections to the server used by a particular -client.

-
contentType [Required]
-string -
-

contentType is the content type used when sending data to the server from this client.

-
qps [Required]
-float32 -
-

qps controls the number of queries per second allowed for this connection.

-
burst [Required]
-int32 -
-

burst allows extra queries to accumulate when a client is exceeding its rate.

-
- -## `DebuggingConfiguration` {#DebuggingConfiguration} - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

DebuggingConfiguration holds configuration for Debugging related features.

- - - - - - - - - - - - - - -
FieldDescription
enableProfiling [Required]
-bool -
-

enableProfiling enables profiling via web interface host:port/debug/pprof/

-
enableContentionProfiling [Required]
-bool -
-

enableContentionProfiling enables block profiling, if -enableProfiling is true.

-
- -## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

LeaderElectionConfiguration defines the configuration of leader election -clients for components that can run with leader election enabled.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
leaderElect [Required]
-bool -
-

leaderElect enables a leader election client to gain leadership -before executing the main loop. Enable this when running replicated -components for high availability.

-
leaseDuration [Required]
-meta/v1.Duration -
-

leaseDuration is the duration that non-leader candidates will wait -after observing a leadership renewal until attempting to acquire -leadership of a led but unrenewed leader slot. This is effectively the -maximum duration that a leader can be stopped before it is replaced -by another candidate. This is only applicable if leader election is -enabled.

-
renewDeadline [Required]
-meta/v1.Duration -
-

renewDeadline is the interval between attempts by the acting master to -renew a leadership slot before it stops leading. This must be less -than or equal to the lease duration. This is only applicable if leader -election is enabled.

-
retryPeriod [Required]
-meta/v1.Duration -
-

retryPeriod is the duration the clients should wait between attempting -acquisition and renewal of a leadership. This is only applicable if -leader election is enabled.

-
resourceLock [Required]
-string -
-

resourceLock indicates the resource object type that will be used to lock -during leader election cycles.

-
resourceName [Required]
-string -
-

resourceName indicates the name of resource object that will be used to lock -during leader election cycles.

-
resourceNamespace [Required]
-string -
-

resourceName indicates the namespace of resource object that will be used to lock -during leader election cycles.

-
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md index 3972691620bb0..9d94c614de168 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -264,6 +264,109 @@ node only (e.g. the node ip).

- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration) + + +## `BootstrapToken` {#BootstrapToken} + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) + + +

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
token [Required]
+BootstrapTokenString +
+

token is used for establishing bidirectional trust between nodes and control-planes. +Used for joining nodes in the cluster.

+
description
+string +
+

description sets a human-friendly message why this token exists and what it's used +for, so other administrators can know its purpose.

+
ttl
+meta/v1.Duration +
+

ttl defines the time to live for this token. Defaults to 24h. +expires and ttl are mutually exclusive.

+
expires
+meta/v1.Time +
+

expires specifies the timestamp when this token expires. Defaults to being set +dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

+
usages
+[]string +
+

usages describes the ways in which this token can be used. Can by default be used +for establishing bidirectional trust, but that can be changed here.

+
groups
+[]string +
+

groups specifies the extra groups that this token will authenticate as when/if +used for authentication

+
+ +## `BootstrapTokenString` {#BootstrapTokenString} + + +**Appears in:** + +- [BootstrapToken](#BootstrapToken) + + +

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used +for both validation of the practicality of the API server from a joining node's point +of view and as an authentication method for the node in the bootstrap phase of +"kubeadm join". This token is, and should be, short-lived.
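A hedged example of how such a token is declared when initializing a cluster is shown below. The token value, description, and group are illustrative placeholders; the field names follow the BootstrapToken table above.

```yaml
# Illustrative InitConfiguration fragment; the token and names are placeholders.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
  - token: "abcdef.0123456789abcdef"   # placeholder token in the "id.secret" format
    description: "token for joining worker nodes"
    ttl: "24h0m0s"
    usages:
      - signing
      - authentication
    groups:
      - system:bootstrappers:kubeadm:default-node-token
```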

+ + + + + + + + + + + + + + +
FieldDescription
- [Required]
+string +
+ No description provided.
- [Required]
+string +
+ No description provided.
+ ## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta3-ClusterConfiguration} @@ -1237,107 +1340,4 @@ first alpha-numerically.

- - - - -## `BootstrapToken` {#BootstrapToken} - - -**Appears in:** - -- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - - -

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

- - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
token [Required]
-BootstrapTokenString -
-

token is used for establishing bidirectional trust between nodes and control-planes. -Used for joining nodes in the cluster.

-
description
-string -
-

description sets a human-friendly message why this token exists and what it's used -for, so other administrators can know its purpose.

-
ttl
-meta/v1.Duration -
-

ttl defines the time to live for this token. Defaults to 24h. -expires and ttl are mutually exclusive.

-
expires
-meta/v1.Time -
-

expires specifies the timestamp when this token expires. Defaults to being set -dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

-
usages
-[]string -
-

usages describes the ways in which this token can be used. Can by default be used -for establishing bidirectional trust, but that can be changed here.

-
groups
-[]string -
-

groups specifies the extra groups that this token will authenticate as when/if -used for authentication

-
- -## `BootstrapTokenString` {#BootstrapTokenString} - - -**Appears in:** - -- [BootstrapToken](#BootstrapToken) - - -

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used -for both validation of the practically of the API server from a joining node's point -of view and as an authentication method for the node in the bootstrap phase of -"kubeadm join". This token is and should be short-lived.

- - - - - - - - - - - - - - -
FieldDescription
- [Required]
-string -
- No description provided.
- [Required]
-string -
- No description provided.
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md index f7349db30c606..1689232505839 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md @@ -291,6 +291,111 @@ node only (e.g. the node ip).

- [ResetConfiguration](#kubeadm-k8s-io-v1beta4-ResetConfiguration) + + +## `BootstrapToken` {#BootstrapToken} + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) + +- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration) + + +

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
token [Required]
+BootstrapTokenString +
+

token is used for establishing bidirectional trust between nodes and control-planes. +Used for joining nodes in the cluster.

+
description
+string +
+

description sets a human-friendly message why this token exists and what it's used +for, so other administrators can know its purpose.

+
ttl
+meta/v1.Duration +
+

ttl defines the time to live for this token. Defaults to 24h. +expires and ttl are mutually exclusive.

+
expires
+meta/v1.Time +
+

expires specifies the timestamp when this token expires. Defaults to being set +dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

+
usages
+[]string +
+

usages describes the ways in which this token can be used. Can by default be used +for establishing bidirectional trust, but that can be changed here.

+
groups
+[]string +
+

groups specifies the extra groups that this token will authenticate as when/if +used for authentication

+
+ +## `BootstrapTokenString` {#BootstrapTokenString} + + +**Appears in:** + +- [BootstrapToken](#BootstrapToken) + + +

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used +for both validation of the practicality of the API server from a joining node's point +of view and as an authentication method for the node in the bootstrap phase of +"kubeadm join". This token is, and should be, short-lived.

+ + + + + + + + + + + + + + +
FieldDescription
- [Required]
+string +
+ No description provided.
- [Required]
+string +
+ No description provided.
+ ## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta4-ClusterConfiguration} @@ -424,7 +529,7 @@ information.

bootstrapTokens
-[]invalid type +[]BootstrapToken

BootstrapTokens is respected at kubeadm init time and describes a set of Bootstrap Tokens to create. @@ -1322,107 +1427,4 @@ first alpha-numerically.

- - - - -## `BootstrapToken` {#BootstrapToken} - - -**Appears in:** - -- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - - -

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

- - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
token [Required]
-BootstrapTokenString -
-

token is used for establishing bidirectional trust between nodes and control-planes. -Used for joining nodes in the cluster.

-
description
-string -
-

description sets a human-friendly message why this token exists and what it's used -for, so other administrators can know its purpose.

-
ttl
-meta/v1.Duration -
-

ttl defines the time to live for this token. Defaults to 24h. -expires and ttl are mutually exclusive.

-
expires
-meta/v1.Time -
-

expires specifies the timestamp when this token expires. Defaults to being set -dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

-
usages
-[]string -
-

usages describes the ways in which this token can be used. Can by default be used -for establishing bidirectional trust, but that can be changed here.

-
groups
-[]string -
-

groups specifies the extra groups that this token will authenticate as when/if -used for authentication

-
- -## `BootstrapTokenString` {#BootstrapTokenString} - - -**Appears in:** - -- [BootstrapToken](#BootstrapToken) - - -

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used -for both validation of the practically of the API server from a joining node's point -of view and as an authentication method for the node in the bootstrap phase of -"kubeadm join". This token is and should be short-lived.

- - - - - - - - - - - - - - -
FieldDescription
- [Required]
-string -
- No description provided.
- [Required]
-string -
- No description provided.
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeconfig.v1.md b/content/en/docs/reference/config-api/kubeconfig.v1.md index 42cf3bd7cc9c6..72a5c63358ce8 100644 --- a/content/en/docs/reference/config-api/kubeconfig.v1.md +++ b/content/en/docs/reference/config-api/kubeconfig.v1.md @@ -11,6 +11,83 @@ auto_generated: true - [Config](#Config) + + +## `Config` {#Config} + + + +

Config holds the information needed to connect to remote Kubernetes clusters as a given user

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
/v1
kind
string
Config
kind
+string +
+

Legacy field from pkg/api/types.go TypeMeta. +TODO(jlowdermilk): remove this after eliminating downstream dependencies.

+
apiVersion
+string +
+

Legacy field from pkg/api/types.go TypeMeta. +TODO(jlowdermilk): remove this after eliminating downstream dependencies.

+
preferences [Required]
+Preferences +
+

Preferences holds general information to be used for CLI interactions

+
clusters [Required]
+[]NamedCluster +
+

Clusters is a map of referenceable names to cluster configs

+
users [Required]
+[]NamedAuthInfo +
+

AuthInfos is a map of referenceable names to user configs

+
contexts [Required]
+[]NamedContext +
+

Contexts is a map of referenceable names to context configs

+
current-context [Required]
+string +
+

CurrentContext is the name of the context that you would like to use by default

+
extensions
+[]NamedExtension +
+

Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields

+
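For reference, a minimal kubeconfig that exercises these fields might look like the sketch below; the cluster, user, and context names, the server address, and the empty credentials stanza are all placeholders.

```yaml
# Minimal illustrative kubeconfig; all names and the server address are placeholders.
apiVersion: v1
kind: Config
preferences: {}
clusters:
  - name: example-cluster
    cluster:
      server: https://203.0.113.10:6443
users:
  - name: example-admin
    user: {}              # credentials omitted in this sketch
contexts:
  - name: example-context
    context:
      cluster: example-cluster
      user: example-admin
current-context: example-context
```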
## `AuthInfo` {#AuthInfo} diff --git a/content/en/docs/reference/config-api/kubelet-config.v1.md b/content/en/docs/reference/config-api/kubelet-config.v1.md index cd7d676e072db..24ba05ca33e2a 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1.md @@ -11,7 +11,6 @@ auto_generated: true - [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig) - ## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1-CredentialProviderConfig} @@ -82,7 +81,7 @@ and URL path.

Each entry in matchImages is a pattern which can optionally contain a port and a path. Globs can be used in the domain, but not in the port or the path. Globs are supported as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. -Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match +Matching partial subdomains like 'app.k8s.io' is also supported. Each glob can only match a single subdomain segment, so *.io does not match *.k8s.io.

A match exists between an image and a matchImage when all of the below are true:

    diff --git a/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md b/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md index 6082c2f7ecfe1..99602ebceef6f 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md @@ -11,7 +11,6 @@ auto_generated: true - [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig) - ## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig} diff --git a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md index 877e3c2240468..a760d11d1cd8a 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md @@ -14,6 +14,279 @@ auto_generated: true - [SerializedNodeConfigSource](#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource) + + +## `FormatOptions` {#FormatOptions} + + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    FormatOptions contains options for the different logging formats.

    + + + + + + + + + + + +
    FieldDescription
    json [Required]
    +JSONOptions +
    +

    [Alpha] JSON contains options for logging format "json". +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
    + +## `JSONOptions` {#JSONOptions} + + +**Appears in:** + +- [FormatOptions](#FormatOptions) + + +

    JSONOptions contains options for logging format "json".

    + + + + + + + + + + + + + + +
    FieldDescription
    splitStream [Required]
    +bool +
    +

    [Alpha] SplitStream redirects error messages to stderr while +info messages go to stdout, with buffering. The default is to write +both to stdout, without buffering. Only available when +the LoggingAlphaOptions feature gate is enabled.

    +
    infoBufferSize [Required]
    +k8s.io/apimachinery/pkg/api/resource.QuantityValue +
    +

    [Alpha] InfoBufferSize sets the size of the info stream when +using split streams. The default is zero, which disables buffering. +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
    + +## `LogFormatFactory` {#LogFormatFactory} + + + +

    LogFormatFactory provides support for a certain additional, +non-default log format.

    + + + + +## `LoggingConfiguration` {#LoggingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

    LoggingConfiguration contains logging options.

    + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    format [Required]
    +string +
    +

Format flag specifies the structure of log messages. +The default value of format is text.

    +
    flushFrequency [Required]
    +TimeOrMetaDuration +
    +

    Maximum time between log flushes. +If a string, parsed as a duration (i.e. "1s") +If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000). +Ignored if the selected logging backend writes log messages without buffering.

    +
    verbosity [Required]
    +VerbosityLevel +
    +

    Verbosity is the threshold that determines which log messages are +logged. Default is zero which logs only the most important +messages. Higher values enable additional messages. Error messages +are always logged.

    +
    vmodule [Required]
    +VModuleConfiguration +
    +

    VModule overrides the verbosity threshold for individual files. +Only supported for "text" log format.

    +
    options [Required]
    +FormatOptions +
    +

    [Alpha] Options holds additional parameters that are specific +to the different logging formats. Only the options for the selected +format get used, but all of them get validated. +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
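As a rough illustration of how these options fit into a kubelet configuration, consider the sketch below. The values are arbitrary, and the `options.json` settings only take effect when the LoggingAlphaOptions feature gate is enabled.

```yaml
# Illustrative logging section of a KubeletConfiguration; values are placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json          # structured JSON output instead of the default "text"
  flushFrequency: 5s
  verbosity: 3
  options:
    json:
      splitStream: true        # requires the LoggingAlphaOptions feature gate
      infoBufferSize: "64Ki"   # requires the LoggingAlphaOptions feature gate
```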
    + +## `LoggingOptions` {#LoggingOptions} + + + +

    LoggingOptions can be used with ValidateAndApplyWithOptions to override +certain global defaults.

    + + + + + + + + + + + + + + +
    FieldDescription
    ErrorStream [Required]
    +io.Writer +
    +

    ErrorStream can be used to override the os.Stderr default.

    +
    InfoStream [Required]
    +io.Writer +
    +

    InfoStream can be used to override the os.Stdout default.

    +
    + +## `TimeOrMetaDuration` {#TimeOrMetaDuration} + + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    TimeOrMetaDuration is present only for backwards compatibility for the +flushFrequency field, and new fields should use metav1.Duration.

    + + + + + + + + + + + + + + +
    FieldDescription
    Duration [Required]
    +meta/v1.Duration +
    +

    Duration holds the duration

    +
    - [Required]
    +bool +
    +

    SerializeAsString controls whether the value is serialized as a string or an integer

    +
    + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

    TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

    + + + + + + + + + + + + + + +
    FieldDescription
    endpoint
    +string +
    +

Endpoint of the collector this component will report traces to. +The connection is insecure, and does not currently support TLS. +It is recommended to leave this unset, in which case the endpoint falls back to the OTLP gRPC default, localhost:4317.

    +
    samplingRatePerMillion
    +int32 +
    +

SamplingRatePerMillion is the number of samples to collect per million spans. +It is recommended to leave this unset. If unset, the sampler respects its parent span's sampling +rate, but otherwise never samples.

    +
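A hedged sketch of enabling tracing in a kubelet configuration follows. The endpoint shown is the OTLP gRPC default mentioned above, and the sampling rate is an arbitrary illustrative choice that samples every span.

```yaml
# Illustrative tracing section of a KubeletConfiguration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tracing:
  endpoint: localhost:4317        # OTLP gRPC default collector endpoint
  samplingRatePerMillion: 1000000 # placeholder: sample every span
```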
    + +## `VModuleConfiguration` {#VModuleConfiguration} + +(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`) + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    VModuleConfiguration is a collection of individual file names or patterns +and the corresponding verbosity threshold.

    + + + + +## `VerbosityLevel` {#VerbosityLevel} + +(Alias of `uint32`) + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + + +

    VerbosityLevel represents a klog or logr verbosity threshold.

    + + + + ## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig} @@ -1180,366 +1453,100 @@ Default: 0.9

    registerWithTaints are an array of taints to add to a node object when the kubelet registers itself. This only takes effect when registerNode -is true and upon the initial registration of the node. -Default: nil

    - - -registerNode
    -bool - - -

    registerNode enables automatic registration with the apiserver. -Default: true

    - - -tracing
    -TracingConfiguration - - -

    Tracing specifies the versioned configuration for OpenTelemetry tracing clients. -See https://kep.k8s.io/2832 for more details. -Default: nil

    - - -localStorageCapacityIsolation
    -bool - - -

    LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true. -This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way -as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk -usage from the volume exceeds the limit. -This feature depends on the capability of detecting correct root file system disk usage. For certain systems, -such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be -disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir. -Default: true

    - - -containerRuntimeEndpoint [Required]
    -string - - -

    ContainerRuntimeEndpoint is the endpoint of container runtime. -Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. -Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'

    - - -imageServiceEndpoint
    -string - - -

    ImageServiceEndpoint is the endpoint of container image service. -Unix Domain Socket are supported on Linux, while npipe and tcp endpoints are supported on Windows. -Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'. -If not specified, the value in containerRuntimeEndpoint is used.

    - - - - - -## `SerializedNodeConfigSource` {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource} - - - -

    SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource. -This type is used internally by the Kubelet for tracking checkpointed dynamic configs. -It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.

    - - - - - - - - - - - - - - -
    FieldDescription
    apiVersion
    string
    kubelet.config.k8s.io/v1beta1
    kind
    string
    SerializedNodeConfigSource
    source
    -core/v1.NodeConfigSource -
    -

    source is the source that we are serializing.

    -
    - -## `CredentialProvider` {#kubelet-config-k8s-io-v1beta1-CredentialProvider} - - -**Appears in:** - -- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig) - - -

    CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only -invoked when an image being pulled matches the images handled by the plugin (see matchImages).

    - - - - - - - - - - - - - - - - - - - - - - - - - - -
    FieldDescription
    name [Required]
    -string -
    -

    name is the required name of the credential provider. It must match the name of the -provider executable as seen by the kubelet. The executable must be in the kubelet's -bin directory (set by the --image-credential-provider-bin-dir flag).

    -
    matchImages [Required]
    -[]string -
    -

    matchImages is a required list of strings used to match against images in order to -determine if this provider should be invoked. If one of the strings matches the -requested image from the kubelet, the plugin will be invoked and given a chance -to provide credentials. Images are expected to contain the registry domain -and URL path.

    -

    Each entry in matchImages is a pattern which can optionally contain a port and a path. -Globs can be used in the domain, but not in the port or the path. Globs are supported -as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. -Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match -a single subdomain segment, so *.io does not match *.k8s.io.

    -

    A match exists between an image and a matchImage when all of the below are true:

    -
      -
    • Both contain the same number of domain parts and each part matches.
    • -
    • The URL path of an imageMatch must be a prefix of the target image URL path.
    • -
    • If the imageMatch contains a port, then the port must match in the image as well.
    • -
    -

    Example values of matchImages:

    -
      -
    • 123456789.dkr.ecr.us-east-1.amazonaws.com
    • -
    • *.azurecr.io
    • -
    • gcr.io
    • -
    • *.*.registry.io
    • -
    • registry.io:8080/path
    • -
    -
    defaultCacheDuration [Required]
    -meta/v1.Duration -
    -

    defaultCacheDuration is the default duration the plugin will cache credentials in-memory -if a cache duration is not provided in the plugin response. This field is required.

    -
    apiVersion [Required]
    -string -
    -

    Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse -MUST use the same encoding version as the input. Current supported values are:

    -
      -
    • credentialprovider.kubelet.k8s.io/v1beta1
    • -
    -
    args
    -[]string -
    -

    Arguments to pass to the command when executing it.

    -
    env
    -[]ExecEnvVar -
    -

    Env defines additional environment variables to expose to the process. These -are unioned with the host's environment, as well as variables client-go uses -to pass argument to the plugin.

    -
    - -## `ExecEnvVar` {#kubelet-config-k8s-io-v1beta1-ExecEnvVar} - - -**Appears in:** - -- [CredentialProvider](#kubelet-config-k8s-io-v1beta1-CredentialProvider) - - -

    ExecEnvVar is used for setting environment variables when executing an exec-based -credential plugin.

    - - - - - - - - - - - - - - -
    FieldDescription
    name [Required]
    -string -
    - No description provided.
    value [Required]
    -string -
    - No description provided.
    - -## `KubeletAnonymousAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication} - - -**Appears in:** - -- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) - - - - - - - - - - - - -
    FieldDescription
    enabled
    -bool -
    -

    enabled allows anonymous requests to the kubelet server. -Requests that are not rejected by another authentication method are treated as -anonymous requests. -Anonymous requests have a username of system:anonymous, and a group name of -system:unauthenticated.

    -
    - -## `KubeletAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAuthentication} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - - - - - - - - + + - - - -
    FieldDescription
    x509
    -KubeletX509Authentication +is true and upon the initial registration of the node. +Default: nil

    +
    registerNode
    +bool
    -

    x509 contains settings related to x509 client certificate authentication.

    +

    registerNode enables automatic registration with the apiserver. +Default: true

    webhook
    -KubeletWebhookAuthentication +
    tracing
    +TracingConfiguration
    -

    webhook contains settings related to webhook bearer token authentication.

    +

    Tracing specifies the versioned configuration for OpenTelemetry tracing clients. +See https://kep.k8s.io/2832 for more details. +Default: nil

    anonymous
    -KubeletAnonymousAuthentication +
    localStorageCapacityIsolation
    +bool
    -

    anonymous contains settings related to anonymous authentication.

    +

    LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true. +This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way +as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk +usage from the volume exceeds the limit. +This feature depends on the capability of detecting correct root file system disk usage. For certain systems, +such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be +disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir. +Default: true

    - -## `KubeletAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorization} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - - - - - - - - -
    FieldDescription
    mode
    -KubeletAuthorizationMode +
    containerRuntimeEndpoint [Required]
    +string
    -

    mode is the authorization mode to apply to requests to the kubelet server. -Valid values are AlwaysAllow and Webhook. -Webhook mode uses the SubjectAccessReview API to determine authorization.

    +

    ContainerRuntimeEndpoint is the endpoint of container runtime. +Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. +Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'

    webhook
    -KubeletWebhookAuthorization +
    imageServiceEndpoint
    +string
    -

    webhook contains settings related to Webhook authorization.

    +

    ImageServiceEndpoint is the endpoint of container image service. +Unix Domain Socket are supported on Linux, while npipe and tcp endpoints are supported on Windows. +Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'. +If not specified, the value in containerRuntimeEndpoint is used.

    -## `KubeletAuthorizationMode` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode} - -(Alias of `string`) - -**Appears in:** - -- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) - - - - - -## `KubeletWebhookAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication} +## `SerializedNodeConfigSource` {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource} -**Appears in:** - -- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) +

    SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource. +This type is used internally by the Kubelet for tracking checkpointed dynamic configs. +It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.

    + + + - - - -
    FieldDescription
    apiVersion
    string
    kubelet.config.k8s.io/v1beta1
    kind
    string
    SerializedNodeConfigSource
    enabled
    -bool -
    -

    enabled allows bearer token authentication backed by the -tokenreviews.authentication.k8s.io API.

    -
    cacheTTL
    -meta/v1.Duration +
    source
    +core/v1.NodeConfigSource
    -

    cacheTTL enables caching of authentication results

    +

    source is the source that we are serializing.

    -## `KubeletWebhookAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization} +## `CredentialProvider` {#kubelet-config-k8s-io-v1beta1-CredentialProvider} **Appears in:** -- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig) + +

    CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only +invoked when an image being pulled matches the images handled by the plugin (see matchImages).
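Below is a hedged sketch of a CredentialProviderConfig entry that wires up one such plugin. The provider name, argument, and environment variable are hypothetical; the matchImages values reuse the example patterns documented for the matchImages field below.

```yaml
# Illustrative CredentialProviderConfig; the plugin name, args, and env are hypothetical.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: example-credential-provider       # hypothetical plugin binary name
    matchImages:
      - "123456789.dkr.ecr.us-east-1.amazonaws.com"
      - "*.azurecr.io"
      - "registry.io:8080/path"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
    args:
      - get-credentials                      # hypothetical argument
    env:
      - name: EXAMPLE_PROVIDER_PROFILE       # hypothetical variable
        value: default
```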

    @@ -1547,133 +1554,122 @@ tokenreviews.authentication.k8s.io API.

    - - + + + - -
    cacheAuthorizedTTL
    -meta/v1.Duration +
    name [Required]
    +string
    -

    cacheAuthorizedTTL is the duration to cache 'authorized' responses from the -webhook authorizer.

    +

    name is the required name of the credential provider. It must match the name of the +provider executable as seen by the kubelet. The executable must be in the kubelet's +bin directory (set by the --image-credential-provider-bin-dir flag).

    cacheUnauthorizedTTL
    +
    matchImages [Required]
    +[]string +
    +

    matchImages is a required list of strings used to match against images in order to +determine if this provider should be invoked. If one of the strings matches the +requested image from the kubelet, the plugin will be invoked and given a chance +to provide credentials. Images are expected to contain the registry domain +and URL path.

    +

    Each entry in matchImages is a pattern which can optionally contain a port and a path. +Globs can be used in the domain, but not in the port or the path. Globs are supported +as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. +Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match +a single subdomain segment, so *.io does not match *.k8s.io.

    +

    A match exists between an image and a matchImage when all of the below are true:

    +
      +
    • Both contain the same number of domain parts and each part matches.
    • +
    • The URL path of an imageMatch must be a prefix of the target image URL path.
    • +
    • If the imageMatch contains a port, then the port must match in the image as well.
    • +
    +

    Example values of matchImages:

    +
      +
    • 123456789.dkr.ecr.us-east-1.amazonaws.com
    • +
    • *.azurecr.io
    • +
    • gcr.io
    • +
    • *.*.registry.io
    • +
    • registry.io:8080/path
    • +
    +
    defaultCacheDuration [Required]
    meta/v1.Duration
    -

    cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from -the webhook authorizer.

    +

    defaultCacheDuration is the default duration the plugin will cache credentials in-memory +if a cache duration is not provided in the plugin response. This field is required.

    - -## `KubeletX509Authentication` {#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication} - - -**Appears in:** - -- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) - - - - - - - - - - -
    FieldDescription
    clientCAFile
    +
    apiVersion [Required]
    string
    -

    clientCAFile is the path to a PEM-encoded certificate bundle. If set, any request -presenting a client certificate signed by one of the authorities in the bundle -is authenticated with a username corresponding to the CommonName, -and groups corresponding to the Organization in the client certificate.

    +

    Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse +MUST use the same encoding version as the input. Current supported values are:

    +
      +
    • credentialprovider.kubelet.k8s.io/v1beta1
    • +
    - -## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - -

    MemoryReservation specifies the memory reservation of different types for each NUMA node

    - - - - - - - - +

    Arguments to pass to the command when executing it.

    + - +

Env defines additional environment variables to expose to the process. These +are unioned with the host's environment, as well as variables client-go uses +to pass arguments to the plugin.

    +
    FieldDescription
    numaNode [Required]
    -int32 +
    args
    +[]string
    - No description provided.
    limits [Required]
    -core/v1.ResourceList +
    env
    +[]ExecEnvVar
    - No description provided.
    -## `MemorySwapConfiguration` {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration} +## `ExecEnvVar` {#kubelet-config-k8s-io-v1beta1-ExecEnvVar} **Appears in:** -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +- [CredentialProvider](#kubelet-config-k8s-io-v1beta1-CredentialProvider) +

    ExecEnvVar is used for setting environment variables when executing an exec-based +credential plugin.

    + - + + +
    FieldDescription
    swapBehavior
    +
    name [Required]
    string
    -

    swapBehavior configures swap memory available to container workloads. May be one of -"", "LimitedSwap": workload combined memory and swap usage cannot exceed pod memory limit -"UnlimitedSwap": workloads can use unlimited swap, up to the allocatable limit.

    + No description provided.
    value [Required]
    +string
    + No description provided.
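Taken together, `CredentialProviderConfig`, `CredentialProvider`, and `ExecEnvVar` describe a single file that the kubelet reads through its `--image-credential-provider-config` flag. The sketch below is illustrative only: the plugin name, file path, registry patterns, subcommand, and environment variable are assumptions, not values required by the API.

```shell
# Minimal sketch of a credential provider configuration for the kubelet.
# The plugin name, path, and registries below are hypothetical examples.
cat <<'EOF' | sudo tee /etc/kubernetes/image-credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider        # must match the executable in --image-credential-provider-bin-dir
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"      # each glob matches exactly one subdomain segment
      - "registry.example.com:8080/path" # ports and paths are matched literally, without globs
    defaultCacheDuration: "12h"          # used when the plugin response omits a cache duration
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
    args:
      - get-credentials                  # hypothetical subcommand passed to the plugin
    env:
      - name: AWS_PROFILE                # each ExecEnvVar is added to the plugin's environment
        value: example
EOF
```

The kubelet would then be started with `--image-credential-provider-config` pointing at this file and `--image-credential-provider-bin-dir` pointing at the directory that contains the plugin executable.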
    -## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy} - -(Alias of `string`) - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - -

    ResourceChangeDetectionStrategy denotes a mode in which internal -managers (secret, configmap) are discovering object changes.

    - - - - -## `ShutdownGracePeriodByPodPriority` {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority} +## `KubeletAnonymousAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication} -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +**Appears in:** +- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) -

    ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value

    @@ -1681,35 +1677,27 @@ managers (secret, configmap) are discovering object changes.

    - - - -
    priority [Required]
    -int32 -
    -

    priority is the priority value associated with the shutdown grace period

    -
    shutdownGracePeriodSeconds [Required]
    -int64 +
    enabled
    +bool
    -

    shutdownGracePeriodSeconds is the shutdown grace period in seconds

    +

    enabled allows anonymous requests to the kubelet server. +Requests that are not rejected by another authentication method are treated as +anonymous requests. +Anonymous requests have a username of system:anonymous, and a group name of +system:unauthenticated.

    - - - -## `FormatOptions` {#FormatOptions} +## `KubeletAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAuthentication} **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) - +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    FormatOptions contains options for the different logging formats.

    @@ -1717,26 +1705,37 @@ managers (secret, configmap) are discovering object changes.

    - + + + + + +
    json [Required]
    -JSONOptions +
    x509
    +KubeletX509Authentication
    -

    [Alpha] JSON contains options for logging format "json". -Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    x509 contains settings related to x509 client certificate authentication.

    +
    webhook
    +KubeletWebhookAuthentication +
    +

    webhook contains settings related to webhook bearer token authentication.

    +
    anonymous
    +KubeletAnonymousAuthentication +
    +

    anonymous contains settings related to anonymous authentication.
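The three fields above make up the `authentication` block of a kubelet configuration file. A minimal sketch, assuming a scratch output path and a CA bundle at `/etc/kubernetes/pki/ca.crt` (both assumptions, not defaults imposed by this API):

```shell
# Sketch of the authentication stanza of a KubeletConfiguration.
# The output path and the CA bundle location are illustrative.
cat <<'EOF' > /tmp/kubelet-config-sketch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false            # reject requests not authenticated by another method
  webhook:
    enabled: true             # validate bearer tokens through the TokenReview API
    cacheTTL: "2m"
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
EOF
```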

    -## `JSONOptions` {#JSONOptions} +## `KubeletAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorization} **Appears in:** -- [FormatOptions](#FormatOptions) - +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    JSONOptions contains options for logging format "json".

    @@ -1744,47 +1743,44 @@ Only available when the LoggingAlphaOptions feature gate is enabled.

    - -
    splitStream [Required]
    -bool +
    mode
    +KubeletAuthorizationMode
    -

    [Alpha] SplitStream redirects error messages to stderr while -info messages go to stdout, with buffering. The default is to write -both to stdout, without buffering. Only available when -the LoggingAlphaOptions feature gate is enabled.

    +

    mode is the authorization mode to apply to requests to the kubelet server. +Valid values are AlwaysAllow and Webhook. +Webhook mode uses the SubjectAccessReview API to determine authorization.

    infoBufferSize [Required]
    -k8s.io/apimachinery/pkg/api/resource.QuantityValue +
    webhook
    +KubeletWebhookAuthorization
    -

    [Alpha] InfoBufferSize sets the size of the info stream when -using split streams. The default is zero, which disables buffering. -Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    webhook contains settings related to Webhook authorization.
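These fields form the matching `authorization` block; the TTL values below are arbitrary placeholders, appended here to the same illustrative sketch file used in the previous example (any scratch path works):

```shell
# Sketch of the authorization stanza that sits alongside `authentication`.
cat <<'EOF' >> /tmp/kubelet-config-sketch.yaml
authorization:
  mode: Webhook                  # delegate decisions to the API server via SubjectAccessReview
  webhook:
    cacheAuthorizedTTL: "5m"     # how long 'authorized' answers are cached
    cacheUnauthorizedTTL: "30s"  # how long 'unauthorized' answers are cached
EOF
```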

    -## `LogFormatFactory` {#LogFormatFactory} +## `KubeletAuthorizationMode` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode} +(Alias of `string`) +**Appears in:** -

    LogFormatFactory provides support for a certain additional, -non-default log format.

    +- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) -## `LoggingConfiguration` {#LoggingConfiguration} + +## `KubeletWebhookAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication} **Appears in:** -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - +- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) -

    LoggingConfiguration contains logging options.

    @@ -1792,61 +1788,64 @@ non-default log format.

    - - - - - - +
    format [Required]
    -string -
    -

    Format Flag specifies the structure of log messages. -default value of format is text

    -
    flushFrequency [Required]
    -TimeOrMetaDuration +
    enabled
    +bool
    -

    Maximum time between log flushes. -If a string, parsed as a duration (i.e. "1s") -If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000). -Ignored if the selected logging backend writes log messages without buffering.

    +

    enabled allows bearer token authentication backed by the +tokenreviews.authentication.k8s.io API.

    verbosity [Required]
    -VerbosityLevel +
    cacheTTL
    +meta/v1.Duration
    -

    Verbosity is the threshold that determines which log messages are -logged. Default is zero which logs only the most important -messages. Higher values enable additional messages. Error messages -are always logged.

    +

    cacheTTL enables caching of authentication results

    vmodule [Required]
    -VModuleConfiguration +
    + +## `KubeletWebhookAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization} + + +**Appears in:** + +- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) + + + + + + + + + -
    FieldDescription
    cacheAuthorizedTTL
    +meta/v1.Duration
    -

    VModule overrides the verbosity threshold for individual files. -Only supported for "text" log format.

    +

    cacheAuthorizedTTL is the duration to cache 'authorized' responses from the +webhook authorizer.

    options [Required]
    -FormatOptions +
    cacheUnauthorizedTTL
    +meta/v1.Duration
    -

    [Alpha] Options holds additional parameters that are specific -to the different logging formats. Only the options for the selected -format get used, but all of them get validated. -Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from +the webhook authorizer.

    -## `LoggingOptions` {#LoggingOptions} +## `KubeletX509Authentication` {#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication} +**Appears in:** + +- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) -

    LoggingOptions can be used with ValidateAndApplyWithOptions to override -certain global defaults.

    @@ -1854,33 +1853,28 @@ certain global defaults.

    - - - -
    ErrorStream [Required]
    -io.Writer -
    -

    ErrorStream can be used to override the os.Stderr default.

    -
    InfoStream [Required]
    -io.Writer +
    clientCAFile
    +string
    -

    InfoStream can be used to override the os.Stdout default.

    +

    clientCAFile is the path to a PEM-encoded certificate bundle. If set, any request +presenting a client certificate signed by one of the authorities in the bundle +is authenticated with a username corresponding to the CommonName, +and groups corresponding to the Organization in the client certificate.
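Because the x509 authenticator maps a client certificate's CommonName to a username and its Organization entries to groups, inspecting a certificate's subject shows the identity the kubelet would derive from it. This is only a verification sketch; the certificate path and the output shown are illustrative:

```shell
# Show the subject of a client certificate presented to the kubelet:
# CN becomes the username, each O entry becomes a group.
openssl x509 -noout -subject -in /var/lib/kubelet/pki/client-example.crt
# Illustrative output: subject=O = system:nodes, CN = system:node:node-1
```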

    -## `TimeOrMetaDuration` {#TimeOrMetaDuration} +## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation} **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    TimeOrMetaDuration is present only for backwards compatibility for the -flushFrequency field, and new fields should use metav1.Duration.

    +

    MemoryReservation specifies the memory reservation of different types for each NUMA node

    @@ -1888,24 +1882,22 @@ flushFrequency field, and new fields should use metav1.Duration.

    - + No description provided. - + No description provided.
    Duration [Required]
    -meta/v1.Duration +
    numaNode [Required]
    +int32
    -

    Duration holds the duration

    -
    - [Required]
    -bool +
    limits [Required]
    +core/v1.ResourceList
    -

    SerializeAsString controls whether the value is serialized as a string or an integer

    -
    -## `TracingConfiguration` {#TracingConfiguration} +## `MemorySwapConfiguration` {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration} **Appears in:** @@ -1913,60 +1905,69 @@ flushFrequency field, and new fields should use metav1.Duration.

    - [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

    - - - - -
    FieldDescription
    endpoint
    +
    swapBehavior
    string
    -

    Endpoint of the collector this component will report traces to. -The connection is insecure, and does not currently support TLS. -Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

    -
    samplingRatePerMillion
    -int32 -
    -

    SamplingRatePerMillion is the number of samples to collect per million spans. -Recommended is unset. If unset, sampler respects its parent span's sampling -rate, but otherwise never samples.

    +

    swapBehavior configures swap memory available to container workloads. May be one of +"", "LimitedSwap": workload combined memory and swap usage cannot exceed pod memory limit +"UnlimitedSwap": workloads can use unlimited swap, up to the allocatable limit.

    -## `VModuleConfiguration` {#VModuleConfiguration} +## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy} -(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`) +(Alias of `string`) **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    VModuleConfiguration is a collection of individual file names or patterns -and the corresponding verbosity threshold.

    +

    ResourceChangeDetectionStrategy denotes a mode in which internal +managers (secret, configmap) are discovering object changes.

    -## `VerbosityLevel` {#VerbosityLevel} +## `ShutdownGracePeriodByPodPriority` {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority} -(Alias of `uint32`) **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) - +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    VerbosityLevel represents a klog or logr verbosity threshold.

    +

    ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value

    + + + + + + + + + + + + +
    FieldDescription
    priority [Required]
    +int32 +
    +

    priority is the priority value associated with the shutdown grace period

    +
    shutdownGracePeriodSeconds [Required]
    +int64 +
    +

    shutdownGracePeriodSeconds is the shutdown grace period in seconds
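In a kubelet configuration file these three types appear as the `reservedMemory`, `memorySwap`, and `shutdownGracePeriodByPodPriority` fields of `KubeletConfiguration`; the NUMA node, quantities, and priority thresholds below are placeholder values in an illustrative sketch:

```shell
# Sketch of how MemoryReservation, MemorySwapConfiguration, and
# ShutdownGracePeriodByPodPriority surface in a KubeletConfiguration file.
cat <<'EOF' > /tmp/kubelet-resources-sketch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
reservedMemory:                      # []MemoryReservation
  - numaNode: 0
    limits:
      memory: 1Gi
memorySwap:                          # MemorySwapConfiguration
  swapBehavior: LimitedSwap
shutdownGracePeriodByPodPriority:    # []ShutdownGracePeriodByPodPriority
  - priority: 100000                 # grace period applied to pods at or above this priority value
    shutdownGracePeriodSeconds: 10
  - priority: 0
    shutdownGracePeriodSeconds: 60
EOF
```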

    +
    + diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md index 9c8b754443e5a..579bbb7080570 100644 --- a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest) - [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse) - ## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest} diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md index c8a7bd682e60a..309ae2295fdc9 100644 --- a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md @@ -12,7 +12,6 @@ auto_generated: true - [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderRequest) - [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderResponse) - ## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderRequest} diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md index 7384939b5f35b..352157d626c87 100644 --- a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md @@ -12,7 +12,6 @@ auto_generated: true - [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderRequest) - [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderResponse) - ## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderRequest} @@ -110,7 +109,7 @@ stopping after the first successfully authenticated pull.

  • 123456789.dkr.ecr.us-east-1.amazonaws.com
  • *.azurecr.io
  • gcr.io
  • - *.*registry.io
  • + *.*.registry.io
  • registry.io:8080/path
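When one of these patterns matches the image being pulled, the kubelet runs the plugin and expects a `CredentialProviderResponse` on its standard output. The toy plugin below is purely illustrative: the registry, the credentials, and the idea of returning a static answer are invented to show the shape of the exchange, not a real provider.

```shell
#!/usr/bin/env bash
# Hypothetical exec credential plugin: the kubelet sends a CredentialProviderRequest
# on stdin; this sketch ignores it and prints a fixed CredentialProviderResponse.
cat > /dev/null   # a real plugin would parse the requested image from stdin
cat <<'EOF'
{
  "apiVersion": "credentialprovider.kubelet.k8s.io/v1beta1",
  "kind": "CredentialProviderResponse",
  "cacheKeyType": "Registry",
  "cacheDuration": "6h",
  "auth": {
    "registry.io:8080": {
      "username": "example-user",
      "password": "example-token"
    }
  }
}
EOF
```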
diff --git a/content/en/docs/reference/glossary/container-runtime-interface.md b/content/en/docs/reference/glossary/container-runtime-interface.md index e3ad8f5b092a0..c2dab628efb04 100644 --- a/content/en/docs/reference/glossary/container-runtime-interface.md +++ b/content/en/docs/reference/glossary/container-runtime-interface.md @@ -17,6 +17,6 @@ The main protocol for the communication between the {{< glossary_tooltip text="k The Kubernetes Container Runtime Interface (CRI) defines the main [gRPC](https://grpc.io) protocol for the communication between the -[cluster components](/docs/concepts/overview/components/#node-components) +[node components](/docs/concepts/overview/components/#node-components) {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. diff --git a/content/en/docs/reference/glossary/group-version-resource.md b/content/en/docs/reference/glossary/group-version-resource.md new file mode 100644 index 0000000000000..cdd208fd5ed8e --- /dev/null +++ b/content/en/docs/reference/glossary/group-version-resource.md @@ -0,0 +1,18 @@ +--- +title: Group Version Resource +id: gvr +date: 2023-07-24 +short_description: > + The API group, API version and name of a Kubernetes API. + +aka: ["GVR"] +tags: +- architecture +--- +Means of representing unique Kubernetes API resource. + + + +Group Version Resources (GVRs) specify the API group, API version, and resource (name for the object kind as it appears in the URI) associated with accessing a particular id of object in Kubernetes. +GVRs let you define and distinguish different Kubernetes objects, and to specify a way of accessing +objects that is stable even as APIs change. \ No newline at end of file diff --git a/content/en/docs/reference/issues-security/security.md b/content/en/docs/reference/issues-security/security.md index e5d2a565ddfdc..64333620ee3a2 100644 --- a/content/en/docs/reference/issues-security/security.md +++ b/content/en/docs/reference/issues-security/security.md @@ -13,21 +13,27 @@ weight: 20 This page describes Kubernetes security and disclosure information. - ## Security Announcements -Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) group for emails about security and major API announcements. +Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) +group for emails about security and major API announcements. ## Report a Vulnerability -We're extremely grateful for security researchers and users that report vulnerabilities to the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers. +We're extremely grateful for security researchers and users that report vulnerabilities to +the Kubernetes Open Source Community. All reports are thoroughly investigated by a set of community volunteers. -To make a report, submit your vulnerability to the [Kubernetes bug bounty program](https://hackerone.com/kubernetes). This allows triage and handling of the vulnerability with standardized response times. +To make a report, submit your vulnerability to the [Kubernetes bug bounty program](https://hackerone.com/kubernetes). +This allows triage and handling of the vulnerability with standardized response times. 
-You can also email the private [security@kubernetes.io](mailto:security@kubernetes.io) list with the security details and the details expected for [all Kubernetes bug reports](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE/bug-report.yaml). +You can also email the private [security@kubernetes.io](mailto:security@kubernetes.io) +list with the security details and the details expected for +[all Kubernetes bug reports](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE/bug-report.yaml). -You may encrypt your email to this list using the GPG keys of the [Security Response Committee members](https://git.k8s.io/security/README.md#product-security-committee-psc). Encryption using GPG is NOT required to make a disclosure. +You may encrypt your email to this list using the GPG keys of the +[Security Response Committee members](https://git.k8s.io/security/README.md#product-security-committee-psc). +Encryption using GPG is NOT required to make a disclosure. ### When Should I Report a Vulnerability? @@ -36,7 +42,6 @@ You may encrypt your email to this list using the GPG keys of the [Security Resp - You think you discovered a vulnerability in another project that Kubernetes depends on - For projects with their own vulnerability reporting and disclosure process, please report it directly there - ### When Should I NOT Report a Vulnerability? - You need help tuning Kubernetes components for security @@ -45,13 +50,19 @@ You may encrypt your email to this list using the GPG keys of the [Security Resp ## Security Vulnerability Response -Each report is acknowledged and analyzed by Security Response Committee members within 3 working days. This will set off the [Security Release Process](https://git.k8s.io/security/security-release-process.md#disclosures). +Each report is acknowledged and analyzed by Security Response Committee members within 3 working days. +This will set off the [Security Release Process](https://git.k8s.io/security/security-release-process.md#disclosures). -Any vulnerability information shared with Security Response Committee stays within Kubernetes project and will not be disseminated to other projects unless it is necessary to get the issue fixed. +Any vulnerability information shared with Security Response Committee stays within Kubernetes project +and will not be disseminated to other projects unless it is necessary to get the issue fixed. As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated. ## Public Disclosure Timing -A public disclosure date is negotiated by the Kubernetes Security Response Committee and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date to be on the order of 7 days. The Kubernetes Security Response Committee holds the final say when setting a disclosure date. - +A public disclosure date is negotiated by the Kubernetes Security Response Committee and the bug submitter. +We prefer to fully disclose the bug as soon as possible once a user mitigation is available. 
It is reasonable +to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, +or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) +to a few weeks. For a vulnerability with a straightforward mitigation, we expect report date to disclosure date +to be on the order of 7 days. The Kubernetes Security Response Committee holds the final say when setting a disclosure date. diff --git a/content/en/docs/reference/kubectl/_index.md b/content/en/docs/reference/kubectl/_index.md index aefaae1fcc1cf..1f3c6b73f17de 100644 --- a/content/en/docs/reference/kubectl/_index.md +++ b/content/en/docs/reference/kubectl/_index.md @@ -25,7 +25,8 @@ For details about each command, including all the supported flags and subcommand For installation instructions, see [Installing kubectl](/docs/tasks/tools/#kubectl); for a quick guide, see the [cheat sheet](/docs/reference/kubectl/cheatsheet/). -If you're used to using the `docker` command-line tool, [`kubectl` for Docker Users](/docs/reference/kubectl/docker-cli-to-kubectl/) explains some equivalent commands for Kubernetes. +If you're used to using the `docker` command-line tool, +[`kubectl` for Docker Users](/docs/reference/kubectl/docker-cli-to-kubectl/) explains some equivalent commands for Kubernetes. @@ -39,37 +40,41 @@ kubectl [command] [TYPE] [NAME] [flags] where `command`, `TYPE`, `NAME`, and `flags` are: -* `command`: Specifies the operation that you want to perform on one or more resources, -for example `create`, `get`, `describe`, `delete`. +* `command`: Specifies the operation that you want to perform on one or more resources, + for example `create`, `get`, `describe`, `delete`. * `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output: - ```shell - kubectl get pod pod1 - kubectl get pods pod1 - kubectl get po pod1 - ``` + ```shell + kubectl get pod pod1 + kubectl get pods pod1 + kubectl get po pod1 + ``` -* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `kubectl get pods`. +* `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, + details for all resources are displayed, for example `kubectl get pods`. - When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files: + When performing an operation on multiple resources, you can specify each resource by + type and name or specify one or more files: - * To specify resources by type and name: + * To specify resources by type and name: - * To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`.
+ * To group resources if they are all the same type: `TYPE1 name1 name2 name<#>`.
Example: `kubectl get pod example-pod1 example-pod2` - * To specify multiple resource types individually: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`.
+ * To specify multiple resource types individually: `TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>`.
Example: `kubectl get pod/example-pod1 replicationcontroller/example-rc1` - * To specify resources with one or more files: `-f file1 -f file2 -f file<#>` + * To specify resources with one or more files: `-f file1 -f file2 -f file<#>` - * [Use YAML rather than JSON](/docs/concepts/configuration/overview/#general-configuration-tips) since YAML tends to be more user-friendly, especially for configuration files.
- Example: `kubectl get -f ./pod.yaml` + * [Use YAML rather than JSON](/docs/concepts/configuration/overview/#general-configuration-tips) + since YAML tends to be more user-friendly, especially for configuration files.
+ Example: `kubectl get -f ./pod.yaml` -* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.
+* `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags + to specify the address and port of the Kubernetes API server.
{{< caution >}} Flags that you specify from the command line override default values and any corresponding environment variables. @@ -79,19 +84,29 @@ If you need help, run `kubectl help` from the terminal window. ## In-cluster authentication and namespace overrides -By default `kubectl` will first determine if it is running within a pod, and thus in a cluster. It starts by checking for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment variables and the existence of a service account token file at `/var/run/secrets/kubernetes.io/serviceaccount/token`. If all three are found in-cluster authentication is assumed. +By default `kubectl` will first determine if it is running within a pod, and thus in a cluster. +It starts by checking for the `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` environment +variables and the existence of a service account token file at `/var/run/secrets/kubernetes.io/serviceaccount/token`. +If all three are found in-cluster authentication is assumed. -To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set during in-cluster authentication it will override the default namespace from the service account token. Any manifests or tools relying on namespace defaulting will be affected by this. +To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable is set +during in-cluster authentication it will override the default namespace from the +service account token. Any manifests or tools relying on namespace defaulting will be affected by this. **`POD_NAMESPACE` environment variable** -If the `POD_NAMESPACE` environment variable is set, cli operations on namespaced resources will default to the variable value. For example, if the variable is set to `seattle`, `kubectl get pods` would return pods in the `seattle` namespace. This is because pods are a namespaced resource, and no namespace was provided in the command. Review the output of `kubectl api-resources` to determine if a resource is namespaced. +If the `POD_NAMESPACE` environment variable is set, cli operations on namespaced resources +will default to the variable value. For example, if the variable is set to `seattle`, +`kubectl get pods` would return pods in the `seattle` namespace. This is because pods are +a namespaced resource, and no namespace was provided in the command. Review the output +of `kubectl api-resources` to determine if a resource is namespaced. -Explicit use of `--namespace ` overrides this behavior. +Explicit use of `--namespace ` overrides this behavior. **How kubectl handles ServiceAccount tokens** If: + * there is Kubernetes service account token file mounted at `/var/run/secrets/kubernetes.io/serviceaccount/token`, and * the `KUBERNETES_SERVICE_HOST` environment variable is set, and @@ -230,11 +245,15 @@ The following table includes a list of all the supported resource types and thei ## Output options -Use the following sections for information about how you can format or sort the output of certain commands. For details about which commands support the various output options, see the [kubectl](/docs/reference/kubectl/kubectl/) reference documentation. +Use the following sections for information about how you can format or sort the output +of certain commands. For details about which commands support the various output options, +see the [kubectl](/docs/reference/kubectl/kubectl/) reference documentation. 
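The namespace defaulting described above is easy to see in practice; the namespace `seattle` and the resource names are illustrative:

```shell
# Inside a pod (in-cluster authentication), POD_NAMESPACE changes the default namespace:
export POD_NAMESPACE=seattle
kubectl get pods          # namespaced resource: lists pods in "seattle"
kubectl get nodes         # cluster-scoped resource: unaffected by the variable

# An explicit flag always overrides the environment variable:
kubectl get pods --namespace=default
```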
### Formatting output -The default output format for all `kubectl` commands is the human readable plain-text format. To output details to your terminal window in a specific format, you can add either the `-o` or `--output` flags to a supported `kubectl` command. +The default output format for all `kubectl` commands is the human readable plain-text format. +To output details to your terminal window in a specific format, you can add either the `-o` +or `--output` flags to a supported `kubectl` command. #### Syntax @@ -324,7 +343,9 @@ pod-name 1m ### Sorting list objects -To output objects to a sorted list in your terminal window, you can add the `--sort-by` flag to a supported `kubectl` command. Sort your objects by specifying any numeric or string field with the `--sort-by` flag. To specify a field, use a [jsonpath](/docs/reference/kubectl/jsonpath/) expression. +To output objects to a sorted list in your terminal window, you can add the `--sort-by` flag +to a supported `kubectl` command. Sort your objects by specifying any numeric or string field +with the `--sort-by` flag. To specify a field, use a [jsonpath](/docs/reference/kubectl/jsonpath/) expression. #### Syntax @@ -508,10 +529,12 @@ The following kubectl-compatible plugins are available: `kubectl plugin list` also warns you about plugins that are not executable, or that are shadowed by other plugins; for example: + ```shell sudo chmod -x /usr/local/bin/kubectl-foo # remove execute permission kubectl plugin list ``` + ``` The following kubectl-compatible plugins are available: @@ -529,8 +552,10 @@ of the existing kubectl commands: ```shell cat ./kubectl-whoami ``` + The next few examples assume that you already made `kubectl-whoami` have the following contents: + ```shell #!/bin/bash diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index b933c6eaeecd8..41a2d557159de 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -213,7 +213,7 @@ kubectl get pods --field-selector=status.phase=Running kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' # List Names of Pods that belong to Particular RC -# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/ +# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://jqlang.github.io/jq/ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?} echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name}) @@ -224,6 +224,9 @@ kubectl get pods --show-labels JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" +# Check which nodes are ready with custom-columns +kubectl get node -o custom-columns='NODE_NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].status' + # Output decoded secrets without external tools kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}' diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md index 230ded2640523..9b9b270f71f64 100644 --- a/content/en/docs/reference/kubectl/jsonpath.md +++ b/content/en/docs/reference/kubectl/jsonpath.md @@ -34,7 +34,12 @@ Given 
the JSON input: "items":[ { "kind":"None", - "metadata":{"name":"127.0.0.1"}, + "metadata":{ + "name":"127.0.0.1", + "labels":{ + "kubernetes.io/hostname":"127.0.0.1" + } + }, "status":{ "capacity":{"cpu":"4"}, "addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}] @@ -65,18 +70,19 @@ Given the JSON input: } ``` -Function | Description | Example | Result ---------------------|---------------------------|-----------------------------------------------------------------|------------------ -`text` | the plain text | `kind is {.kind}` | `kind is List` -`@` | the current object | `{@}` | the same as input -`.` or `[]` | child operator | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List` -`..` | recursive descent | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e` -`*` | wildcard. Get all objects | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]` -`[start:end:step]` | subscript operator | `{.users[0].name}` | `myself` -`[,]` | union operator | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]` -`?()` | filter | `{.users[?(@.name=="e2e")].user.password}` | `secret` -`range`, `end` | iterate list | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]` -`''` | quote interpreted string | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2` +Function | Description | Example | Result +--------------------|------------------------------|-----------------------------------------------------------------|------------------ +`text` | the plain text | `kind is {.kind}` | `kind is List` +`@` | the current object | `{@}` | the same as input +`.` or `[]` | child operator | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List` +`..` | recursive descent | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e` +`*` | wildcard. Get all objects | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]` +`[start:end:step]` | subscript operator | `{.users[0].name}` | `myself` +`[,]` | union operator | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]` +`?()` | filter | `{.users[?(@.name=="e2e")].user.password}` | `secret` +`range`, `end` | iterate list | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]` +`''` | quote interpreted string | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2` +`\` | escape termination character | `{.items[0].metadata.labels.kubernetes\.io/hostname}` | `127.0.0.1` Examples using `kubectl` and JSONPath expressions: @@ -87,6 +93,7 @@ kubectl get pods -o=jsonpath='{.items[0]}' kubectl get pods -o=jsonpath='{.items[0].metadata.name}' kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}" kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' +kubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\.io/hostname}' ``` {{< note >}} diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index bdb54e504db1c..fd00019d528b0 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -299,6 +299,23 @@ This annotation is part of the Kubernetes Resource Model (KRM) Functions Specifi which is used by Kustomize and similar third-party tools. 
For example, Kustomize removes objects with this annotation from its final build output. + +### container.apparmor.security.beta.kubernetes.io/* (beta) {#container-apparmor-security-beta-kubernetes-io} + +Type: Annotation + +Example: `container.apparmor.security.beta.kubernetes.io/my-container: my-custom-profile` + +Used on: Pods + +This annotation allows you to specify the AppArmor security profile for a container within a +Kubernetes pod. +To learn more, see the [AppArmor](/docs/tutorials/security/apparmor/) tutorial. +The tutorial illustrates using AppArmor to restrict a container's abilities and access. + +The profile specified dictates the set of rules and restrictions that the containerized process must +adhere to. This helps enforce security policies and isolation for your containers. + ### internal.config.kubernetes.io/* (reserved prefix) {#internal.config.kubernetes.io-reserved-wildcard} Type: Annotation @@ -940,6 +957,22 @@ works in that release. There are no other valid values for this annotation. If you don't want topology aware hints for a Service, don't add this annotation. +### service.kubernetes.io/topology-mode + +Type: Annotation + +Example: `service.kubernetes.io/topology-mode: Auto` + +Used on: Service + +This annotation provides a way to define how Services handle network topology; +for example, you can configure a Service so that Kubernetes prefers keeping traffic between +a client and server within a single topology zone. +In some cases this can help reduce costs or improve network performance. + +See [Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/) +for more details. + ### kubernetes.io/service-name {#kubernetesioservice-name} Type: Label @@ -1176,6 +1209,27 @@ has been truncated to 1000. If the number of backend endpoints falls below 1000, the control plane removes this annotation. +### control-plane.alpha.kubernetes.io/leader (deprecated) {#control-plane-alpha-kubernetes-io-leader} + +Type: Annotation + +Example: `control-plane.alpha.kubernetes.io/leader={"holderIdentity":"controller-0","leaseDurationSeconds":15,"acquireTime":"2023-01-19T13:12:57Z","renewTime":"2023-01-19T13:13:54Z","leaderTransitions":1}` + +Used on: Endpoints + +The {{< glossary_tooltip text="control plane" term_id="control-plane" >}} previously set annotation on +an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object. This annotation provided +the following detail: + +- Who is the current leader. +- The time when the current leadership was acquired. +- The duration of the lease (of the leadership) in seconds. +- The time the current lease (the current leadership) should be renewed. +- The number of leadership transitions that happened in the past. + +Kubernetes now uses [Leases](/docs/concepts/architecture/leases/) to +manage leader assignment for the Kubernetes control plane. + ### batch.kubernetes.io/job-tracking (deprecated) {#batch-kubernetes-io-job-tracking} Type: Annotation @@ -1466,10 +1520,23 @@ This annotation records a comma-separated list of managed by [Node Feature Discovery](https://kubernetes-sigs.github.io/node-feature-discovery/) (NFD). NFD uses this for an internal mechanism. You should not edit this annotation yourself. +### nfd.node.kubernetes.io/node-name + +Type: Label + +Example: `nfd.node.kubernetes.io/node-name: node-1` + +Used on: Nodes + +It specifies which node the NodeFeature object is targeting. 
+Creators of NodeFeature objects must set this label and +consumers of the objects are supposed to use the label for +filtering features designated for a certain node. + {{< note >}} -These annotations only applies to nodes where NFD is running. -To learn more about NFD and its components go to its official -[documentation](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/). +These Node Feature Discovery (NFD) labels or annotations only apply to +the nodes where NFD is running. To learn more about NFD and +its components go to its official [documentation](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/). {{< /note >}} ### service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-emit-interval} @@ -1790,6 +1857,26 @@ uses this annotation. See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/) in the AWS load balancer controller documentation. +### service.beta.kubernetes.io/aws-load-balancer-security-groups (deprecated) {#service-beta-kubernetes-io-aws-load-balancer-security-groups} + +Example: `service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f,sg-8725gr62r"` + +Used on: Service + +The AWS load balancer controller uses this annotation to specify a comma seperated list +of security groups you want to attach to an AWS load balancer. Both name and ID of security +are supported where name matches a `Name` tag, not the `groupName` attribute. + +When this annotation is added to a Service, the load-balancer controller attaches the security groups +referenced by the annotation to the load balancer. If you omit this annotation, the AWS load balancer +controller automatically creates a new security group and attaches it to the load balancer. + +{{< note >}} +Kubernetes v1.27 and later do not directly set or read this annotation. However, the AWS +load balancer controller (part of the Kubernetes project) does still use the +`service.beta.kubernetes.io/aws-load-balancer-security-groups` annotation. +{{< /note >}} + ### service.beta.kubernetes.io/load-balancer-source-ranges (deprecated) {#service-beta-kubernetes-io-load-balancer-source-ranges} Example: `service.beta.kubernetes.io/load-balancer-source-ranges: "192.0.2.0/25"` diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index a766250ef1f41..f090a9f3c6a78 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -135,7 +135,7 @@ If your configuration is not using the latest version it is **recommended** that the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command. For more information on the fields and usage of the configuration you can navigate to our -[API reference page](/docs/reference/config-api/kubeadm-config.v1beta4/). +[API reference page](/docs/reference/config-api/kubeadm-config.v1beta3/). ### Using kubeadm init with feature gates {#feature-gates} @@ -145,7 +145,7 @@ of the cluster. Feature gates are removed after a feature graduates to GA. 
To pass a feature gate you can either use the `--feature-gates` flag for `kubeadm init`, or you can add items into the `featureGates` field when you pass -a [configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/#kubeadm-k8s-io-v1beta4-ClusterConfiguration) +a [configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-k8s-io-v1beta3-ClusterConfiguration) using `--config`. Passing [feature gates for core Kubernetes components](/docs/reference/command-line-tools-reference/feature-gates) @@ -314,7 +314,7 @@ kubeadm init phase upload-certs --upload-certs --config=SOME_YAML_FILE ``` {{< note >}} A predefined `certificateKey` can be provided in `InitConfiguration` when passing the -[configuration file](/docs/reference/config-api/kubeadm-config.v1beta4/) with `--config`. +[configuration file](/docs/reference/config-api/kubeadm-config.v1beta3/) with `--config`. {{< /note >}} If a predefined certificate key is not passed to `kubeadm init` and diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 1ceb2cb4fb4c5..6589c0bca278f 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -34,7 +34,7 @@ API concepts: * A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`) * All resource types have a concrete representation (their object schema) which is called a *kind* -* A list of instances of a resource is known as a *collection* +* A list of instances of a resource type is known as a *collection* * A single instance of a resource type is called a *resource*, and also usually represents an *object* * For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource @@ -148,7 +148,7 @@ For example: 1. List all of the pods in a given namespace. - ```console + ``` GET /api/v1/namespaces/test/pods --- 200 OK @@ -204,7 +204,7 @@ to a given `resourceVersion` the client is requesting have already been sent. Th document representing the `BOOKMARK` event is of the type requested by the request, but only includes a `.metadata.resourceVersion` field. For example: -```console +``` GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true --- 200 OK @@ -262,7 +262,7 @@ is 10245 and there are two pods: `foo` and `bar`. Then sending the following req _consistent read_ by setting empty resource version using `resourceVersion=`) could result in the following sequence of events: -```console +``` GET /api/v1/namespaces/test/pods?watch=1&sendInitialEvents=true&allowWatchBookmarks=true&resourceVersion=&resourceVersionMatch=NotOlderThan --- 200 OK @@ -303,7 +303,7 @@ can be saved and the latency can be reduced. To verify if `APIResponseCompression` is working, you can send a **get** or **list** request to the API server with an `Accept-Encoding` header, and check the response size and headers. For example: -```console +``` GET /api/v1/pods Accept-Encoding: gzip --- @@ -354,7 +354,7 @@ of 500 pods at a time, request those chunks as follows: 1. List all of the pods on a cluster, retrieving up to 500 pods each time. - ```console + ``` GET /api/v1/pods?limit=500 --- 200 OK @@ -375,7 +375,7 @@ of 500 pods at a time, request those chunks as follows: 2. Continue the previous call, retrieving the next set of 500 pods. 
- ```console + ``` GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN --- 200 OK @@ -396,7 +396,7 @@ of 500 pods at a time, request those chunks as follows: 3. Continue the previous call, retrieving the last 253 pods. - ```console + ``` GET /api/v1/pods?limit=500&continue=ENCODED_CONTINUE_TOKEN_2 --- 200 OK @@ -540,7 +540,7 @@ type. For example, list all of the pods on a cluster in the Table format. -```console +``` GET /api/v1/pods Accept: application/json;as=Table;g=meta.k8s.io;v=v1 --- @@ -561,7 +561,7 @@ For API resource types that do not have a custom Table definition known to the c plane, the API server returns a default Table response that consists of the resource's `name` and `creationTimestamp` fields. -```console +``` GET /apis/crd.example.com/v1alpha1/namespaces/default/resources --- 200 OK @@ -596,7 +596,7 @@ uses the Table information and must work against all resource types, including extensions, you should make requests that specify multiple content types in the `Accept` header. For example: -```console +``` Accept: application/json;as=Table;g=meta.k8s.io;v=v1, application/json ``` @@ -624,7 +624,7 @@ For example: 1. List all of the pods on a cluster in Protobuf format. - ```console + ``` GET /api/v1/pods Accept: application/vnd.kubernetes.protobuf --- @@ -637,7 +637,7 @@ For example: 1. Create a pod by sending Protobuf encoded data to the server, but request a response in JSON. - ```console + ``` POST /api/v1/namespaces/test/pods Content-Type: application/vnd.kubernetes.protobuf Accept: application/json @@ -662,7 +662,7 @@ As a client, if you might need to work with extension types you should specify m content types in the request `Accept` header to support fallback to JSON. For example: -```console +``` Accept: application/vnd.kubernetes.protobuf, application/json ``` @@ -675,7 +675,7 @@ describes the encoding and type of the underlying object and then contains the o The wrapper format is: -```console +``` A four byte magic number prefix: Bytes 0-3: "k8s\x00" [0x6b, 0x38, 0x73, 0x00] @@ -893,7 +893,7 @@ effects on any request marked as dry runs. Here is an example dry-run request that uses `?dryRun=All`: -```console +``` POST /api/v1/namespaces/test/pods?dryRun=All Content-Type: application/json Accept: application/json diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md index c82a84d616823..98ba9069ea324 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md @@ -218,8 +218,10 @@ option. Your cluster requirements may need a different configuration. kubeadm certs certificate-key ``` + The certificate key is a hex encoded string that is an AES key of size 32 bytes. + {{< note >}} - The `kubeadm-certs` Secret and decryption key expire after two hours. + The `kubeadm-certs` Secret and the decryption key expire after two hours. {{< /note >}} {{< caution >}} diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 8125e3857f96c..07265d0fac62b 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -15,10 +15,10 @@ This page shows how to install the `kubeadm` toolbox. 
For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page. +{{< doc-versions-list "installation guide" >}} ## {{% heading "prerequisites" %}} - * A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager. * 2 GB or more of RAM per machine (any less will leave little room for your apps). @@ -33,6 +33,14 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too will disable swapping temporarily. To make this change persistent across reboots, make sure swap is disabled in config files like `/etc/fstab`, `systemd.swap`, depending how it was configured on your system. +{{< note >}} +The `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`. +This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.) +but it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux. +The expectation is that the distribution either includes `glibc` or a [compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs) +that provides the expected symbols. +{{< /note >}} + ## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address} @@ -51,6 +59,7 @@ If you have more than one network adapter, and your Kubernetes components are no route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter. ## Check required ports + These [required ports](/docs/reference/networking/ports-and-protocols/) need to be open in order for Kubernetes components to communicate with each other. You can use tools like netcat to check if a port is open. For example: @@ -123,7 +132,7 @@ You will install these packages on all of your machines: * `kubeadm`: the command to bootstrap the cluster. * `kubelet`: the component that runs on all of the machines in your cluster - and does things like starting pods and containers. + and does things like starting pods and containers. * `kubectl`: the command line util to talk to your cluster. @@ -148,30 +157,17 @@ For more information on version skews, see: * Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/) * Kubeadm-specific [version skew policy](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy) -{{< note >}} -Kubernetes has two different package repositories starting from August 2023. -The Google-hosted repository is deprecated and it's being replaced with the -Kubernetes (community-owned) package repositories. The Kubernetes project strongly -recommends using the Kubernetes community-owned package repositories, because the -project plans to stop publishing packages to the Google-hosted repository in the future. - -There are some important considerations for the Kubernetes package repositories: - -- The Kubernetes package repositories contain packages beginning with those - Kubernetes versions that were still under support when the community took - over the package builds. This means that anything before v1.24.0 will only be - available in the Google-hosted repository. -- There's a dedicated package repository for each Kubernetes minor version. 
- When upgrading to a different minor release, you must bear in mind that - the package repository details also change. +{{% legacy-repos-deprecation %}} +{{< note >}} +There's a dedicated package repository for each Kubernetes minor version. If you want to install +a minor version other than {{< skew currentVersion >}}, please see the installation guide for +your desired minor version. {{< /note >}} {{< tabs name="k8s_install" >}} {{% tab name="Debian-based distributions" %}} -### Kubernetes package repositories {#dpkg-k8s-package-repo} - These instructions are for Kubernetes {{< skew currentVersion >}}. 1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository: @@ -179,16 +175,21 @@ These instructions are for Kubernetes {{< skew currentVersion >}}. ```shell sudo apt-get update # apt-transport-https may be a dummy package; if so, you can skip that package - sudo apt-get install -y apt-transport-https ca-certificates curl + sudo apt-get install -y apt-transport-https ca-certificates curl gpg ``` -2. Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL: +2. Download the public signing key for the Kubernetes package repositories. + The same signing key is used for all repositories so you can disregard the version in the URL: ```shell curl -fsSL https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg ``` -3. Add the appropriate Kubernetes `apt` repository: +3. Add the appropriate Kubernetes `apt` repository. Please note that this repository have packages + only for Kubernetes {{< skew currentVersion >}}; for other Kubernetes minor versions, you need to + change the Kubernetes minor version in the URL to match your desired minor version + (you should also check that you are reading the documentation for the version of Kubernetes + that you plan to install). ```shell # This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list @@ -208,127 +209,57 @@ In releases older than Debian 12 and Ubuntu 22.04, `/etc/apt/keyrings` does not you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings` {{< /note >}} -### Google-hosted package repository (deprecated) {#dpkg-google-package-repo} - -These instructions are for Kubernetes {{< skew currentVersion >}}. - -1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository: - - ```shell - sudo apt-get update - # apt-transport-https may be a dummy package; if so, you can skip that package - sudo apt-get install -y apt-transport-https ca-certificates curl - ``` - -2. Download the Google Cloud public signing key: - - ```shell - curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg - ``` - -3. Add the Google-hosted `apt` repository: - - ```shell - # This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list - echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list - ``` - -4. 
Update the `apt` package index, install kubelet, kubeadm and kubectl, and pin their version: - - ```shell - sudo apt-get update - sudo apt-get install -y kubelet kubeadm kubectl - sudo apt-mark hold kubelet kubeadm kubectl - ``` - -{{< note >}} -In releases older than Debian 12 and Ubuntu 22.04, `/etc/apt/keyrings` does not exist by default; -you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings` -{{< /note >}} - {{% /tab %}} {{% tab name="Red Hat-based distributions" %}} 1. Set SELinux to `permissive` mode: -```shell -# Set SELinux in permissive mode (effectively disabling it) -sudo setenforce 0 -sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config -``` + These instructions are for Kubernetes {{< skew currentVersion >}}. + + ```shell + # Set SELinux in permissive mode (effectively disabling it) + sudo setenforce 0 + sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config + ``` {{< caution >}} - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...` - effectively disables it. This is required to allow containers to access the host - filesystem; for example, some cluster network plugins require that. You have to - do this until SELinux support is improved in the kubelet. +effectively disables it. This is required to allow containers to access the host +filesystem; for example, some cluster network plugins require that. You have to +do this until SELinux support is improved in the kubelet. - You can leave SELinux enabled if you know how to configure it but it may require - settings that are not supported by kubeadm. +settings that are not supported by kubeadm. {{< /caution >}} -### Kubernetes package repositories {#rpm-k8s-package-repo} - -These instructions are for Kubernetes {{< skew currentVersion >}}. - 2. Add the Kubernetes `yum` repository. The `exclude` parameter in the repository definition ensures that the packages related to Kubernetes are not upgraded upon running `yum update` as there's a special procedure that - must be followed for upgrading Kubernetes. - -```shell -# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo -cat <}}/rpm/ -enabled=1 -gpgcheck=1 -gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key -exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni -EOF -``` - -3. Install kubelet, kubeadm and kubectl, and enable kubelet to ensure it's automatically started on startup: - -```shell -sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes -sudo systemctl enable --now kubelet -``` - -### Google-hosted package repository (deprecated) {#rpm-google-package-repo} - -These instructions are for Kubernetes {{< skew currentVersion >}}. - -2. Add the Google-hosted `yum` repository. The `exclude` parameter in the - repository definition ensures that the packages related to Kubernetes are - not upgraded upon running `yum update` as there's a special procedure that - must be followed for upgrading Kubernetes. + must be followed for upgrading Kubernetes. Please note that this repository + have packages only for Kubernetes {{< skew currentVersion >}}; for other + Kubernetes minor versions, you need to change the Kubernetes minor version + in the URL to match your desired minor version (you should also check that + you are reading the documentation for the version of Kubernetes that you + plan to install). 
-```shell -# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo -cat <}}/rpm/ + enabled=1 + gpgcheck=1 + gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key + exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni + EOF + ``` 3. Install kubelet, kubeadm and kubectl, and enable kubelet to ensure it's automatically started on startup: -```shell -sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes -sudo systemctl enable --now kubelet -``` - -{{< note >}} -If the `baseurl` fails because your RPM-based distribution cannot interpret `$basearch`, replace `\$basearch` with your computer's architecture. -Type `uname -m` to see that value. -For example, the `baseurl` URL for `x86_64` could be: `https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64`. -{{< /note >}} + ```shell + sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes + sudo systemctl enable --now kubelet + ``` {{% /tab %}} {{% tab name="Without a package manager" %}} @@ -342,7 +273,7 @@ sudo mkdir -p "$DEST" curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz ``` -Define the directory to download command files +Define the directory to download command files: {{< note >}} The `DOWNLOAD_DIR` variable must be set to a writable directory. @@ -354,7 +285,7 @@ DOWNLOAD_DIR="/usr/local/bin" sudo mkdir -p "$DOWNLOAD_DIR" ``` -Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)) +Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)): ```bash CRICTL_VERSION="v1.28.0" @@ -371,12 +302,17 @@ cd $DOWNLOAD_DIR sudo curl -L --remote-name-all https://dl.k8s.io/release/${RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet} sudo chmod +x {kubeadm,kubelet} -RELEASE_VERSION="v0.15.1" -curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service +RELEASE_VERSION="v0.16.2" +curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubelet/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service sudo mkdir -p /etc/systemd/system/kubelet.service.d -curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf +curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ``` +{{< note >}} +Please refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions +that do not include `glibc` by default. +{{< /note >}} + Install `kubectl` by following the instructions on [Install Tools page](/docs/tasks/tools/#kubectl). Enable and start `kubelet`: @@ -388,12 +324,12 @@ systemctl enable --now kubelet {{< note >}} The Flatcar Container Linux distribution mounts the `/usr` directory as a read-only filesystem. Before bootstrapping your cluster, you need to take additional steps to configure a writable directory. 
-See the [Kubeadm Troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only/) to learn how to set up a writable directory. +See the [Kubeadm Troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only/) +to learn how to set up a writable directory. {{< /note >}} {{% /tab %}} {{< /tabs >}} - The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. @@ -411,7 +347,8 @@ See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configu ## Troubleshooting -If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/). +If you are running into difficulties with kubeadm, please consult our +[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/). ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index d8c1499ae97c9..3e91058180b13 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -162,12 +162,10 @@ Kubeadm deletes the `/etc/kubernetes/bootstrap-kubelet.conf` file after completi Note that the kubeadm CLI command never touches this drop-in file. This configuration file installed by the `kubeadm` -[DEB](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf) or -[RPM package](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubeadm/10-kubeadm.conf) is written to +[package](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf) is written to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and is used by systemd. It augments the basic -[`kubelet.service` for RPM](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/rpm/kubelet/kubelet.service) or -[`kubelet.service` for DEB](https://github.com/kubernetes/release/blob/master/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service): +[`kubelet.service`](https://github.com/kubernetes/release/blob/cd53840/cmd/krel/templates/latest/kubelet/kubelet.service): {{< note >}} The contents below are just an example. 
If you don't want to use a package manager diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md index 1efc5e9b69d90..c250347d423dc 100644 --- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -108,6 +108,10 @@ If you haven't already set up a cluster locally, run `minikube start` to create http://172.17.0.15:31637 ``` + ```shell + curl http://172.17.0.15:31637 + ``` + The output is similar to: ```none diff --git a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md index 0ae940296241b..5a5f41008e913 100644 --- a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md +++ b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md @@ -23,7 +23,7 @@ of Containers for each. - Fetch all Pods in all namespaces using `kubectl get pods --all-namespaces` - Format the output to include only the list of Container image names - using `-o jsonpath={.items[*].spec.containers[*].image}`. This will recursively parse out the + using `-o jsonpath={.items[*].spec['initContainers', 'containers'][*].image}`. This will recursively parse out the `image` field from the returned json. - See the [jsonpath reference](/docs/reference/kubectl/jsonpath/) for further information on how to use jsonpath. @@ -33,7 +33,7 @@ of Containers for each. - Use `uniq` to aggregate image counts ```shell -kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\ +kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" |\ tr -s '[[:space:]]' '\n' |\ sort |\ uniq -c @@ -42,7 +42,7 @@ The jsonpath is interpreted as follows: - `.items[*]`: for each returned value - `.spec`: get the spec -- `.containers[*]`: for each container +- `['initContainers', 'containers'][*]`: for each container - `.image`: get the image {{< note >}} diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index f9a29f332dbbe..ac69021c0ba26 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -275,16 +275,16 @@ that is not currently used by an etcd process. Taking the snapshot will not affect the performance of the member. Below is an example for taking a snapshot of the keyspace served by -`$ENDPOINT` to the file `snapshotdb`: +`$ENDPOINT` to the file `snapshot.db`: ```shell -ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb +ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db ``` Verify the snapshot: ```shell -ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb +ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db ``` ```console @@ -343,19 +343,25 @@ employed to recover the data of a failed cluster. Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir). 
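Before running a restore on the target machine, it can help to confirm that the snapshot file is actually present and readable. A quick, illustrative check (using the same `snapshot.db` file name as the examples above) might look like:

```shell
# Confirm the snapshot file exists and inspect its integrity metadata
ls -lh snapshot.db
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db
```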
+ Here is an example: ```shell -ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb +ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshot.db ``` -Another example for restoring using etcdctl options: + +Another example for restoring using `etcdctl` options: + ```shell -ETCDCTL_API=3 etcdctl snapshot restore --data-dir snapshotdb +ETCDCTL_API=3 etcdctl --data-dir snapshot restore snapshot.db ``` -Yet another example would be to first export the environment variable +where `` is a directory that will be created during the restore process. + +Yet another example would be to first export the `ETCDCTL_API` environment variable: + ```shell export ETCDCTL_API=3 -etcdctl snapshot restore --data-dir snapshotdb +etcdctl --data-dir snapshot restore snapshot.db ``` For more information and examples on restoring a cluster from a snapshot file, see @@ -410,4 +416,8 @@ Defragmentation is an expensive operation, so it should be executed as infrequen as possible. On the other hand, it's also necessary to make sure any etcd member will not run out of the storage quota. The Kubernetes project recommends that when you perform defragmentation, you use a tool such as [etcd-defrag](https://github.com/ahrtr/etcd-defrag). + +You can also run the defragmentation tool as a Kubernetes CronJob, to make sure that +defragmentation happens regularly. See [`etcd-defrag-cronjob.yaml`](https://github.com/ahrtr/etcd-defrag/blob/main/doc/etcd-defrag-cronjob.yaml) +for details. {{< /note >}} diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md index d39f2a4891e6e..02f68e91fa3f5 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md @@ -6,21 +6,25 @@ weight: 120 -This page explains how to switch from one Kubernetes package repository to another -when upgrading Kubernetes minor releases. Unlike deprecated Google-hosted -repositories, the Kubernetes package repositories are structured in a way that -there's a dedicated package repository for each Kubernetes minor version. +This page explains how to enable a package repository for a new Kubernetes minor release +for users of the community-owned package repositories hosted at `pkgs.k8s.io`. +Unlike the legacy package repositories, the community-owned package repositories are +structured in a way that there's a dedicated package repository for each Kubernetes +minor version. ## {{% heading "prerequisites" %}} -This document assumes that you're already using the Kubernetes community-owned -package repositories. If that's not the case, it's strongly recommended to migrate -to the Kubernetes package repositories. +This document assumes that you're already using the community-owned +package repositories (`pkgs.k8s.io`). If that's not the case, it's strongly +recommended to migrate to the community-owned package repositories as described +in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/). 
+ +{{% legacy-repos-deprecation %}} ### Verifying if the Kubernetes package repositories are used -If you're unsure whether you're using the Kubernetes package repositories or the -Google-hosted repository, take the following steps to verify: +If you're unsure whether you're using the community-owned package repositories or the +legacy package repositories, take the following steps to verify: {{< tabs name="k8s_install_versions" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} @@ -39,7 +43,8 @@ deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io ``` **You're using the Kubernetes package repositories and this guide applies to you.** -Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories. +Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories +as described in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/). {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} @@ -64,7 +69,35 @@ exclude=kubelet kubeadm kubectl ``` **You're using the Kubernetes package repositories and this guide applies to you.** -Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories. +Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories +as described in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/). + +{{% /tab %}} + +{{% tab name="openSUSE or SLES" %}} + +Print the contents of the file that defines the Kubernetes `zypper` repository: + +```shell +# On your system, this configuration file could have a different name +cat /etc/zypp/repos.d/kubernetes.repo +``` + +If you see a `baseurl` similar to the `baseurl` in the output below: + +``` +[kubernetes] +name=Kubernetes +baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/ +enabled=1 +gpgcheck=1 +gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key +exclude=kubelet kubeadm kubectl +``` + +**You're using the Kubernetes package repositories and this guide applies to you.** +Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories +as described in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/). {{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md index 075b610b688b5..12e7298a30ad9 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md @@ -54,11 +54,13 @@ The upgrade workflow at high level is the following: ## Changing the package repository -If you're using the Kubernetes community-owned repositories, you need to change -the package repository to one that contains packages for your desired Kubernetes -minor version. This is explained in [Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/) +If you're using the community-owned package repositories (`pkgs.k8s.io`), you need to +enable the package repository for the desired Kubernetes minor release. This is explained in +[Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/) document. 
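As a rough sketch of what that change involves on a node that already uses `pkgs.k8s.io` (the version numbers below are placeholders only; substitute the minor release you are upgrading to, and adjust the file paths if your repository definition lives elsewhere):

```shell
# Debian / Ubuntu: point the apt repository definition at the new minor version
sudo sed -i 's|/core:/stable:/v1.28/|/core:/stable:/v1.29/|' /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update

# CentOS / RHEL / Fedora: the equivalent change for the yum repository definition
# (the "g" flag also updates the gpgkey URL in the same file)
sudo sed -i 's|/core:/stable:/v1.28/|/core:/stable:/v1.29/|g' /etc/yum.repos.d/kubernetes.repo
```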
+{{% legacy-repos-deprecation %}} + ## Determine which version to upgrade to Find the latest patch release for Kubernetes {{< skew currentVersion >}} using the OS package manager: diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md index cff79362570c2..7fd2738ca5102 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md @@ -19,11 +19,13 @@ upgrade the control plane nodes before upgrading your Linux Worker nodes. ## Changing the package repository -If you're using the Kubernetes community-owned repositories, you need to change -the package repository to one that contains packages for your desired Kubernetes -minor version. This is explained in [Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/) +If you're using the community-owned package repositories (`pkgs.k8s.io`), you need to +enable the package repository for the desired Kubernetes minor release. This is explained in +[Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/) document. +{{% legacy-repos-deprecation %}} + ## Upgrading worker nodes ### Upgrade kubeadm diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 506dd2723e00b..815f9f70c3aec 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -35,14 +35,22 @@ address: "192.168.0.8" port: 20250 serializeImagePulls: false evictionHard: - memory.available: "200Mi" + memory.available: "100Mi" + nodefs.available: "10%" + nodefs.inodesFree: "5%" + imagefs.available: "15%" ``` -In the example, the kubelet is configured to serve on IP address 192.168.0.8 and port 20250, pull images in parallel, -and evict Pods when available memory drops below 200Mi. Since only one of the four evictionHard thresholds is configured, -other evictionHard thresholds are reset to 0 from their built-in defaults. -All other kubelet configuration values are left at their built-in defaults, unless overridden -by flags. Command line flags which target the same value as a config file will override that value. +In this example, the kubelet is configured with the following settings: + +1. `address`: The kubelet will serve on IP address `192.168.0.8`. +2. `port`: The kubelet will serve on port `20250`. +3. `serializeImagePulls`: Image pulls will be done in parallel. +4. `evictionHard`: The kubelet will evict Pods under one of the following conditions: + - When the node's available memory drops below 100MiB. + - When the node's main filesystem's available space is less than 10%. + - When the image filesystem's available space is less than 15%. + - When more than 95% of the node's main filesystem's inodes are in use. {{< note >}} In the example, by changing the default value of only one parameter for @@ -51,6 +59,9 @@ will be set to zero. In order to provide custom values, you should provide all the threshold values respectively. {{< /note >}} +The `imagefs` is an optional filesystem that container runtimes use to store container +images and container writable layers. 
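To get a feel for what those eviction signals correspond to on a node, you can inspect the backing filesystems directly. A minimal sketch, assuming the default kubelet root directory `/var/lib/kubelet` and a CRI runtime that reports an image filesystem through `crictl`:

```shell
# nodefs: the filesystem that backs the kubelet's root directory
df -h /var/lib/kubelet    # compare free space against nodefs.available
df -i /var/lib/kubelet    # compare free inodes against nodefs.inodesFree

# imagefs: the filesystem the container runtime uses for images and writable layers
sudo crictl imagefsinfo   # compare the reported usage against imagefs.available
```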
+ ## Start a kubelet process configured via the config file {{< note >}} diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md index 9bbba039e0d9c..a4adb12234622 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/migrate-dockershim-dockerd.md @@ -76,6 +76,8 @@ instructions for that tool. 1. Open `/var/lib/kubelet/kubeadm-flags.env` on each affected node. 1. Modify the `--container-runtime-endpoint` flag to `unix:///var/run/cri-dockerd.sock`. +1. Modify the `--container-runtime` flag to `remote` + (unavailable in Kubernetes v1.27 and later). The kubeadm tool stores the node's socket as an annotation on the `Node` object in the control plane. To modify this socket for each affected node: @@ -118,4 +120,4 @@ kubectl uncordon ## {{% heading "whatsnext" %}} * Read the [dockershim removal FAQ](/dockershim/). -* [Learn how to migrate from Docker Engine with dockershim to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/). \ No newline at end of file +* [Learn how to migrate from Docker Engine with dockershim to containerd](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/). diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md index 66c0fc8f2c655..fedc88f2b2757 100644 --- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md +++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md @@ -96,7 +96,7 @@ system daemon should ideally run within its own child control group. Refer to for more details on recommended control group hierarchy. Note that Kubelet **does not** create `--kube-reserved-cgroup` if it doesn't -exist. Kubelet will fail if an invalid cgroup is specified. With `systemd` +exist. The kubelet will fail to start if an invalid cgroup is specified. With `systemd` cgroup driver, you should follow a specific pattern for the name of the cgroup you define: the name should be the value you set for `--kube-reserved-cgroup`, with `.slice` appended. 
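For example, a minimal sketch of that naming pattern, using hypothetical names (`kube-reserved` as the `--kube-reserved-cgroup` value, and therefore `kube-reserved.slice` as the systemd unit). This is only an illustration of the rule described above, not a complete or recommended configuration:

```shell
# Create a systemd slice whose name is the --kube-reserved-cgroup value plus ".slice"
sudo tee /etc/systemd/system/kube-reserved.slice >/dev/null <<'EOF'
[Unit]
Description=Slice for Kubernetes system daemons (illustrative example)

[Slice]
EOF

sudo systemctl daemon-reload
sudo systemctl start kube-reserved.slice

# Matching kubelet flags (the kubelet configuration file equivalents are
# kubeReserved and kubeReservedCgroup):
#   --kube-reserved=cpu=500m,memory=1Gi
#   --kube-reserved-cgroup=kube-reserved
```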
diff --git a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md index 660b4e903bc96..45f18cec89160 100644 --- a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md +++ b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md @@ -15,7 +15,7 @@ You will need to have the following tools installed: - `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)) - `curl` (often provided by your operating system) -- `jq` ([download jq](https://stedolan.github.io/jq/download/)) +- `jq` ([download jq](https://jqlang.github.io/jq/download/)) ## Verifying binary signatures diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 7245624bf8349..2f3dd8dc0ae92 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -109,6 +109,10 @@ stringData: password: ``` +{{< note >}} +The `stringData` field for a Secret does not work well with server-side apply. +{{< /note >}} + When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in `stringData`. @@ -152,6 +156,10 @@ stringData: username: administrator ``` +{{< note >}} +The `stringData` field for a Secret does not work well with server-side apply. +{{< /note >}} + The `Secret` object is created as follows: ```yaml diff --git a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md index 5f11dba1a1451..207d589d3535c 100644 --- a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md +++ b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md @@ -33,12 +33,12 @@ For v1.22, use [v1alpha1](https://v1-22.docs.kubernetes.io/docs/tasks/configure- {{< /note >}} ```yaml -apiVersion: apiserver.config.k8s.io/v1 # see compatibility note +apiVersion: apiserver.config.k8s.io/v1 kind: AdmissionConfiguration plugins: - name: PodSecurity configuration: - apiVersion: pod-security.admission.config.k8s.io/v1 + apiVersion: pod-security.admission.config.k8s.io/v1 # see compatibility note kind: PodSecurityConfiguration # Defaults applied when a mode label is not set. # diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 24d913beca89d..fe973d912f67c 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -38,7 +38,8 @@ docker login When prompted, enter your Docker ID, and then the credential you want to use (access token, or the password for your Docker ID). -The login process creates or updates a `config.json` file that holds an authorization token. Review [how Kubernetes interprets this file](/docs/concepts/containers/images#config-json). +The login process creates or updates a `config.json` file that holds an authorization token. +Review [how Kubernetes interprets this file](/docs/concepts/containers/images#config-json). 
View the `config.json` file: @@ -60,7 +61,8 @@ The output contains a section similar to this: {{< note >}} If you use a Docker credentials store, you won't see that `auth` entry but a `credsStore` entry with the name of the store as value. -In that case, you can create a secret directly. See [Create a Secret by providing credentials on the command line](#create-a-secret-by-providing-credentials-on-the-command-line). +In that case, you can create a secret directly. +See [Create a Secret by providing credentials on the command line](#create-a-secret-by-providing-credentials-on-the-command-line). {{< /note >}} ## Create a Secret based on existing credentials {#registry-secret-existing-credentials} @@ -211,7 +213,14 @@ kubectl get pod private-reg ``` {{< note >}} -In case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events: +To use image pull secrets for a Pod (or a Deployment, or other object that +has a pod template that you are using), you need to make sure that the appropriate +Secret does exist in the right namespace. The namespace to use is the same +namespace where you defined the Pod. +{{< /note >}} + +Also, in case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events: + ```shell kubectl describe pod private-reg ``` @@ -229,12 +238,6 @@ Events: ... FailedToRetrieveImagePullSecret ... Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed. ``` - -{{< /note >}} - - - - ## {{% heading "whatsnext" %}} * Learn more about [Secrets](/docs/concepts/configuration/secret/) diff --git a/content/en/docs/tasks/debug/debug-cluster/_index.md b/content/en/docs/tasks/debug/debug-cluster/_index.md index 3278fdfa7d4ce..fcb7ba4a016c5 100644 --- a/content/en/docs/tasks/debug/debug-cluster/_index.md +++ b/content/en/docs/tasks/debug/debug-cluster/_index.md @@ -14,6 +14,9 @@ problem you are experiencing. See the [application troubleshooting guide](/docs/tasks/debug/debug-application/) for tips on application debugging. You may also visit the [troubleshooting overview document](/docs/tasks/debug/) for more information. +For troubleshooting {{}}, refer to +[Troubleshooting kubectl](/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/). + ## Listing your cluster diff --git a/content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md b/content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md new file mode 100644 index 0000000000000..2166d204b3776 --- /dev/null +++ b/content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md @@ -0,0 +1,158 @@ +--- +title: "Troubleshooting kubectl" +content_type: task +weight: 10 +--- + + + +This documentation is about investigating and diagnosing +{{}} related issues. +If you encounter issues accessing `kubectl` or connecting to your cluster, this +document outlines various common scenarios and potential solutions to help +identify and address the likely cause. + + + +## {{% heading "prerequisites" %}} + +* You need to have a Kubernetes cluster. +* You also need to have `kubectl` installed - see [install tools](/docs/tasks/tools/#kubectl) + +## Verify kubectl setup + +Make sure you have installed and configured `kubectl` correctly on your local machine. +Check the `kubectl` version to ensure it is up-to-date and compatible with your cluster. 
+ +Check kubectl version: + +```shell +kubectl version +``` + +You'll see a similar output: + +```console +Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4",GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean",BuildDate:"2023-07-19T12:20:54Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"} +Kustomize Version: v5.0.1 +Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3",GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean",BuildDate:"2023-06-14T09:47:40Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"} + +``` + +If you see `Unable to connect to the server: dial tcp :8443: i/o timeout`, +instead of `Server Version`, you need to troubleshoot kubectl connectivity with your cluster. + +Make sure you have installed the kubectl by following the +[official documentation for installing kubectl](/docs/tasks/tools/#kubectl), and you have +properly configured the `$PATH` environment variable. + +## Check kubeconfig + +The `kubectl` requires a `kubeconfig` file to connect to a Kubernetes cluster. The +`kubeconfig` file is usually located under the `~/.kube/config` directory. Make sure +that you have a valid `kubeconfig` file. If you don't have a `kubeconfig` file, you can +obtain it from your Kubernetes administrator, or you can copy it from your Kubernetes +control plane's `/etc/kubernetes/admin.conf` directory. If you have deployed your +Kubernetes cluster on a cloud platform and lost your `kubeconfig` file, you can +re-generate it using your cloud provider's tools. Refer the cloud provider's +documentation for re-generating a `kubeconfig` file. + +Check if the `$KUBECONFIG` environment variable is configured correctly. You can set +`$KUBECONFIG`environment variable or use the `--kubeconfig` parameter with the kubectl +to specify the directory of a `kubeconfig` file. + +## Check VPN connectivity + +If you are using a Virtual Private Network (VPN) to access your Kubernetes cluster, +make sure that your VPN connection is active and stable. Sometimes, VPN disconnections +can lead to connection issues with the cluster. Reconnect to the VPN and try accessing +the cluster again. + +## Authentication and authorization + +If you are using the token based authentication and the kubectl is returning an error +regarding the authentication token or authentication server address, validate the +Kubernetes authentication token and the authentication server address are configured +properly. + +If kubectl is returning an error regarding the authorization, make sure that you are +using the valid user credentials. And you have the permission to access the resource +that you have requested. + +## Verify contexts + +Kubernetes supports [multiple clusters and contexts](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). +Ensure that you are using the correct context to interact with your cluster. + +List available contexts: + +```shell +kubectl config get-contexts +``` + +Switch to the appropriate context: + +```shell +kubectl config use-context +``` + +## API server and load balancer + +The {{}} server is the +central component of a Kubernetes cluster. If the API server or the load balancer that +runs in front of your API servers is not reachable or not responding, you won't be able +to interact with the cluster. + +Check the if the API server's host is reachable by using `ping` command. Check cluster's +network connectivity and firewall. 
If your are using a cloud provider for deploying +the cluster, check your cloud provider's health check status for the cluster's +API server. + +Verify the status of the load balancer (if used) to ensure it is healthy and forwarding +traffic to the API server. + +## TLS problems + +The Kubernetes API server only serves HTTPS requests by default. In that case TLS problems +may occur due to various reasons, such as certificate expiry or chain of trust validity. + +You can find the TLS certificate in the kubeconfig file, located in the `~/.kube/config` +directory. The `certificate-authority` attribute contains the CA certificate and the +`client-certificate` attribute contains the client certificate. + +Verify the expiry of these certificates: + +```shell +openssl x509 -noout -dates -in $(kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.certificate-authority}') +``` + +output: +```console +notBefore=Sep 2 08:34:12 2023 GMT +notAfter=Aug 31 08:34:12 2033 GMT +``` + +```shell +openssl x509 -noout -dates -in $(kubectl config view --minify --output 'jsonpath={.users[0].user.client-certificate}') +``` + +output: +```console +notBefore=Sep 2 08:34:12 2023 GMT +notAfter=Sep 2 08:34:12 2026 GMT +``` + +## Verify kubectl helpers + +Some kubectl authentication helpers provide easy access to Kubernetes clusters. If you +have used such helpers and are facing connectivity issues, ensure that the necessary +configurations are still present. + +Check kubectl configuration for authentication details: + +```shell +kubectl config view +``` + +If you previously used a helper tool (for example, `kubectl-oidc-login`), ensure that it is still +installed and configured correctly. \ No newline at end of file diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index 4967ff8f27cc2..a24dad21cf30e 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -52,11 +52,13 @@ Save the file as `Dockerfile`, build the image and push it to a registry. This e pushes the image to [Google Container Registry (GCR)](https://cloud.google.com/container-registry/). For more details, please read the GCR -[documentation](https://cloud.google.com/container-registry/docs/). +[documentation](https://cloud.google.com/container-registry/docs/). Alternatively +you can also use the [docker hub](https://hub.docker.com/search?q=). For more details +refer to the docker hub [documentation](https://docs.docker.com/docker-hub/repos/create/#create-a-repository). ```shell -docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . -gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 +docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . 
# The image name and the repository +gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 # used in here is just an example ``` ## Define a Kubernetes Deployment for the scheduler diff --git a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md index 60536cf858347..28b3493a0468c 100644 --- a/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md +++ b/content/en/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions.md @@ -769,7 +769,7 @@ validations are not supported by ratcheting under the implementation in Kubernet ratcheted. -## Validation rules +### Validation rules {{< feature-state state="beta" for_k8s_version="v1.25" >}} diff --git a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md index dd784463810ce..2bae4e6c9bff5 100644 --- a/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md +++ b/content/en/docs/tasks/manage-hugepages/scheduling-hugepages.md @@ -14,15 +14,45 @@ by applications in a Pod. This page describes how users can consume huge pages. ## {{% heading "prerequisites" %}} +Kubernetes nodes must +[pre-allocate huge pages](https://www.kernel.org/doc/html/latest/admin-guide/mm/hugetlbpage.html) +in order for the node to report its huge page capacity. -1. Kubernetes nodes must pre-allocate huge pages in order for the node to report - its huge page capacity. A node can pre-allocate huge pages for multiple - sizes. +A node can pre-allocate huge pages for multiple sizes, for instance, +the following line in `/etc/default/grub` allocates `2*1GiB` of 1 GiB +and `512*2 MiB` of 2 MiB pages: + +``` +GRUB_CMDLINE_LINUX="hugepagesz=1G hugepages=2 hugepagesz=2M hugepages=512" +``` The nodes will automatically discover and report all huge page resources as schedulable resources. +When you describe the Node, you should see something similar to the following +in the following in the `Capacity` and `Allocatable` sections: + +``` +Capacity: + cpu: ... + ephemeral-storage: ... + hugepages-1Gi: 2Gi + hugepages-2Mi: 1Gi + memory: ... + pods: ... +Allocatable: + cpu: ... + ephemeral-storage: ... + hugepages-1Gi: 2Gi + hugepages-2Mi: 1Gi + memory: ... + pods: ... +``` +{{< note >}} +For dynamically allocated pages (after boot), the Kubelet needs to be restarted +for the new allocations to be refrelected. +{{< /note >}} diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index f8705587dab70..57a1cf510388d 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -36,7 +36,7 @@ You need the `cfssl` tool. You can download `cfssl` from Some steps in this page use the `jq` tool. If you don't have `jq`, you can install it via your operating system's software sources, or fetch it from -[https://stedolan.github.io/jq/](https://stedolan.github.io/jq/). +[https://jqlang.github.io/jq/](https://jqlang.github.io/jq/). @@ -267,7 +267,7 @@ kubectl get csr my-svc.my-namespace -o json | \ ``` {{< note >}} -This uses the command line tool [`jq`](https://stedolan.github.io/jq/) to populate the base64-encoded +This uses the command line tool [`jq`](https://jqlang.github.io/jq/) to populate the base64-encoded content in the `.status.certificate` field. 
If you do not have `jq`, you can also save the JSON output to a file, populate this field manually, and upload the resulting file. diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index 684f904b14bda..ef7a87889a935 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -192,6 +192,38 @@ To upgrade kubectl to another minor release, you'll need to bump the version in sudo yum install -y kubectl ``` +{{% /tab %}} + +{{% tab name="SUSE-based distributions" %}} + +1. Add the Kubernetes `zypper` repository. If you want to use Kubernetes version + different than {{< param "version" >}}, replace {{< param "version" >}} with + the desired minor version in the command below. + + ```bash + # This overwrites any existing configuration in /etc/zypp/repos.d/kubernetes.repo + cat <}}/rpm/ + enabled=1 + gpgcheck=1 + gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key + EOF + ``` + +{{< note >}} +To upgrade kubectl to another minor release, you'll need to bump the version in `/etc/zypp/repos.d/kubernetes.repo` +before running `zypper update`. This procedure is described in more detail in +[Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/). +{{< /note >}} + + 2. Install kubectl using `zypper`: + + ```bash + sudo zypper install -y kubectl + ``` + {{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md index eadab8bd4612f..771149566b422 100644 --- a/content/en/docs/tutorials/services/connect-applications-service.md +++ b/content/en/docs/tutorials/services/connect-applications-service.md @@ -59,7 +59,7 @@ to make queries against both IPs. Note that the containers are *not* using port the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP -address for the Service. If you want to arrange for a specific port on the host +address for the pod. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so. @@ -71,7 +71,7 @@ if you're curious. So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods -die with it, and the Deployment will create new ones, with different IPs. This is +die with it, and the ReplicaSet inside the Deployment will create new ones, with different IPs. This is the problem a Service solves. A Kubernetes Service is an abstraction which defines a logical set of Pods running @@ -189,7 +189,7 @@ Note there's no mention of your Service. This is because you created the replica before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 Pods and waiting for the -Deployment to recreate them. This time around the Service exists *before* the +Deployment to recreate them. This time the Service exists *before* the replicas. 
This will give you scheduler-level Service spreading of your Pods (provided all your nodes have equal capacity), as well as the right environment variables: @@ -292,6 +292,10 @@ And also the configmap: ```shell kubectl create configmap nginxconfigmap --from-file=default.conf ``` + +You can find an example for `default.conf` in +[the Kubernetes examples project repo](https://github.com/kubernetes/examples/tree/bc9ca4ca32bb28762ef216386934bef20f1f9930/staging/https-nginx/). + ``` configmap/nginxconfigmap created ``` @@ -302,6 +306,49 @@ kubectl get configmaps NAME DATA AGE nginxconfigmap 1 114s ``` + +You can view the details of the `nginxconfigmap` ConfigMap using the following command: + +```shell +kubectl describe configmap nginxconfigmap +``` + +The output is similar to: + +```console +Name: nginxconfigmap +Namespace: default +Labels: +Annotations: + +Data +==== +default.conf: +---- +server { + listen 80 default_server; + listen [::]:80 default_server ipv6only=on; + + listen 443 ssl; + + root /usr/share/nginx/html; + index index.html; + + server_name localhost; + ssl_certificate /etc/nginx/ssl/tls.crt; + ssl_certificate_key /etc/nginx/ssl/tls.key; + + location / { + try_files $uri $uri/ =404; + } +} + +BinaryData +==== + +Events: +``` + Following are the manual steps to follow in case you run into problems running make (on windows for example): ```shell @@ -311,7 +358,7 @@ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -ou cat /d/tmp/nginx.crt | base64 cat /d/tmp/nginx.key | base64 ``` - + Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line. @@ -476,5 +523,3 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el * Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/) * Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/) * Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/) - - diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md index 70c3e9e3ac40d..10b2c35b4403a 100644 --- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md @@ -27,14 +27,25 @@ following Kubernetes concepts: * [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/) -* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) * The [kubectl](/docs/reference/kubectl/kubectl/) command line tool +{{% include "task-tutorial-prereqs.md" %}} +You should configure `kubectl` to use a context that uses the `default` +namespace. +If you are using an existing cluster, make sure that it's OK to use that +cluster's default namespace to practice. Ideally, practice in a cluster +that doesn't run any real workloads. + +It's also useful to read the concept page about [StatefulSets](/docs/concepts/workloads/controllers/statefulset/). 
+ {{< note >}} This tutorial assumes that your cluster is configured to dynamically provision -PersistentVolumes. If your cluster is not configured to do so, you +PersistentVolumes. You'll also need to have a [default StorageClass](/docs/concepts/storage/storage-classes/#default-storageclass). +If your cluster is not configured to provision storage dynamically, you will have to manually provision two 1 GiB volumes prior to starting this -tutorial. +tutorial and +set up your cluster so that those PersistentVolumes map to the +PersistentVolumeClaim templates that the StatefulSet defines. {{< /note >}} ## {{% heading "objectives" %}} diff --git a/content/en/examples/secret/basicauth-secret.yaml b/content/en/examples/secret/basicauth-secret.yaml new file mode 100644 index 0000000000000..a854b267a01a5 --- /dev/null +++ b/content/en/examples/secret/basicauth-secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-basic-auth +type: kubernetes.io/basic-auth +stringData: + username: admin # required field for kubernetes.io/basic-auth + password: t0p-Secret # required field for kubernetes.io/basic-auth \ No newline at end of file diff --git a/content/en/examples/secret/bootstrap-token-secret-base64.yaml b/content/en/examples/secret/bootstrap-token-secret-base64.yaml new file mode 100644 index 0000000000000..98233758e2e7c --- /dev/null +++ b/content/en/examples/secret/bootstrap-token-secret-base64.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Secret +metadata: + name: bootstrap-token-5emitj + namespace: kube-system +type: bootstrap.kubernetes.io/token +data: + auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= + expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= + token-id: NWVtaXRq + token-secret: a3E0Z2lodnN6emduMXAwcg== + usage-bootstrap-authentication: dHJ1ZQ== + usage-bootstrap-signing: dHJ1ZQ== \ No newline at end of file diff --git a/content/en/examples/secret/bootstrap-token-secret-literal.yaml b/content/en/examples/secret/bootstrap-token-secret-literal.yaml new file mode 100644 index 0000000000000..6aec11ce870fc --- /dev/null +++ b/content/en/examples/secret/bootstrap-token-secret-literal.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +kind: Secret +metadata: + # Note how the Secret is named + name: bootstrap-token-5emitj + # A bootstrap token Secret usually resides in the kube-system namespace + namespace: kube-system +type: bootstrap.kubernetes.io/token +stringData: + auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" + expiration: "2020-09-13T04:39:10Z" + # This token ID is used in the name + token-id: "5emitj" + token-secret: "kq4gihvszzgn1p0r" + # This token can be used for authentication + usage-bootstrap-authentication: "true" + # and it can be used for signing + usage-bootstrap-signing: "true" \ No newline at end of file diff --git a/content/en/examples/secret/dockercfg-secret.yaml b/content/en/examples/secret/dockercfg-secret.yaml new file mode 100644 index 0000000000000..ccf73bc306f24 --- /dev/null +++ b/content/en/examples/secret/dockercfg-secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-dockercfg +type: kubernetes.io/dockercfg +data: + .dockercfg: | + eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo= \ No newline at end of file diff --git a/content/en/examples/secret/dotfile-secret.yaml b/content/en/examples/secret/dotfile-secret.yaml new file mode 100644 index 0000000000000..5c7900ad97479 --- /dev/null +++ b/content/en/examples/secret/dotfile-secret.yaml @@ 
-0,0 +1,27 @@ +apiVersion: v1 +kind: Secret +metadata: + name: dotfile-secret +data: + .secret-file: dmFsdWUtMg0KDQo= +--- +apiVersion: v1 +kind: Pod +metadata: + name: secret-dotfiles-pod +spec: + volumes: + - name: secret-volume + secret: + secretName: dotfile-secret + containers: + - name: dotfile-test-container + image: registry.k8s.io/busybox + command: + - ls + - "-l" + - "/etc/secret-volume" + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" \ No newline at end of file diff --git a/content/en/examples/secret/optional-secret.yaml b/content/en/examples/secret/optional-secret.yaml new file mode 100644 index 0000000000000..cc510b9078130 --- /dev/null +++ b/content/en/examples/secret/optional-secret.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + optional: true \ No newline at end of file diff --git a/content/en/examples/secret/serviceaccount-token-secret.yaml b/content/en/examples/secret/serviceaccount-token-secret.yaml new file mode 100644 index 0000000000000..8ec8fb577d547 --- /dev/null +++ b/content/en/examples/secret/serviceaccount-token-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-sa-sample + annotations: + kubernetes.io/service-account.name: "sa-name" +type: kubernetes.io/service-account-token +data: + extra: YmFyCg== \ No newline at end of file diff --git a/content/en/examples/secret/ssh-auth-secret.yaml b/content/en/examples/secret/ssh-auth-secret.yaml new file mode 100644 index 0000000000000..9f79cbfb065fd --- /dev/null +++ b/content/en/examples/secret/ssh-auth-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-ssh-auth +type: kubernetes.io/ssh-auth +data: + # the data is abbreviated in this example + ssh-privatekey: | + UG91cmluZzYlRW1vdGljb24lU2N1YmE= \ No newline at end of file diff --git a/content/en/examples/secret/tls-auth-secret.yaml b/content/en/examples/secret/tls-auth-secret.yaml new file mode 100644 index 0000000000000..1e14b8e00ac47 --- /dev/null +++ b/content/en/examples/secret/tls-auth-secret.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-tls +type: kubernetes.io/tls +data: + # values are base64 encoded, which obscures them but does NOT provide + # any useful level of confidentiality + tls.crt: | + LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNVakNDQWJzQ0FnMytNQTBHQ1NxR1NJYjNE + UUVCQlFVQU1JR2JNUXN3Q1FZRFZRUUdFd0pLVURFT01Bd0cKQTFVRUNCTUZWRzlyZVc4eEVEQU9C + Z05WQkFjVEIwTm9kVzh0YTNVeEVUQVBCZ05WQkFvVENFWnlZVzVyTkVSRQpNUmd3RmdZRFZRUUxF + dzlYWldKRFpYSjBJRk4xY0hCdmNuUXhHREFXQmdOVkJBTVREMFp5WVc1ck5FUkVJRmRsCllpQkRR + VEVqTUNFR0NTcUdTSWIzRFFFSkFSWVVjM1Z3Y0c5eWRFQm1jbUZ1YXpSa1pDNWpiMjB3SGhjTk1U + TXcKTVRFeE1EUTFNVE01V2hjTk1UZ3dNVEV3TURRMU1UTTVXakJMTVFzd0NRWURWUVFHREFKS1VE + RVBNQTBHQTFVRQpDQXdHWEZSdmEzbHZNUkV3RHdZRFZRUUtEQWhHY21GdWF6UkVSREVZTUJZR0Ex + VUVBd3dQZDNkM0xtVjRZVzF3CmJHVXVZMjl0TUlHYU1BMEdDU3FHU0liM0RRRUJBUVVBQTRHSUFE + Q0JoQUo5WThFaUhmeHhNL25PbjJTbkkxWHgKRHdPdEJEVDFKRjBReTliMVlKanV2YjdjaTEwZjVN + Vm1UQllqMUZTVWZNOU1vejJDVVFZdW4yRFljV29IcFA4ZQpqSG1BUFVrNVd5cDJRN1ArMjh1bklI + QkphVGZlQ09PekZSUFY2MEdTWWUzNmFScG04L3dVVm16eGFLOGtCOWVaCmhPN3F1TjdtSWQxL2pW + cTNKODhDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUVVGQUFPQmdRQU1meTQzeE15OHh3QTUKVjF2T2NS + OEtyNWNaSXdtbFhCUU8xeFEzazlxSGtyNFlUY1JxTVQ5WjVKTm1rWHYxK2VSaGcwTi9WMW5NUTRZ + 
RgpnWXcxbnlESnBnOTduZUV4VzQyeXVlMFlHSDYyV1hYUUhyOVNVREgrRlowVnQvRGZsdklVTWRj + UUFEZjM4aU9zCjlQbG1kb3YrcE0vNCs5a1h5aDhSUEkzZXZ6OS9NQT09Ci0tLS0tRU5EIENFUlRJ + RklDQVRFLS0tLS0K + # In this example, the key data is not a real PEM-encoded private key + tls.key: | + RXhhbXBsZSBkYXRhIGZvciB0aGUgVExTIGNydCBmaWVsZA== \ No newline at end of file diff --git a/content/en/releases/_index.md b/content/en/releases/_index.md index 97d9004313158..3748f9231b1a3 100644 --- a/content/en/releases/_index.md +++ b/content/en/releases/_index.md @@ -6,13 +6,17 @@ layout: release-info notoc: true --- - -The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support. +The Kubernetes project maintains release branches for the most recent three minor releases +({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). +Kubernetes 1.19 and newer receive +[approximately 1 year of patch support](/releases/patch-releases/#support-period). +Kubernetes 1.18 and older received approximately 9 months of patch support. Kubernetes versions are expressed as **x.y.z**, -where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. +where **x** is the major version, **y** is the minor version, and **z** is the patch version, +following [Semantic Versioning](https://semver.org/) terminology. More information in the [version skew policy](/releases/version-skew-policy/) document. @@ -24,6 +28,7 @@ More information in the [version skew policy](/releases/version-skew-policy/) do ## Upcoming Release -Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release! +Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) +for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release! ## Helpful Resources diff --git a/content/en/releases/download.md b/content/en/releases/download.md index 0cee6e3556afb..c728ec015f9a8 100644 --- a/content/en/releases/download.md +++ b/content/en/releases/download.md @@ -43,6 +43,7 @@ You can fetch that list using: ```shell curl -Ls "https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release" | grep "SPDXID: SPDXRef-Package-registry.k8s.io" | grep -v sha256 | cut -d- -f3- | sed 's/-/\//' | sed 's/-v1/:v1/' ``` + For Kubernetes v{{< skew currentVersion >}}, the only kind of code artifact that you can verify integrity for is a container image, using the experimental signing support. @@ -50,11 +51,10 @@ signing support. To manually verify signed container images of Kubernetes core components, refer to [Verify Signed Container Images](/docs/tasks/administer-cluster/verify-signed-artifacts). - - ## Binaries -Find links to download Kubernetes components (and their checksums) in the [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files. +Find links to download Kubernetes components (and their checksums) in the +[CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files. 
Alternately, use [downloadkubernetes.com](https://www.downloadkubernetes.com/) to filter by version and architecture. diff --git a/content/en/releases/notes.md b/content/en/releases/notes.md index 1bb60c810627c..bcda7d0a04437 100644 --- a/content/en/releases/notes.md +++ b/content/en/releases/notes.md @@ -8,6 +8,10 @@ sitemap: priority: 0.5 --- -Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) that matches your Kubernetes version. View the changelog for {{< skew currentVersionAddMinor 0 >}} on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-{{< skew currentVersionAddMinor 0 >}}.md). +Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) +that matches your Kubernetes version. View the changelog for {{< skew currentVersionAddMinor 0 >}} on +[GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-{{< skew currentVersionAddMinor 0 >}}.md). -Alternately, release notes can be searched and filtered online at: [relnotes.k8s.io](https://relnotes.k8s.io). View filtered release notes for {{< skew currentVersionAddMinor 0 >}} on [relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions={{< skew currentVersionAddMinor 0 >}}.0). +Alternately, release notes can be searched and filtered online at: [relnotes.k8s.io](https://relnotes.k8s.io). +View filtered release notes for {{< skew currentVersionAddMinor 0 >}} on +[relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions={{< skew currentVersionAddMinor 0 >}}.0). diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index 08481d28b0e86..5ec1f86bdd76c 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,9 +78,11 @@ releases may also occur in between these. | Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| October 2023 | 2023-10-13 | 2023-10-18 | | November 2023 | N/A | N/A | | December 2023 | 2023-12-01 | 2023-12-06 | +| January 2024 | 2024-01-12 | 2024-01-17 | +| February 2024 | 2024-02-09 | 2024-02-14 | +| March 2024 | 2024-03-08 | 2024-03-13 | **Note:** Due to overlap with KubeCon NA 2023 and the resulting lack of availability of Release Managers, it has been decided to skip patch releases diff --git a/content/en/releases/release-managers.md b/content/en/releases/release-managers.md index 34fab5552f9b5..4fb1ab78c4f88 100644 --- a/content/en/releases/release-managers.md +++ b/content/en/releases/release-managers.md @@ -16,7 +16,6 @@ The responsibilities of each role are described below. - [Becoming a Release Manager](#becoming-a-release-manager) - [Release Manager Associates](#release-manager-associates) - [Becoming a Release Manager Associate](#becoming-a-release-manager-associate) -- [Build Admins](#build-admins) - [SIG Release Leads](#sig-release-leads) - [Chairs](#chairs) - [Technical Leads](#technical-leads) @@ -25,13 +24,16 @@ The responsibilities of each role are described below. 
| Mailing List | Slack | Visibility | Usage | Membership | | --- | --- | --- | --- | --- | -| [release-managers@kubernetes.io](mailto:release-managers@kubernetes.io) | [#release-management](https://kubernetes.slack.com/messages/CJH2GBF7Y) (channel) / @release-managers (user group) | Public | Public discussion for Release Managers | All Release Managers (including Associates, Build Admins, and SIG Chairs) | +| [release-managers@kubernetes.io](mailto:release-managers@kubernetes.io) | [#release-management](https://kubernetes.slack.com/messages/CJH2GBF7Y) (channel) / @release-managers (user group) | Public | Public discussion for Release Managers | All Release Managers (including Associates, and SIG Chairs) | | [release-managers-private@kubernetes.io](mailto:release-managers-private@kubernetes.io) | N/A | Private | Private discussion for privileged Release Managers | Release Managers, SIG Release leadership | | [security-release-team@kubernetes.io](mailto:security-release-team@kubernetes.io) | [#security-release-team](https://kubernetes.slack.com/archives/G0162T1RYHG) (channel) / @security-rel-team (user group) | Private | Security release coordination with the Security Response Committee | [security-discuss-private@kubernetes.io](mailto:security-discuss-private@kubernetes.io), [release-managers-private@kubernetes.io](mailto:release-managers-private@kubernetes.io) | ### Security Embargo Policy -Some information about releases is subject to embargo and we have defined policy about how those embargoes are set. Please refer to the [Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy) for more information. +Some information about releases is subject to embargo and we have defined policy about +how those embargoes are set. Please refer to the +[Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy) +for more information. ## Handbooks @@ -39,7 +41,6 @@ Some information about releases is subject to embargo and we have defined policy - [Patch Release Team][handbook-patch-release] - [Branch Managers][handbook-branch-mgmt] -- [Build Admins][handbook-packaging] ## Release Managers @@ -155,22 +156,6 @@ Contributors can become Associates by demonstrating the following: - these efforts require interacting and pairing with Release Managers and Associates -## Build Admins - -Build Admins are (currently) Google employees with the requisite access to -Google build systems/tooling to publish deb/rpm packages on behalf of the -Kubernetes project. 
They are responsible for: - -- Building, signing, and publishing the deb/rpm packages -- Being the interlock with Release Managers (and Associates) on the final steps -of each minor (1.Y) and patch (1.Y.Z) release - -GitHub team: [@kubernetes/build-admins](https://github.com/orgs/kubernetes/teams/build-admins) - -- Aaron Crickenberger ([@spiffxp](https://github.com/spiffxp)) -- Ben Kazemi ([@BenjaminKazemi](https://github.com/BenjaminKazemi)) -- Grant McCloskey ([@MushuEE](https://github.com/MushuEE)) - ## SIG Release Leads SIG Release Chairs and Technical Leads are responsible for: @@ -208,7 +193,6 @@ Example: [1.15 Release Team](https://git.k8s.io/sig-release/releases/release-1.1 [community-membership]: https://git.k8s.io/community/community-membership.md#member [handbook-branch-mgmt]: https://git.k8s.io/sig-release/release-engineering/role-handbooks/branch-manager.md -[handbook-packaging]: https://git.k8s.io/release/hack/rapture/README.md [handbook-patch-release]: https://git.k8s.io/sig-release/release-engineering/role-handbooks/patch-release-team.md [k-sig-release-releases]: https://git.k8s.io/sig-release/releases [patches]: /releases/patch-releases/ diff --git a/content/en/releases/version-skew-policy.md b/content/en/releases/version-skew-policy.md index 7a9f1c753a1b2..7031402e5c356 100644 --- a/content/en/releases/version-skew-policy.md +++ b/content/en/releases/version-skew-policy.md @@ -20,13 +20,19 @@ Specific cluster deployment tools may place additional restrictions on version s ## Supported versions -Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. -For more information, see [Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning). +Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, +**y** is the minor version, and **z** is the patch version, following +[Semantic Versioning](https://semver.org/) terminology. For more information, see +[Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning). -The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support. +The Kubernetes project maintains release branches for the most recent three minor releases +({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). +Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). +Kubernetes 1.18 and older received approximately 9 months of patch support. -Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. -Patch releases are cut from those branches at a [regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required. +Applicable fixes, including security fixes, may be backported to those three release branches, +depending on severity and feasibility. 
Patch releases are cut from those branches at a +[regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required. The [Release Managers](/releases/release-managers/) group owns this decision. @@ -36,7 +42,8 @@ For more information, see the Kubernetes [patch releases](/releases/patch-releas ### kube-apiserver -In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/), the newest and oldest `kube-apiserver` instances must be within one minor version. +In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/), +the newest and oldest `kube-apiserver` instances must be within one minor version. Example: @@ -51,7 +58,8 @@ Example: Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** +* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, + **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kubelet` versions. @@ -60,18 +68,24 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) +* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, + and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that + would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kube-proxy * `kube-proxy` must not be newer than `kube-apiserver`. -* `kube-proxy` may be up to three minor versions older than `kube-apiserver` (`kube-proxy` < 1.25 may only be up to two minor versions older than `kube-apiserver`). -* `kube-proxy` may be up to three minor versions older or newer than the `kubelet` instance it runs alongside (`kube-proxy` < 1.25 may only be up to two minor versions older or newer than the `kubelet` instance it runs alongside). +* `kube-proxy` may be up to three minor versions older than `kube-apiserver` + (`kube-proxy` < 1.25 may only be up to two minor versions older than `kube-apiserver`). +* `kube-proxy` may be up to three minor versions older or newer than the `kubelet` instance + it runs alongside (`kube-proxy` < 1.25 may only be up to two minor versions older or newer + than the `kubelet` instance it runs alongside). 
Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kube-proxy` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** +* `kube-proxy` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, + **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kube-proxy` versions. @@ -80,26 +94,36 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kube-proxy` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) +* `kube-proxy` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, + and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would + be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kube-controller-manager, kube-scheduler, and cloud-controller-manager -`kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the `kube-apiserver` instances they communicate with. They are expected to match the `kube-apiserver` minor version, but may be up to one minor version older (to allow live upgrades). +`kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the +`kube-apiserver` instances they communicate with. They are expected to match the `kube-apiserver` minor version, +but may be up to one minor version older (to allow live upgrades). Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported + at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** {{< note >}} -If version skew exists between `kube-apiserver` instances in an HA cluster, and these components can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), this narrows the allowed versions of these components. +If version skew exists between `kube-apiserver` instances in an HA cluster, and these components +can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), +this narrows the allowed versions of these components. 
{{< /note >}} Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer that can route to any `kube-apiserver` instance -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer + that can route to any `kube-apiserver` instance +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at + **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported + because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kubectl @@ -108,7 +132,8 @@ Example: Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**, and **{{< skew currentVersionAddMinor -1 >}}** +* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**, + and **{{< skew currentVersionAddMinor -1 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the supported `kubectl` versions. @@ -117,21 +142,24 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) +* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** + (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) ## Supported component upgrade order -The supported version skew between components has implications on the order in which components must be upgraded. -This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**. +The supported version skew between components has implications on the order +in which components must be upgraded. This section describes the order in +which components must be upgraded to transition an existing cluster from version +**{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**. Optionally, when preparing to upgrade, the Kubernetes project recommends that you do the following to benefit from as many regression and bug fixes as -possible during your upgrade: +possible during your upgrade: -* Ensure that components are on the most recent patch version of your current - minor version. -* Upgrade components to the most recent patch version of the target minor - version. +* Ensure that components are on the most recent patch version of your current + minor version. +* Upgrade components to the most recent patch version of the target minor + version. 
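Before stepping through the per-component order below, it can help to confirm what is actually running. A minimal sketch, assuming working `kubectl` access to the cluster (output formats vary slightly between client versions):

```bash
# Report the kubectl client version and the kube-apiserver version it talks to.
kubectl version

# List nodes together with the kubelet version each one reports,
# to check they are within the allowed skew of the API server.
kubectl get nodes -o wide
```

Both commands are read-only; they do not change cluster state.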
For example, if you're running version {{< skew currentVersionAddMinor -1 >}}, ensure that you're on the most recent patch version. Then, upgrade to the most @@ -142,12 +170,19 @@ recent patch version of {{< skew currentVersion >}}. Pre-requisites: * In a single-instance cluster, the existing `kube-apiserver` instance is **{{< skew currentVersionAddMinor -1 >}}** -* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) -* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) -* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) +* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or + **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) +* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that + communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}** + (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) +* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}** + (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) * Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them: - * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew currentVersion >}}** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) - * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **{{< skew currentVersion >}}** + * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include + any new versions of REST resources added in **{{< skew currentVersion >}}** + (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) + * The webhooks are able to handle any new versions of REST resources that will be sent to them, + and any new fields added to existing versions in **{{< skew currentVersion >}}** Upgrade `kube-apiserver` to **{{< skew currentVersion >}}** @@ -161,7 +196,9 @@ require `kube-apiserver` to not skip minor versions when upgrading, even in sing Pre-requisites: -* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the
cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) +* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}** + (in HA clusters in which these control plane components can communicate with any `kube-apiserver` + instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **{{< skew currentVersion >}}**. There is no @@ -175,7 +212,8 @@ Pre-requisites: * The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew currentVersion >}}** -Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**) +Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at +**{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**) {{< note >}} Before performing a minor version `kubelet` upgrade, [drain](/docs/tasks/administer-cluster/safely-drain-node/) pods from that node. @@ -183,7 +221,8 @@ In-place minor version `kubelet` upgrades are not supported. {{< /note >}} {{< warning >}} -Running a cluster with `kubelet` instances that are persistently three minor versions behind `kube-apiserver` means they must be upgraded before the control plane can be upgraded. +Running a cluster with `kubelet` instances that are persistently three minor versions behind +`kube-apiserver` means they must be upgraded before the control plane can be upgraded. {{< /warning >}} ### kube-proxy @@ -192,8 +231,11 @@ Pre-requisites: * The `kube-apiserver` instances `kube-proxy` communicates with are at **{{< skew currentVersion >}}** -Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**) +Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** +(or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, +or **{{< skew currentVersionAddMinor -3 >}}**) {{< warning >}} -Running a cluster with `kube-proxy` instances that are persistently three minor versions behind `kube-apiserver` means they must be upgraded before the control plane can be upgraded. +Running a cluster with `kube-proxy` instances that are persistently three minor versions behind +`kube-apiserver` means they must be upgraded before the control plane can be upgraded. {{< /warning >}} diff --git a/content/en/training/_index.html index 74880486a6bf2..28fe46cfc0c43 100644 --- a/content/en/training/_index.html +++ b/content/en/training/_index.html @@ -14,17 +22,22 @@

Build your cloud native career

Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and our training partners let you invest in your career, learn Kubernetes, and make your cloud native projects successful.

-
- -
-
- -
-
- -
-
- +
+
+ +
+
+ +
+
+ +
+
+ +
+
+ +
@@ -93,6 +98,16 @@

Go to Certification +
+ +
+ Kubernetes and Cloud Native Security Associate (KCSA) +
+

The KCSA is a pre-professional certification designed for candidates interested in advancing to the professional level through a demonstrated understanding of foundational knowledge and skills of security technologies in the cloud native ecosystem.

+

A certified KCSA will confirm an understanding of the baseline security configuration of Kubernetes clusters to meet compliance objectives.

+
+ Go to Certification +
Certified Kubernetes Application Developer (CKAD) @@ -106,6 +121,7 @@
Certified Kubernetes Administrator (CKA)
+

The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.

A certified Kubernetes administrator has demonstrated the ability to do basic installation as well as configuring and managing production-grade Kubernetes clusters.


@@ -115,6 +131,7 @@
Certified Kubernetes Security Specialist (CKS)
+

The Certified Kubernetes Security Specialist program provides assurance that the holder is comfortable and competent with a broad range of best practices. CKS certification covers skills for securing container-based applications and Kubernetes platforms during build, deployment and runtime.

Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.


diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md new file mode 100644 index 0000000000000..1540d5df97b54 --- /dev/null +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -0,0 +1,119 @@ +--- +reviewers: + - ramrodo + - krol3 + - electrocucaracha +title: Volúmenes proyectados +content_type: concept +weight: 21 # just after persistent volumes +--- + + + +Este documento describe los _volúmenes proyectados_ en Kubernetes. Necesita estar familiarizado con [volúmenes](/es/docs/concepts/storage/volumes/). + + + +## Introducción + +Un volumen `proyectado` asigna varias fuentes de volúmenes existentes al mismo directorio. + +Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen: + +- [`secret`](/es/docs/concepts/storage/volumes/#secret) +- [`downwardAPI`](/es/docs/concepts/storage/volumes/#downwardapi) +- [`configMap`](/es/docs/concepts/storage/volumes/#configmap) +- [`serviceAccountToken`](#serviceaccounttoken) + +Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles, +vea el documento de diseño [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md). + +### Configuración de ejemplo con un secreto, una downwardAPI y una configMap {#example-configuration-secret-downwardapi-configmap} + +{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} + +### Configuración de ejemplo: secretos con un modo de permiso no predeterminado establecido {#example-configuration-secrets-nondefault-permission-mode} + +{{% code_sample file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}} + +Cada fuente de volumen proyectada aparece en la especificación bajo `sources`. Los parámetros son casi los mismos con dos excepciones: + +- Para los secretos, el campo `secretName` se ha cambiado a `name` para que sea coherente con el nombre de ConfigMap. +- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual. + +## Volúmenes proyectados de serviceAccountToken {#serviceaccounttoken} + +Puede inyectar el token para la [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) actual +en un Pod en una ruta especificada. Por ejemplo: + +{{% code_sample file="pods/storage/projected-service-account-token.yaml" %}} + +El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado. +Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/). + +El campo `audience` contiene la audiencia prevista del +token. Un destinatario del token debe identificarse con un identificador especificado en la audiencia del token y, de lo contrario, debe rechazar el token. Este campo es opcional y de forma predeterminada es el identificador del servidor API. + +The `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos). +Un administrador +también puede limitar su valor máximo especificando la opción `--service-account-max-token-expiration` +para el servidor API. 
El campo `path` especifica una ruta relativa al punto de montaje del volumen proyectado. + +{{< note >}} +Un contenedor que utiliza una fuente de volumen proyectada como montaje de volumen [`subPath`](/docs/concepts/storage/volumes/#using-subpath) +no recibirá actualizaciones para esas fuentes de volumen. +{{< /note >}} + +## Interacciones SecurityContext + +La [propuesta](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos. + +### Linux + +En los pods de Linux que tienen un volumen proyectado y `RunAsUser` configurado en el Pod +[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context), +los archivos proyectados tienen la conjunto de propiedad correcto, incluida la propiedad del usuario del contenedor. + +Cuando todos los contenedores en un pod tienen el mismo `runAsUser` configurado en su +[`PodSecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +o el contenedor +[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1), +entonces el kubelet garantiza que el contenido del volumen `serviceAccountToken` sea propiedad de ese usuario y que el archivo token tenga su modo de permiso establecido en `0600`. + +{{< note >}} +{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}} +agregado a un pod después de su creación _no_ cambia los permisos de volumen que se establecieron cuando se creó el pod. + +Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en `0600` porque todos los demás contenedores en el Pod tienen el mismo `runAsUser`, los contenedores efímeros deben usar el mismo `runAsUser` para poder leer el token. +{{< /note >}} + +### Windows + +En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows. +Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM). +Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando. +Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual. +Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`. 
+ +De forma predeterminada, los archivos proyectados tendrán la siguiente propiedad, como se muestra en un archivo de volumen proyectado de ejemplo: + +```powershell +PS C:\> Get-Acl C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt | Format-List + +Path : Microsoft.PowerShell.Core\FileSystem::C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt +Owner : BUILTIN\Administrators +Group : NT AUTHORITY\SYSTEM +Access : NT AUTHORITY\SYSTEM Allow FullControl + BUILTIN\Administrators Allow FullControl + BUILTIN\Users Allow ReadAndExecute, Synchronize +Audit : +Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU) +``` + +Esto implica que todos los usuarios administradores como `ContainerAdministrator` tendrán acceso de lectura, escritura y ejecución, mientras que los usuarios que no sean administradores tendrán acceso de lectura y ejecución. + +{{< note >}} +En general, se desaconseja otorgar acceso al contenedor al host, ya que puede abrir la puerta a posibles vulnerabilidades de seguridad. + +Crear un Pod de Windows con `RunAsUser` en su `SecurityContext` dará como resultado que el Pod quede atascado en `ContainerCreating` para siempre. Por lo tanto, se recomienda no utilizar la opción `RunAsUser` exclusiva de Linux con Windows Pods. +{{< /note >}} diff --git a/content/es/docs/concepts/workloads/controllers/statefulset.md b/content/es/docs/concepts/workloads/controllers/statefulset.md index 95e86a7a3f674..7ed24d2b7edaf 100644 --- a/content/es/docs/concepts/workloads/controllers/statefulset.md +++ b/content/es/docs/concepts/workloads/controllers/statefulset.md @@ -153,7 +153,7 @@ El valor de Cluster Domain se pondrá a `cluster.local` a menos que Kubernetes crea un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) para cada VolumeClaimTemplate. En el ejemplo de nginx de arriba, cada Pod recibirá un único PersistentVolume -con una StorageClass igual a `my-storage-class` y 1 Gib de almacenamiento provisionado. Si no se indica ninguna StorageClass, +con una StorageClass igual a `my-storage-class` y 1 GiB de almacenamiento provisionado. Si no se indica ninguna StorageClass, entonces se usa la StorageClass por defecto. Cuando un Pod se (re)programa en un nodo, sus `volumeMounts` montan los PersistentVolumes asociados con sus PersistentVolume Claims. 
Nótese que los PersistentVolumes asociados con los diff --git a/content/es/docs/tasks/tools/included/install-kubectl-linux.md b/content/es/docs/tasks/tools/included/install-kubectl-linux.md index 10b756f079167..5ff454b8f9928 100644 --- a/content/es/docs/tasks/tools/included/install-kubectl-linux.md +++ b/content/es/docs/tasks/tools/included/install-kubectl-linux.md @@ -45,7 +45,7 @@ Por ejemplo, para descargar la versión {{< skew currentPatchVersion >}} en Linu Descargue el archivo de comprobación de kubectl: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" ``` Valide el binario kubectl con el archivo de comprobación: @@ -199,7 +199,7 @@ A continuación, se muestran los procedimientos para configurar el autocompletad Descargue el archivo de comprobación kubectl-convert: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" ``` Valide el binario kubectl-convert con el archivo de comprobación: diff --git a/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml b/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml new file mode 100644 index 0000000000000..453dc08c0c7d9 --- /dev/null +++ b/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml @@ -0,0 +1,35 @@ +apiVersion: v1 +kind: Pod +metadata: + name: volume-test +spec: + containers: + - name: container-test + image: busybox:1.28 + volumeMounts: + - name: all-in-one + mountPath: "/projected-volume" + readOnly: true + volumes: + - name: all-in-one + projected: + sources: + - secret: + name: mysecret + items: + - key: username + path: my-group/my-username + - downwardAPI: + items: + - path: "labels" + fieldRef: + fieldPath: metadata.labels + - path: "cpu_limit" + resourceFieldRef: + containerName: container-test + resource: limits.cpu + - configMap: + name: myconfigmap + items: + - key: config + path: my-group/my-config diff --git a/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml b/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml new file mode 100644 index 0000000000000..b921fd93c5833 --- /dev/null +++ b/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: Pod +metadata: + name: volume-test +spec: + containers: + - name: container-test + image: busybox:1.28 + volumeMounts: + - name: all-in-one + mountPath: "/projected-volume" + readOnly: true + volumes: + - name: all-in-one + projected: + sources: + - secret: + name: mysecret + items: + - key: username + path: my-group/my-username + - secret: + name: mysecret2 + items: + - key: password + path: my-group/my-password + mode: 511 diff --git a/content/es/examples/pods/storage/projected-service-account-token.yaml b/content/es/examples/pods/storage/projected-service-account-token.yaml new file mode 100644 index 0000000000000..cc307659a78ef --- /dev/null +++ b/content/es/examples/pods/storage/projected-service-account-token.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Pod +metadata: + name: sa-token-test +spec: + containers: + - name: container-test + image: busybox:1.28 + volumeMounts: + - name: token-vol 
+ mountPath: "/service-account" + readOnly: true + serviceAccountName: default + volumes: + - name: token-vol + projected: + sources: + - serviceAccountToken: + audience: api + expirationSeconds: 3600 + path: token diff --git a/content/fr/docs/tasks/debug-application-cluster/get-shell-running-container.md b/content/fr/docs/tasks/debug-application-cluster/get-shell-running-container.md index 1973bcd1bb89b..a853ad6723e18 100644 --- a/content/fr/docs/tasks/debug-application-cluster/get-shell-running-container.md +++ b/content/fr/docs/tasks/debug-application-cluster/get-shell-running-container.md @@ -105,7 +105,7 @@ Lorsque vous avez terminé avec votre shell, entrez `exit`. Dans une fenêtre de commande ordinaire, pas votre shell, répertoriez les variables d'environnement dans le conteneur en cours d'exécution: ```shell -kubectl exec shell-demo env +kubectl exec shell-demo -- env ``` Essayez d'exécuter d'autres commandes. diff --git a/content/id/docs/concepts/cluster-administration/addons.md b/content/id/docs/concepts/cluster-administration/addons.md index a39668ee2472b..79876d0efa977 100644 --- a/content/id/docs/concepts/cluster-administration/addons.md +++ b/content/id/docs/concepts/cluster-administration/addons.md @@ -25,7 +25,7 @@ Laman ini akan menjabarkan beberapa *add-ons* yang tersedia serta tautan instruk * [Canal](https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel) menggabungkan Flannel dan Calico, menyediakan jaringan serta *policy* jaringan. * [Cilium](https://github.com/cilium/cilium) merupakan *plugin* jaringan L3 dan *policy* jaringan yang dapat menjalankan *policy* HTTP/API/L7 secara transparan. Mendukung mode *routing* maupun *overlay/encapsulation*. * [CNI-Genie](https://github.com/cni-genie/CNI-Genie) memungkinkan Kubernetes agar dapat terkoneksi dengan beragam *plugin* CNI, seperti Calico, Canal, Flannel, Romana, atau Weave dengan mulus. -* [Contiv](http://contiv.github.io) menyediakan jaringan yang dapat dikonfigurasi (*native* L3 menggunakan BGP, *overlay* menggunakan vxlan, klasik L2, dan Cisco-SDN/ACI) untuk berbagai penggunaan serta *policy framework* yang kaya dan beragam. Proyek Contiv merupakan proyek [open source](http://github.com/contiv). Laman [instalasi](http://github.com/contiv/install) ini akan menjabarkan cara instalasi, baik untuk klaster dengan kubeadm maupun non-kubeadm. +* [Contiv](https://contivpp.io) menyediakan jaringan yang dapat dikonfigurasi (*native* L3 menggunakan BGP, *overlay* menggunakan vxlan, klasik L2, dan Cisco-SDN/ACI) untuk berbagai penggunaan serta *policy framework* yang kaya dan beragam. Proyek Contiv merupakan proyek [open source](https://github.com/contiv). Laman [instalasi](https://github.com/contiv/install) ini akan menjabarkan cara instalasi, baik untuk klaster dengan kubeadm maupun non-kubeadm. * [Contrail](http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), yang berbasis dari [Tungsten Fabric](https://tungsten.io), merupakan sebuah proyek *open source* yang menyediakan virtualisasi jaringan *multi-cloud* serta platform manajemen *policy*. Contrail dan Tungsten Fabric terintegrasi dengan sistem orkestrasi lainnya seperti Kubernetes, OpenShift, OpenStack dan Mesos, serta menyediakan mode isolasi untuk mesin virtual (VM), kontainer/pod dan *bare metal*. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) merupakan penyedia jaringan *overlay* yang dapat digunakan pada Kubernetes. 
* [Knitter](https://github.com/ZTE/Knitter/) merupakan solusi jaringan yang mendukung multipel jaringan pada Kubernetes. diff --git a/content/id/docs/concepts/cluster-administration/logging.md b/content/id/docs/concepts/cluster-administration/logging.md index e00745d1d979f..33266635c333d 100644 --- a/content/id/docs/concepts/cluster-administration/logging.md +++ b/content/id/docs/concepts/cluster-administration/logging.md @@ -21,7 +21,7 @@ Arsitektur _logging_ pada level klaster yang akan dijelaskan berikut mengasumsik Pada bagian ini, kamu dapat melihat contoh tentang dasar _logging_ pada Kubernetes yang mengeluarkan data pada _standard output_. Demonstrasi berikut ini menggunakan sebuah [spesifikasi pod](/examples/debug/counter-pod.yaml) dengan kontainer yang akan menuliskan beberapa teks ke _standard output_ tiap detik. -{{< codenew file="debug/counter-pod.yaml" >}} +{{% codenew file="debug/counter-pod.yaml" %}} Untuk menjalankan pod ini, gunakan perintah berikut: @@ -126,13 +126,13 @@ Dengan menggunakan cara ini kamu dapat memisahkan aliran log dari bagian-bagian Sebagai contoh, sebuah pod berjalan pada satu kontainer tunggal, dan kontainer menuliskan ke dua berkas log yang berbeda, dengan dua format yang berbeda pula. Berikut ini _file_ konfigurasi untuk Pod: -{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}} +{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}} Hal ini akan menyulitkan untuk mengeluarkan log dalam format yang berbeda pada aliran log yang sama, meskipun kamu dapat me-_redirect_ keduanya ke `stdout` dari kontainer. Sebagai gantinya, kamu dapat menggunakan dua buah kontainer _sidecar_. Tiap kontainer _sidecar_ dapat membaca suatu berkas log tertentu dari _shared volume_ kemudian mengarahkan log ke `stdout`-nya sendiri. Berikut _file_ konfigurasi untuk pod yang memiliki dua buah kontainer _sidecard_: -{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}} +{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}} Saat kamu menjalankan pod ini, kamu dapat mengakses tiap aliran log secara terpisah dengan menjalankan perintah berikut: @@ -175,7 +175,7 @@ Menggunakan agen _logging_ di dalam kontainer _sidecar_ dapat berakibat pengguna Sebagai contoh, kamu dapat menggunakan [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/), yang menggunakan fluentd sebagai agen _logging_. Berikut ini dua _file_ konfigurasi yang dapat kamu pakai untuk mengimplementasikan cara ini. _File_ yang pertama berisi sebuah [ConfigMap](/id/docs/tasks/configure-pod-container/configure-pod-configmap/) untuk mengonfigurasi fluentd. -{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}} +{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}} {{< note >}} Konfigurasi fluentd berada diluar cakupan artikel ini. Untuk informasi lebih lanjut tentang cara mengonfigurasi fluentd, silakan lihat [dokumentasi resmi fluentd ](http://docs.fluentd.org/). @@ -183,7 +183,7 @@ Konfigurasi fluentd berada diluar cakupan artikel ini. Untuk informasi lebih lan _File_ yang kedua mendeskripsikan sebuah pod yang memiliki kontainer _sidecar_ yang menjalankan fluentd. Pod ini melakukan _mount_ sebuah volume yang akan digunakan fluentd untuk mengambil data konfigurasinya. -{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}} +{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}} Setelah beberapa saat, kamu akan mendapati pesan log pada _interface_ Stackdriver. 
diff --git a/content/id/docs/concepts/cluster-administration/manage-deployment.md b/content/id/docs/concepts/cluster-administration/manage-deployment.md index 4bdc7f790e6dc..12525e39e1754 100644 --- a/content/id/docs/concepts/cluster-administration/manage-deployment.md +++ b/content/id/docs/concepts/cluster-administration/manage-deployment.md @@ -17,7 +17,7 @@ Kamu telah melakukan _deploy_ pada aplikasimu dan mengeksposnya melalui sebuah _ Banyak aplikasi memerlukan beberapa _resource_, seperti Deployment dan Service. Pengelolaan beberapa _resource_ dapat disederhanakan dengan mengelompokkannya dalam berkas yang sama (dengan pemisah `---` pada YAML). Contohnya: -{{< codenew file="application/nginx-app.yaml" >}} +{{% codenew file="application/nginx-app.yaml" %}} Beberapa _resource_ dapat dibuat seolah-olah satu _resource_: diff --git a/content/id/docs/concepts/cluster-administration/networking.md b/content/id/docs/concepts/cluster-administration/networking.md index b300c29e98bc0..d587349dcb977 100644 --- a/content/id/docs/concepts/cluster-administration/networking.md +++ b/content/id/docs/concepts/cluster-administration/networking.md @@ -105,7 +105,7 @@ Plugin ini dirancang untuk secara langsung mengkonfigurasi dan _deploy_ dalam VP ### Contiv -[Contiv](https://github.com/contiv/netplugin) menyediakan jaringan yang dapat dikonfigurasi (_native_ l3 menggunakan BGP, _overlay_ menggunakan vxlan, classic l2, atau Cisco-SDN / ACI) untuk berbagai kasus penggunaan. [Contiv](http://contiv.io) semuanya open sourced. +[Contiv](https://github.com/contiv/netplugin) menyediakan jaringan yang dapat dikonfigurasi (_native_ l3 menggunakan BGP, _overlay_ menggunakan vxlan, classic l2, atau Cisco-SDN / ACI) untuk berbagai kasus penggunaan. [Contiv](https://contivpp.io) semuanya open sourced. ### Contrail / Tungsten Fabric diff --git a/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md index aa702827b9ad4..5195243acb70a 100644 --- a/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -64,7 +64,7 @@ akan mengubah informasi yang kamu berikan ke dalam format JSON ketika melakukan Berikut merupakan contoh _file_ `.yaml` yang menunjukkan _field_ dan _spec_ objek untuk _Deployment_: -{{< codenew file="application/deployment.yaml" >}} +{{% codenew file="application/deployment.yaml" %}} Salah satu cara untuk membuat _Deployment_ menggunakan _file_ `.yaml` seperti yang dijabarkan di atas adalah dengan menggunakan perintah diff --git a/content/id/docs/concepts/policy/pod-security-policy.md b/content/id/docs/concepts/policy/pod-security-policy.md index 3646246150e83..d89e6ca7398f3 100644 --- a/content/id/docs/concepts/policy/pod-security-policy.md +++ b/content/id/docs/concepts/policy/pod-security-policy.md @@ -146,7 +146,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n Beri definisi objek contoh PodSecurityPolicy dalam sebuah berkas. Ini adalah kebijakan yang mencegah pembuatan Pod-Pod yang _privileged_. 
-{{< codenew file="policy/example-psp.yaml" >}} +{{% codenew file="policy/example-psp.yaml" %}} Dan buatlah PodSecurityPolicy tersebut dengan `kubectl`: @@ -297,11 +297,11 @@ podsecuritypolicy "example" deleted Berikut adalah kebijakan dengan batasan paling sedikit yang dapat kamu buat, ekuivalen dengan tidak menggunakan _admission controller_ Pod Security Policy: -{{< codenew file="policy/privileged-psp.yaml" >}} +{{% codenew file="policy/privileged-psp.yaml" %}} Berikut adalah sebuah contoh kebijakan yang restriktif yang mengharuskan pengguna-pengguna untuk berjalan sebagai pengguna yang _unprivileged_, memblokir kemungkinan eskalasi menjadi _root_, dan mengharuskan penggunaan beberapa mekanisme keamanan. -{{< codenew file="policy/restricted-psp.yaml" >}} +{{% codenew file="policy/restricted-psp.yaml" %}} ## Referensi Kebijakan diff --git a/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md index 4f0f838db5204..6139b0b2d00f0 100644 --- a/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -52,7 +52,7 @@ spec: Kemudian tambahkan sebuah `nodeSelector` seperti berikut: -{{< codenew file="pods/pod-nginx.yaml" >}} +{{% codenew file="pods/pod-nginx.yaml" %}} Ketika kamu menjalankan perintah `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`, pod tersebut akan dijadwalkan pada node yang memiliki label yang dirinci. Kamu dapat memastikan penambahan nodeSelector berhasil dengan menjalankan `kubectl get pods -o wide` dan melihat "NODE" tempat Pod ditugaskan. @@ -110,7 +110,7 @@ Afinitas node dinyatakan sebagai _field_ `nodeAffinity` dari _field_ `affinity` Berikut ini contoh dari pod yang menggunakan afinitas node: -{{< codenew file="pods/pod-with-node-affinity.yaml" >}} +{{% codenew file="pods/pod-with-node-affinity.yaml" %}} Aturan afinitas node tersebut menyatakan pod hanya bisa ditugaskan pada node dengan label yang memiliki kunci `kubernetes.io/e2e-az-name` dan bernilai `e2e-az1` atau `e2e-az2`. Selain itu, dari semua node yang memenuhi kriteria tersebut, mode dengan label dengan kunci `another-node-label-key` and bernilai `another-node-label-value` harus lebih diutamakan. @@ -151,7 +151,7 @@ Afinitas antar pod dinyatakan sebagai _field_ `podAffinity` dari _field_ `affini #### Contoh pod yang menggunakan pod affinity: -{{< codenew file="pods/pod-with-pod-affinity.yaml" >}} +{{% codenew file="pods/pod-with-pod-affinity.yaml" %}} Afinitas pada pod tersebut menetapkan sebuah aturan afinitas pod dan aturan anti-afinitas pod. Pada contoh ini, `podAffinity` adalah `requiredDuringSchedulingIgnoredDuringExecution` sementara `podAntiAffinity` adalah `preferredDuringSchedulingIgnoredDuringExecution`. Aturan afinitas pod menyatakan bahwa pod dapat dijadwalkan pada node hanya jika node tersebut berada pada zona yang sama dengan minimal satu pod yang sudah berjalan yang memiliki label dengan kunci "security" dan bernilai "S1". (Lebih detail, pod dapat berjalan pada node N jika node N memiliki label dengan kunci `failure-domain.beta.kubernetes.io/zone`dan nilai V sehingga ada minimal satu node dalam klaster dengan kunci `failure-domain.beta.kubernetes.io/zone` dan bernilai V yang menjalankan pod yang memiliki label dengan kunci "security" dan bernilai "S1".) 
Aturan anti-afinitas pod menyatakan bahwa pod memilih untuk tidak dijadwalkan pada sebuah node jika node tersebut sudah menjalankan pod yang memiliki label dengan kunci "security" dan bernilai "S2". (Jika `topologyKey` adalah `failure-domain.beta.kubernetes.io/zone` maka dapat diartikan bahwa pod tidak dapat dijadwalkan pada node jika node berada pada zona yang sama dengan pod yang memiliki label dengan kunci "security" dan bernilai "S2".) Lihat [design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) untuk lebih banyak contoh afinitas dan anti-afinitas pod, baik `requiredDuringSchedulingIgnoredDuringExecution` diff --git a/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md index 26a2473f460c9..bd0f5339c8013 100644 --- a/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md +++ b/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md @@ -68,7 +68,7 @@ Selain _boilerplate default_, kita dapat menambahkan entri pada berkas `bar.remote` pada `10.1.2.3`, kita dapat melakukannya dengan cara menambahkan HostAliases pada Pod di bawah _field_ `.spec.hostAliases`: -{{< codenew file="service/networking/hostaliases-pod.yaml" >}} +{{% codenew file="service/networking/hostaliases-pod.yaml" %}} Pod ini kemudian dapat dihidupkan dengan perintah berikut: diff --git a/content/id/docs/concepts/services-networking/connect-applications-service.md b/content/id/docs/concepts/services-networking/connect-applications-service.md index 545f5a76e129a..b4fee74d27dae 100644 --- a/content/id/docs/concepts/services-networking/connect-applications-service.md +++ b/content/id/docs/concepts/services-networking/connect-applications-service.md @@ -25,7 +25,7 @@ Panduan ini menggunakan server *nginx* sederhana untuk mendemonstrasikan konsepn Kita melakukan ini di beberapa contoh sebelumnya, tetapi mari kita lakukan sekali lagi dan berfokus pada prespektif jaringannya. Buat sebuah *nginx Pod*, dan perhatikan bahwa templat tersebut mempunyai spesifikasi *port* kontainer: -{{< codenew file="service/networking/run-my-nginx.yaml" >}} +{{% codenew file="service/networking/run-my-nginx.yaml" %}} Ini membuat aplikasi tersebut dapat diakses dari *node* manapun di dalam klaster kamu. Cek lokasi *node* dimana *Pod* tersebut berjalan: ```shell @@ -66,7 +66,7 @@ service/my-nginx exposed Perintah di atas sama dengan `kubectl apply -f` dengan *yaml* sebagai berikut: -{{< codenew file="service/networking/nginx-svc.yaml" >}} +{{% codenew file="service/networking/nginx-svc.yaml" %}} Spesifikasi ini akan membuat *Service* yang membuka *TCP port 80* di setiap *Pod* dengan label `run: my-nginx` dan mengeksposnya ke dalam *port Service* (`targetPort`: adalah port kontainer yang menerima trafik, `port` adalah *service port* yang dapat berupa *port* apapun yang digunakan *Pod* lain untuk mengakses *Service*). 
@@ -253,7 +253,7 @@ nginxsecret Opaque 2 1m Sekarang modifikasi replika *nginx* untuk menjalankan server *https* menggunakan *certificate* di dalam *secret* dan *Service* untuk mengekspos semua *port* (80 dan 443): -{{< codenew file="service/networking/nginx-secure-app.yaml" >}} +{{% codenew file="service/networking/nginx-secure-app.yaml" %}} Berikut catatan penting tentang manifes *nginx-secure-app*: @@ -281,7 +281,7 @@ node $ curl -k https://10.244.3.5 Perlu dicatat bahwa kita menggunakan parameter `-k` saat menggunakan *curl*, ini karena kita tidak tau apapun tentang *Pod* yang menjalankan *nginx* saat pembuatan seritifikat, jadi kita harus memberitahu *curl* untuk mengabaikan ketidakcocokan *CName*. Dengan membuat *Service*, kita menghubungkan *CName* yang digunakan pada *certificate* dengan nama pada *DNS* yang digunakan *Pod*. Lakukan pengujian dari sebuah *Pod* (*secret* yang sama digunakan untuk agar mudah, *Pod* tersebut hanya membutuhkan *nginx.crt* untuk mengakses *Service*) -{{< codenew file="service/networking/curlpod.yaml" >}} +{{% codenew file="service/networking/curlpod.yaml" %}} ```shell kubectl apply -f ./curlpod.yaml diff --git a/content/id/docs/concepts/services-networking/dns-pod-service.md b/content/id/docs/concepts/services-networking/dns-pod-service.md index efdba8d7a13be..a7dc17f96f4cd 100644 --- a/content/id/docs/concepts/services-networking/dns-pod-service.md +++ b/content/id/docs/concepts/services-networking/dns-pod-service.md @@ -225,7 +225,7 @@ pada _field_ `dnsConfig`: Di bawah ini merupakan contoh sebuah Pod dengan pengaturan DNS kustom: -{{< codenew file="service/networking/custom-dns.yaml" >}} +{{% codenew file="service/networking/custom-dns.yaml" %}} Ketika Pod diatas dibuat, maka Container `test` memiliki isi berkas `/etc/resolv.conf` sebagai berikut: diff --git a/content/id/docs/concepts/services-networking/dual-stack.md b/content/id/docs/concepts/services-networking/dual-stack.md index 52e892f4f4704..6faed791617ff 100644 --- a/content/id/docs/concepts/services-networking/dual-stack.md +++ b/content/id/docs/concepts/services-networking/dual-stack.md @@ -96,19 +96,19 @@ Kubernetes akan mengalokasikan alamat IP (atau yang dikenal juga sebagai "_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi pertama kali untuk Service ini. -{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}} +{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}} Spesifikasi Service berikut memasukkan bagian `ipFamily`. Sehingga Kubernetes akan mengalokasikan alamat IPv6 (atau yang dikenal juga sebagai "_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi untuk Service ini. -{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}} +{{% codenew file="service/networking/dual-stack-ipv6-svc.yaml" %}} Sebagai perbandingan, spesifikasi Service berikut ini akan dialokasikan sebuah alamat IPv4 (atau yang dikenal juga sebagai "_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi untuk Service ini. -{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}} +{{% codenew file="service/networking/dual-stack-ipv4-svc.yaml" %}} ### Tipe _LoadBalancer_ diff --git a/content/id/docs/concepts/services-networking/ingress.md b/content/id/docs/concepts/services-networking/ingress.md index 84db01b37e827..8e129ff223cf7 100644 --- a/content/id/docs/concepts/services-networking/ingress.md +++ b/content/id/docs/concepts/services-networking/ingress.md @@ -132,7 +132,7 @@ akan diarahkan pada *backend default*. 
Terdapat konsep Kubernetes yang memungkinkan kamu untuk mengekspos sebuah Service, lihat [alternatif lain](#alternatif-lain). Kamu juga bisa membuat spesifikasi Ingress dengan *backend default* yang tidak memiliki *rules*. -{{< codenew file="service/networking/ingress.yaml" >}} +{{% codenew file="service/networking/ingress.yaml" %}} Jika kamu menggunakan `kubectl apply -f` kamu dapat melihat: diff --git a/content/id/docs/concepts/workloads/controllers/daemonset.md b/content/id/docs/concepts/workloads/controllers/daemonset.md index ea21a7b268fc7..4edafc7f444af 100644 --- a/content/id/docs/concepts/workloads/controllers/daemonset.md +++ b/content/id/docs/concepts/workloads/controllers/daemonset.md @@ -37,7 +37,7 @@ Kamu bisa definisikan DaemonSet dalam berkas YAML. Contohnya, berkas `daemonset.yaml` di bawah mendefinisikan DaemonSet yang menjalankan _image_ Docker fluentd-elasticsearch: -{{< codenew file="controllers/daemonset.yaml" >}} +{{% codenew file="controllers/daemonset.yaml" %}} * Buat DaemonSet berdasarkan berkas YAML: ``` diff --git a/content/id/docs/concepts/workloads/controllers/deployment.md b/content/id/docs/concepts/workloads/controllers/deployment.md index 18f1542418e33..f6a3244174fe0 100644 --- a/content/id/docs/concepts/workloads/controllers/deployment.md +++ b/content/id/docs/concepts/workloads/controllers/deployment.md @@ -41,7 +41,7 @@ Berikut adalah penggunaan yang umum pada Deployment: Berikut adalah contoh Deployment. Dia membuat ReplicaSet untuk membangkitkan tiga Pod `nginx`: -{{< codenew file="controllers/nginx-deployment.yaml" >}} +{{% codenew file="controllers/nginx-deployment.yaml" %}} Dalam contoh ini: diff --git a/content/id/docs/concepts/workloads/controllers/garbage-collection.md b/content/id/docs/concepts/workloads/controllers/garbage-collection.md index 5eb00cf987caa..121d148b2f20b 100644 --- a/content/id/docs/concepts/workloads/controllers/garbage-collection.md +++ b/content/id/docs/concepts/workloads/controllers/garbage-collection.md @@ -22,7 +22,7 @@ Kamu juga bisa menspesifikasikan hubungan antara pemilik dan dependen dengan car Berikut adalah berkas untuk sebuah ReplicaSet yang memiliki tiga Pod: -{{< codenew file="controllers/replicaset.yaml" >}} +{{% codenew file="controllers/replicaset.yaml" %}} Jika kamu membuat ReplicaSet tersebut dan kemudian melihat metadata Pod, kamu akan melihat kolom OwnerReferences: diff --git a/content/id/docs/concepts/workloads/controllers/job.md b/content/id/docs/concepts/workloads/controllers/job.md index 4a7cce3f2a4a3..03a58ea21223b 100644 --- a/content/id/docs/concepts/workloads/controllers/job.md +++ b/content/id/docs/concepts/workloads/controllers/job.md @@ -33,7 +33,7 @@ Berikut merupakan contoh konfigurasi Job. Job ini melakukan komputasi π hingga digit ke 2000 kemudian memberikan hasilnya sebagai keluaran. Job tersebut memerlukan waktu 10 detik untuk dapat diselesaikan. 
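As a rough sketch of the kind of Job described above, a run-to-completion workload that computes π to 2000 digits and prints the result, the manifest might look like this; the `perl` image and the exact command are assumptions based on the description, not necessarily the contents of the referenced `controllers/job.yaml`.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                       # assumed name for the example Job
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl              # assumed image able to run the bignum computation
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never       # a finished or failed Pod is not restarted in place
  backoffLimit: 4                # give up after four failed Pods
```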
-{{< codenew file="controllers/job.yaml" >}} +{{% codenew file="controllers/job.yaml" %}} Kamu dapat menjalankan contoh tersebut dengan menjalankan perintah berikut: diff --git a/content/id/docs/concepts/workloads/controllers/replicaset.md b/content/id/docs/concepts/workloads/controllers/replicaset.md index 57b1124208a91..e43ccc57c0ac6 100644 --- a/content/id/docs/concepts/workloads/controllers/replicaset.md +++ b/content/id/docs/concepts/workloads/controllers/replicaset.md @@ -29,7 +29,7 @@ Hal ini berarti kamu boleh jadi tidak akan membutuhkan manipulasi objek ReplicaS ## Contoh -{{< codenew file="controllers/frontend.yaml" >}} +{{% codenew file="controllers/frontend.yaml" %}} Menyimpan _manifest_ ini dalam `frontend.yaml` dan mengirimkannya ke klaster Kubernetes akan membuat ReplicaSet yang telah didefinisikan beserta dengan Pod yang dikelola. @@ -131,7 +131,7 @@ Walaupun kamu bisa membuat Pod biasa tanpa masalah, sangat direkomendasikan untu Mengambil contoh ReplicaSet _frontend_ sebelumnya, dan Pod yang ditentukan pada _manifest_ berikut: -{{< codenew file="pods/pod-rs.yaml" >}} +{{% codenew file="pods/pod-rs.yaml" %}} Karena Pod tersebut tidak memiliki Controller (atau objek lain) sebagai referensi pemilik yang sesuai dengan selektor dari ReplicaSet _frontend_, Pod tersebut akan langsung diakuisisi oleh ReplicaSet. @@ -257,7 +257,7 @@ Jumlah Pod pada ReplicaSet dapat diatur dengan mengubah nilai dari _field_ `.spe Pengaturan jumlah Pod pada ReplicaSet juga dapat dilakukan mengunakan [Horizontal Pod Autoscalers (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/). Berikut adalah contoh HPA terhadap ReplicaSet yang telah dibuat pada contoh sebelumnya. -{{< codenew file="controllers/hpa-rs.yaml" >}} +{{% codenew file="controllers/hpa-rs.yaml" %}} Menyimpan _manifest_ ini dalam `hpa-rs.yaml` dan mengirimkannya ke klaster Kubernetes akan membuat HPA tersebut yang akan mengatur jumlah Pod pada ReplicaSet yang telah didefinisikan bergantung terhadap penggunaan CPU dari Pod yang direplikasi. diff --git a/content/id/docs/concepts/workloads/controllers/replicationcontroller.md b/content/id/docs/concepts/workloads/controllers/replicationcontroller.md index 48ec718a6df67..f53cac7f290c9 100644 --- a/content/id/docs/concepts/workloads/controllers/replicationcontroller.md +++ b/content/id/docs/concepts/workloads/controllers/replicationcontroller.md @@ -36,7 +36,7 @@ Sebuah contoh sederhana adalah membuat sebuah objek ReplicationController untuk Contoh ReplicationController ini mengonfigurasi tiga salinan dari peladen web nginx. -{{< codenew file="controllers/replication.yaml" >}} +{{% codenew file="controllers/replication.yaml" %}} Jalankan contoh di atas dengan mengunduh berkas contoh dan menjalankan perintah ini: diff --git a/content/id/docs/concepts/workloads/controllers/statefulset.md b/content/id/docs/concepts/workloads/controllers/statefulset.md index a309e223a3693..5c091d3620939 100644 --- a/content/id/docs/concepts/workloads/controllers/statefulset.md +++ b/content/id/docs/concepts/workloads/controllers/statefulset.md @@ -154,7 +154,7 @@ Domain klaster akan diatur menjadi `cluster.local` kecuali Kubernetes membuat sebuah [PersistentVolume](/id/docs/concepts/storage/persistent-volumes/) untuk setiap VolumeClaimTemplate. Pada contoh nginx di atas, setiap Pod akan menerima sebuah PersistentVolume -dengan StorageClass `my-storage-class` dan penyimpanan senilai 1 Gib yang sudah di-_provisioning_. 
Jika tidak ada StorageClass +dengan StorageClass `my-storage-class` dan penyimpanan senilai 1 GiB yang sudah di-_provisioning_. Jika tidak ada StorageClass yang dispesifikasikan, maka StorageClass _default_ akan digunakan. Ketika sebuah Pod dilakukan _(re)schedule_ pada sebuah Node, `volumeMounts` akan me-_mount_ PersistentVolumes yang terkait dengan PersistentVolume Claim-nya. Perhatikan bahwa, PersistentVolume yang terkait dengan @@ -275,4 +275,3 @@ StatefulSet akan mulai membuat Pod dengan templat konfigurasi yang sudah di-_rev * Ikuti contoh yang ada pada [bagaimana cara melakukan deploy Cassandra dengan StatefulSets](/docs/tutorials/stateful-application/cassandra/). - diff --git a/content/id/docs/concepts/workloads/pods/init-containers.md b/content/id/docs/concepts/workloads/pods/init-containers.md index 7ccde41c9b624..380f904644a30 100644 --- a/content/id/docs/concepts/workloads/pods/init-containers.md +++ b/content/id/docs/concepts/workloads/pods/init-containers.md @@ -242,6 +242,33 @@ Gunakan `activeDeadlineSeconds` pada Pod dan `livenessProbe` pada Container untu Nama setiap Container aplikasi dan Init Container pada sebuah Pod haruslah unik; Kesalahan validasi akan terjadi jika ada Container atau Init Container yang memiliki nama yang sama. +### API untuk sidecar containers + +{{< feature-state for_k8s_version="v1.28" state="alpha" >}} + +Mulai dari Kubernetes 1.28 dalam mode alpha, terdapat fitur yang disebut `SidecarContainers` yang memungkinkan kamu untuk menentukan `restartPolicy` untuk kontainer init yang independen dari Pod dan kontainer init lainnya. [Probes](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe) juga dapat ditambahkan untuk mengendalikan siklus hidup mereka. + +Jika sebuah kontainer init dibuat dengan `restartPolicy` yang diatur sebagai `Always`, maka kontainer ini akan mulai dan tetap berjalan selama seluruh masa hidup Pod, yang berguna untuk menjalankan layanan pendukung yang terpisah dari kontainer aplikasi utama. + +Jika sebuah `readinessProbe` ditentukan untuk kontainer init ini, hasilnya akan digunakan untuk menentukan status siap dari Pod. + +Karena kontainer-kontainer ini didefinisikan sebagai kontainer init, mereka mendapatkan manfaat dari urutan dan jaminan berurutan yang sama seperti kontainer init lainnya, yang memungkinkan mereka dicampur dengan kontainer init lainnya dalam aliran inisialisasi Pod yang kompleks. + +Dibandingkan dengan kontainer init reguler, kontainer init tipe sidecar terus berjalan, dan kontainer init berikutnya dapat mulai menjalankan saat kubelet telah menetapkan status kontainer `started` menjadi benar untuk kontainer init tipe sidecar. Status tersebut menjadi benar karena ada proses yang berjalan dalam kontainer dan tidak ada probe awal yang ditentukan, atau sebagai hasil dari keberhasilan `startupProbe`. + +Fitur ini dapat digunakan untuk mengimplementasikan pola kontainer sidecar dengan lebih tangguh, karena kubelet selalu akan me-restart kontainer sidecar jika kontainer tersebut gagal. + +Berikut adalah contoh Deployment dengan dua kontainer, salah satunya adalah sidecar: + +{{% code_sample language="yaml" file="application/deployment-sidecar.yaml" %}} + +Fitur ini juga berguna untuk menjalankan Job dengan sidecar, karena kontainer sidecar tidak akan mencegah Job untuk menyelesaikan tugasnya setelah kontainer utama selesai.
+ +Berikut adalah contoh sebuah Job dengan dua kontainer, salah satunya adalah sidecar: + +{{% code_sample language="yaml" file="application/job/job-sidecar.yaml" %}} + + ### Sumber Daya Karena eksekusi Init Container yang berurutan, aturan-aturan untuk sumber daya berlaku sebagai berikut: diff --git a/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 6f27244ef6dea..188080abb5802 100644 --- a/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -114,7 +114,7 @@ node2 dan node3 (`P` merepresentasikan Pod): Jika kita ingin Pod baru akan disebar secara merata berdasarkan Pod yang telah ada pada semua zona, maka _spec_ bernilai sebagai berikut: -{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}} +{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}} `topologyKey: zone` berarti persebaran merata hanya akan digunakan pada Node dengan pasangan label "zone: ". `whenUnsatisfiable: DoNotSchedule` memberitahukan penjadwal untuk membiarkan @@ -161,7 +161,7 @@ Ini dibuat berdasarkan contoh sebelumnya. Misalkan kamu memiliki klaster dengan Kamu dapat menggunakan 2 TopologySpreadConstraint untuk mengatur persebaran Pod pada zona dan Node: -{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}} +{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}} Dalam contoh ini, untuk memenuhi batasan pertama, Pod yang baru hanya akan ditempatkan pada "zoneB", sedangkan untuk batasan kedua, Pod yang baru hanya akan ditempatkan pada "node4". Maka hasil dari @@ -224,7 +224,7 @@ sesuai dengan nilai tersebut akan dilewatkan. berkas yaml seperti di bawah, jadi "mypod" akan ditempatkan pada "zoneB", bukan "zoneC". Demikian juga `spec.nodeSelector` akan digunakan. - {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} + {{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}} ### Batasan _default_ pada tingkat klaster diff --git a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md index 67b451db52b34..fdc31a1147e9c 100644 --- a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md +++ b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md @@ -21,7 +21,7 @@ kube-dns. 
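Tying the topology-spread discussion above together, a Pod spec with a single zone constraint might look roughly like this sketch; the `foo: bar` label, the `zone` topology key, and the pause image are illustrative assumptions rather than values from the referenced example files.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    foo: bar                           # the label the constraint counts across topologies
spec:
  topologySpreadConstraints:
  - maxSkew: 1                         # allow at most one Pod of imbalance between zones
    topologyKey: zone                  # group nodes by the value of their "zone" label
    whenUnsatisfiable: DoNotSchedule   # keep the Pod pending rather than violate the skew
    labelSelector:
      matchLabels:
        foo: bar
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.1
```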
### Membuat Pod sederhana yang digunakan sebagai lingkungan pengujian -{{< codenew file="admin/dns/dnsutils.yaml" >}} +{{% codenew file="admin/dns/dnsutils.yaml" %}} Gunakan manifes berikut untuk membuat sebuah Pod: diff --git a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md index 1aae3e38f009b..04f7076cfdb07 100644 --- a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md +++ b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md @@ -40,7 +40,7 @@ kubectl create namespace constraints-mem-example Berikut berkas konfigurasi untuk sebuah LimitRange: -{{< codenew file="admin/resource/memory-constraints.yaml" >}} +{{% codenew file="admin/resource/memory-constraints.yaml" %}} Membuat LimitRange: @@ -85,7 +85,7 @@ Berikut berkas konfigurasi Pod yang memiliki satu Container. Manifes Container menentukan permintaan memori 600 MiB dan limit memori 800 MiB. Nilai tersebut memenuhi batasan minimum dan maksimum memori yang ditentukan oleh LimitRange. -{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}} +{{% codenew file="admin/resource/memory-constraints-pod.yaml" %}} Membuat Pod: @@ -127,7 +127,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example Berikut berkas konfigurasi untuk sebuah Pod yang memiliki satu Container. Container tersebut menentukan permintaan memori 800 MiB dan batas memori 1.5 GiB. -{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}} +{{% codenew file="admin/resource/memory-constraints-pod-2.yaml" %}} Mencoba membuat Pod: @@ -148,7 +148,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i Berikut berkas konfigurasi untuk sebuah Pod yang memiliki satu Container. Container tersebut menentukan permintaan memori 100 MiB dan limit memori 800 MiB. -{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}} +{{% codenew file="admin/resource/memory-constraints-pod-3.yaml" %}} Mencoba membuat Pod: @@ -171,7 +171,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i Berikut berkas konfigurasi untuk sebuah Pod yang memiliki satu Container. Container tersebut tidak menentukan permintaan memori dan juga limit memori. -{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}} +{{% codenew file="admin/resource/memory-constraints-pod-4.yaml" %}} Mencoba membuat Pod: @@ -202,7 +202,7 @@ dari LimitRange. Pada tahap ini, Containermu mungkin saja berjalan ataupun mungkin juga tidak berjalan. Ingat bahwa prasyarat untuk tugas ini adalah Node harus memiliki setidaknya 1 GiB memori. Jika tiap Node hanya memiliki -1 GiB memori, maka tidak akan ada cukup memori untuk dialokasikan pada setiap Node untuk memenuhi permintaan 1 Gib memori. Jika ternyata kamu menggunakan Node dengan 2 GiB memori, maka kamu mungkin memiliki cukup ruang untuk memenuhi permintaan 1 GiB tersebut. +1 GiB memori, maka tidak akan ada cukup memori untuk dialokasikan pada setiap Node untuk memenuhi permintaan 1 GiB memori. Jika ternyata kamu menggunakan Node dengan 2 GiB memori, maka kamu mungkin memiliki cukup ruang untuk memenuhi permintaan 1 GiB tersebut. 
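The LimitRange behaviour described above can be pictured with a sketch like the following; the exact `min` and `max` values are assumptions chosen to be consistent with the accepted 600 MiB/800 MiB Pod and the rejected 1.5 GiB and 100 MiB Pods, not necessarily the values in the referenced `memory-constraints.yaml`.

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr            # assumed name
  namespace: constraints-mem-example
spec:
  limits:
  - type: Container
    max:
      memory: 1Gi                      # assumed ceiling: an 800Mi limit fits, 1.5Gi is rejected
    min:
      memory: 500Mi                    # assumed floor: a 600Mi request fits, 100Mi is rejected
```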
Menghapus Pod: diff --git a/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md index bb092e86a58c6..4a50fb84159c2 100644 --- a/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -69,7 +69,7 @@ Dalam latihan ini, kamu akan membuat Pod yang memiliki satu Container. Container sebesar 100 MiB dan batasan memori sebesar 200 MiB. Berikut berkas konfigurasi untuk Pod: -{{< codenew file="pods/resource/memory-request-limit.yaml" >}} +{{% codenew file="pods/resource/memory-request-limit.yaml" %}} Bagian `args` dalam berkas konfigurasi memberikan argumen untuk Container pada saat dimulai. Argumen`"--vm-bytes", "150M"` memberi tahu Container agar mencoba mengalokasikan memori sebesar 150 MiB. @@ -139,7 +139,7 @@ Dalam latihan ini, kamu membuat Pod yang mencoba mengalokasikan lebih banyak mem Berikut adalah berkas konfigurasi untuk Pod yang memiliki satu Container dengan berkas permintaan memori sebesar 50 MiB dan batasan memori sebesar 100 MiB: -{{< codenew file="pods/resource/memory-request-limit-2.yaml" >}} +{{% codenew file="pods/resource/memory-request-limit-2.yaml" %}} Dalam bagian `args` dari berkas konfigurasi, kamu dapat melihat bahwa Container tersebut akan mencoba mengalokasikan memori sebesar 250 MiB, yang jauh di atas batas yaitu 100 MiB. @@ -250,7 +250,7 @@ kapasitas dari Node mana pun dalam klaster kamu. Berikut adalah berkas konfigura Container dengan permintaan memori 1000 GiB, yang kemungkinan besar melebihi kapasitas dari setiap Node dalam klaster kamu. -{{< codenew file="pods/resource/memory-request-limit-3.yaml" >}} +{{% codenew file="pods/resource/memory-request-limit-3.yaml" %}} Buatlah Pod: diff --git a/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md index a60d862fd26c2..3d4a5d079b8d1 100644 --- a/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md +++ b/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md @@ -64,7 +64,7 @@ Afinitas Node di dalam klaster Kubernetes. Konfigurasi ini menunjukkan sebuah Pod yang memiliki afinitas node `requiredDuringSchedulingIgnoredDuringExecution`, `disktype: ssd`. Dengan kata lain, Pod hanya akan dijadwalkan hanya pada Node yang memiliki label `disktype=ssd`. -{{< codenew file="pods/pod-nginx-required-affinity.yaml" >}} +{{% codenew file="pods/pod-nginx-required-affinity.yaml" %}} 1. Terapkan konfigurasi berikut untuk membuat sebuah Pod yang akan dijadwalkan pada Node yang kamu pilih: @@ -90,7 +90,7 @@ Dengan kata lain, Pod hanya akan dijadwalkan hanya pada Node yang memiliki label Konfigurasi ini memberikan deskripsi sebuah Pod yang memiliki afinitas Node `preferredDuringSchedulingIgnoredDuringExecution`,`disktype: ssd`. Artinya Pod akan diutamakan dijalankan pada Node yang memiliki label `disktype=ssd`. -{{< codenew file="pods/pod-nginx-preferred-affinity.yaml" >}} +{{% codenew file="pods/pod-nginx-preferred-affinity.yaml" %}} 1. 
Terapkan konfigurasi berikut untuk membuat sebuah Pod yang akan dijadwalkan pada Node yang kamu pilih: diff --git a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index d56f59a09a646..e03a9d97a331a 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -46,7 +46,7 @@ Kubernetes menyediakan _probe liveness_ untuk mendeteksi dan memperbaiki situasi Pada latihan ini, kamu akan membuat Pod yang menjalankan Container dari image `registry.k8s.io/busybox`. Berikut ini adalah berkas konfigurasi untuk Pod tersebut: -{{< codenew file="pods/probe/exec-liveness.yaml" >}} +{{% codenew file="pods/probe/exec-liveness.yaml" %}} Pada berkas konfigurasi di atas, kamu dapat melihat bahwa Pod memiliki satu `Container`. _Field_ `periodSeconds` menentukan bahwa kubelet harus melakukan _probe liveness_ setiap 5 detik. @@ -128,7 +128,7 @@ liveness-exec 1/1 Running 1 1m Jenis kedua dari _probe liveness_ menggunakan sebuah permintaan GET HTTP. Berikut ini berkas konfigurasi untuk Pod yang menjalankan Container dari image `registry.k8s.io/liveness`. -{{< codenew file="pods/probe/http-liveness.yaml" >}} +{{% codenew file="pods/probe/http-liveness.yaml" %}} Pada berkas konfigurasi tersebut, kamu dapat melihat Pod memiliki sebuah Container. _Field_ `periodSeconds` menentukan bahwa kubelet harus mengerjakan _probe liveness_ setiap 3 detik. @@ -190,7 +190,7 @@ kubelet akan mencoba untuk membuka soket pada Container kamu dengan porta terten Jika koneksi dapat terbentuk dengan sukses, maka Container dianggap dalam kondisi sehat. Namun jika tidak berhasil terbentuk, maka Container dianggap gagal. -{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}} +{{% codenew file="pods/probe/tcp-liveness-readiness.yaml" %}} Seperti yang terlihat, konfigurasi untuk pemeriksaan TCP cukup mirip dengan pemeriksaan HTTP. Contoh ini menggunakan _probe readiness_ dan _liveness_. diff --git a/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 858342880e7e1..79db4e848a754 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -93,7 +93,7 @@ untuk mengatur Berikut berkas konfigurasi untuk hostPath PersistentVolume: -{{< codenew file="pods/storage/pv-volume.yaml" >}} +{{% codenew file="pods/storage/pv-volume.yaml" %}} Berkas konfigurasi tersebut menentukan bahwa volume berada di `/mnt/data` pada klaster Node. Konfigurasi tersebut juga menentukan ukuran dari 10 gibibytes dan @@ -129,7 +129,7 @@ setidaknya untuk satu Node. Berikut berkas konfigurasi untuk PersistentVolumeClaim: -{{< codenew file="pods/storage/pv-claim.yaml" >}} +{{% codenew file="pods/storage/pv-claim.yaml" %}} Membuat sebuah PersistentVolumeClaim: @@ -169,7 +169,7 @@ Langkah selanjutnya adalah membuat sebuah Pod yang akan menggunakan PersistentVo Berikut berkas konfigurasi untuk Pod: -{{< codenew file="pods/storage/pv-pod.yaml" >}} +{{% codenew file="pods/storage/pv-pod.yaml" %}} Perhatikan bahwa berkas konfigurasi Pod menentukan sebuah PersistentVolumeClaim, tetapi tidak menentukan PeristentVolume. 
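To illustrate the point that the Pod only names a claim, never the PersistentVolume behind it, a sketch of such a Pod could look like the following; the resource names and mount path are assumptions in the spirit of the referenced `pv-pod.yaml`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod                    # assumed name
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim         # the claim created earlier (name assumed)
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: task-pv-storage
      mountPath: /usr/share/nginx/html # nginx serves content from the claimed volume
```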
Dari sudut pandang Pod, _claim_ adalah volume. diff --git a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md index bfdad56610635..2cf40c6106aec 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -467,7 +467,7 @@ configmap/special-config-2-c92b5mmcf2 created 2. Memberikan nilai `special.how` yang sudah terdapat pada ConfigMap pada variabel _environment_ `SPECIAL_LEVEL_KEY` di spesifikasi Pod. - {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}} + {{% codenew file="pods/pod-single-configmap-env-variable.yaml" %}} Buat Pod: @@ -481,7 +481,7 @@ configmap/special-config-2-c92b5mmcf2 created * Seperti pada contoh sebelumnya, buat ConfigMap terlebih dahulu. - {{< codenew file="configmap/configmaps.yaml" >}} + {{% codenew file="configmap/configmaps.yaml" %}} Buat ConfigMap: @@ -491,7 +491,7 @@ configmap/special-config-2-c92b5mmcf2 created * Tentukan variabel _environment_ pada spesifikasi Pod. - {{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}} + {{% codenew file="pods/pod-multiple-configmap-env-variable.yaml" %}} Buat Pod: @@ -509,7 +509,7 @@ Fungsi ini tersedia pada Kubernetes v1.6 dan selanjutnya. * Buat ConfigMap yang berisi beberapa pasangan kunci-nilai. - {{< codenew file="configmap/configmap-multikeys.yaml" >}} + {{% codenew file="configmap/configmap-multikeys.yaml" %}} Buat ConfigMap: @@ -519,7 +519,7 @@ Fungsi ini tersedia pada Kubernetes v1.6 dan selanjutnya. * Gunakan `envFrom` untuk menentukan seluruh data pada ConfigMap sebagai variabel _environment_ kontainer. Kunci dari ConfigMap akan menjadi nama variabel _environment_ di dalam Pod. - {{< codenew file="pods/pod-configmap-envFrom.yaml" >}} + {{% codenew file="pods/pod-configmap-envFrom.yaml" %}} Buat Pod: @@ -536,7 +536,7 @@ Kamu dapat menggunakan variabel _environment_ yang ditentukan ConfigMap pada bag Sebagai contoh, spesifikasi Pod berikut -{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}} +{{% codenew file="pods/pod-configmap-env-var-valueFrom.yaml" %}} dibuat dengan menjalankan @@ -545,6 +545,9 @@ kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valu ``` menghasilkan keluaran pada kontainer `test-container` seperti berikut: +```shell +kubectl logs dapi-test-pod +``` ```shell very charm @@ -556,7 +559,7 @@ Seperti yang sudah dijelaskan pada [Membuat ConfigMap dari berkas](#membuat-conf Contoh pada bagian ini merujuk pada ConfigMap bernama `special-config`, Seperti berikut. -{{< codenew file="configmap/configmap-multikeys.yaml" >}} +{{% codenew file="configmap/configmap-multikeys.yaml" %}} Buat ConfigMap: @@ -570,7 +573,7 @@ Tambahkan nama ConfigMap di bawah bagian `volumes` pada spesifikasi Pod. Hal ini akan menambahkan data ConfigMap pada direktori yang ditentukan oleh `volumeMounts.mountPath` (pada kasus ini, `/etc/config`). Bagian `command` berisi daftar berkas pada direktori dengan nama-nama yang sesuai dengan kunci-kunci pada ConfigMap. -{{< codenew file="pods/pod-configmap-volume.yaml" >}} +{{% codenew file="pods/pod-configmap-volume.yaml" %}} Buat Pod: @@ -594,7 +597,7 @@ Jika ada beberapa berkas pada direktori `/etc/config/`, berkas-berkas tersebut a Gunakan kolom `path` untuk menentukan jalur berkas yang diinginkan untuk butir tertentu pada ConfigMap (butir ConfigMap tertentu). 
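A minimal sketch of that `path` mapping, assuming the `special-config` ConfigMap from earlier, might look like this; the Pod name, image, and command are illustrative, and the referenced `pod-configmap-volume-specific-key.yaml` remains the authoritative example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-key-demo      # assumed name
spec:
  containers:
  - name: test-container
    image: registry.k8s.io/busybox
    command: ["/bin/sh", "-c", "cat /etc/config/keys"]   # read the projected file
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config
      items:
      - key: SPECIAL_LEVEL             # only this key is projected into the volume
        path: keys                     # it appears as /etc/config/keys in the container
  restartPolicy: Never
```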
Pada kasus ini, butir `SPECIAL_LEVEL` akan akan dipasangkan sebagai `config-volume` pada `/etc/config/keys`. -{{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}} +{{% codenew file="pods/pod-configmap-volume-specific-key.yaml" %}} Buat Pod: diff --git a/content/id/docs/tasks/configure-pod-container/configure-service-account.md b/content/id/docs/tasks/configure-pod-container/configure-service-account.md index e53812d65a8b3..f469b257d85cd 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/id/docs/tasks/configure-pod-container/configure-service-account.md @@ -282,7 +282,7 @@ Kubelet juga dapat memproyeksikan _token_ ServiceAccount ke Pod. Kamu dapat mene Perilaku ini diatur pada PodSpec menggunakan tipe ProjectedVolume yaitu [ServiceAccountToken](/id/docs/concepts/storage/volumes/#projected). Untuk memungkinkan Pod dengan _token_ dengan pengguna bertipe _"vault"_ dan durasi validitas selama dua jam, kamu harus mengubah bagian ini pada PodSpec: -{{< codenew file="pods/pod-projected-svc-token.yaml" >}} +{{% codenew file="pods/pod-projected-svc-token.yaml" %}} Buat Pod: diff --git a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md index 02d664d530457..e6b6f365a45c0 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -25,7 +25,7 @@ _Filesystem_ dari sebuah Container hanya hidup selama Container itu juga hidup. Pada latihan ini, kamu membuat sebuah Pod yang menjalankan sebuah Container. Pod ini memiliki sebuah Volume dengan tipe [emptyDir](/id/docs/concepts/storage/volumes/#emptydir) yang tetap bertahan, meski Container berakhir dan dimulai ulang. Berikut berkas konfigurasi untuk Pod: -{{< codenew file="pods/storage/redis.yaml" >}} +{{% codenew file="pods/storage/redis.yaml" %}} 1. Membuat Pod: diff --git a/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md index 50aad8de9a15a..3fe2ce8407c3b 100644 --- a/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -176,7 +176,7 @@ Kamu telah berhasil menetapkan kredensial Docker kamu sebagai sebuah Secret yang Berikut ini adalah berkas konfigurasi untuk Pod yang memerlukan akses ke kredensial Docker kamu pada `regcred`: -{{< codenew file="pods/private-reg-pod.yaml" >}} +{{% codenew file="pods/private-reg-pod.yaml" %}} Unduh berkas diatas: diff --git a/content/id/docs/tasks/configure-pod-container/quality-service-pod.md b/content/id/docs/tasks/configure-pod-container/quality-service-pod.md index c5337c8854a75..5ced04b84f1fb 100644 --- a/content/id/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/id/docs/tasks/configure-pod-container/quality-service-pod.md @@ -41,7 +41,7 @@ Agar sebuah Pod memiliki kelas QoS Guaranteed: Berikut adalah berkas konfigurasi untuk sebuah Pod dengan satu Container. Container tersebut memiliki sebuah batasan memori dan sebuah permintaan memori, keduanya sama dengan 200MiB. 
Container itu juga mempunyai batasan CPU dan permintaan CPU yang sama sebesar 700 milliCPU: -{{< codenew file="pods/qos/qos-pod.yaml" >}} +{{% codenew file="pods/qos/qos-pod.yaml" %}} Buatlah Pod: @@ -100,7 +100,7 @@ Sebuah Pod akan mendapatkan kelas QoS Burstable apabila: Berikut adalah berkas konfigurasi untuk Pod dengan satu Container. Container yang dimaksud memiliki batasan memori sebesar 200MiB dan permintaan memori sebesar 100MiB. -{{< codenew file="pods/qos/qos-pod-2.yaml" >}} +{{% codenew file="pods/qos/qos-pod-2.yaml" %}} Buatlah Pod: @@ -147,7 +147,7 @@ Agar Pod mendapatkan kelas QoS BestEffort, Container dalam pod tidak boleh memiliki batasan atau permintaan memori atau CPU. Berikut adalah berkas konfigurasi untuk Pod dengan satu Container. Container yang dimaksud tidak memiliki batasan atau permintaan memori atau CPU apapun. -{{< codenew file="pods/qos/qos-pod-3.yaml" >}} +{{% codenew file="pods/qos/qos-pod-3.yaml" %}} Buatlah Pod: @@ -183,7 +183,7 @@ kubectl delete pod qos-demo-3 --namespace=qos-example Berikut adalah konfigurasi berkas untuk Pod yang memiliki dua Container. Satu Container menentukan permintaan memori sebesar 200MiB. Container yang lain tidak menentukan permintaan atau batasan apapun. -{{< codenew file="pods/qos/qos-pod-4.yaml" >}} +{{% codenew file="pods/qos/qos-pod-4.yaml" %}} Perhatikan bahwa Pod ini memenuhi kriteria untuk kelas QoS Burstable. Maksudnya, Container tersebut tidak memenuhi kriteria untuk kelas QoS Guaranteed, dan satu dari Container tersebut memiliki permintaan memori. diff --git a/content/id/docs/tasks/configure-pod-container/security-context.md b/content/id/docs/tasks/configure-pod-container/security-context.md index d190468399cf1..a8bd1bfdf9620 100644 --- a/content/id/docs/tasks/configure-pod-container/security-context.md +++ b/content/id/docs/tasks/configure-pod-container/security-context.md @@ -50,7 +50,7 @@ dalam spesifikasi Pod. Bagian `securityContext` adalah sebuah objek Aturan keamanan yang kamu tetapkan untuk Pod akan berlaku untuk semua Container dalam Pod tersebut. Berikut sebuah berkas konfigurasi untuk Pod yang memiliki volume `securityContext` dan `emptyDir`: -{{< codenew file="pods/security/security-context.yaml" >}} +{{% codenew file="pods/security/security-context.yaml" %}} Dalam berkas konfigurasi ini, bagian `runAsUser` menentukan bahwa dalam setiap Container pada Pod, semua proses dijalankan oleh ID pengguna 1000. Bagian `runAsGroup` menentukan grup utama dengan ID 3000 untuk @@ -191,7 +191,7 @@ ada aturan yang tumpang tindih. Aturan pada Container mempengaruhi volume pada P Berikut berkas konfigurasi untuk Pod yang hanya memiliki satu Container. Keduanya, baik Pod dan Container memiliki bagian `securityContext` sebagai berikut: -{{< codenew file="pods/security/security-context-2.yaml" >}} +{{% codenew file="pods/security/security-context-2.yaml" %}} Buatlah Pod tersebut: @@ -244,7 +244,7 @@ bagian `capabilities` pada `securityContext` di manifes Container-nya. Pertama-tama, mari melihat apa yang terjadi ketika kamu tidak menyertakan bagian `capabilities`. Berikut ini adalah berkas konfigurasi yang tidak menambah atau mengurangi kemampuan apa pun dari Container: -{{< codenew file="pods/security/security-context-3.yaml" >}} +{{% codenew file="pods/security/security-context-3.yaml" %}} Buatlah Pod tersebut: @@ -306,7 +306,7 @@ Container ini memiliki kapabilitas tambahan yang sudah ditentukan. Berikut ini adalah berkas konfigurasi untuk Pod yang hanya menjalankan satu Container. 
Konfigurasi ini menambahkan kapabilitas `CAP_NET_ADMIN` dan `CAP_SYS_TIME`: -{{< codenew file="pods/security/security-context-4.yaml" >}} +{{% codenew file="pods/security/security-context-4.yaml" %}} Buatlah Pod tersebut: diff --git a/content/id/docs/tasks/configure-pod-container/share-process-namespace.md b/content/id/docs/tasks/configure-pod-container/share-process-namespace.md index 9b32d74b3cdf6..c764bd8df3eaa 100644 --- a/content/id/docs/tasks/configure-pod-container/share-process-namespace.md +++ b/content/id/docs/tasks/configure-pod-container/share-process-namespace.md @@ -34,7 +34,7 @@ proses pemecahan masalah (_troubleshoot_) image kontainer yang tidak memiliki ut Pembagian _namespace_ proses (_Process Namespace Sharing_) diaktifkan menggunakan _field_ `shareProcessNamespace` `v1.PodSpec`. Sebagai contoh: -{{< codenew file="pods/share-process-namespace.yaml" >}} +{{% codenew file="pods/share-process-namespace.yaml" %}} 1. Buatlah sebuah Pod `nginx` di dalam klaster kamu: diff --git a/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md index 746c46f045a09..a2c7b2f318610 100644 --- a/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -18,7 +18,7 @@ Pod kamu. Namun ada sejumlah cara untuk mendapatkan lebih banyak informasi tenta Dalam contoh ini, kamu menggunakan Deployment untuk membuat dua buah Pod, yang hampir sama dengan contoh sebelumnya. -{{< codenew file="application/nginx-with-request.yaml" >}} +{{% codenew file="application/nginx-with-request.yaml" %}} Buat Deployment dengan menjalankan perintah ini: diff --git a/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md b/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md index e15a8a4df6532..432898c0fbc6d 100644 --- a/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md +++ b/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md @@ -26,7 +26,7 @@ mendapatkan _shell_ untuk masuk ke dalam Container yang sedang berjalan. Dalam latihan ini, kamu perlu membuat Pod yang hanya memiliki satu Container saja. Container tersebut menjalankan _image_ nginx. Berikut ini adalah berkas konfigurasi untuk Pod tersebut: -{{< codenew file="application/shell-demo.yaml" >}} +{{% codenew file="application/shell-demo.yaml" %}} Buatlah Pod tersebut: @@ -108,7 +108,7 @@ Pada jendela (_window_) perintah biasa, bukan pada _shell_ kamu di dalam Contain lihatlah daftar variabel lingkungan (_environment variable_) pada Container yang sedang berjalan: ```shell -kubectl exec shell-demo env +kubectl exec shell-demo -- env ``` Cobalah dengan menjalankan perintah lainnya. Berikut beberapa contohnya: diff --git a/content/id/docs/tasks/inject-data-application/define-command-argument-container.md b/content/id/docs/tasks/inject-data-application/define-command-argument-container.md index 9f2cd7a7aefc8..f2d248232e004 100644 --- a/content/id/docs/tasks/inject-data-application/define-command-argument-container.md +++ b/content/id/docs/tasks/inject-data-application/define-command-argument-container.md @@ -44,7 +44,7 @@ Merujuk pada [catatan](#catatan) di bawah. Pada latihan ini, kamu akan membuat sebuah Pod baru yang menjalankan sebuah Container. 
Berkas konfigurasi untuk Pod mendefinisikan sebuah perintah dan dua argumen: -{{< codenew file="pods/commands.yaml" >}} +{{% codenew file="pods/commands.yaml" %}} 1. Buat sebuah Pod dengan berkas konfigurasi YAML: diff --git a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md index 0f35ef27f7188..584866d4c4d12 100644 --- a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -30,7 +30,7 @@ Dalam latihan ini, kamu membuat sebuah Pod yang menjalankan satu buah Container. Berkas konfigurasi untuk Pod tersebut mendefinisikan sebuah variabel lingkungan dengan nama `DEMO_GREETING` yang bernilai `"Hello from the environment"`. Berikut berkas konfigurasi untuk Pod tersebut: -{{< codenew file="pods/inject/envars.yaml" >}} +{{% codenew file="pods/inject/envars.yaml" %}} 1. Buatlah sebuah Pod berdasarkan berkas konfigurasi YAML tersebut: diff --git a/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md index c08db9484f6cc..5d4c3633fac96 100644 --- a/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -37,7 +37,7 @@ Gunakan alat yang telah dipercayai oleh OS kamu untuk menghindari risiko dari pe Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Secret yang akan menampung nama pengguna dan kata sandi kamu: -{{< codenew file="pods/inject/secret.yaml" >}} +{{% codenew file="pods/inject/secret.yaml" %}} 1. Membuat Secret @@ -95,7 +95,7 @@ Tentu saja ini lebih mudah. Pendekatan yang mendetil setiap langkah di atas bert Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod: -{{< codenew file="pods/inject/secret-pod.yaml" >}} +{{% codenew file="pods/inject/secret-pod.yaml" %}} 1. Membuat Pod: @@ -157,7 +157,7 @@ Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod: * Tentukan nilai `backend-username` yang didefinisikan di Secret ke variabel lingkungan `SECRET_USERNAME` di dalam spesifikasi Pod. - {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}} + {{% codenew file="pods/inject/pod-single-secret-env-variable.yaml" %}} * Membuat Pod: @@ -187,7 +187,7 @@ Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod: * Definisikan variabel lingkungan di dalam spesifikasi Pod. - {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}} + {{% codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" %}} * Membuat Pod: @@ -221,7 +221,7 @@ Fitur ini tersedia mulai dari Kubernetes v1.6 dan yang lebih baru. * Gunakan envFrom untuk mendefinisikan semua data Secret sebagai variabel lingkungan Container. _Key_ dari Secret akan mennjadi nama variabel lingkungan di dalam Pod. 
- {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}} + {{% codenew file="pods/inject/pod-secret-envFrom.yaml" %}} * Membuat Pod: diff --git a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md index 349a283d7a01a..6bf0f53532aa8 100644 --- a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -34,7 +34,7 @@ Untuk informasi lanjut mengenai keterbatasan, lihat [CronJob](/id/docs/concepts/ CronJob membutuhkan sebuah berkas konfigurasi. Ini adalah contoh dari berkas konfigurasi CronJob `.spec` yang akan mencetak waktu sekarang dan pesan "hello" setiap menit: -{{< codenew file="application/job/cronjob.yaml" >}} +{{% codenew file="application/job/cronjob.yaml" %}} Jalankan contoh CronJob menggunakan perintah berikut: diff --git a/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md index 88eeaf38d3079..073937e189409 100644 --- a/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -52,7 +52,7 @@ Tambahkan parameter `-R` untuk memproses seluruh direktori secara rekursif. Berikut sebuah contoh *file* konfigurasi objek: -{{< codenew file="application/simple_deployment.yaml" >}} +{{% codenew file="application/simple_deployment.yaml" %}} Jalankan perintah `kubectl diff` untuk menampilkan objek yang akan dibuat: @@ -135,7 +135,7 @@ Tambahkan argumen `-R` untuk memproses seluruh direktori secara rekursif. Berikut sebuah contoh *file* konfigurasi: -{{< codenew file="application/simple_deployment.yaml" >}} +{{% codenew file="application/simple_deployment.yaml" %}} Buat objek dengan perintah `kubectl apply`:: @@ -248,7 +248,7 @@ spec: Perbarui *file* konfigurasi `simple_deployment.yaml`, ubah *image* dari `nginx:1.7.9` ke `nginx:1.11.9`, dan hapus *field* `minReadySeconds`: -{{< codenew file="application/update_deployment.yaml" >}} +{{% codenew file="application/update_deployment.yaml" %}} Terapkan perubahan yang telah dibuat di *file* konfigurasi: @@ -379,7 +379,7 @@ Perintah `kubectl apply` menulis konten dari berkas konfigurasi ke anotasi `kube Agar lebih jelas, simak contoh berikut. Misalkan, berikut adalah *file* konfigurasi untuk sebuah objek Deployment: -{{< codenew file="application/update_deployment.yaml" >}} +{{% codenew file="application/update_deployment.yaml" %}} Juga, misalkan, berikut adalah konfigurasi *live* dari objek Deployment yang sama: @@ -627,7 +627,7 @@ TODO(pwittrock): *Uncomment* ini untuk versi 1.6 Berikut adalah sebuah *file* konfigurasi untuk sebuah Deployment. 
Berkas berikut tidak menspesifikasikan `strategy`: -{{< codenew file="application/simple_deployment.yaml" >}} +{{% codenew file="application/simple_deployment.yaml" %}} Buat objek dengan perintah `kubectl apply`: diff --git a/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 7a23efa6ff3c4..1c16b087b79db 100644 --- a/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -57,7 +57,7 @@ Bagian ini mendefinisikan laman index.php yang melakukan beberapa komputasi inte Pertama, kita akan memulai Deployment yang menjalankan _image_ dan mengeksposnya sebagai Service menggunakan konfigurasi berikut: -{{< codenew file="application/php-apache.yaml" >}} +{{% codenew file="application/php-apache.yaml" %}} Jalankan perintah berikut: @@ -434,7 +434,7 @@ Semua metrik di HorizontalPodAutoscaler dan metrik API ditentukan menggunakan no Daripada menggunakan perintah `kubectl autoscale` untuk membuat HorizontalPodAutoscaler secara imperatif, kita dapat menggunakan berkas berikut untuk membuatnya secara deklaratif: -{{< codenew file="application/hpa/php-apache.yaml" >}} +{{% codenew file="application/hpa/php-apache.yaml" %}} Kita akan membuat _autoscaler_ dengan menjalankan perintah berikut: diff --git a/content/id/docs/tasks/run-application/run-stateless-application-deployment.md b/content/id/docs/tasks/run-application/run-stateless-application-deployment.md index 74e76c827be57..3e96eb1fda1e6 100644 --- a/content/id/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/id/docs/tasks/run-application/run-stateless-application-deployment.md @@ -38,7 +38,7 @@ Kamu dapat menjalankan aplikasi dengan membuat sebuah objek Deployment Kubernete dapat mendeskripsikan sebuah Deployment di dalam berkas YAML. Sebagai contohnya, berkas YAML berikut mendeskripsikan sebuah Deployment yang menjalankan _image_ Docker nginx:1.14.2: -{{< codenew file="application/deployment.yaml" >}} +{{% codenew file="application/deployment.yaml" %}} 1. Buatlah sebuah Deployment berdasarkan berkas YAML: @@ -100,7 +100,7 @@ YAML berikut mendeskripsikan sebuah Deployment yang menjalankan _image_ Docker n Kamu dapat mengubah Deployment dengan cara mengaplikasikan berkas YAML yang baru. Berkas YAML ini memberikan spesifikasi Deployment untuk menggunakan Nginx versi 1.16.1. -{{< codenew file="application/deployment-update.yaml" >}} +{{% codenew file="application/deployment-update.yaml" %}} 1. Terapkan berkas YAML yang baru: @@ -116,7 +116,7 @@ Kamu dapat meningkatkan jumlah Pod di dalam Deployment dengan menerapkan berkas YAML baru. Berkas YAML ini akan meningkatkan jumlah replika menjadi 4, yang nantinya memberikan spesifikasi agar Deployment memiliki 4 buah Pod. -{{< codenew file="application/deployment-scale.yaml" >}} +{{% codenew file="application/deployment-scale.yaml" %}} 1. Terapkan berkas YAML: diff --git a/content/id/docs/tutorials/hello-minikube.md b/content/id/docs/tutorials/hello-minikube.md index 6790dbf47fba5..d2e4a5de76677 100644 --- a/content/id/docs/tutorials/hello-minikube.md +++ b/content/id/docs/tutorials/hello-minikube.md @@ -38,9 +38,9 @@ Kamupun bisa mengikuti tutorial ini kalau sudah instalasi minikube di lokal. 
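For the declarative alternative mentioned above, a HorizontalPodAutoscaler equivalent to the earlier `kubectl autoscale` call might look roughly like this; the 50% CPU target and the 1 to 10 replica range are assumptions taken from the walkthrough, not necessarily the contents of the referenced `application/hpa/php-apache.yaml`.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache                   # the Deployment created at the start of the walkthrough
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # scale out once average CPU use exceeds 50%
```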
Sil Tutorial ini menyediakan image Kontainer yang dibuat melalui barisan kode berikut: -{{< codenew language="js" file="minikube/server.js" >}} +{{% codenew language="js" file="minikube/server.js" %}} -{{< codenew language="conf" file="minikube/Dockerfile" >}} +{{% codenew language="conf" file="minikube/Dockerfile" %}} Untuk info lebih lanjut tentang perintah `docker build`, baca [dokumentasi Docker](https://docs.docker.com/engine/reference/commandline/build/). diff --git a/content/id/docs/tutorials/stateful-application/basic-stateful-set.md b/content/id/docs/tutorials/stateful-application/basic-stateful-set.md index b664a3bb8abf1..7ce5437d61bfc 100644 --- a/content/id/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/id/docs/tutorials/stateful-application/basic-stateful-set.md @@ -59,7 +59,7 @@ Contoh ini menciptakan sebuah [Service _headless_](/id/docs/concepts/services-networking/service/#service-headless), `nginx`, untuk mempublikasikan alamat IP Pod di dalam StatefulSet, `web`. -{{< codenew file="application/web/web.yaml" >}} +{{% codenew file="application/web/web.yaml" %}} Unduh contoh di atas, dan simpan ke dalam berkas dengan nama `web.yaml`. @@ -1075,7 +1075,7 @@ menjalankan atau mengakhiri semua Pod secara bersamaan (paralel), dan tidak menu suatu Pod menjadi Running dan Ready atau benar-benar berakhir sebelum menjalankan atau mengakhiri Pod yang lain. -{{< codenew file="application/web/web-parallel.yaml" >}} +{{% codenew file="application/web/web-parallel.yaml" %}} Unduh contoh di atas, dan simpan ke sebuah berkas dengan nama `web-parallel.yaml`. diff --git a/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md index df297f4c634b7..2152c8e0e3621 100644 --- a/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -42,7 +42,7 @@ yang mengekspos alamat IP eksternal. 1. 
Jalankan sebuah aplikasi Hello World pada klaster kamu: -{{< codenew file="service/load-balancer-example.yaml" >}} +{{% codenew file="service/load-balancer-example.yaml" %}} ```shell kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml diff --git a/content/id/examples/application/deployment-sidecar.yaml b/content/id/examples/application/deployment-sidecar.yaml new file mode 100644 index 0000000000000..3f1b841d31ebf --- /dev/null +++ b/content/id/examples/application/deployment-sidecar.yaml @@ -0,0 +1,34 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: myapp + labels: + app: myapp +spec: + replicas: 1 + selector: + matchLabels: + app: myapp + template: + metadata: + labels: + app: myapp + spec: + containers: + - name: myapp + image: alpine:latest + command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done'] + volumeMounts: + - name: data + mountPath: /opt + initContainers: + - name: logshipper + image: alpine:latest + restartPolicy: Always + command: ['sh', '-c', 'tail -F /opt/logs.txt'] + volumeMounts: + - name: data + mountPath: /opt + volumes: + - name: data + emptyDir: {} \ No newline at end of file diff --git a/content/id/examples/application/job/job-sidecar.yaml b/content/id/examples/application/job/job-sidecar.yaml new file mode 100644 index 0000000000000..9787ad88515b2 --- /dev/null +++ b/content/id/examples/application/job/job-sidecar.yaml @@ -0,0 +1,26 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: myjob +spec: + template: + spec: + containers: + - name: myjob + image: alpine:latest + command: ['sh', '-c', 'echo "logging" > /opt/logs.txt'] + volumeMounts: + - name: data + mountPath: /opt + initContainers: + - name: logshipper + image: alpine:latest + restartPolicy: Always + command: ['sh', '-c', 'tail -F /opt/logs.txt'] + volumeMounts: + - name: data + mountPath: /opt + restartPolicy: Never + volumes: + - name: data + emptyDir: {} \ No newline at end of file diff --git a/content/ja/docs/concepts/security/secrets-good-practices.md b/content/ja/docs/concepts/security/secrets-good-practices.md new file mode 100644 index 0000000000000..53bf69959a6de --- /dev/null +++ b/content/ja/docs/concepts/security/secrets-good-practices.md @@ -0,0 +1,81 @@ +--- +title: Kubernetes Secretの適切な使用方法 +description: > + クラスター管理者とアプリケーション開発者向けの適切なSecret管理の原則と実践方法。 +content_type: concept +weight: 70 +--- + + + +{{}} + +以下の適切な使用方法は、クラスター管理者とアプリケーション開発者の両方を対象としています。 +これらのガイドラインに従って、Secretオブジェクト内の機密情報のセキュリティを向上させ、Secretの効果的な管理を行ってください。 + + + +## クラスター管理者 + +このセクションでは、クラスター管理者がクラスター内の機密情報のセキュリティを強化するために使用できる適切な方法を提供します。 + +### データ保存時の暗号化を構成する + +デフォルトでは、Secretオブジェクトは{{}}内で暗号化されていない状態で保存されます。 +`etcd`内のSecretデータを暗号化するように構成する必要があります。 +手順については、[機密データ保存時の暗号化](/docs/tasks/administer-cluster/encrypt-data/)を参照してください。 + +### Secretへの最小特権アクセスを構成する {#least-privilege-secrets} + +Kubernetesの{{}} [(RBAC)](/docs/reference/access-authn-authz/rbac/)などのアクセス制御メカニズムを計画する際、`Secret`オブジェクトへのアクセスに関する以下のガイドラインを考慮してください。 +また、[RBACの適切な使用方法](/docs/concepts/security/rbac-good-practices)の他のガイドラインにも従ってください。 + +- **コンポーネント**: `watch`または`list`アクセスを、最上位の特権を持つシステムレベルのコンポーネントのみに制限してください。コンポーネントの通常の動作が必要とする場合にのみ、Secretへの`get`アクセスを許可してください。 +- **ユーザー**: Secretへの`get`、`watch`、`list`アクセスを制限してください。`etcd`へのアクセスはクラスター管理者にのみ許可し、読み取り専用アクセスも許可してください。特定の注釈を持つSecretへのアクセスを制限するなど、より複雑なアクセス制御については、サードパーティの認証メカニズムを検討してください。 + +{{< caution >}} +Secretへの`list`アクセスを暗黙的に許可すると、サブジェクトがSecretの内容を取得できるようになります。 +{{< /caution >}} + +Secretを使用するPodを作成できるユーザーは、そのSecretの値も見ることができます。 
+クラスターのポリシーがユーザーにSecretを直接読むことを許可しない場合でも、同じユーザーがSecretを公開するPodを実行するアクセスを持つかもしれません。 +このようなアクセスを持つユーザーによるSecretデータの意図的または偶発的な公開の影響を検出または制限することができます。 +いくつかの推奨事項には以下があります: + +* 短寿命のSecretを使用する +* 特定のイベントに対してアラートを出す監査ルールを実装する(例:単一ユーザーによる複数のSecretの同時読み取り) + +### etcdの管理ポリシーを改善する + +使用しなくなった場合には、`etcd`が使用する永続ストレージを削除するかシュレッダーで処理してください。 + +複数の`etcd`インスタンスがある場合、インスタンス間の通信を暗号化されたSSL/TLS通信に設定して、転送中のSecretデータを保護してください。 + +### 外部Secretへのアクセスを構成する + +{{% thirdparty-content %}} + +外部のSecretストアプロバイダーを使用して機密データをクラスターの外部に保存し、その情報にアクセスするようにPodを構成できます。 +[Kubernetes Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/)は、kubeletが外部ストアからSecretを取得し、データにアクセスすることを許可された特定のPodにSecretをボリュームとしてマウントするDaemonSetです。 + +サポートされているプロバイダーの一覧については、[Secret Store CSI Driverのプロバイダー](https://secrets-store-csi-driver.sigs.k8s.io/concepts.html#provider-for-the-secrets-store-csi-driver)を参照してください。 + +## 開発者 + +このセクションでは、Kubernetesリソースの作成と展開時に機密データのセキュリティを向上させるための開発者向けの適切な使用方法を提供します。 + +### 特定のコンテナへのSecretアクセスを制限する + +Pod内で複数のコンテナを定義し、そのうち1つのコンテナだけがSecretへのアクセスを必要とする場合、他のコンテナがそのSecretにアクセスできないようにボリュームマウントや環境変数の設定を行ってください。 + +### 読み取り後にSecretデータを保護する + +アプリケーションは、環境変数やボリュームから機密情報を読み取った後も、その値を保護する必要があります。 +例えば、アプリケーションは機密情報を平文でログに記録したり、信頼できない第三者に送信したりしないようにする必要があります。 + +### Secretマニフェストの共有を避ける +Secretを{{< glossary_tooltip text="マニフェスト" term_id="manifest" >}}を介して設定し、秘密データをBase64でエンコードしている場合、このファイルを共有したりソースリポジトリにチェックインしたりすると、その秘密はマニフェストを読むことのできる全員に公開されます。 + +{{< caution >}} +Base64エンコードは暗号化方法ではなく、平文と同じく機密性を提供しません。 +{{< /caution >}} diff --git a/content/ja/docs/concepts/storage/volumes.md b/content/ja/docs/concepts/storage/volumes.md index c4445d49c49f7..df385712b7a7b 100644 --- a/content/ja/docs/concepts/storage/volumes.md +++ b/content/ja/docs/concepts/storage/volumes.md @@ -899,7 +899,7 @@ spec: Portworxの`CSIMigration`機能が追加されましたが、Kubernetes 1.23ではAlpha状態であるため、デフォルトで無効になっています。 すべてのプラグイン操作を既存のツリー内プラグインから`pxd.portworx.com`Container Storage Interface(CSI)ドライバーにリダイレクトします。 -[Portworx CSIドライバー](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/csi/)をクラスターにインストールする必要があります。 +[Portworx CSIドライバー](https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/csi)をクラスターにインストールする必要があります。 この機能を有効にするには、kube-controller-managerとkubeletで`CSIMigrationPortworx=true`を設定します。 ## subPathの使用 {#using-subpath} diff --git a/content/ja/docs/concepts/workloads/controllers/replicaset.md b/content/ja/docs/concepts/workloads/controllers/replicaset.md index c3d5282fa5c15..1f73fe842f911 100644 --- a/content/ja/docs/concepts/workloads/controllers/replicaset.md +++ b/content/ja/docs/concepts/workloads/controllers/replicaset.md @@ -275,7 +275,7 @@ ReplicaSetは、ただ`.spec.replicas`フィールドを更新することによ [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost)アノテーションを使用すると、ReplicaSetをスケールダウンする際に、どのPodを最初に削除するかについて、ユーザーが優先順位を設定することができます。 -アノテーションはPodに設定する必要があり、範囲は[-2147483647, 2147483647]になります。同じReplicaSetに属する他のPodと比較して、Podを削除する際のコストを表しています。削除コストの低いPodは、削除コストの高いPodより優先的に削除されます。 +アノテーションはPodに設定する必要があり、範囲は[-2147483648, 2147483647]になります。同じReplicaSetに属する他のPodと比較して、Podを削除する際のコストを表しています。削除コストの低いPodは、削除コストの高いPodより優先的に削除されます。 このアノテーションを設定しないPodは暗黙的に0と設定され、負の値は許容されます。 無効な値はAPIサーバーによって拒否されます。 diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md index 89d521098c809..a975102d76207 100644 --- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md +++ 
b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md @@ -92,7 +92,7 @@ Podの`spec`には、Always、OnFailure、またはNeverのいずれかの値を PodにはPodStatusがあります。それにはPodが成功したかどうかの情報を持つ[PodCondition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podcondition-v1-core)の配列が含まれています。kubeletは、下記のPodConditionを管理します: * `PodScheduled`: PodがNodeにスケジュールされました。 -* `PodHasNetwork`: (アルファ版機能; [明示的に有効](#pod-has-network)にしなければならない) Podサンドボックスが正常に成功され、ネットワークの設定が完了しました。 +* `PodHasNetwork`: (アルファ版機能; [明示的に有効](#pod-has-network)にしなければならない) Podサンドボックスが正常に作成され、ネットワークの設定が完了しました。 * `ContainersReady`: Pod内のすべてのコンテナが準備できた状態です。 * `Initialized`: すべての[Initコンテナ](/ja/docs/concepts/workloads/pods/init-containers)が正常に終了しました。 * `Ready`: Podはリクエストを処理でき、一致するすべてのサービスの負荷分散プールに追加されます。 @@ -206,7 +206,7 @@ probeを使ってコンテナをチェックする4つの異なる方法があ : コンテナの診断が失敗しました。 `Unknown` -: コンテナの診断が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。 +: コンテナの診断自体が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。 ### Probeの種類 {#types-of-probe} diff --git a/content/ja/docs/reference/glossary/addons.md b/content/ja/docs/reference/glossary/addons.md new file mode 100644 index 0000000000000..ee781366e0231 --- /dev/null +++ b/content/ja/docs/reference/glossary/addons.md @@ -0,0 +1,16 @@ +--- +title: Add-ons +id: addons +date: 2019-12-15 +full_link: /ja/docs/concepts/cluster-administration/addons/ +short_description: > + Kubernetesの機能を拡張するリソース。 + +aka: +tags: +- tool +--- + Kubernetesの機能を拡張するリソース。 + + +[アドオンのインストール](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。 diff --git a/content/ja/docs/reference/glossary/kubeadm.md b/content/ja/docs/reference/glossary/kubeadm.md new file mode 100644 index 0000000000000..535f26183dc5a --- /dev/null +++ b/content/ja/docs/reference/glossary/kubeadm.md @@ -0,0 +1,18 @@ +--- +title: Kubeadm +id: kubeadm +date: 2018-04-12 +full_link: /ja/docs/reference/setup-tools/kubeadm/ +short_description: > + Kubernetesを迅速にインストールし、安全なクラスターをセットアップするためのツール。 + +aka: +tags: +- tool +- operation +--- + Kubernetesを迅速にインストールし、安全なクラスターをセットアップするためのツール。 + + + +kubeadmを使用して、コントロールプレーンと{{< glossary_tooltip text="ワーカーノード" term_id="node" >}}コンポーネントの両方をインストールできます。 diff --git a/content/ja/docs/reference/glossary/secret.md b/content/ja/docs/reference/glossary/secret.md index 3324279bd65f9..8f7196f0c6d9c 100644 --- a/content/ja/docs/reference/glossary/secret.md +++ b/content/ja/docs/reference/glossary/secret.md @@ -15,4 +15,6 @@ tags: -機密情報の取り扱い方法を細かく制御することができ、保存時には[暗号化](/ja/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するなど、誤って公開してしまうリスクを減らすことができます。{{< glossary_tooltip text="Pod" term_id="pod" >}}は、ボリュームマウントされたファイルとして、またはPodのイメージをPullするkubeletによって、Secretを参照します。Secretは機密情報を扱うのに最適で、機密でない情報には[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)が適しています。 +Secretは、機密情報の使用方法をより管理しやすくし、偶発的な漏洩のリスクを減らすことができます。Secretの値はbase64文字列としてエンコードされ、デフォルトでは暗号化されずに保存されますが、[保存時に暗号化](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するように設定することもできます。 + +{{< glossary_tooltip text="Pod" term_id="pod" >}}は、ボリュームマウントや環境変数など、さまざまな方法でSecretを参照できます。Secretは機密データ用に設計されており、[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)は非機密データ用に設計されています。 \ No newline at end of file diff --git a/content/ja/docs/setup/best-practices/certificates.md b/content/ja/docs/setup/best-practices/certificates.md index a782b52fee1e9..a499631875a05 100644 --- a/content/ja/docs/setup/best-practices/certificates.md +++ b/content/ja/docs/setup/best-practices/certificates.md @@
-67,7 +67,7 @@ CAの秘密鍵をクラスターにコピーしたくない場合、自身で全 | kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | | kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | | kube-etcd-healthcheck-client | etcd-ca | | client | | -| kube-apiserver-etcd-client | etcd-ca | system:masters | client | | +| kube-apiserver-etcd-client | etcd-ca | | client | | | kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` | | kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | | front-proxy-client | kubernetes-front-proxy-ca | | client | | diff --git a/content/pl/_index.html b/content/pl/_index.html index 1fa0f72c4cd08..06312f7197954 100644 --- a/content/pl/_index.html +++ b/content/pl/_index.html @@ -4,9 +4,10 @@ cid: home sitemap: priority: 1.0 - --- +{{< site-searchbar >}} + {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}} [Kubernetes]({{< relref "/docs/concepts/overview/" >}}), znany też jako K8s, to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach. @@ -58,5 +59,3 @@

The Challenges of Migrating 150+ Microservices to Kubernetes

{{< /blocks/section >}} - -{{< blocks/kubernetes-features >}} diff --git a/content/pl/docs/home/_index.md b/content/pl/docs/home/_index.md index 1e2ecfe14b61b..f72000862d264 100644 --- a/content/pl/docs/home/_index.md +++ b/content/pl/docs/home/_index.md @@ -4,7 +4,7 @@ noedit: true cid: docsHome layout: docsportal_home class: gridPage gridPageHome -linkTitle: "Strona główna" +linkTitle: "Dokumentacja" main_menu: true weight: 10 hide_feedback: true @@ -39,24 +39,26 @@ cards: description: "Wyszukaj popularne zadania i dowiedz się, jak sobie z nimi efektywnie poradzić." button: "Przegląd zadań" button_path: "/docs/tasks" -- name: training - title: "Szkolenia" - description: "Uzyskaj certyfikat Kubernetes i spraw, aby Twoje projekty cloud native zakończyły się sukcesem!" - button: "Oferta szkoleń" - button_path: "/training" - name: reference title: Dokumentacja źródłowa description: Zapoznaj się z terminologią, składnią poleceń, typami zasobów API i dokumentacją narzędzi instalacyjnych. button: Zajrzyj do źródeł button_path: /docs/reference - name: contribute - title: Weź udział w tworzeniu dokumentacji - description: Każdy może przyczynić się do tworzenia dokumentacji - zarówno nowicjusze, jak i starzy wyjadacze. - button: Weź udział + title: Weź udział w tworzeniu Kubernetesa + description: Każdy może pomóc - zarówno nowicjusze, jak i starzy wyjadacze. + button: Zobacz, jak możesz pomóc button_path: /docs/contribute -- name: release-notes - title: Informacje o wydaniu K8s - description: Jeśli instalujesz lub aktualizujesz Kubernetesa, zajrzyj do informacji o najnowszym wydaniu. +- name: training + title: "Szkolenia" + description: "Uzyskaj certyfikat Kubernetes i spraw, aby Twoje projekty cloud native zakończyły się sukcesem!" + button: "Oferta szkoleń" + button_path: "/training" +- name: Download + title: Pobierz Kubernetesa + description: Zainstaluj Kubernetes lub zakutalizuj do najnowszej wersji. + button: "Pobierz Kubernetesa" + button_path: "/releases/download" - name: about title: O dokumentacji description: Tu znajdziesz dokumentację bieżącej i czterech poprzednich wersji Kubernetes. diff --git a/content/pl/docs/reference/_index.md b/content/pl/docs/reference/_index.md index 9d0e772188ac6..fecda7e8fd454 100644 --- a/content/pl/docs/reference/_index.md +++ b/content/pl/docs/reference/_index.md @@ -4,6 +4,7 @@ linkTitle: "Materiały źródłowe" main_menu: true weight: 70 content_type: concept +no_list: true --- @@ -28,6 +29,7 @@ Aby wywołać Kubernetes API z wybranego języka programowania, możesz skorzyst [bibliotek klienckich](/docs/reference/using-api/client-libraries/). Oficjalnie wspierane biblioteki to: +* [Kubernetes Go client library](https://github.com/kubernetes/client-go/) * [Kubernetes Python client library](https://github.com/kubernetes-client/python) * [Kubernetes Java client library](https://github.com/kubernetes-client/java) * [Kubernetes JavaScript client library](https://github.com/kubernetes-client/javascript) @@ -65,27 +67,44 @@ Kubernetesa lub innych narzędzi. Choć większość tych API nie jest udostępn serwer API w trybie RESTful, są one niezbędne dla użytkowników i administratorów w korzystaniu i zarządzaniu klastrem. 
-* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) -* [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) + +* [kubeconfig (v1)](/docs/reference/config-api/kubeconfig.v1/) +* [kube-apiserver admission (v1)](/docs/reference/config-api/apiserver-admission.v1/) +* [kube-apiserver configuration (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/) i +* [kube-apiserver configuration (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/) i + [kube-apiserver configuration (v1)](/docs/reference/config-api/apiserver-config.v1/) * [kube-apiserver encryption (v1)](/docs/reference/config-api/apiserver-encryption.v1/) * [kube-apiserver event rate limit (v1alpha1)](/docs/reference/config-api/apiserver-eventratelimit.v1alpha1/) -* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/) oraz - [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) -* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/) -* [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) -* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/) oraz - [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) +* [kubelet configuration (v1alpha1)](/docs/reference/config-api/kubelet-config.v1alpha1/), + [kubelet configuration (v1beta1)](/docs/reference/config-api/kubelet-config.v1beta1/) i + [kubelet configuration (v1)](/docs/reference/config-api/kubelet-config.v1/) +* [kubelet credential providers (v1alpha1)](/docs/reference/config-api/kubelet-credentialprovider.v1alpha1/), + [kubelet credential providers (v1beta1)](/docs/reference/config-api/kubelet-credentialprovider.v1beta1/) i + [kubelet credential providers (v1)](/docs/reference/config-api/kubelet-credentialprovider.v1/) +* [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) i + [kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/) +* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/) * [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/) * [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/) -* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) oraz +* [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) i [Client authentication API (v1)](/docs/reference/config-api/client-authentication.v1/) * [WebhookAdmission configuration (v1)](/docs/reference/config-api/apiserver-webhookadmission.v1/) * [ImagePolicy API (v1alpha1)](/docs/reference/config-api/imagepolicy.v1alpha1/) ## API konfiguracji dla kubeadm -* [v1beta2](/docs/reference/config-api/kubeadm-config.v1beta2/) + * [v1beta3](/docs/reference/config-api/kubeadm-config.v1beta3/) +* [v1beta4](/docs/reference/config-api/kubeadm-config.v1beta4/) + +## Zewnętrzne API + +Istnieją API, które zostały zdefiniowane w ramach projektu Kubernetes, ale nie zostały zaimplementowane +przez główny projekt: + +* [Metrics API (v1beta1)](/docs/reference/external-api/metrics.v1beta1/) +* [Custom Metrics API (v1beta2)](/docs/reference/external-api/custom-metrics.v1beta2) +* [External Metrics API 
(v1beta1)](/docs/reference/external-api/external-metrics.v1beta1) ## Dokumentacja projektowa diff --git a/content/pl/docs/reference/glossary/cloud-controller-manager.md b/content/pl/docs/reference/glossary/cloud-controller-manager.md index 5d09fa4695d73..d0398cb755534 100644 --- a/content/pl/docs/reference/glossary/cloud-controller-manager.md +++ b/content/pl/docs/reference/glossary/cloud-controller-manager.md @@ -12,7 +12,7 @@ tags: - operation --- Element składowy {{< glossary_tooltip text="warstwy sterowania" term_id="control-plane" >}} Kubernetesa, -który zarządza usługami realizowanymi po stronie chmur obliczeniowych. Cloud controller manager umożliwia +który zarządza usługami realizowanymi po stronie chmur obliczeniowych. [Cloud controller manager](/docs/concepts/architecture/cloud-controller/) umożliwia połączenie Twojego klastra z API operatora usług chmurowych i rozdziela składniki operujące na platformie chmurowej od tych, które dotyczą wyłącznie samego klastra. diff --git a/content/pl/docs/setup/_index.md b/content/pl/docs/setup/_index.md index 5d63a0fdcfec5..a4a5a7713d1ad 100644 --- a/content/pl/docs/setup/_index.md +++ b/content/pl/docs/setup/_index.md @@ -22,7 +22,7 @@ Instalując Kubernetesa, przy wyborze platformy kieruj się: łatwością w utrz Możesz [pobrać Kubernetesa](/releases/download/), aby zainstalować klaster na lokalnym komputerze, w chmurze czy w prywatnym centrum obliczeniowym. -Niektóre [komponenty Kubernetesa](/docs/concepts/overview/components/), na przykład `kube-apiserver` czy `kube-proxy` mogą być +Niektóre [komponenty Kubernetesa](/docs/concepts/overview/components/), na przykład `{{< glossary_tooltip text="kube-apiserver" term_id="kube-apiserver" >}} czy {{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}} mogą być uruchamiane jako [kontenery](/releases/download/#container-images) wewnątrz samego klastra. **Zalecamy** uruchamianie komponentów Kubernetesa jako kontenery zawsze, @@ -59,9 +59,6 @@ jest [kubeadm](/docs/setup/production-environment/tools/kubeadm/). - Wybierz [środowisko uruchomieniowe dla kontenerów](/docs/setup/production-environment/container-runtimes/) w nowym klastrze - Naucz się [najlepszych praktyk](/docs/setup/best-practices/) przy konfigurowaniu klastra -Na stronie [Partnerów Kubernetesa](https://kubernetes.io/partners/#conformance) znajdziesz listę dostawców posiadających -[certyfikację Kubernetes](https://github.com/cncf/k8s-conformance/#certified-kubernetes). - Kubernetes zaprojektowano w ten sposób, że {{< glossary_tooltip term_id="control-plane" text="warstwa sterowania" >}} wymaga do działania systemu Linux. W ramach klastra aplikacje mogą być uruchamiane na systemie Linux i innych, w tym Windows. diff --git a/content/pl/releases/_index.md b/content/pl/releases/_index.md index 515dbf2f85c92..2add7bf6e7c77 100644 --- a/content/pl/releases/_index.md +++ b/content/pl/releases/_index.md @@ -2,15 +2,20 @@ linktitle: Historia wydań title: Wydania type: docs +layout: release-info +notoc: true --- - -Projekt Kubernetes zapewnia wsparcie dla trzech ostatnich wydań _minor_ ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Poprawki do wydania 1.19 i nowszych [będą publikowane przez około rok](/releases/patch-releases/#support-period). Kubernetes w wersji 1.18 i wcześniejszych otrzymywał poprawki przez 9 miesięcy. +Projekt Kubernetes zapewnia wsparcie dla trzech ostatnich wydań _minor_ +({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). 
+Poprawki do wydania 1.19 i nowszych [będą publikowane przez około rok](/releases/patch-releases/#support-period). +Kubernetes w wersji 1.18 i wcześniejszych otrzymywał poprawki przez 9 miesięcy. Wersje Kubernetesa oznaczane są jako **x.y.z**, -gdzie **x** jest oznaczeniem wersji głównej (_major_), **y** — podwersji (_minor_), a **z** — numer poprawki (_patch_), zgodnie z terminologią [Semantic Versioning](https://semver.org/). +gdzie **x** jest oznaczeniem wersji głównej (_major_), **y** — podwersji (_minor_), a **z** — numer poprawki (_patch_), +zgodnie z terminologią [Semantic Versioning](https://semver.org/). Więcej informacji można z znaleźć w dokumencie [version skew policy](/releases/version-skew-policy/). @@ -22,6 +27,7 @@ Więcej informacji można z znaleźć w dokumencie [version skew policy](/releas ## Nadchodzące wydania -Zajrzyj na [harmonogram](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) nadchodzącego wydania Kubernetesa numer **{{< skew nextMinorVersion >}}**! +Zajrzyj na [harmonogram](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) +nadchodzącego wydania Kubernetesa numer **{{< skew nextMinorVersion >}}**! ## Przydatne zasoby diff --git a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md index dcd22d1f77000..440187b4b9460 100644 --- a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md +++ b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md @@ -280,7 +280,7 @@ Se o Pod obedecer todos os items acima simultaneamente, a seleção é aleatóri Utilizando a anotação [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost), usuários podem definir uma preferência em relação à quais pods serão removidos primeiro caso o ReplicaSet precise escalonar para baixo. -A anotação deve ser definida no pod, com uma variação de [-2147483647, 2147483647]. Isso representa o custo de deletar um pod comparado com outros pods que pertencem à esse mesmo ReplicaSet. Pods com um custo de deleção menor são eleitos para deleção antes de pods com um custo maior. +A anotação deve ser definida no pod, com uma variação de [-2147483648, 2147483647]. Isso representa o custo de deletar um pod comparado com outros pods que pertencem à esse mesmo ReplicaSet. Pods com um custo de deleção menor são eleitos para deleção antes de pods com um custo maior. O valor implícito para essa anotação para pods que não a tem definida é 0; valores negativos são permitidos. Valores inválidos serão rejeitados pelo servidor API. 
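The `controller.kubernetes.io/pod-deletion-cost` annotation discussed in the ReplicaSet hunks above is set on individual Pods, usually via the Pod template or by a controller that knows which replicas are cheapest to remove. As a minimal, hypothetical sketch (the Pod name, labels, image, and cost value below are illustrative and not taken from the files being changed), a Pod that should be among the first candidates for removal during a scale-down could carry a low deletion cost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-replica-example        # hypothetical Pod managed by a ReplicaSet
  labels:
    app: web
  annotations:
    # Must parse as an integer in [-2147483648, 2147483647]; Pods without the
    # annotation are treated as cost 0. Lower cost => deleted earlier on scale-down.
    controller.kubernetes.io/pod-deletion-cost: "-100"
spec:
  containers:
  - name: web
    image: nginx:1.25              # illustrative image
```

The annotation value is written as a string in the manifest but must be a valid integer; as both hunks above note, invalid values are rejected by the API server.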
diff --git a/content/pt-br/docs/home/_index.md b/content/pt-br/docs/home/_index.md index 61c2921ce9bb7..74b80af76d325 100644 --- a/content/pt-br/docs/home/_index.md +++ b/content/pt-br/docs/home/_index.md @@ -6,7 +6,7 @@ noedit: true cid: docsHome layout: docsportal_home class: gridPage gridPageHome -linkTitle: "Home" +linkTitle: "Documentação" main_menu: true weight: 10 hide_feedback: true diff --git a/content/pt-br/docs/tasks/tools/install-kubectl-linux.md b/content/pt-br/docs/tasks/tools/install-kubectl-linux.md index 4c37e5f96b0fb..2656115f22b4c 100644 --- a/content/pt-br/docs/tasks/tools/install-kubectl-linux.md +++ b/content/pt-br/docs/tasks/tools/install-kubectl-linux.md @@ -44,7 +44,7 @@ Por exemplo, para fazer download da versão {{< skew currentPatchVersion >}} no Faça download do arquivo checksum de verificação do kubectl: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" ``` Valide o binário kubectl em relação ao arquivo de verificação: @@ -215,7 +215,7 @@ Abaixo estão os procedimentos para configurar o autocompletar para Bash, Fish e Faça download do arquivo checksum de verificação do kubectl-convert: ```bash - curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" + curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256" ``` Valide o binário kubectl-convert com o arquivo de verificação: diff --git a/content/pt-br/docs/tutorials/kubernetes-basics/_index.html b/content/pt-br/docs/tutorials/kubernetes-basics/_index.html index 20d0529c6258d..8fa280c8ce306 100644 --- a/content/pt-br/docs/tutorials/kubernetes-basics/_index.html +++ b/content/pt-br/docs/tutorials/kubernetes-basics/_index.html @@ -62,25 +62,25 @@

Módulos básicos do Kubernetes

@@ -90,17 +90,17 @@

Módulos básicos do Kubernetes

diff --git a/content/pt-br/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html b/content/pt-br/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html index c1b06dedca975..e181a606b76ad 100644 --- a/content/pt-br/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html +++ b/content/pt-br/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro.html @@ -132,7 +132,7 @@

Implante seu primeiro aplicativo no empacotado em um contêiner que utiliza o NGINX para repetir todas as requisições. (Se você ainda não tentou criar o aplicativo hello-node e implantá-lo usando um contêiner, você pode fazer isso primeiro seguindo as - instruções do tutorial Olá, Minikube!). + instruções do tutorial Olá, Minikube!).

diff --git a/content/pt-br/docs/tutorials/kubernetes-basics/scale/scale-intro.html b/content/pt-br/docs/tutorials/kubernetes-basics/scale/scale-intro.html index ee6b9d3241498..dbff93774c91f 100644 --- a/content/pt-br/docs/tutorials/kubernetes-basics/scale/scale-intro.html +++ b/content/pt-br/docs/tutorials/kubernetes-basics/scale/scale-intro.html @@ -1,46 +1,71 @@ --- -title: Executando múltiplas instâncias de seu aplicativo +title: Executando Múltiplas Instâncias da sua Aplicação weight: 10 +description: |- + Escalone uma aplicação existente de forma manual utilizando kubectl. --- - + - -

-
-

Objetivos

+
+

Objetivos

    -
  • Escalar uma aplicação usando kubectl.
  • +
  • Escalonar uma aplicação usando kubectl.
-

Escalando uma aplicação

- -

Nos módulos anteriores nós criamos um Deployment, e então o expusemos publicamente através de um serviço (Service). O Deployment criou apenas um único Pod para executar nossa aplicação. Quando o tráfego aumentar nós precisaremos escalar a aplicação para suportar a demanda de usuários.

+

Escalonando uma aplicação

-

O escalonamento é obtido pela mudança do número de réplicas em um Deployment

+

+ Nos módulos anteriores, criamos um + Deployment, + e então o expusemos publicamente através de um serviço + (Service). + O Deployment criou apenas um único Pod para executar nossa aplicação. + Quando o tráfego aumentar, precisaremos escalonar a aplicação para + suportar a demanda de usuários. +

+

+ Se você ainda não tiver estudado as seções anteriores, inicie + pelo tutorial + Usando Minikube para criar um cluster. +

+

+ O escalonamento é obtido pela mudança do número de + réplicas em um Deployment +

+

+ NOTA Se você estiver seguindo este tutorial após a + seção anterior, + poderá ser necessário refazer a seção de criação de um cluster, + pois os serviços podem ter sido removidos.

Resumo:

    -
  • Escalando um Deployment
  • +
  • Escalonando um Deployment
-

Você pode criar desde o início um Deployment com múltiplas instâncias usando o parâmetro --replicas para que o kubectl crie o comando de deployment

+

+ Você pode criar desde o início um Deployment com + múltiplas instâncias usando o parâmetro --replicas + do comando kubectl create deployment +
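For comparison with the `--replicas` flag of `kubectl create deployment` mentioned in the paragraph above, the same starting state can be expressed declaratively. This is only a sketch: the Deployment name follows the `kubernetes-bootcamp` example used in this module, while the labels and the sample image are assumptions, not something defined on this page:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 4                      # start with four instances instead of one
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1   # assumed sample image
```

Applying a manifest like this would produce the same result as creating the Deployment and then scaling it to 4 replicas.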

@@ -86,14 +111,34 @@

Visão geral sobre escalonamento

-

Escalar um Deployment garantirá que novos Pods serão criados e agendados para nós de processamento com recursos disponíveis. O escalonamento aumentará o número de Pods para o novo estado desejado. O Kubernetes também suporta o auto-escalonamento (autoscaling) de Pods, mas isso está fora do escopo deste tutorial. Escalar para zero também é possível, e isso terminará todos os Pods do Deployment especificado.

+

+ Escalonar um Deployment garantirá que novos Pods serão criados + e alocados em nós de processamento com recursos disponíveis. O + escalonamento aumentará o número de Pods para o novo estado + desejado. O Kubernetes também suporta o auto-escalonamento + (autoscaling) + de Pods, mas isso está fora do escopo deste tutorial. Escalonar + para zero também é possível, e encerrará todos os Pods do + Deployment especificado. +

-

Executar múltiplas instâncias de uma aplicação irá requerer uma forma de distribuir o tráfego entre todas elas. Serviços possuem um balanceador de carga integrado que distribuirá o tráfego de rede entre todos os Pods de um Deployment exposto. Serviços irão monitorar continuamente os Pods em execução usando endpoints para garantir que o tráfego seja enviado apenas para Pods disponíveis.

+

+ Executar múltiplas instâncias de uma aplicação requer uma forma + de distribuir o tráfego entre todas elas. Serviços possuem um + balanceador de carga integrado que distribui o tráfego de rede + entre todos os Pods de um Deployment exposto. Serviços irão + monitorar continuamente os Pods em execução usando endpoints + para garantir que o tráfego seja enviado apenas para Pods + disponíveis. +
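The built-in load balancing described in this paragraph comes from the Service's label selector: every ready Pod whose labels match becomes an endpoint and receives a share of the traffic. A hedged sketch of what the exposing Service from the previous module might look like (the name, selector labels, and port 8080 are assumptions based on the `kubernetes-bootcamp` example, not copied from this page; the NodePort type matches the `NODE_PORT` steps later in the tutorial):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort                   # expose the Deployment outside the cluster
  selector:
    app: kubernetes-bootcamp       # every ready Pod with this label becomes an endpoint
  ports:
  - port: 8080                     # assumed container port of the sample app
    targetPort: 8080
```

Because the selector matches Pods rather than a fixed list, scaling the Deployment up or down automatically adds or removes endpoints behind the Service.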

-

O Escalonamento é obtido pela mudança do número de réplicas em um Deployment.

+

+ O escalonamento é obtido pela mudança do número de + réplicas em um Deployment. +

@@ -102,17 +147,121 @@

Visão geral sobre escalonamento

-

No momento em que múltiplas instâncias de uma aplicação estiverem em execução será possível realizar atualizações graduais no cluster sem que ocorra indisponibilidade. Nós cobriremos isso no próximo módulo. Agora, vamos ao terminal online e escalar nossa aplicação.

+

+ Uma vez que você tenha múltiplas instâncias de uma aplicação + em execução, será possível realizar atualizações graduais no + cluster sem que ocorra indisponibilidade. Cobriremos isso no + próximo módulo. Agora, vamos ao terminal para escalonar nossa aplicação. +

+
+
+ +
+
+

Escalonando um Deployment

+

+ Para listar seus Deployments, utilize o subcomando + get deployments: + kubectl get deployments +

+

A saída deve ser semelhante a:

+
+                NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
+                kubernetes-bootcamp   1/1     1            1           11m
+                
+

+ Teremos um único Pod. Se nenhum Pod aparecer, tente rodar o + comando novamente. +

+
    +
  • NAME lista os nomes dos Deployments no cluster.
  • +
  • + READY exibe a proporção de réplicas atuais/desejadas + (CURRENT/DESIRED). +
  • +
  • + UP-TO-DATE exibe o número de réplicas que foram + atualizadas para atingir o estado desejado. +
  • +
  • + AVAILABLE exibe o número de réplicas da aplicação + que estão disponíveis para seus usuários. +
  • +
  • + AGE exibe há quanto tempo a aplicação está rodando. +
  • +
+

Para ver o ReplicaSet criado pelo Deployment, execute + kubectl get rs

+

Observe que o nome do ReplicaSet sempre é exibido no formato + [NOME-DO-DEPLOYMENT]-[TEXTO-ALEATÓRIO]. O texto aleatório + é gerado e utiliza o valor do pod-template-hash como semente.

+

Duas colunas importantes desta saída são:

+
    +
  • DESIRED exibe o número desejado de réplicas da aplicação, + que você define quando cria o objeto Deployment. Este é o estado + desejado.
  • +
  • CURRENT exibe quantas réplicas estão em execução atualmente.
  • +
+

A seguir, vamos escalonar o Deployment para 4 réplicas. Utilizaremos + o comando kubectl scale, seguido pelo tipo Deployment, + nome e o número desejado de instâncias:

+

kubectl scale deployments/kubernetes-bootcamp --replicas=4

+

Para listar seus Deployments mais uma vez, utilize get deployments:

+

kubectl get deployments

+

A mudança foi aplicada, e temos 4 instâncias da aplicação disponíveis. A seguir, + vamos verificar se o número de Pods mudou:

+

kubectl get pods -o wide

+

Temos 4 Pods agora, com endereços IP diferentes. A mudança foi registrada no log + de eventos do Deployment. Para verificar esta mudança, utilize o subcomando describe:

+

kubectl describe deployments/kubernetes-bootcamp

+

Você pode ver na saída deste comando que temos 4 réplicas agora.

-
- Iniciar tutorial interativo +

Balanceamento de carga

+

Vamos verificar que o Service está efetuando o balanceamento de carga + do tráfego recebido. Para encontrar o endereço IP exposto e a porta podemos + utilizar o comando para descrever o serviço como aprendemos na seção anterior:

+

kubectl describe services/kubernetes-bootcamp

+

Crie uma variável de ambiente chamada NODE_PORT que possui + o valor da porta do nó:

+

export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')"

+

echo NODE_PORT=$NODE_PORT

+

A seguir, iremos executar o comando curl para efetuar + uma requisição para o endereço IP e porta expostos. Rode este comando + múltiplas vezes:

+

curl http://"$(minikube ip):$NODE_PORT"

+

Cada requisição é atendida por um Pod diferente. Isso demonstra que o + balanceamento de carga está funcionando.

+
+
+

Reduzir o número de réplicas

+

Para reduzir o número de réplicas do Deployment para 2, execute + o subcomando scale novamente:

+

kubectl scale deployments/kubernetes-bootcamp --replicas=2

+

Liste os Deployments para verificar se a mudança foi aplicada + com o subcomando get deployments:

+

kubectl get deployments

+

O número de réplicas reduziu para 2. Liste o número de Pods com + o comando get pods:

+

kubectl get pods -o wide

+

Isso confirma que 2 Pods foram encerrados.

+
+
+ +
+

+ Assim que você finalizar este tutorial, vá para + Performing a Rolling Update (em inglês).

+

+
+
diff --git a/content/uk/_index.html b/content/uk/_index.html index a8b05fa13c528..560a8fb291cdb 100644 --- a/content/uk/_index.html +++ b/content/uk/_index.html @@ -4,6 +4,8 @@ cid: home --- +{{< site-searchbar >}} + {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}} -Давайте повернемось назад у часі та дізнаємось, завдяки чому Kubernetes став таким корисним. +Повернімось назад у часі та дізнаємось, завдяки чому Kubernetes став таким корисним. ![Еволюція розгортання](/images/docs/Container_Evolution.svg)
diff --git a/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg b/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg index cf9283885855e..e35b2f39509bb 100644 (SVG diagram markup not reproduced)
diff --git a/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg b/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg index 3a545f20dd85f..a2ba00c58fd4e 100644 (SVG diagram markup not reproduced)
diff --git a/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg b/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg index 845f5d0d07bb2..57b207b80a0be 100644 (SVG diagram markup not reproduced)
diff --git a/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg b/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg index 02357ef901582..4ba0b15957a28 100644 (SVG diagram markup not reproduced)
diff --git a/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md index 036ce1030acda..c61a6f08bb3d4 100644 --- a/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md +++ b/content/zh-cn/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md @@ -24,7 +24,9 @@ Kubernetes v1.27, released in April 2023, introduced changes to Memory QoS (alph Kubernetes v1.27 于 2023 年 4 月发布,引入了对内存 QoS(Alpha)的更改,用于提高 Linux 节点中的内存管理功能。 对内存 QoS 的支持最初是在 Kubernetes v1.22 中添加的,后来发现了关于计算 `memory.high` 公式的一些[不足](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2570-memory-qos#reasons-for-changing-the-formula-of-memoryhigh-calculation-in-alpha-v127)。 @@ -36,7 +38,8 @@ Support for Memory QoS was initially added in Kubernetes v1.22, and later some [ ## 背景 {#background} Kubernetes 允许你在 Pod 规约中设置某容器对每类资源的需求。通常要设置的资源是 CPU 和内存。 @@ -45,7 +48,7 @@ For example, a Pod manifest that defines container resource requirements could l --> 例如,定义容器资源需求的 Pod 清单可能如下所示: -``` +```yaml apiVersion: v1 kind: Pod metadata: @@ -65,7 +68,11 @@ spec: * `spec.containers[].resources.requests` 当你为 Pod 中的容器设置资源请求时, [Kubernetes 调度器](/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler)使用此信息来决定将 Pod 放置在哪个节点上。 @@ -74,13 +81,21 @@ spec: * `spec.containers[].resources.limits` 当你为 Pod 中的容器设置资源限制时,kubelet 会强制实施这些限制, 以便运行的容器使用的资源不得超过你设置的限制。 当 kubelet 将容器作为 Pod 的一部分启动时,kubelet 会将容器的 CPU 和内存请求和限制传递给容器运行时。 容器运行时将 CPU 请求和 CPU 限制设置到容器上。如果系统有空闲的 CPU 时间, @@ -88,7 +103,10 @@ 
When the kubelet starts a container as a part of a Pod, kubelet passes the conta 即,如果容器在给定时间片内使用的 CPU 数量超过指定的限制,则容器的 CPU 使用率将受到限制。 在内存 QoS 特性出现之前,容器运行时仅使用内存限制并忽略内存的 `request` (请求值从前到现在一直被用于影响[调度](/zh-cn/docs/concepts/scheduling-eviction/#scheduling))。 @@ -105,14 +123,24 @@ Let's compare how the container runtime on Linux typically configures memory req * **内存请求** 内存请求主要由 kube-scheduler 在(Kubernetes)Pod 调度时使用。 在 cgroups v1 中,没有任何控件来设置 cgroup 必须始终保留的最小内存量。 因此,容器运行时不使用 Pod 规约中设置的内存请求值。 cgroups v2 中引入了一个 `memory.min` 设置,用于设置给定 cgroup 中的进程确定可用的最小内存量。 如果 cgroup 的内存使用量在其有效最小边界内,则该 cgroup 的内存在任何情况下都不会被回收。 @@ -127,27 +155,37 @@ Let's compare how the container runtime on Linux typically configures memory req * **内存限制** `memory.limit` 指定内存限制,如果容器尝试分配更多内存,超出该限制, Linux 内核将通过 OOM(内存不足)来杀死并终止进程。如果终止的进程是容器内的主 (或唯一)进程,则容器可能会退出。 在 cgroups v1 中,`memory.limit_in_bytes` 接口用于设置内存用量限制。 然而,与 CPU 不同的是,内存用量是无法抑制的:一旦容器超过内存限制,它就会被 OOM 杀死。 在 cgroups v2 中,`memory.max` 类似于 cgroupv1 中的 `memory.limit_in_bytes`。 MemoryQoS 机制将 `memory.max` 映射到 `spec.containers[].resources.limits.memory` 以设置内存用量的硬性限制。如果内存消耗超过此水平,内核将调用其 OOM 杀手机制。 cgroups v2 中还添加了 `memory.high` 配置。MemoryQoS 机制使用 `memory.high` 来设置内存用量抑制上限。 如果超出了 `memory.high` 限制,则违规的 cgroup 会受到抑制,并且内核会尝试回收内存,这可能会避免 OOM 终止。 @@ -163,7 +201,8 @@ Let's compare how the container runtime on Linux typically configures memory req ### Cgroups v2 内存控制器接口和 Kubernetes 容器资源映 {#cgroups-v2-memory-controller-interfaces-kubernetes-container-resources-mapping} MemoryQoS 机制使用 cgroups v2 的内存控制器来保证 Kubernetes 中的内存资源。 此特性使用的 cgroupv2 接口有: @@ -178,7 +217,11 @@ MemoryQoS 机制使用 cgroups v2 的内存控制器来保证 Kubernetes 中的 {{< figure src="/blog/2023/05/05/qos-memory-resources/memory-qos-cal.svg" title="内存 QoS 级别" alt="内存 QoS 级别" >}} `memory.max` 映射到 Pod 规约中指定的 `limits.memory`。 kubelet 和容器运行时在对应的 cgroup 中配置限制值。内核强制执行限制机制以防止容器用量超过所配置的资源限制。 @@ -190,7 +233,11 @@ kubelet 和容器运行时在对应的 cgroup 中配置限制值。内核强制 {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-max.svg" title="memory.max 映射到 limit.memory" alt="memory.max 映射到 limit.memory" >}} `memory.min` 被映射到 `requests.memory`,这会导致内存资源被预留而永远不会被内核回收。 这就是 MemoryQoS 机制确保 Kubernetes Pod 内存可用性的方式。 @@ -202,7 +249,12 @@ kubelet 和容器运行时在对应的 cgroup 中配置限制值。内核强制 {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-min.svg" title="memory.min 映射到 requests.memory" alt="memory.min 映射到 requests.memory" >}} 对于内存保护,除了原来的限制内存用量的方式之外,MemoryQoS 机制还会对用量接近其内存限制的工作负载进行抑制, 确保系统不会因内存使用的零星增加而不堪重负。当你启用 MemoryQoS 特性时, @@ -235,7 +287,9 @@ KubeletConfiguration 中将提供一个新字段 `memoryThrottlingFactor`。默 memory.max - memory.high 指定内存用量抑制上限。这是控制 cgroup 内存用量的主要机制。 @@ -295,7 +354,9 @@ KubeletConfiguration 中将提供一个新字段 `memoryThrottlingFactor`。默 ### 针对 cgroup 层次结构的 `memory.min` 计算 {#memory-min-calculations-for-cgroups-heirarchy} 当发出容器内存请求时,kubelet 在创建容器期间通过 CRI 中的 `Unified` 字段将 `memory.min` 传递给后端 CRI 运行时(例如 containerd 或 CRI-O)。容器级别 cgroup 中的 `memory.min` 将设置为: @@ -308,7 +369,8 @@ $memory.min = pod.spec.containers[i].resources.requests[memory]$
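To make the mapping in the formula above concrete, here is a minimal, hypothetical Pod (the name, image, and sizes are illustrative only). With Memory QoS enabled, the kubelet asks the container runtime to derive the container cgroup v2 `memory.min` from the memory request and `memory.max` from the memory limit, as the surrounding text describes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-qos-example              # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9    # placeholder image for illustration
    resources:
      requests:
        memory: "128Mi"   # -> cgroup v2 memory.min = 134217728 bytes
      limits:
        memory: "256Mi"   # -> cgroup v2 memory.max = 268435456 bytes
```

The `memory.high` throttling threshold is then computed from the limit (or node allocatable memory) and the `memoryThrottlingFactor`, rather than being set directly in the Pod spec.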

由于 `memory.min` 接口要求祖先 cgroups 目录全部被设置, 因此需要正确设置 Pod 和节点的 cgroups 目录。 @@ -338,7 +400,8 @@ $memory.min = \sum_{i}^{no. of nodes}\sum_{j}^{no. of pods}pod[i].spec.container

Kubelet 将直接使用 libcontainer 库(来自 runc 项目)管理 Pod 级别和节点级别 cgroups 的层次结构,而容器 cgroups 限制由容器运行时管理。 @@ -349,21 +412,28 @@ cgroups 的层次结构,而容器 cgroups 限制由容器运行时管理。 ### 支持 Pod QoS 类别 {#support-for-pod-qos-classes} 根据用户对 Kubernetes v1.22 中 Alpha 特性的反馈,一些用户希望在 Pod 层面选择不启用 MemoryQoS, 以确保不会出现早期内存抑制现象。因此,在 Kubernetes v1.27 中 MemoryQoS 还支持根据 服务质量(QoS)对 Pod 类设置 memory.high。以下是按 QoS 类设置 memory.high 的几种情况: 1. **Guaranteed Pods**:根据其 QoS 定义,要求 Pod 的内存请求等于其内存限制,并且不允许超配。 因此,通过不设置 memory.high,MemoryQoS 特性会针对这些 Pod 被禁用。 这样做可以确保 **Guaranteed Pod** 充分利用其内存请求,也就是其内存限制,并且不会被抑制。 2. **Burstable Pod**:根据其 QoS 定义,要求 Pod 中至少有一个容器具有 CPU 或内存请求或限制设置。 @@ -388,9 +458,11 @@ Based on user feedback for the Alpha feature in Kubernetes v1.22, some users wou {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-no-limits.svg" title="当请求和限制未被设置时的 memory.high" alt="当请求和限制未被设置时的 memory.high" >}} -3. **BestEffort Pods**:根据其 QoS 定义,不需要设置内存或 CPU 限制或请求。对于这种情况, +3. **BestEffort Pod**:根据其 QoS 定义,不需要设置内存或 CPU 限制或请求。对于这种情况, kubernetes 设置 requests.memory = 0 并将公式中的 limits.memory 替换为节点可分配内存: @@ -458,7 +533,8 @@ Huge thank you to all the contributors who helped with the design, implementatio * Mrunal Patel([mrunalp](https://github.com/mrunalp)) 对于那些有兴趣参与未来内存 QoS 特性讨论的人,你可以通过多种方式联系 SIG Node: diff --git a/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md b/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md new file mode 100644 index 0000000000000..2dd837d0cf8d7 --- /dev/null +++ b/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md @@ -0,0 +1,324 @@ +--- +layout: blog +title: "pkgs.k8s.io:介绍 Kubernetes 社区自有的包仓库" +date: 2023-08-15T20:00:00+0000 +slug: pkgs-k8s-io-introduction +--- + + + +**作者**:Marko Mudrinić (Kubermatic) + +**译者**:Wilson Wu (DaoCloud) + + +我很高兴代表 Kubernetes SIG Release 介绍 Kubernetes +社区自有的 Debian 和 RPM 软件仓库:`pkgs.k8s.io`! +这些全新的仓库取代了我们自 Kubernetes v1.5 以来一直使用的托管在 +Google 的仓库(`apt.kubernetes.io` 和 `yum.kubernetes.io`)。 + + +这篇博文包含关于这些新的包仓库的信息、它对最终用户意味着什么以及如何迁移到新仓库。 + + +**ℹ️ 更新(2023 年 8 月 31 日):旧版托管在 Google 的仓库已被弃用,并将于 2023 年 9 月 13 日开始被冻结。** +查看[弃用公告](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)了解有关此更改的更多详细信息。 + + +## 关于新的包仓库,你需要了解哪些信息? {#what-you-need-to-know-about-the-new-package-repositories} + + +**(更新于 2023 年 8 月 31 日)** + + +- 这是一个**明确同意的更改**;你需要手动从托管在 Google 的仓库迁移到 + Kubernetes 社区自有的仓库。请参阅本公告后面的[如何迁移](#how-to-migrate), + 了解迁移信息和说明。 + +- 旧版托管在 Google 的仓库**自 2023 年 8 月 31 日起被弃用**, + 并将**于 2023 年 9 月 13 日左右被冻结**。 + 冻结将在计划于 2023 年 9 月发布补丁之后立即发生。 + 冻结旧仓库意味着我们在 2023 年 9 月 13 日这个时间点之后仅将 Kubernetes + 项目的包发布到社区自有的仓库。有关此更改的更多详细信息, + 请查看[弃用公告](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)。 + +- 旧仓库中的现有包将在可预见的未来一段时间内可用。 + 然而,Kubernetes 项目无法保证这会持续多久。 + 已弃用的旧仓库及其内容可能会在未来随时被删除,恕不另行通知。 + + +- 鉴于在 2023 年 9 月 13 日这个截止时间点之后不会向旧仓库发布任何新版本, + 如果你不在该截止时间点迁移至新的 Kubernetes 仓库, + 你将无法升级到该日期之后发布的任何补丁或次要版本。 + 也就是说,我们建议**尽快**迁移到新的 Kubernetes 仓库。 + +- 新的 Kubernetes 仓库中包含社区开始接管包构建以来仍在支持的 Kubernetes 版本的包。 + 这意味着 v1.24.0 之前的任何内容都只存在于托管在 Google 的仓库中。 + +- 每个 Kubernetes 次要版本都有一个专用的仓库。 + 当升级到不同的次要版本时,你必须记住,仓库详细信息也会发生变化。 + + +## 为什么我们要引入新的包仓库? 
{#why-are-we-introducing-new-package-repositories} + + +随着 Kubernetes 项目的不断发展,我们希望确保最终用户获得最佳体验。 +托管在 Google 的仓库多年来一直为我们提供良好的服务, +但我们开始面临一些问题,需要对发布包的方式进行重大变更。 +我们的另一个目标是对所有关键组件使用社区拥有的基础设施,其中包括仓库。 + + +将包发布到托管在 Google 的仓库是一个手动过程, +只能由名为 [Google 构建管理员](/zh-cn/releases/release-managers/#build-admins)的 Google 员工团队来完成。 +[Kubernetes 发布管理员团队](/zh-cn/releases/release-managers/#release-managers)是一个非常多元化的团队, +尤其是在我们工作的时区方面。考虑到这一限制,我们必须对每个版本进行非常仔细的规划, +确保我们有发布经理和 Google 构建管理员来执行发布。 + + +另一个问题是由于我们只有一个包仓库。因此,我们无法发布预发行版本 +(Alpha、Beta 和 RC)的包。这使得任何有兴趣测试的人都更难测试 Kubernetes 预发布版本。 +我们从测试这些版本的人员那里收到的反馈对于确保版本的最佳质量至关重要, +因此我们希望尽可能轻松地测试这些版本。最重要的是,只有一个仓库限制了我们对 +`cri-tools` 和 `kubernetes-cni` 等依赖进行发布, + + +尽管存在这些问题,我们仍非常感谢 Google 和 Google 构建管理员这些年来的参与、支持和帮助! + + +## 新的包仓库如何工作? {#how-the-new-package-repositories-work} + + +新的 Debian 和 RPM 仓库托管在 `pkgs.k8s.io`。 +目前,该域指向一个 CloudFront CDN,其后是包含仓库和包的 S3 存储桶。 +然而,我们计划在未来添加更多的镜像站点,让其他公司有可能帮助我们提供软件包服务。 + + +包通过 [OpenBuildService(OBS)平台](http://openbuildservice.org)构建和发布。 +经过长时间评估不同的解决方案后,我们决定使用 OpenBuildService 作为管理仓库和包的平台。 +首先,OpenBuildService 是一个开源平台,被大量开源项目和公司使用, +如 openSUSE、VideoLAN、Dell、Intel 等。OpenBuildService 具有许多功能, +使其非常灵活且易于与我们现有的发布工具集成。 +它还允许我们以与托管在 Google 的仓库类似的方式构建包,从而使迁移过程尽可能无缝。 + + +SUSE 赞助 Kubernetes 项目并且支持访问其引入的 OpenBuildService 环境 +([`build.opensuse.org`](http://build.opensuse.org)), +还提供将 OBS 与我们的发布流程集成的技术支持。 + + +我们使用 SUSE 的 OBS 实例来构建和发布包。构建新版本后, +我们的工具会自动将所需的制品和包设置推送到 `build.opensuse.org`。 +这将触发构建过程,为所有支持的架构(AMD64、ARM64、PPC64LE、S390X)构建包。 +最后,生成的包将自动推送到我们社区拥有的 S3 存储桶,以便所有用户都可以使用它们。 + + +我们想借此机会感谢 SUSE 允许我们使用 `build.opensuse.org` +以及他们的慷慨支持,使这种集成成为可能! + + +## 托管在 Google 的仓库和 Kubernetes 仓库之间有哪些显著差异? {#what-are-significant-differences-between-the-google-hosted-and-kubernetes-package-repositories} + + +你应该注意三个显著差异: + + +- 每个 Kubernetes 次要版本都有一个专用的仓库。例如, + 名为 `core:/stable:/v1.28` 的仓库仅托管稳定 Kubernetes v1.28 版本的包。 + 这意味着你可以从此仓库安装 v1.28.0,但无法安装 v1.27.0 或 v1.28 之外的任何其他次要版本。 + 升级到另一个次要版本后,你必须添加新的仓库并可以选择删除旧的仓库 + +- 每个 Kubernetes 仓库中可用的 `cri-tools` 和 `kubernetes-cni` 包版本有所不同 + - 这两个包是 `kubelet` 和 `kubeadm` 的依赖项 + - v1.24 到 v1.27 的 Kubernetes 仓库与托管在 Google 的仓库具有这些包的相同版本 + - v1.28 及更高版本的 Kubernetes 仓库将仅发布该 Kubernetes 次要版本 + - 就 v1.28 而言,Kubernetes v1.28 的仓库中仅提供 kubernetes-cni 1.2.0 和 cri-tools v1.28 + - 与 v1.29 类似,我们只计划发布 cri-tools v1.29 以及 Kubernetes v1.29 将使用的 kubernetes-cni 版本 + +- 包版本的修订部分(`1.28.0-00` 中的 `-00` 部分)现在由 OpenBuildService + 平台自动生成,并具有不同的格式。修订版本现在采用 `-x.y` 格式,例如 `1.28.0-1.1` + + +## 这是否会影响现有的托管在 Google 的仓库? {#does-this-in-any-way-affect-existing-google-hosted-repositories} + + +托管在 Google 的仓库以及发布到其中的所有包仍然可用,与之前一样。 +我们构建包并将其发布到托管在 Google 仓库的方式没有变化, +所有新引入的更改仅影响发布到社区自有仓库的包。 + + +然而,正如本文开头提到的,我们计划将来停止将包发布到托管在 Google 的仓库。 + + +## 如何迁移到 Kubernetes 社区自有的仓库? {#how-to-migrate} + + +### 使用 `apt`/`apt-get` 的 Debian、Ubuntu 一起其他操作系统 {#how-to-migrate-deb} + + +1. 替换 `apt` 仓库定义,以便 `apt` 指向新仓库而不是托管在 Google 的仓库。 + 确保将以下命令中的 Kubernetes 次要版本替换为你当前使用的次要版本: + + ```shell + echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list + ``` + + +2. 下载 Kubernetes 仓库的公共签名密钥。所有仓库都使用相同的签名密钥, + 因此你可以忽略 URL 中的版本: + + ```shell + curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg + ``` + + +3. 更新 `apt` 包索引: + + ```shell + sudo apt-get update + ``` + + +### 使用 `rpm`/`dnf` 的 CentOS、Fedora、RHEL 以及其他操作系统 {#how-to-migrate-rpm} + + +1. 
替换 `yum` 仓库定义,使 `yum` 指向新仓库而不是托管在 Google 的仓库。 + 确保将以下命令中的 Kubernetes 次要版本替换为你当前使用的次要版本: + + ```shell + cat < +## 迁移到 Kubernetes 仓库后是否可以回滚到托管在 Google 的仓库? {#can-i-rollback-to-the-google-hosted-repository-after-migrating-to-the-kubernetes-repositories} + + +一般来说,可以。只需执行与迁移时相同的步骤,但使用托管在 Google 的仓库参数。 +你可以在[“安装 kubeadm”](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm)等文档中找到这些参数。 + + +## 为什么没有固定的域名/IP 列表?为什么我无法限制包下载? {#why-isn-t-there-a-stable-list-of-domains-ips-why-can-t-i-restrict-package-downloads} + + +我们对 `pkgs.k8s.io` 的计划是使其根据用户位置充当一组后端(包镜像)的重定向器。 +此更改的本质意味着下载包的用户可以随时重定向到任何镜像。 +鉴于架构和我们计划在不久的将来加入更多镜像,我们无法提供给你可以添加到允许列表中的 +IP 地址或域名列表。 + + +限制性控制机制(例如限制访问特定 IP/域名列表的中间人代理或网络策略)将随着此更改而中断。 +对于这些场景,我们鼓励你将包的发布版本与你可以严格控制的本地仓库建立镜像。 + + +## 如果我发现新的仓库有异常怎么办? {#what-should-i-do-if-i-detect-some-abnormality-with-the-new-repositories} + + +如果你在新的 Kubernetes 仓库中遇到任何问题, +请在 [`kubernetes/release` 仓库](https://github.com/kubernetes/release/issues/new/choose)中提交问题。 diff --git a/content/zh-cn/blog/_posts/2023-10-05-sig-architecture-conformance-spotlight.md b/content/zh-cn/blog/_posts/2023-10-05-sig-architecture-conformance-spotlight.md new file mode 100644 index 0000000000000..99aefd4cce49c --- /dev/null +++ b/content/zh-cn/blog/_posts/2023-10-05-sig-architecture-conformance-spotlight.md @@ -0,0 +1,341 @@ +--- +layout: blog +title: "聚焦 SIG Architecture: Conformance" +slug: sig-architecture-conformance-spotlight-2023 +date: 2023-10-05 +--- + + + +**作者**:Frederico Muñoz (SAS Institute) + +**译者**:[Michael Yao](https://github.com/windsonsea) (DaoCloud) + + +**这是 SIG Architecture 焦点访谈系列的首次采访,这一系列访谈将涵盖多个子项目。 +我们从 SIG Architecture:Conformance 子项目开始。** + +在本次 [SIG Architecture](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md) +访谈中,我们与 [Riaan Kleinhans](https://github.com/Riaankl) (ii-Team) 进行了对话,他是 +[Conformance 子项目](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1)的负责人。 + + +## 关于 SIG Architecture 和 Conformance 子项目 + +**Frederico (FSM)**:你好 Riaan,欢迎!首先,请介绍一下你自己,你的角色以及你是如何参与 Kubernetes 的。 + +**Riaan Kleinhans (RK)**:嗨!我叫 Riaan Kleinhans,我住在南非。 +我是新西兰 [ii-Team](ii.nz) 的项目经理。在我加入 ii 时,本来计划在 2020 年 4 月搬到新西兰, +然后新冠疫情爆发了。幸运的是,作为一个灵活和富有活力的团队,我们能够在各个不同的时区以远程方式协作。 + + +ii 团队负责管理 Kubernetes Conformance 测试的技术债务,并编写测试内容来消除这些技术债务。 +我担任项目经理的角色,成为监控、测试内容编写和社区之间的桥梁。通过这项工作,我有幸在最初的几个月里结识了 +[Dan Kohn](https://github.com/dankohn),他对我们的工作充满热情,给了我很大的启发。 + + +**FSM**:谢谢!所以,你参与 SIG Architecture 是因为合规性的工作? + +**RK**:SIG Architecture 负责管理 Kubernetes Conformance 子项目。 +最初,我大部分时间直接与 SIG Architecture 交流 Conformance 子项目。 +然而,随着我们开始按 SIG 来组织工作任务,我们开始直接与各个 SIG 进行协作。 +与拥有未被测试的 API 的这些 SIG 的协作帮助我们加快了工作进度。 + + +**FSM**:你如何描述 Conformance 子项目的主要目标和介入的领域? + +**RM**: Kubernetes Conformance 子项目专注于通过开发和维护全面的合规性测试套件来确保兼容性并遵守 +Kubernetes 规范。其主要目标包括确保不同 Kubernetes 实现之间的兼容性,验证 API 规范的遵守情况, +通过鼓励合规性认证来支持生态体系,并促进 Kubernetes 社区内的合作。 +通过提供标准化的测试并促进一致的行为和功能, +Conformance 子项目为开发人员和用户提供了一个可靠且兼容的 Kubernetes 生态体系。 + + +## 关于 Conformance Test Suite 的更多内容 + +**FSM**:我认为,提供这些标准化测试的一部分工作在于 +[Conformance Test Suite](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)。 +你能解释一下它是什么以及其重要性吗? + +**RK**:Kubernetes Conformance Test Suite 检查 Kubernetes 发行版是否符合项目的规范, +确保在不同的实现之间的兼容性。它涵盖了诸如 API、联网、存储、调度和安全等各个特性。 +能够通过测试,则表示实现合理,便于推动构建一致且可移植的容器编排平台。 + + +**FSM**:是的,这些测试很重要,因为它们定义了所有 Kubernetes 集群必须支持的最小特性集合。 +你能描述一下决定将哪些特性包含在内的过程吗?在最小特性集的思路与其他 SIG 提案之间是否有所冲突? 
+ +**RK**:SIG Architecture 针对经受合规性测试的每个端点的要求,都有明确的定义。 +API 端点只有正式发布且不是可选的特性,才会被(进一步)考虑是否合规。 +多年来,关于合规性配置文件已经进行了若干讨论, +探讨将被大多数终端用户广泛使用的可选端点(例如 RBAC)纳入特定配置文件中的可能性。 +然而,这一方面仍在不断改进中。 + + +不满足合规性标准的端点被列在 +[ineligible_endpoints.yaml](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/ineligible_endpoints.yaml) 中, +该文件放在 Kubernetes 代码仓库中,是被公开访问的。 +随着这些端点的状态或要求发生变化,此文件可能会被更新以添加或删除端点。 +不合格的端点也可以在 [APISnoop](https://apisnoop.cncf.io/) 上看到。 + +对于 SIG Architecture 来说,确保透明度并纳入社区意见以确定端点的合格或不合格状态是至关重要的。 + + +**FSM**:为新特性编写测试内容通常需要某种强制执行方式。 +你如何看待 Kubernetes 中这方面的演变?是否有人在努力改进这个流程, +使得必须具备测试成为头等要务,或许这从来都不是一个问题? + +**RK**:在 2018 年开始围绕 Kubernetes 合规性计划进行讨论时,只有大约 11% 的端点被测试所覆盖。 +那时,CNCF 的管理委员会提出一个要求,如果要提供资金覆盖缺失的合规性测试,Kubernetes 社区应采取一个策略, +即如果新特性没有包含稳定 API 的合规性测试,则不允许添加此特性。 + + +SIG Architecture 负责监督这一要求,[APISnoop](https://apisnoop.cncf.io/) +在此方面被证明是一个非常有价值的工具。通过自动化流程,APISnoop 在每个周末生成一个 PR, +以突出 Conformance 覆盖范围的变化。如果有端点在没有进行合规性测试的情况下进阶至正式发布, +将会被迅速识别发现。这种方法有助于防止积累新的技术债务。 + +此外,我们计划在不久的将来创建一个发布通知任务,作用是添加额外一层防护,以防止产生新的技术债务。 + + +**FSM**:我明白了,工具化和自动化在其中起着重要的作用。 +在你看来,就合规性而言,还有哪些领域需要做一些工作? +换句话说,目前标记为优先改进的领域有哪些? + +**RK**:在 1.27 版本中,我们已完成了 “100% 合规性测试” 的里程碑! + + +当时,社区重新审视了所有被列为不合规的端点。这个列表是收集多年的社区意见后填充的。 +之前被认为不合规的几个端点已被挑选出来并迁移到一个新的专用列表中, +该列表中包含目前合规性测试开发的焦点。同样,可以在 apisnoop.cncf.io 上查阅此列表。 + + +为了确保在合规性项目中避免产生新的技术债务,我们计划建立一个发布通知任务作为额外的预防措施。 + +虽然 APISnoop 目前被托管在 CNCF 基础设施上,但此项目已慷慨地捐赠给了 Kubernetes 社区。 +因此,它将在 2023 年底之前转移到社区自治的基础设施上。 + + +**FSM**:这是个好消息!对于想要提供帮助的人们,你能否重点说明一下协作的价值所在? +参与贡献是否需要对 Kubernetes 有很扎实的知识,或否有办法让一些新人也能为此项目做出贡献? + +**RK**:参与合规性测试就像 "洗碗" 一样,它可能不太显眼,但仍然非常重要。 +这需要对 Kubernetes 有深入的理解,特别是在需要对端点进行测试的领域。 +这就是为什么与负责测试 API 端点的每个 SIG 进行协作会如此重要。 + + +我们的承诺是让所有人都能参与测试内容编写,作为这一承诺的一部分, +ii 团队目前正在开发一个 “点击即部署(click and deploy)” 的解决方案。 +此解决方案旨在使所有人都能在几分钟内快速创建一个在真实硬件上工作的环境。 +我们将在准备好后分享有关此项开发的更新。 + + +**FSM**:那会非常有帮助,谢谢。最后你还想与我们的读者分享些什么见解吗? + +**RK**:合规性测试是一个协作性的社区工作,涉及各个 SIG 之间的广泛合作。 +SIG Architecture 在推动倡议并提供指导方面起到了领头作用。然而, +工作的进展在很大程度上依赖于所有 SIG 在审查、增强和认可测试方面的支持。 + + +我要衷心感谢 ii 团队多年来对解决技术债务的坚定承诺。 +特别要感谢 [Hippie Hacker](https://github.com/hh) 的指导和对愿景的引领作用,这是非常宝贵的。 +此外,我还要特别表扬 Stephen Heywood 在最近几个版本中承担了大部分测试内容编写工作而做出的贡献, +还有 Zach Mandeville 对 APISnoop 也做了很好的贡献。 + + +**FSM**:非常感谢你参加本次访谈并分享你的深刻见解,我本人从中获益良多,我相信读者们也会同样受益。 diff --git a/content/zh-cn/docs/concepts/architecture/_index.md b/content/zh-cn/docs/concepts/architecture/_index.md index 5e707ed397efb..3ce2af7852eb4 100644 --- a/content/zh-cn/docs/concepts/architecture/_index.md +++ b/content/zh-cn/docs/concepts/architecture/_index.md @@ -4,3 +4,8 @@ weight: 30 description: > Kubernetes 背后的架构概念。 --- + + +{{< figure src="/images/docs/kubernetes-cluster-architecture.svg" alt="Kubernetes 组件" caption="Kubernetes 集群架构" class="diagram-large" >}} diff --git a/content/zh-cn/docs/concepts/architecture/cgroups.md b/content/zh-cn/docs/concepts/architecture/cgroups.md index 382e972903b30..40c4dc6468621 100644 --- a/content/zh-cn/docs/concepts/architecture/cgroups.md +++ b/content/zh-cn/docs/concepts/architecture/cgroups.md @@ -195,8 +195,8 @@ cgroup v2 使用一个与 cgroup v1 不同的 API,因此如果有任何应用 DaemonSet for monitoring pods and containers, update it to v0.43.0 or later. 
* If you deploy Java applications, prefer to use versions which fully support cgroup v2: * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later - * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later - * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later + * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later + * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later * If you are using the [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs) package, make sure the version you use is v1.5.1 or higher. --> @@ -205,8 +205,8 @@ cgroup v2 使用一个与 cgroup v1 不同的 API,因此如果有任何应用 需将其更新到 v0.43.0 或更高版本。 * 如果你部署 Java 应用程序,最好使用完全支持 cgroup v2 的版本: * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372、11.0.16、15 及更高的版本 - * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01、11.0.16.0、17.0.4.0、18.0.2.0 及更高的版本 - * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 及更高的版本 + * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0、11.0.20.0、17.0.8.0 及更高的版本 + * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 及更高的版本 * 如果你正在使用 [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs) 包, 确保你使用的版本是 v1.5.1 或者更高。 diff --git a/content/zh-cn/docs/concepts/cluster-administration/_index.md b/content/zh-cn/docs/concepts/cluster-administration/_index.md index f261dcc037f02..cec70f46da945 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/_index.md +++ b/content/zh-cn/docs/concepts/cluster-administration/_index.md @@ -5,6 +5,12 @@ content_type: concept description: > 关于创建和管理 Kubernetes 集群的底层细节。 no_list: true +card: + name: setup + weight: 60 + anchors: + - anchor: "#securing-a-cluster" + title: 保护集群 --- diff --git a/content/zh-cn/docs/concepts/cluster-administration/addons.md b/content/zh-cn/docs/concepts/cluster-administration/addons.md index 522598aac6662..007fcaacae998 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/addons.md +++ b/content/zh-cn/docs/concepts/cluster-administration/addons.md @@ -48,7 +48,7 @@ Add-on 扩展了 Kubernetes 的功能。 network policies on L3-L7 using an identity-based security model that is decoupled from network addressing. Cilium can act as a replacement for kube-proxy; it also offers additional, opt-in observability and security features. - Cilium is a [CNCF project at the Incubation level](https://www.cncf.io/projects/cilium/). + Cilium is a [CNCF project at the Graduated level](https://www.cncf.io/projects/cilium/). 
--> ## 联网和网络策略 {#networking-and-network-policy} @@ -66,7 +66,7 @@ Add-on 扩展了 Kubernetes 的功能。 能够以原生路由(routing)和覆盖/封装(overlay/encapsulation)模式跨越多个集群, 并且可以使用与网络寻址分离的基于身份的安全模型在 L3 至 L7 上实施网络策略。 Cilium 可以作为 kube-proxy 的替代品;它还提供额外的、可选的可观察性和安全功能。 - Cilium 是一个[孵化级别的 CNCF 项目](https://www.cncf.io/projects/cilium/)。 + Cilium 是一个[毕业级别的 CNCF 项目](https://www.cncf.io/projects/cilium/)。 ## 如何实现 Kubernetes 的网络模型 {#how-to-implement-the-kubernetes-network-model} -网络模型由每个节点上的容器运行时实现。最常见的容器运行时使用 -[Container Network Interface](https://github.com/containernetworking/cni) (CNI) 插件来管理其网络和安全功能。 -许多不同的 CNI 插件来自于许多不同的供应商。其中一些仅提供添加和删除网络接口的基本功能, +网络模型由各节点上的容器运行时来实现。最常见的容器运行时使用 +[Container Network Interface](https://github.com/containernetworking/cni) (CNI) 插件来管理其网络和安全能力。 +来自不同供应商 CNI 插件有很多。其中一些仅提供添加和删除网络接口的基本功能, 而另一些则提供更复杂的解决方案,例如与其他容器编排系统集成、运行多个 CNI 插件、高级 IPAM 功能等。 + 请参阅[此页面](/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy)了解 Kubernetes 支持的网络插件的非详尽列表。 ## {{% heading "whatsnext" %}} -网络模型的早期设计、运行原理以及未来的一些计划, -都在[联网设计文档](https://git.k8s.io/design-proposals-archive/network/networking.md)里有更详细的描述。 +网络模型的早期设计、运行原理都在[联网设计文档](https://git.k8s.io/design-proposals-archive/network/networking.md)里有详细描述。 +关于未来的计划,以及旨在改进 Kubernetes 联网能力的一些正在进行的工作,可以参考 SIG Network +的 [KEPs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network)。 diff --git a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md index 56a4373bf06e0..aefeb8e90a2ed 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md +++ b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md @@ -215,12 +215,19 @@ span will be sent to the exporter. -Kubernetes v{{< skew currentVersion >}} 中的 kubelet 从垃圾回收、Pod -同步例程以及每个 gRPC 方法中收集 span。CRI-O 和 containerd -这类关联的容器运行时可以将链路链接到其导出的 span,以提供更多上下文信息。 +Kubernetes v{{< skew currentVersion >}} 中的 kubelet 收集与垃圾回收、Pod +同步例程以及每个 gRPC 方法相关的 Span。 +kubelet 借助 gRPC 来传播跟踪上下文,以便 CRI-O 和 containerd +这类带有跟踪插桩的容器运行时可以在其导出的 Span 与 kubelet +所提供的跟踪上下文之间建立关联。所得到的跟踪数据会包含 kubelet +与容器运行时 Span 之间的父子链接关系,从而为调试节点问题提供有用的上下文信息。 @@ -40,7 +40,7 @@ Because Secrets can be created independently of the Pods that use them, there is less risk of the Secret (and its data) being exposed during the workflow of creating, viewing, and editing Pods. Kubernetes, and applications that run in your cluster, can also take additional precautions with Secrets, such as avoiding -writing secret data to nonvolatile storage. +writing sensitive data to nonvolatile storage. Secrets are similar to {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}} but are specifically intended to hold confidential data. @@ -48,7 +48,7 @@ but are specifically intended to hold confidential data. 由于创建 Secret 可以独立于使用它们的 Pod, 因此在创建、查看和编辑 Pod 的工作流程中暴露 Secret(及其数据)的风险较小。 Kubernetes 和在集群中运行的应用程序也可以对 Secret 采取额外的预防措施, -例如避免将机密数据写入非易失性存储。 +例如避免将敏感数据写入非易失性存储。 Secret 类似于 {{}} 但专门用于保存机密数据。 @@ -124,7 +124,7 @@ Kubernetes 控制面也使用 Secret; ### Use case: dotfiles in a secret volume You can make your data "hidden" by defining a key that begins with a dot. -This key represents a dotfile or "hidden" file. For example, when the following secret +This key represents a dotfile or "hidden" file. For example, when the following Secret is mounted into a volume, `secret-volume`, the volume will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`. 
@@ -146,35 +146,7 @@ you must use `ls -la` to see them when listing directory contents. 列举目录内容时你必须使用 `ls -la` 才能看到它们。 {{< /note >}} -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: dotfile-secret -data: - .secret-file: dmFsdWUtMg0KDQo= ---- -apiVersion: v1 -kind: Pod -metadata: - name: secret-dotfiles-pod -spec: - volumes: - - name: secret-volume - secret: - secretName: dotfile-secret - containers: - - name: dotfile-test-container - image: registry.k8s.io/busybox - command: - - ls - - "-l" - - "/etc/secret-volume" - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -``` +{{% code language="yaml" file="secret/dotfile-secret.yaml" %}} - 如果你的云原生组件需要执行身份认证来访问你所知道的、在同一 Kubernetes 集群中运行的另一个应用, @@ -330,8 +302,8 @@ Kubernetes 并不对类型的名称作任何限制。不过,如果你要使用 则你必须满足为该类型所定义的所有要求。 如果你要定义一种公开使用的 Secret 类型,请遵守 Secret 类型的约定和结构, @@ -339,19 +311,19 @@ by a `/`. For example: `cloud-hosting.example.net/cloud-api-credentials`. 例如:`cloud-hosting.example.net/cloud-api-credentials`。 ### Opaque Secret -当 Secret 配置文件中未作显式设定时,默认的 Secret 类型是 `Opaque`。 -当你使用 `kubectl` 来创建一个 Secret 时,你会使用 `generic` -子命令来标明要创建的是一个 `Opaque` 类型 Secret。 -例如,下面的命令会创建一个空的 `Opaque` 类型 Secret 对象: +当你未在 Secret 清单中显式指定类型时,默认的 Secret 类型是 `Opaque`。 +当你使用 `kubectl` 来创建一个 Secret 时,你必须使用 `generic` +子命令来标明要创建的是一个 `Opaque` 类型的 Secret。 +例如,下面的命令会创建一个空的 `Opaque` 类型的 Secret: ```shell kubectl create secret generic empty-secret @@ -361,7 +333,7 @@ kubectl get secret empty-secret -输出类似于 +输出类似于: ``` NAME TYPE DATA AGE @@ -376,116 +348,89 @@ In this case, `0` means you have created an empty Secret. 在这个例子中,`0` 意味着你刚刚创建了一个空的 Secret。 -### 服务账号令牌 Secret {#service-account-token-secrets} - -类型为 `kubernetes.io/service-account-token` 的 Secret -用来存放标识某{{< glossary_tooltip text="服务账号" term_id="service-account" >}}的令牌凭据。 +{{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}}. This +is a legacy mechanism that provides long-lived ServiceAccount credentials to +Pods. 
+--> +### ServiceAccount 令牌 Secret {#service-account-token-secrets} + +类型为 `kubernetes.io/service-account-token` 的 Secret 用来存放标识某 +{{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}} 的令牌凭据。 +这是为 Pod 提供长期有效 ServiceAccount 凭据的传统机制。 + + +在 Kubernetes v1.22 及更高版本中,推荐的方法是通过使用 +[`TokenRequest`](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API +来获取短期自动轮换的 ServiceAccount 令牌。你可以使用以下方法获取这些短期令牌: + + +- 直接调用 `TokenRequest` API,或者使用像 `kubectl` 这样的 API 客户端。 + 例如,你可以使用 + [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) 命令。 +- 在 Pod 清单中请求使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)挂载的令牌。 + Kubernetes 会创建令牌并将其挂载到 Pod 中。 + 当挂载令牌的 Pod 被删除时,此令牌会自动失效。 + 更多细节参阅[启动使用服务账号令牌投射的 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#launch-a-pod-using-service-account-token-projection)。 {{< note >}} -Kubernetes 在 v1.22 版本之前都会自动创建用来访问 Kubernetes API 的凭据。 -这一老的机制是基于创建可被挂载到运行中 Pod 内的令牌 Secret 来实现的。 -在最近的版本中,包括 Kubernetes v{{< skew currentVersion >}} 中,API 凭据是直接通过 -[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) -API 来获得的,这一凭据会使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)挂载到 -Pod 中。使用这种方式获得的令牌有确定的生命期,并且在挂载它们的 Pod 被删除时自动作废。 - - -你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token) -服务账号令牌。例如,当你需要一个永远都不过期的令牌时。 -不过,仍然建议使用 [TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) -子资源来获得访问 API 服务器的令牌。 -你可以使用 [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) -命令调用 `TokenRequest` API 获得令牌。 -{{< /note >}} - - 只有在你无法使用 `TokenRequest` API 来获取令牌, 并且你能够接受因为将永不过期的令牌凭据写入到可读取的 API 对象而带来的安全风险时, -才应该创建服务账号令牌 Secret 对象。 +才应该创建 ServiceAccount 令牌 Secret。 +更多细节参阅[为 ServiceAccount 手动创建长期有效的 API 令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token)。 +{{< /note >}} 使用这种 Secret 类型时,你需要确保对象的注解 `kubernetes.io/service-account-name` -被设置为某个已有的服务账号名称。 -如果你同时负责 ServiceAccount 和 Secret 对象的创建,应该先创建 ServiceAccount 对象。 +被设置为某个已有的 ServiceAccount 名称。 +如果你同时创建 ServiceAccount 和 Secret 对象,应该先创建 ServiceAccount 对象。 -当 Secret 对象被创建之后,某个 Kubernetes{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写 -Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解以及 `data` 字段中的 -`token` 键值,使之包含实际的令牌内容。 +当 Secret 对象被创建之后,某个 Kubernetes +{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写 +Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解和 +`data` 字段中的 `token` 键值(该键包含一个身份认证令牌)。 -下面的配置实例声明了一个服务账号令牌 Secret: +下面的配置实例声明了一个 ServiceAccount 令牌 Secret: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-sa-sample - annotations: - kubernetes.io/service-account.name: "sa-name" -type: kubernetes.io/service-account-token -data: - # 你可以像 Opaque Secret 一样在这里添加额外的键/值偶对 - extra: YmFyCg== -``` +{{% code language="yaml" file="secret/serviceaccount-token-secret.yaml" %}} -参考 [ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) -文档了解服务账号的工作原理。你也可以查看 +参考 [ServiceAccount](/zh-cn/docs/concepts/security/service-accounts/) +文档了解 ServiceAccount 的工作原理。你也可以查看 [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) 资源中的 `automountServiceAccountToken` 和 `serviceAccountName` 字段文档, -进一步了解从 Pod 中引用服务账号凭据。 
+进一步了解从 Pod 中引用 ServiceAccount 凭据。 -`kubernetes.io/dockercfg` 是一种保留类型,用来存放 `~/.dockercfg` 文件的序列化形式。 -该文件是配置 Docker 命令行的一种老旧形式。使用此 Secret 类型时,你需要确保 -Secret 的 `data` 字段中包含名为 `.dockercfg` 的主键,其对应键值是用 base64 -编码的某 `~/.dockercfg` 文件的内容。 +- `kubernetes.io/dockercfg`:存放 `~/.dockercfg` 文件的序列化形式,它是配置 Docker + 命令行的一种老旧形式。Secret 的 `data` 字段包含名为 `.dockercfg` 的主键, + 其值是用 base64 编码的某 `~/.dockercfg` 文件的内容。 +- `kubernetes.io/dockerconfigjson`:存放 JSON 数据的序列化形式, + 该 JSON 也遵从 `~/.docker/config.json` 文件的格式规则,而后者是 `~/.dockercfg` + 的新版本格式。使用此 Secret 类型时,Secret 对象的 `data` 字段必须包含 + `.dockerconfigjson` 键,其键值为 base64 编码的字符串包含 `~/.docker/config.json` + 文件的内容。 -类型 `kubernetes.io/dockerconfigjson` 被设计用来保存 JSON 数据的序列化形式, -该 JSON 也遵从 `~/.docker/config.json` 文件的格式规则,而后者是 `~/.dockercfg` -的新版本格式。使用此 Secret 类型时,Secret 对象的 `data` 字段必须包含 -`.dockerconfigjson` 键,其键值为 base64 编码的字符串包含 `~/.docker/config.json` -文件的内容。 - 下面是一个 `kubernetes.io/dockercfg` 类型 Secret 的示例: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-dockercfg -type: kubernetes.io/dockercfg -data: - .dockercfg: | - "" -``` +{{% code language="yaml" file="secret/dockercfg-secret.yaml" %}} {{< note >}} -当你使用清单文件来创建这两类 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键, +当你使用清单文件通过 Docker 配置来创建 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键, 并且验证其中所提供的键值是否是合法的 JSON 数据。 不过,API 服务器不会检查 JSON 数据本身是否是一个合法的 Docker 配置文件内容。 -当你没有 Docker 配置文件,或者你想使用 `kubectl` 创建一个 Secret -来访问容器仓库时,你可以这样做: +你还可以使用 `kubectl` 创建一个 Secret 来访问容器仓库时, +当你没有 Docker 配置文件时你可以这样做: ```shell kubectl create secret docker-registry secret-tiger-docker \ @@ -594,22 +522,24 @@ kubectl create secret docker-registry secret-tiger-docker \ ``` -上面的命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。 -如果你对 `.data.dockerconfigjson` 内容进行转储并执行 base64 解码: +此命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。 + +从这个新的 Secret 中获取 `.data.dockerconfigjson` 字段并执行数据解码: ```shell kubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d ``` -那么输出等价于这个 JSON 文档(这也是一个有效的 Docker 配置文件): +输出等价于以下 JSON 文档(这也是一个有效的 Docker 配置文件): ```json { @@ -657,26 +587,25 @@ Secret must contain one of the following two keys: - `password`: 用于身份认证的密码或令牌。 以上两个键的键值都是 base64 编码的字符串。 -当然你也可以在创建 Secret 时使用 `stringData` 字段来提供明文形式的内容。 +当然你也可以在 Secret 清单中的使用 `stringData` 字段来提供明文形式的内容。 + 以下清单是基本身份验证 Secret 的示例: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-basic-auth -type: kubernetes.io/basic-auth -stringData: - username: admin # kubernetes.io/basic-auth 类型的必需字段 - password: t0p-Secret # kubernetes.io/basic-auth 类型的必需字段 -``` +{{% code language="yaml" file="secret/basicauth-secret.yaml" %}} + +{{< note >}} + +Secret 的 `stringData` 字段不能很好地与服务器端应用配合使用。 +{{< /note >}} -提供 SSH 身份认证类型的 Secret 仅仅是出于用户方便性考虑。 -你也可以使用 `Opaque` 类型来保存用于 SSH 身份认证的凭据。 +提供 SSH 身份认证类型的 Secret 仅仅是出于方便性考虑。 +你可以使用 `Opaque` 类型来保存用于 SSH 身份认证的凭据。 不过,使用预定义的、公开的 Secret 类型(`kubernetes.io/ssh-auth`) 有助于其他人理解你的 Secret 的用途,也可以就其中包含的主键名形成约定。 -API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。 +Kubernetes API 会验证这种类型的 Secret 中是否设定了所需的主键。 {{< caution >}} ### TLS Secret -Kubernetes 提供一种内置的 `kubernetes.io/tls` Secret 类型,用来存放 TLS -场合通常要使用的证书及其相关密钥。 +`kubernetes.io/tls` Secret 类型用来存放 TLS 场合通常要使用的证书及其相关密钥。 + TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 资源配置传输过程中的数据加密,不过也可以用于其他资源或者直接在负载中使用。 当使用此类型的 Secret 时,Secret 配置中的 `data` (或 `stringData`)字段必须包含 @@ -779,38 +700,23 @@ TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-n 下面的 YAML 包含一个 TLS Secret 的配置示例: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-tls -type: kubernetes.io/tls 
-stringData: - # 此例中的数据被截断 - tls.crt: | - --------BEGIN CERTIFICATE----- - MIIC2DCCAcCgAwIBAgIBATANBgkqh ... - tls.key: | - -----BEGIN RSA PRIVATE KEY----- - MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... -``` +{{% code language="yaml" file="secret/tls-auth-secret.yaml" %}} -提供 TLS 类型的 Secret 仅仅是出于用户方便性考虑。 -你也可以使用 `Opaque` 类型来保存用于 TLS 服务器与/或客户端的凭据。 -不过,使用内置的 Secret 类型的有助于对凭据格式进行归一化处理,并且 -API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。 +提供 TLS 类型的 Secret 仅仅是出于方便性考虑。 +你可以创建 `Opaque` 类型的 Secret 来保存用于 TLS 身份认证的凭据。 +不过,使用已定义和公开的 Secret 类型有助于确保你自己项目中的 Secret 格式的一致性。 +API 服务器会验证这种类型的 Secret 是否设定了所需的主键。 -当使用 `kubectl` 来创建 TLS Secret 时,你可以像下面的例子一样使用 `tls` -子命令: +要使用 `kubectl` 创建 TLS Secret,你可以使用 `tls` 子命令: ```shell kubectl create secret tls my-tls-secret \ @@ -828,15 +734,13 @@ and must match the given private key for `--key`. ### 启动引导令牌 Secret {#bootstrap-token-secrets} -通过将 Secret 的 `type` 设置为 `bootstrap.kubernetes.io/token` -可以创建启动引导令牌类型的 Secret。这种类型的 Secret 被设计用来支持节点的启动引导过程。 +`bootstrap.kubernetes.io/token` Secret 类型针对的是节点启动引导过程所用的令牌。 其中包含用来为周知的 ConfigMap 签名的令牌。 -上面的 YAML 文件可能看起来令人费解,因为其中的数值均为 base64 编码的字符串。 -实际上,你完全可以使用下面的 YAML 来创建一个一模一样的 Secret: +你也可以在 Secret 的 `stringData` 字段中提供值,而无需对其进行 base64 编码: -```yaml -apiVersion: v1 -kind: Secret -metadata: - # 注意 Secret 的命名方式 - name: bootstrap-token-5emitj - # 启动引导令牌 Secret 通常位于 kube-system 名字空间 - namespace: kube-system -type: bootstrap.kubernetes.io/token -stringData: - auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" - expiration: "2020-09-13T04:39:10Z" - # 此令牌 ID 被用于生成 Secret 名称 - token-id: "5emitj" - token-secret: "kq4gihvszzgn1p0r" - # 此令牌还可用于 authentication (身份认证) - usage-bootstrap-authentication: "true" - # 且可用于 signing (证书签名) - usage-bootstrap-signing: "true" -``` +{{% code language="yaml" file="secret/bootstrap-token-secret-literal.yaml" %}} + +{{< note >}} + +Secret 的 `stringData` 字段不能很好地与服务器端应用配合使用。 +{{< /note >}} @@ -1094,24 +953,7 @@ Kubernetes ignores it. 当你在 Pod 中引用 Secret 时,你可以将该 Secret 标记为**可选**,就像下面例子中所展示的那样。 如果可选的 Secret 不存在,Kubernetes 将忽略它。 -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret - optional: true -``` +{{% code language="yaml" file="secret/optional-secret.yaml" %}} ### 容器镜像拉取 Secret {#using-imagepullsecrets} @@ -1278,8 +1120,8 @@ Secret 是在 Pod 层面来配置的。 ## 节点标签 {#built-in-node-labels} @@ -539,7 +541,7 @@ specified. 如果当前正被调度的 Pod 在具有自我亲和性的 Pod 序列中排在第一个, 那么只要它满足其他所有的亲和性规则,它就可以被成功调度。 -这是通过以下方式确定的:确保集群中没有其他 Pod 与此 Pod 的命名空间和标签选择器匹配; +这是通过以下方式确定的:确保集群中没有其他 Pod 与此 Pod 的名字空间和标签选择算符匹配; 该 Pod 满足其自身定义的条件,并且选定的节点满足所指定的所有拓扑要求。 这确保即使所有的 Pod 都配置了 Pod 间亲和性,也不会出现调度死锁的情况。 @@ -565,29 +567,40 @@ uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`. `preferredDuringSchedulingIgnoredDuringExecution`。 -亲和性规则表示,仅当节点和至少一个已运行且有 `security=S1` 的标签的 -Pod 处于同一区域时,才可以将该 Pod 调度到节点上。 -更确切的说,调度器必须将 Pod 调度到具有 `topology.kubernetes.io/zone=V` -标签的节点上,并且集群中至少有一个位于该可用区的节点上运行着带有 -`security=S1` 标签的 Pod。 - - -反亲和性规则表示,如果节点处于 Pod 所在的同一可用区且至少一个 Pod 具有 -`security=S2` 标签,则该 Pod 不应被调度到该节点上。 -更确切地说, 如果同一可用区中存在其他运行着带有 `security=S2` 标签的 Pod 节点, -并且节点具有标签 `topology.kubernetes.io/zone=R`,Pod 不能被调度到该节点上。 +The affinity rule specifies that the scheduler is allowed to place the example Pod +on a node only if that node belongs to a specific [zone](/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/) +where other Pods have been labeled with `security=S1`. 
+For instance, if we have a cluster with a designated zone, let's call it "Zone V," +consisting of nodes labeled with `topology.kubernetes.io/zone=V`, the scheduler can +assign the Pod to any node within Zone V, as long as there is at least one Pod within +Zone V already labeled with `security=S1`. Conversely, if there are no Pods with `security=S1` +labels in Zone V, the scheduler will not assign the example Pod to any node in that zone. +--> +亲和性规则规定,只有节点属于特定的 +[区域](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/) +且该区域中的其他 Pod 已打上 `security=S1` 标签时,调度器才可以将示例 Pod 调度到此节点上。 +例如,如果我们有一个具有指定区域(称之为 "Zone V")的集群,此区域由带有 `topology.kubernetes.io/zone=V` +标签的节点组成,那么只要 Zone V 内已经至少有一个 Pod 打了 `security=S1` 标签, +调度器就可以将此 Pod 调度到 Zone V 内的任何节点。相反,如果 Zone V 中没有带有 `security=S1` 标签的 Pod, +则调度器不会将示例 Pod 调度给该区域中的任何节点。 + + +反亲和性规则规定,如果节点属于特定的 +[区域](/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints/topology-spread-constraints/) +且该区域中的其他 Pod 已打上 `security=S2` 标签,则调度器应尝试避免将 Pod 调度到此节点上。 +例如,如果我们有一个具有指定区域(我们称之为 "Zone R")的集群,此区域由带有 `topology.kubernetes.io/zone=R` +标签的节点组成,只要 Zone R 内已经至少有一个 Pod 打了 `security=S2` 标签, +调度器应避免将 Pod 分配给 Zone R 内的任何节点。相反,如果 Zone R 中没有带有 `security=S2` 标签的 Pod, +则反亲和性规则不会影响将 Pod 调度到 Zone R。 -`nodeName` 旨在供自定义调度程序或需要绕过任何已配置调度程序的高级场景使用。 -如果已分配的 Node 负载过重,绕过调度程序可能会导致 Pod 失败。 +`nodeName` 旨在供自定义调度器或需要绕过任何已配置调度器的高级场景使用。 +如果已分配的 Node 负载过重,绕过调度器可能会导致 Pod 失败。 你可以使用[节点亲和性](#node-affinity)或 [`nodeselector` 字段](#nodeselector)将 -Pod 分配给特定 Node,而无需绕过调度程序。 +Pod 分配给特定 Node,而无需绕过调度器。 {{}} -## API {#api} +## API + -ResourceClass 和 ResourceClaim 的参数存储在单独的对象中, -通常使用安装资源驱动程序时创建的 {{< glossary_tooltip -term_id="CustomResourceDefinition" text="CRD" >}} 所定义的类型。 +ResourceClass 和 ResourceClaim 的参数存储在单独的对象中,通常使用安装资源驱动程序时创建的 +{{< glossary_tooltip term_id="CustomResourceDefinition" text="CRD" >}} 所定义的类型。 -## 预调度的 Pod +## 预调度的 Pod {#pre-scheduled-pods} 当你(或别的 API 客户端)创建设置了 `spec.nodeName` 的 Pod 时,调度器将被绕过。 如果 Pod 所需的某个 ResourceClaim 尚不存在、未被分配或未为该 Pod 保留,那么 kubelet @@ -335,8 +335,8 @@ kube-scheduler, kube-controller-manager and kubelet also need the feature gate. --> 动态资源分配是一个 **alpha 特性**,只有在启用 `DynamicResourceAllocation` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) -和 `resource.k8s.io/v1alpha1` {{< glossary_tooltip text="API 组" -term_id="api-group" >}} 时才启用。 +和 `resource.k8s.io/v1alpha1` +{{< glossary_tooltip text="API 组" term_id="api-group" >}} 时才启用。 有关详细信息,参阅 `--feature-gates` 和 `--runtime-config` [kube-apiserver 参数](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。 kube-scheduler、kube-controller-manager 和 kubelet 也需要设置该特性门控。 @@ -356,6 +356,7 @@ If your cluster supports dynamic resource allocation, the response is either a list of ResourceClass objects or: --> 如果你的集群支持动态资源分配,则响应是 ResourceClass 对象列表或: + ``` No resources found ``` @@ -364,6 +365,7 @@ No resources found If not supported, this error is printed instead: --> 如果不支持,则会输出如下错误: + ``` error: the server doesn't have a resource type "resourceclasses" ``` @@ -391,4 +393,4 @@ be installed. Please refer to the driver's documentation for details. [Dynamic Resource Allocation KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md). 
--> - 了解更多该设计的信息, - 参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。 \ No newline at end of file + 参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。 diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md index 086e35092af80..0a6e99b939019 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -19,7 +19,7 @@ kubelet can proactively fail one or more pods on the node to reclaim resources and prevent starvation. During a node-pressure eviction, the kubelet sets the [phase](/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) for the -selected pods to `Failed`. This terminates the Pods. +selected pods to `Failed`, and terminates the Pod. Node-pressure eviction is not the same as [API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/). @@ -30,7 +30,7 @@ Node-pressure eviction is not the same as kubelet 可以主动地使节点上一个或者多个 Pod 失效,以回收资源防止饥饿。 在节点压力驱逐期间,kubelet 将所选 Pod 的[阶段](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) -设置为 `Failed`。这将终止 Pod。 +设置为 `Failed` 并终止 Pod。 节点压力驱逐不同于 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/)。 @@ -55,7 +55,7 @@ The kubelet attempts to [reclaim node-level resources](#reclaim-node-resources) before it terminates end-user pods. For example, it removes unused container images when disk resources are starved. --> -## 自我修复行为 +## 自我修复行为 {#self-healing-behavior} kubelet 在终止最终用户 Pod 之前会尝试[回收节点级资源](#reclaim-node-resources)。 例如,它会在磁盘资源不足时删除未使用的容器镜像。 @@ -75,7 +75,7 @@ pods in place of the evicted pods. 
-### 静态 Pod 的自我修复 +### 静态 Pod 的自我修复 {#self-healing-for-static-pods} 你可以使用以下标志来配置软驱逐条件: -* `eviction-soft`:一组驱逐条件,如 `memory.available<1.5Gi`, +- `eviction-soft`:一组驱逐条件,如 `memory.available<1.5Gi`, 如果驱逐条件持续时长超过指定的宽限期,可以触发 Pod 驱逐。 -* `eviction-soft-grace-period`:一组驱逐宽限期, +- `eviction-soft-grace-period`:一组驱逐宽限期, 如 `memory.available=1m30s`,定义软驱逐条件在触发 Pod 驱逐之前必须保持多长时间。 -* `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。 +- `eviction-max-pod-grace-period`:在满足软驱逐条件而终止 Pod 时使用的最大允许宽限期(以秒为单位)。 kubelet 具有以下默认硬驱逐条件: -* `memory.available<100Mi` -* `nodefs.available<10%` -* `imagefs.available<15%` -* `nodefs.inodesFree<5%`(Linux 节点) +- `memory.available<100Mi` +- `nodefs.available<10%` +- `imagefs.available<15%` +- `nodefs.inodesFree<5%`(Linux 节点) -* 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/) -* 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) -* 了解 [PodDisruptionBudgets](/zh-cn/docs/tasks/run-application/configure-pdb/) -* 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)(QoS) -* 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{}}/#create-eviction-pod-v1-core) +- 了解 [API 发起的驱逐](/zh-cn/docs/concepts/scheduling-eviction/api-eviction/) +- 了解 [Pod 优先级和抢占](/zh-cn/docs/concepts/scheduling-eviction/pod-priority-preemption/) +- 了解 [PodDisruptionBudgets](/zh-cn/docs/tasks/run-application/configure-pdb/) +- 了解[服务质量](/zh-cn/docs/tasks/configure-pod-container/quality-service-pod/)(QoS) +- 查看[驱逐 API](/docs/reference/generated/kubernetes-api/{{}}/#create-eviction-pod-v1-core) diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md index 7bd495a507951..1df056b6c7827 100644 --- a/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md +++ b/content/zh-cn/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md @@ -11,7 +11,7 @@ weight: 40 -{{< feature-state for_k8s_version="v1.26" state="alpha" >}} +{{< feature-state for_k8s_version="v1.27" state="beta" >}} 上述例子中 `effect` 使用的值为 `NoSchedule`,你也可以使用另外一个值 `PreferNoSchedule`。 -这是“优化”或“软”版本的 `NoSchedule` —— 系统会 **尽量** 避免将 Pod 调度到存在其不能容忍污点的节点上, -但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。 + + +`effect` 字段的允许值包括: + + +`NoExecute` +: 这会影响已在节点上运行的 Pod,具体影响如下: + * 如果 Pod 不能容忍这类污点,会马上被驱逐。 + * 如果 Pod 能够容忍这类污点,但是在容忍度定义中没有指定 `tolerationSeconds`, + 则 Pod 还会一直在这个节点上运行。 + * 如果 Pod 能够容忍这类污点,而且指定了 `tolerationSeconds`, + 则 Pod 还能在这个节点上继续运行这个指定的时间长度。 + 这段时间过去后,节点生命周期控制器从节点驱除这些 Pod。 + + +`NoSchedule` +: 除非具有匹配的容忍度规约,否则新的 Pod 不会被调度到带有污点的节点上。 + 当前正在节点上运行的 Pod **不会**被驱逐。 + + +`PreferNoSchedule` +: `PreferNoSchedule` 是“偏好”或“软性”的 `NoSchedule`。 + 控制平面将**尝试**避免将不能容忍污点的 Pod 调度到的节点上,但不能保证完全避免。 通常情况下,如果给一个节点添加了一个 effect 值为 `NoExecute` 的污点, -则任何不能忍受这个污点的 Pod 都会马上被驱逐,任何可以忍受这个污点的 Pod 都不会被驱逐。 +则任何不能容忍这个污点的 Pod 都会马上被驱逐,任何可以容忍这个污点的 Pod 都不会被驱逐。 但是,如果 Pod 存在一个 effect 值为 `NoExecute` 的容忍度指定了可选属性 `tolerationSeconds` 的值,则表示在给节点添加了上述污点之后, Pod 还能继续在节点上运行的时间。例如, @@ -327,7 +365,7 @@ manually add tolerations to your pods. * **Taint based Evictions**: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section. 
--> -* **基于污点的驱逐**: 这是在每个 Pod 中配置的在节点出现问题时的驱逐行为, +* **基于污点的驱逐**:这是在每个 Pod 中配置的在节点出现问题时的驱逐行为, 接下来的章节会描述这个特性。 -前文提到过污点的效果值 `NoExecute` 会影响已经在节点上运行的如下 Pod: - -* 如果 Pod 不能忍受这类污点,Pod 会马上被驱逐。 -* 如果 Pod 能够忍受这类污点,但是在容忍度定义中没有指定 `tolerationSeconds`, - 则 Pod 还会一直在这个节点上运行。 -* 如果 Pod 能够忍受这类污点,而且指定了 `tolerationSeconds`, - 则 Pod 还能在这个节点上继续运行这个指定的时间长度。 - 在节点被排空时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效果的相关污点。 +此效果被默认添加到 `node.kubernetes.io/not-ready` 和 `node.kubernetes.io/unreachable` 污点中。 如果异常状态恢复正常,kubelet 或节点控制器能够移除相关的污点。 ```yaml --- apiVersion: v1 @@ -164,12 +184,16 @@ your cluster. Those fields are: {{< note >}} - `minDomains` 字段是一个 Beta 字段,在 1.25 中默认被禁用。 - 你可以通过启用 `MinDomainsInPodTopologySpread` - [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)来启用该字段。 + `MinDomainsInPodTopologySpread` + [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)为 + Pod 拓扑分布启用 `minDomains`。自 v1.28 起,`MinDomainsInPodTopologySpread` 特性门控默认被启用。 + 在早期的 Kubernetes 集群中,此特性门控可能被显式禁用或此字段可能不可用。 {{< /note >}} * [AKS 应用程序网关 Ingress 控制器](https://docs.microsoft.com/zh-cn/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) 是一个配置 [Azure 应用程序网关](https://docs.microsoft.com/zh-cn/azure/application-gateway/overview) 的 Ingress 控制器。 +* [阿里云 MSE Ingress](https://www.alibabacloud.com/help/zh/mse/user-guide/overview-of-mse-ingress-gateways) + 是一个 Ingress 控制器,它负责配置[阿里云原生网关](https://www.alibabacloud.com/help/en/mse/product-overview/cloud-native-gateway-overview?spm=a2c63.p38356.0.0.20563003HJK9is), + 也是 [Higress](https://github.com/alibaba/higress) 的商业版本。 * [Apache APISIX Ingress 控制器](https://github.com/apache/apisix-ingress-controller) 是一个基于 [Apache APISIX 网关](https://github.com/apache/apisix) 的 Ingress 控制器。 * [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) @@ -97,6 +101,7 @@ Kubernetes 作为一个项目,目前支持和维护 which offers API gateway functionality. * [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for [HAProxy](https://www.haproxy.org/#desc). +* [Higress](https://github.com/alibaba/higress) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller. * The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme) is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc). 
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/) @@ -111,6 +116,8 @@ Kubernetes 作为一个项目,目前支持和维护 Ingress 控制器,能够提供 API 网关功能。 * [HAProxy Ingress](https://haproxy-ingress.github.io/) 是一个针对 [HAProxy](https://www.haproxy.org/#desc) 的 Ingress 控制器。 +* [Higress](https://github.com/alibaba/higress) 是一个基于 [Envoy](https://www.envoyproxy.io) 的 API 网关, + 可以作为一个 Ingress 控制器运行。 * [用于 Kubernetes 的 HAProxy Ingress 控制器](https://github.com/haproxytech/kubernetes-ingress#readme) 也是一个针对 [HAProxy](https://www.haproxy.org/#desc) 的 Ingress 控制器。 * [Istio Ingress](https://istio.io/latest/zh/docs/tasks/traffic-management/ingress/kubernetes-ingress/) diff --git a/content/zh-cn/docs/concepts/services-networking/ingress.md b/content/zh-cn/docs/concepts/services-networking/ingress.md index edb38018e3ba7..c9356f442ffea 100644 --- a/content/zh-cn/docs/concepts/services-networking/ingress.md +++ b/content/zh-cn/docs/concepts/services-networking/ingress.md @@ -171,12 +171,12 @@ Ingress 经常使用注解(Annotations)来配置一些选项,具体取决 查看你所选的 Ingress 控制器的文档,以了解其所支持的注解。 -Ingress [规约](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) +[Ingress 规约](/zh-cn/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec) 提供了配置负载均衡器或者代理服务器所需要的所有信息。 最重要的是,其中包含对所有入站请求进行匹配的规则列表。 Ingress 资源仅支持用于转发 HTTP(S) 流量的规则。 @@ -187,16 +187,16 @@ should be defined. There are some ingress controllers, that work without the definition of a default `IngressClass`. For example, the Ingress-NGINX controller can be -configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class) -`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the +configured with a [flag](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class) +`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) though, to specify the default `IngressClass` as shown [below](#default-ingress-class). --> 如果 `ingressClassName` 被省略,那么你应该定义一个[默认的 Ingress 类](#default-ingress-class)。 有些 Ingress 控制器不需要定义默认的 `IngressClass`。比如:Ingress-NGINX -控制器可以通过[参数](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class) +控制器可以通过[参数](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class) `--watch-ingress-without-class` 来配置。 -不过仍然[推荐](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) +不过仍然[推荐](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) 按[下文](#default-ingress-class)所示来设置默认的 `IngressClass`。 #### 访问没有选择算符的 Service {#service-no-selector-access} @@ -555,8 +557,7 @@ Endpoints API。 @@ -585,8 +586,8 @@ The same API limit means that you cannot manually update an Endpoints to have mo @@ -690,7 +691,8 @@ Kubernetes Service 类型允许指定你所需要的 Service 类型。 : Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a `type` for a Service. 
- You can expose the Service to the public internet using an [Ingress](/docs/concepts/services-networking/ingress/) or a + You can expose the Service to the public internet using an + [Ingress](/docs/concepts/services-networking/ingress/) or a [Gateway](https://gateway-api.sigs.k8s.io/). [`NodePort`](#type-nodeport) @@ -732,11 +734,13 @@ Kubernetes Service 类型允许指定你所需要的 Service 类型。 服务 API 中的 `type` 字段被设计为层层递进的形式 - 每层都建立在前一层的基础上。 -并不是所有云平台都作如此严格要求,但 Kubernetes 的 Service API 设计要求满足这一逻辑。 +但是,这种层层递进的形式有一个例外。 +你可以在定义 `LoadBalancer` 服务时[禁止负载均衡器分配 `NodePort`](/zh-cn/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation)。 ```yaml apiVersion: v1 kind: Service @@ -902,8 +927,7 @@ control plane). If you want to specify particular IP address(es) to proxy the port, you can set the `--nodeport-addresses` flag for kube-proxy or the equivalent `nodePortAddresses` -field of the -[kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/) +field of the [kube-proxy configuration file](/docs/reference/config-api/kube-proxy-config.v1alpha1/) to particular IP block(s). --> #### 为 `type: NodePort` 服务自定义 IP 地址配置 {#service-nodeport-custom-listen-address} @@ -939,7 +963,8 @@ kube-proxy 应视将其视为所在节点的本机地址。 此 Service 的可见形式为 `:spec.ports[*].nodePort` 以及 `.spec.clusterIP:spec.ports[*].port`。 如果设置了 kube-proxy 的 `--nodeport-addresses` 标志或 kube-proxy 配置文件中的等效字段, @@ -1022,7 +1047,8 @@ set is ignored. 针对 Service 的 `.spec.loadBalancerIP` 字段已在 Kubernetes v1.24 中被弃用。 @@ -1172,60 +1198,50 @@ Select one of the tabs. {{% tab name="GCP" %}} ```yaml -[...] metadata: - name: my-service - annotations: - networking.gke.io/load-balancer-type: "Internal" -[...] + name: my-service + annotations: + networking.gke.io/load-balancer-type: "Internal" ``` {{% /tab %}} {{% tab name="AWS" %}} ```yaml -[...] metadata: name: my-service annotations: service.beta.kubernetes.io/aws-load-balancer-internal: "true" -[...] ``` {{% /tab %}} {{% tab name="Azure" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/azure-load-balancer-internal: "true" -[...] + name: my-service + annotations: + service.beta.kubernetes.io/azure-load-balancer-internal: "true" ``` {{% /tab %}} {{% tab name="IBM Cloud" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private" -[...] + name: my-service + annotations: + service.kubernetes.io/ibm-load-balancer-cloud-provider-ip-type: "private" ``` {{% /tab %}} {{% tab name="OpenStack" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/openstack-internal-load-balancer: "true" -[...] + name: my-service + annotations: + service.beta.kubernetes.io/openstack-internal-load-balancer: "true" ``` {{% /tab %}} @@ -1233,12 +1249,10 @@ metadata: {{% tab name="百度云" %}} ```yaml -[...] metadata: - name: my-service - annotations: - service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" -[...] + name: my-service + annotations: + service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" ``` {{% /tab %}} @@ -1246,11 +1260,9 @@ metadata: {{% tab name="腾讯云" %}} ```yaml -[...] metadata: annotations: service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxx -[...] ``` {{% /tab %}} @@ -1258,23 +1270,19 @@ metadata: {{% tab name="阿里云" %}} ```yaml -[...] metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet" -[...] ``` {{% /tab %}} {{% tab name="OCI" %}} ```yaml -[...] 
metadata: - name: my-service - annotations: - service.beta.kubernetes.io/oci-load-balancer-internal: true -[...] + name: my-service + annotations: + service.beta.kubernetes.io/oci-load-balancer-internal: true ``` {{% /tab %}} {{< /tabs >}} @@ -1308,11 +1316,14 @@ spec: {{< note >}} `type: ExternalName` 的服务接受 IPv4 地址字符串,但将该字符串视为由数字组成的 DNS 名称, 而不是 IP 地址(然而,互联网不允许在 DNS 中使用此类名称)。 @@ -1441,7 +1452,8 @@ finding a Service: environment variables and DNS. When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores. -It also supports variables (see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) +It also supports variables +(see [makeLinkVariables](https://github.com/kubernetes/kubernetes/blob/dd2d12f6dc0e654c15d5db57a5f9f6ba61192726/pkg/kubelet/envvars/envvars.go#L72)) that are compatible with Docker Engine's "_[legacy container links](https://docs.docker.com/network/links/)_" feature. @@ -1672,7 +1684,9 @@ Service 是 Kubernetes REST API 中的顶级资源。你可以找到有关 关键的设计思想是在 Pod 的卷来源中允许使用 -[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1alpha1-core)。 +[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1-core)。 PersistentVolumeClaim 的标签、注解和整套字段集均被支持。 -创建这样一个 Pod 后, -临时卷控制器在 Pod 所属的命名空间中创建一个实际的 PersistentVolumeClaim 对象, -并确保删除 Pod 时,同步删除 PersistentVolumeClaim。 +创建这样一个 Pod 后,临时卷控制器在 Pod 所属的命名空间中创建一个实际的 +PersistentVolumeClaim 对象,并确保删除 Pod 时,同步删除 PersistentVolumeClaim。 如上设置将触发卷的绑定与/或制备,相应动作或者在 {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} -使用即时卷绑定时立即执行, -或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。 +使用即时卷绑定时立即执行,或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。 对于通用的临时卷,建议采用后者,这样调度器就可以自由地为 Pod 选择合适的节点。 对于即时绑定,调度器则必须选出一个节点,使得在卷可用时,能立即访问该卷。 @@ -355,8 +351,8 @@ and in this case you need to ensure that volume clean up happens separately. 
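For orientation, the Pod shape that this generic ephemeral volume discussion assumes — one whose volume claim template produces a PVC named `my-app-scratch-volume`, as noted below — could look roughly like the following sketch; the image, storage class, and size are placeholders rather than values taken from the patch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: registry.k8s.io/busybox  # placeholder image
      command: ["sleep", "1000000"]
      volumeMounts:
        - mountPath: "/scratch"
          name: scratch-volume
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-app-scratch
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: "scratch-storage-class"  # placeholder class
            resources:
              requests:
                storage: 1Gi
```

Deleting the Pod also deletes the generated PVC, which is why the retain-policy caveat above matters when the data must outlive the Pod.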
拥有通用临时存储的 Pod 是提供临时存储 (ephemeral storage) 的 PersistentVolumeClaim 的所有者。 当 Pod 被删除时,Kubernetes 垃圾收集器会删除 PVC, 然后 PVC 通常会触发卷的删除,因为存储类的默认回收策略是删除卷。 -你可以使用带有 `retain` 回收策略的 StorageClass 创建准临时 (quasi-ephemeral) 本地存储: -该存储比 Pod 寿命长,在这种情况下,你需要确保单独进行卷清理。 +你可以使用带有 `retain` 回收策略的 StorageClass 创建准临时 (Quasi-Ephemeral) 本地存储: +该存储比 Pod 寿命长,所以在这种情况下,你需要确保单独进行卷清理。 自动创建的 PVC 采取确定性的命名机制:名称是 Pod 名称和卷名称的组合,中间由连字符(`-`)连接。 -在上面的示例中,PVC 将命名为 `my-app-scratch-volume` 。 +在上面的示例中,PVC 将被命名为 `my-app-scratch-volume` 。 这种确定性的命名机制使得与 PVC 交互变得更容易,因为一旦知道 Pod 名称和卷名,就不必搜索它。 -这种命名机制也引入了潜在的冲突, -不同的 Pod 之间(名为 “Pod-a” 的 Pod 挂载名为 "scratch" 的卷, -和名为 "pod" 的 Pod 挂载名为 “a-scratch” 的卷,这两者均会生成名为 -"pod-a-scratch" 的 PVC),或者在 Pod 和手工创建的 PVC 之间可能出现冲突。 +这种命名机制也引入了潜在的冲突,不同的 Pod 之间(名为 “Pod-a” 的 +Pod 挂载名为 "scratch" 的卷,和名为 "pod" 的 Pod 挂载名为 “a-scratch” 的卷, +这两者均会生成名为 "pod-a-scratch" 的 PVC),或者在 Pod 和手工创建的 +PVC 之间可能出现冲突。 -以下冲突会被检测到:如果 PVC 是为 Pod 创建的,那么它只用于临时卷。 +这类冲突会被检测到:如果 PVC 是为 Pod 创建的,那么它只用于临时卷。 此检测基于所有权关系。现有的 PVC 不会被覆盖或修改。 但这并不能解决冲突,因为如果没有正确的 PVC,Pod 就无法启动。 +{{< caution >}} -{{< caution >}} 当同一个命名空间中命名 Pod 和卷时,要小心,以防止发生此类冲突。 {{< /caution >}} @@ -461,7 +457,7 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont - 有关设计的更多信息,参阅 [Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md)。 -- 本特性下一步开发的更多信息,参阅 +- 关于本特性下一步开发的更多信息,参阅 [enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596)。 旧版本的 Kubernetes 仍支持这些“树内(In-Tree)”持久卷类型: @@ -943,13 +943,10 @@ Older versions of Kubernetes also supported the following in-tree PersistentVolu * [`cinder`](/zh-cn/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage) (v1.27 开始**不可用**) * `photonPersistentDisk` - Photon 控制器持久化盘。(从 v1.15 版本开始将**不可用**) -* [`scaleIO`](/zh-cn/docs/concepts/storage/volumes/#scaleio) - ScaleIO 卷(v1.21 之后**不可用**) -* [`flocker`](/zh-cn/docs/concepts/storage/volumes/#flocker) - Flocker 存储 - (v1.25 之后**不可用**) -* [`quobyte`](/zh-cn/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷 - (v1.25 之后**不可用**) -* [`storageos`](/zh-cn/docs/concepts/storage/volumes/#storageos) - StorageOS 卷 - (v1.25 之后**不可用**) +* `scaleIO` - ScaleIO 卷(v1.21 之后**不可用**) +* `flocker` - Flocker 存储 (v1.25 之后**不可用**) +* `quobyte` - Quobyte 卷 (v1.25 之后**不可用**) +* `storageos` - StorageOS 卷 (v1.25 之后**不可用**) ### 带有 Secret、DownwardAPI 和 ConfigMap 的配置示例 {#example-configuration-secret-downwardapi-configmap} -{{< codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" >}} +{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} ### 带有非默认权限模式设置的 Secret 的配置示例 {#example-configuration-secrets-nondefault-permission-mode} -{{< codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" >}} +{{% code_sample file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}} StorageClass 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。 -当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数, -一旦创建了对象就不能再对其更新。 +当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数。 ### 删除策略 {#deletion-policy} -卷快照类具有 `deletionPolicy` 属性。用户可以配置当所绑定的 VolumeSnapshot -对象将被删除时,如何处理 VolumeSnapshotContent 对象。 +卷快照类具有 [deletionPolicy] 属性(/zh-cn/docs/concepts/storage/volume-snapshots/#delete)。 +用户可以配置当所绑定的 VolumeSnapshot 对象将被删除时,如何处理 VolumeSnapshotContent 对象。 卷快照类的这个策略可以是 `Retain` 或者 `Delete`。这个策略字段必须指定。 如果删除策略是 `Delete`,那么底层的存储快照会和 VolumeSnapshotContent 对象 diff --git a/content/zh-cn/docs/concepts/storage/volumes.md 
b/content/zh-cn/docs/concepts/storage/volumes.md index f9ba07391527b..69564a7cf6bee 100644 --- a/content/zh-cn/docs/concepts/storage/volumes.md +++ b/content/zh-cn/docs/concepts/storage/volumes.md @@ -431,16 +431,15 @@ The `emptyDir.medium` field controls where `emptyDir` volumes are stored. By default `emptyDir` volumes are stored on whatever medium that backs the node such as disk, SSD, or network storage, depending on your environment. If you set the `emptyDir.medium` field to `"Memory"`, Kubernetes mounts a tmpfs (RAM-backed -filesystem) for you instead. While tmpfs is very fast, be aware that unlike -disks, tmpfs is cleared on node reboot and any files you write count against -your container's memory limit. +filesystem) for you instead. While tmpfs is very fast be aware that, unlike +disks, files you write count against the memory limit of the container that wrote them. --> `emptyDir.medium` 字段用来控制 `emptyDir` 卷的存储位置。 默认情况下,`emptyDir` 卷存储在该节点所使用的介质上; 此处的介质可以是磁盘、SSD 或网络存储,这取决于你的环境。 你可以将 `emptyDir.medium` 字段设置为 `"Memory"`, 以告诉 Kubernetes 为你挂载 tmpfs(基于 RAM 的文件系统)。 -虽然 tmpfs 速度非常快,但是要注意它与磁盘不同:tmpfs 在节点重启时会被清除, +虽然 tmpfs 速度非常快,但是要注意它与磁盘不同, 并且你所写入的所有文件都会计入容器的内存消耗,受容器内存限制约束。 @@ -73,7 +73,7 @@ As a result, the following storage functionality is not supported on Windows nod * 块设备映射 * 内存作为存储介质(例如 `emptyDir.medium` 设置为 `Memory`) * 类似 UID/GID、各用户不同的 Linux 文件系统访问许可等文件系统特性 -* 使用 [DefaultMode 设置 Secret 权限](/zh-cn/docs/concepts/configuration/secret/#secret-files-permissions) +* 使用 [DefaultMode 设置 Secret 权限](/zh-cn/docs/tasks/inject-data-application/distribute-credentials-secure/#set-posix-permissions-for-secret-keys) (因为该特性依赖 UID/GID) * 基于 NFS 的存储和卷支持 * 扩展已挂载卷(resizefs) diff --git a/content/zh-cn/docs/concepts/workloads/_index.md b/content/zh-cn/docs/concepts/workloads/_index.md index 5e8f9e08b733d..024e5f8ca2502 100644 --- a/content/zh-cn/docs/concepts/workloads/_index.md +++ b/content/zh-cn/docs/concepts/workloads/_index.md @@ -1,15 +1,22 @@ --- title: "工作负载" weight: 55 -description: 理解 Pods,Kubernetes 中可部署的最小计算对象,以及辅助它运行它们的高层抽象对象。 +description: 理解 Kubernetes 中可部署的最小计算对象 Pod 以及辅助 Pod 运行的上层抽象。 +card: + title: 工作负载与 Pod + name: concepts + weight: 60 --- - {{< glossary_definition term_id="workload" length="short" >}} @@ -19,29 +26,24 @@ Whether your workload is a single component or several that work together, on Ku it inside a set of [_pods_](/docs/concepts/workloads/pods). In Kubernetes, a Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}} on your cluster. - -A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then -a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that -Pod is running means that all the Pods on that node fail. Kubernetes treats that level -of failure as final: you would need to create a new Pod even if the node later recovers. 
--> 在 Kubernetes 中,无论你的负载是由单个组件还是由多个一同工作的组件构成, 你都可以在一组 [**Pod**](/zh-cn/docs/concepts/workloads/pods) 中运行它。 在 Kubernetes 中,Pod 代表的是集群上处于运行状态的一组 -{{< glossary_tooltip text="容器" term_id="container" >}} 的集合。 +{{< glossary_tooltip text="容器" term_id="container" >}}的集合。 Kubernetes Pod 遵循[预定义的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 例如,当在你的集群中运行了某个 Pod,但是 Pod 所在的 {{< glossary_tooltip text="节点" term_id="node" >}} 出现致命错误时, 所有该节点上的 Pod 的状态都会变成失败。Kubernetes 将这类失败视为最终状态: -即使该节点后来恢复正常运行,你也需要创建新的 `Pod` 以恢复应用。 +即使该节点后来恢复正常运行,你也需要创建新的 Pod 以恢复应用。 -* [`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和 - [`ReplicaSet`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) +* [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和 + [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) (替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}})。 - `Deployment` 很适合用来管理你的集群上的无状态应用,`Deployment` 中的所有 - `Pod` 都是相互等价的,并且在需要的时候被替换。 + Deployment 很适合用来管理你的集群上的无状态应用,Deployment 中的所有 + Pod 都是相互等价的,并且在需要的时候被替换。 * [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/) 让你能够运行一个或者多个以某种方式跟踪应用状态的 Pod。 - 例如,如果你的负载会将数据作持久存储,你可以运行一个 `StatefulSet`,将每个 - `Pod` 与某个 [`PersistentVolume`](/zh-cn/docs/concepts/storage/persistent-volumes/) - 对应起来。你在 `StatefulSet` 中各个 `Pod` 内运行的代码可以将数据复制到同一 - `StatefulSet` 中的其它 `Pod` 中以提高整体的服务可靠性。 + 例如,如果你的负载会将数据作持久存储,你可以运行一个 StatefulSet,将每个 + Pod 与某个 [PersistentVolume](/zh-cn/docs/concepts/storage/persistent-volumes/) + 对应起来。你在 StatefulSet 中各个 Pod 内运行的代码可以将数据复制到同一 + StatefulSet 中的其它 Pod 中以提高整体的服务可靠性。 * [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/) - 定义提供节点本地支撑设施的 `Pod`。这些 Pod 可能对于你的集群的运维是 + 定义提供节点本地支撑设施的 Pod。这些 Pod 可能对于你的集群的运维是 非常重要的,例如作为网络链接的辅助工具或者作为网络 {{< glossary_tooltip text="插件" term_id="addons" >}} 的一部分等等。每次你向集群中添加一个新节点时,如果该节点与某 `DaemonSet` - 的规约匹配,则控制平面会为该 `DaemonSet` 调度一个 `Pod` 到该新节点上运行。 + 的规约匹配,则控制平面会为该 DaemonSet 调度一个 Pod 到该新节点上运行。 * [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 和 [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)。 - 定义一些一直运行到结束并停止的任务。`Job` 用来执行一次性任务,而 - `CronJob` 用来执行的根据时间规划反复运行的任务。 + 定义一些一直运行到结束并停止的任务。 + 你可以使用 [Job](/zh-cn/docs/concepts/workloads/controllers/job/) + 来定义只需要执行一次并且执行后即视为完成的任务。你可以使用 + [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/) + 来根据某个排期表来多次运行同一个 Job。 在庞大的 Kubernetes 生态系统中,你还可以找到一些提供额外操作的第三方工作负载相关的资源。 通过使用[定制资源定义(CRD)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/), 你可以添加第三方工作负载资源,以完成原本不是 Kubernetes 核心功能的工作。 -例如,如果你希望运行一组 `Pod`,但要求**所有** Pod 都可用时才执行操作 +例如,如果你希望运行一组 Pod,但要求**所有** Pod 都可用时才执行操作 (比如针对某种高吞吐量的分布式任务),你可以基于定制资源实现一个能够满足这一需求的扩展, 并将其安装到集群中运行。 @@ -127,23 +135,23 @@ then you can implement or install an extension that does provide that feature. 
除了阅读了解每类资源外,你还可以了解与这些资源相关的任务: -* [使用 `Deployment` 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) +* [使用 Deployment 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) * 以[单实例](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)或者[多副本集合](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/) 的形式运行有状态的应用; -* [使用 `CronJob` 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) +* [使用 CronJob 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) -要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置部分](/zh-cn/docs/concepts/configuration/)。 +要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置](/zh-cn/docs/concepts/configuration/)节。 一旦你的应用处于运行状态,你就可能想要以 -[`Service`](/zh-cn/docs/concepts/services-networking/service/) +[Service](/zh-cn/docs/concepts/services-networking/service/) 的形式使之可在互联网上访问;或者对于 Web 应用而言,使用 -[`Ingress`](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。 +[Ingress](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md index 74be6c193d537..eb031308efdd7 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md @@ -2144,11 +2144,112 @@ For example, when this value is set to 30%, the new ReplicaSet can be scaled up rolling update starts, such that the total number of old and new Pods does not exceed 130% of desired Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods. 
+ +Here are some Rolling Update Deployment examples that use the `maxUnavailable` and `maxSurge`: --> 例如,当此值为 30% 时,启动滚动更新后,会立即对新的 ReplicaSet 扩容,同时保证新旧 Pod 的总数不超过所需 Pod 总数的 130%。一旦旧 Pod 被杀死,新的 ReplicaSet 可以进一步扩容, 同时确保更新期间的任何时候运行中的 Pod 总数最多为所需 Pod 总数的 130%。 +以下是一些使用 `maxUnavailable` 和 `maxSurge` 的滚动更新 Deployment 的示例: + +{{< tabs name="tab_with_md" >}} +{{% tab name="最大不可用" %}} + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 +``` + +{{% /tab %}} +{{% tab name="最大峰值" %}} + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 +``` + +{{% /tab %}} +{{% tab name="两项混合" %}} + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 1 +``` + +{{% /tab %}} +{{< /tabs >}} + ### 进度期限秒数 {#progress-deadline-seconds} - + `.spec.progressDeadlineSeconds` 是一个可选字段,用于指定系统在报告 Deployment [进展失败](#failed-deployment) 之前等待 Deployment 取得进展的秒数。 这类报告会在资源状态中体现为 `type: Progressing`、`status: False`、 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/job.md b/content/zh-cn/docs/concepts/workloads/controllers/job.md index 482c2c737419a..06ec1cfd7660d 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/job.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/job.md @@ -890,7 +890,7 @@ These are some requirements and semantics of the API: are ignored. When no rule matches the Pod failure, the default handling applies. - you may want to restrict a rule to a specific container by specifying its name - in`spec.podFailurePolicy.rules[*].containerName`. When not specified the rule + in`spec.podFailurePolicy.rules[*].onExitCodes.containerName`. When not specified the rule applies to all containers. When specified, it should match one the container or `initContainer` names in the Pod template. - you may specify the action taken when a Pod failure policy is matched by @@ -910,9 +910,9 @@ These are some requirements and semantics of the API: - 在 `spec.podFailurePolicy.rules` 中设定的 Pod 失效策略规则将按序评估。 一旦某个规则与 Pod 失效策略匹配,其余规则将被忽略。 当没有规则匹配 Pod 失效策略时,将会采用默认的处理方式。 -- 你可能希望在 `spec.podFailurePolicy.rules[*].containerName` - 中通过指定的名称将规则限制到特定容器。 - 如果不设置,规则将适用于所有容器。 +- 你可能希望在 `spec.podFailurePolicy.rules[*].onExitCodes.containerName` + 中通过指定的名称限制只能针对特定容器应用对应的规则。 + 如果不设置此属性,规则将适用于所有容器。 如果指定了容器名称,它应该匹配 Pod 模板中的一个普通容器或一个初始容器(Init Container)。 - 你可以在 `spec.podFailurePolicy.rules[*].action` 指定当 Pod 失效策略发生匹配时要采取的操作。 可能的值为: @@ -1155,17 +1155,13 @@ consume. 
## Job 模式 {#job-patterns} -Job 对象可以用来支持多个 Pod 的可靠的并发执行。 -Job 对象不是设计用来支持相互通信的并行进程的,后者一般在科学计算中应用较多。 -Job 的确能够支持对一组相互独立而又有所关联的**工作条目**的并行处理。 +Job 对象可以用来处理一组相互独立而又彼此关联的“工作条目”。 这类工作条目可能是要发送的电子邮件、要渲染的视频帧、要编解码的文件、NoSQL 数据库中要扫描的主键范围等等。 @@ -1182,25 +1178,35 @@ The tradeoffs are: 并行计算的模式有好多种,每种都有自己的强项和弱点。这里要权衡的因素有: +- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。 + 为每个工作条目创建一个 Job 的做法会给用户带来一些额外的负担,系统需要管理大量的 Job 对象。 + 用一个 Job 对象来完成所有工作条目的做法更适合处理大量工作条目的场景。 +- 创建数目与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。 + 当 Pod 个数与工作条目数目相等时,通常不需要在 Pod 中对现有代码和容器做较大改动; + 让每个 Pod 能够处理多个工作条目的做法更适合于工作条目数量较大的场合。 + -- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。 - 后者更适合处理大量工作条目的场景; - 前者会给用户带来一些额外的负担,而且需要系统管理大量的 Job 对象。 -- 创建与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。 - 前者通常不需要对现有代码和容器做较大改动; - 后者则更适合工作条目数量较大的场合,原因同上。 - 有几种技术都会用到工作队列。这意味着需要运行一个队列服务, 并修改现有程序或容器使之能够利用该工作队列。 与之比较,其他方案在修改现有容器化应用以适应需求方面可能更容易一些。 +- 当 Job 与某个[无头 Service](/zh-cn/docs/concepts/services-networking/service/#headless-services) + 之间存在关联时,你可以让 Job 中的 Pod 之间能够相互通信,从而协作完成计算。 下面是对这些权衡的汇总,第 2 到 4 列对应上面的权衡比较。 模式的名称对应了相关示例和更详细描述的链接。 @@ -1222,8 +1228,8 @@ The pattern names are also links to examples and more detailed description. | [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 有时 | | [Pod 数量可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | | [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | ✓ | | ✓ | -| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | | [带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/) | ✓ | 有时 | 有时 | +| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | | 模式 | `.spec.completions` | `.spec.parallelism` | | ----- |:-------------------:|:--------------------:| | [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | 任意值 | | [Pod 个数可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | 任意值 | | [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | W | | 任意值 | -| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 | | [带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/) | W | W | +| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 | -### 有序索引 {#ordinal-index} +### 序号索引 {#ordinal-index} 对于具有 N 个[副本](#replicas)的 StatefulSet,该 StatefulSet 中的每个 Pod 将被分配一个整数序号, -该序号在此 StatefulSet 上是唯一的。默认情况下,这些 Pod 将被从 0 到 N-1 的序号。 +该序号在此 StatefulSet 中是唯一的。默认情况下,这些 Pod 将被赋予从 0 到 N-1 的序号。 +StatefulSet 的控制器也会添加一个包含此索引的 Pod 标签:`apps.kubernetes.io/pod-index`。 +### Pod 索引标签 {#pod-index-label} + +{{< feature-state for_k8s_version="v1.28" state="beta" >}} + + +当 StatefulSet {{}}创建一个 Pod 时, +新的 Pod 会被打上 `apps.kubernetes.io/pod-index` 标签。标签的取值为 Pod 的序号索引。 +此标签使你能够将流量路由到特定索引值的 Pod、使用 Pod 索引标签来过滤日志或度量值等等。 +注意要使用这一特性需要启用特性门控 `PodIndexLabel`,而该门控默认是被启用的。 + -### 创建一个新页面{#creating-a-new-page} +### 创建一个新页面 {#creating-a-new-page} 为每个新页面选择其[内容类型](/zh-cn/docs/contribute/style/page-content-types/)。 文档站提供了模板或 [Hugo Archetypes](https://gohugo.io/content-management/archetypes/) 来创建新的内容页面。 @@ -225,13 +225,13 @@ reusable, and you want the reader to try it out themselves. 
添加新的独立示例文件(如 YAML 文件)时,将代码放在 `/examples/` 的某个子目录中, -其中 `` 是该主题的语言。在主题文件中使用 `codenew` 短代码: +其中 `` 是该主题的语言。在主题文件中使用 `code_sample` 短代码: ```none -{{%/* codenew file="/my-example-yaml>" */%}} +{{%/* code_sample file="/my-example-yaml>" */%}} ``` -{{< note >}} 将新的 YAML 文件添加到 `/examples` 目录时,请确保该文件也在 `/examples_test.go` 文件中被引用。 当提交拉取请求时,网站的 Travis CI 会自动运行此测试用例,以确保所有示例都通过测试。 @@ -284,8 +284,8 @@ submitted to ensure all examples pass the tests. For an example of a topic that uses this technique, see [Running a Single-Instance Stateful Application](/docs/tasks/run-application/run-single-instance-stateful-application/). --> -有关使用此技术的主题的示例,请参见 -[运行单实例有状态的应用](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)。 +有关使用此技术的主题的示例, +请参见[运行单实例有状态的应用](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)。 -* 了解[使用页面内容类型](/zh-cn/docs/contribute/style/page-content-types/). -* 了解[创建 PR](/zh-cn/docs/contribute/new-content/open-a-pr/). - +* 了解[使用页面内容类型](/zh-cn/docs/contribute/style/page-content-types/)。 +* 了解[创建 PR](/zh-cn/docs/contribute/new-content/open-a-pr/)。 diff --git a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index 9cfd2cf49dc86..5ad0658471c08 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md +++ b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -17,8 +17,10 @@ weight: 120 在一个 Kubernetes 集群中,工作节点上的组件(kubelet 和 kube-proxy)需要与 @@ -27,8 +29,9 @@ Kubernetes 控制平面组件通信,尤其是 kube-apiserver。 我们强烈建议使用节点上的客户端 TLS 证书。 启动引导这些组件的正常过程,尤其是需要证书来与 kube-apiserver 安全通信的工作节点, @@ -36,8 +39,8 @@ This in turn, can make it challenging to initialize or scale a cluster. 这也使得初始化或者扩缩一个集群的操作变得具有挑战性。 @@ -61,16 +64,17 @@ When a worker node starts up, the kubelet does the following: 1. 寻找自己的 `kubeconfig` 文件 -2. 检索 API 服务器的 URL 和凭据,通常是来自 `kubeconfig` 文件中的 +1. 检索 API 服务器的 URL 和凭据,通常是来自 `kubeconfig` 文件中的 TLS 密钥和已签名证书 -3. 尝试使用这些凭据来与 API 服务器通信 +1. 尝试使用这些凭据来与 API 服务器通信 负责部署和管理集群的人有以下责任: 1. 创建 CA 密钥和证书 -2. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上 -3. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、 +1. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上 +1. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、 CN 取值与众不同的密钥和证书 -4. 使用 CA 密钥对 kubelet 证书签名 -5. 将 kubelet 密钥和签名的证书发布到 kubelet 运行所在的特定节点上 +1. 使用 CA 密钥对 kubelet 证书签名 +1. 将 kubelet 密钥和签名的证书发布到 kubelet 运行所在的特定节点上 本文中描述的 TLS 启动引导过程有意简化甚至完全自动化上述过程, @@ -121,16 +126,16 @@ In the bootstrap initialization process, the following occurs: 1. kubelet 启动 2. kubelet 看到自己**没有**对应的 `kubeconfig` 文件 @@ -145,12 +150,12 @@ In the bootstrap initialization process, the following occurs: 来批复该 CSR 9. kubelet 所需要的证书被创建 10. 证书被发放给 kubelet 11. kubelet 取回该证书 @@ -190,8 +195,9 @@ In addition, you need your Kubernetes Certificate Authority (CA). ## 证书机构 {#certificate-authority} @@ -200,10 +206,12 @@ to sign the kubelet certificate. As before, it is your responsibility to distrib 如前所述,将证书机构密钥和证书发布到控制平面节点是你的责任。 就本文而言,我们假定这些数据被发布到控制平面节点上的 `/var/lib/kubernetes/ca.pem`(证书)和 `/var/lib/kubernetes/ca-key.pem`(密钥)文件中。 @@ -247,8 +255,9 @@ containing the signing certificate, for example ### 初始启动引导认证 {#initial-bootstrap-authentication} @@ -262,16 +271,17 @@ bootstrap credentials, the following two authenticators are recommended for ease of provisioning. 1. [Bootstrap Tokens](#bootstrap-tokens) -2. [Token authentication file](#token-authentication-file) +1. 
[Token authentication file](#token-authentication-file) --> 尽管所有身份认证策略都可以用来对 kubelet 的初始启动凭据来执行认证, -出于容易准备的因素,建议使用如下两个身份认证组件: +但出于容易准备的因素,建议使用如下两个身份认证组件: 1. [启动引导令牌(Bootstrap Token)](#bootstrap-tokens) 2. [令牌认证文件](#token-authentication-file) 启动引导令牌是一种对 kubelet 进行身份认证的方法,相对简单且容易管理, 且不需要在启动 kube-apiserver 时设置额外的标志。 @@ -280,15 +290,16 @@ Using bootstrap tokens is a simpler and more easily managed method to authentica Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to: 1. create and retrieve CSRs -2. be automatically approved to request node client certificates, if automatic approval is enabled. +1. be automatically approved to request node client certificates, if automatic approval is enabled. --> 无论选择哪种方法,这里的需求是 kubelet 能够被身份认证为某个具有如下权限的用户: 1. 创建和读取 CSR -2. 在启用了自动批复时,能够在请求节点客户端证书时得到自动批复 +1. 在启用了自动批复时,能够在请求节点客户端证书时得到自动批复 使用启动引导令牌执行身份认证的 kubelet 会被认证为 `system:bootstrappers` 组中的用户。这是使用启动引导令牌的一种标准方法。 @@ -301,38 +312,41 @@ requests related to certificate provisioning. With RBAC in place, scoping the tokens to a group allows for great flexibility. For example, you could disable a particular bootstrap group's access when you are done provisioning the nodes. --> -随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC) -策略上,从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)) +随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC)策略上, +从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)) 仅限于客户端申请提供证书。当 RBAC 被配置启用时,可以将令牌限制到某个组, 从而提高灵活性。例如,你可以在准备节点期间禁止某特定启动引导组的访问。 #### 启动引导令牌 {#bootstrap-tokens} -启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/) -详述。启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。 +启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)详述。 +启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。 你可以在整个集群中使用同一个令牌,也可以为每个节点发放单独的令牌。 这一过程有两个方面: 1. 基于令牌 ID、机密数据和范畴信息创建 Kubernetes Secret -2. 将令牌发放给 kubelet +1. 将令牌发放给 kubelet 从 kubelet 的角度,所有令牌看起来都很像,没有特别的含义。 @@ -407,7 +421,8 @@ certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` +group to the cluster role `system:node-bootstrapper`. --> ### 授权 kubelet 创建 CSR {#authorize-kubelet-to-create-csr} @@ -419,6 +434,9 @@ To do this, you only need to create a `ClusterRoleBinding` that binds the `syste 为了实现这一点,你只需要创建 `ClusterRoleBinding`,将 `system:bootstrappers` 组绑定到集群角色 `system:node-bootstrapper`。 + ```yaml # 允许启动引导节点创建 CSR apiVersion: rbac.authorization.k8s.io/v1 @@ -443,7 +461,7 @@ the controller-manager is responsible for issuing actual signed certificates. 
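To make that hand-off concrete: the signer runs inside kube-controller-manager, and the CA material placed at `/var/lib/kubernetes/ca.pem` and `/var/lib/kubernetes/ca-key.pem` earlier on this page is what it signs with. A minimal sketch of just the signing-related flags (the rest of the invocation is omitted, and the paths simply follow the layout assumed above):

```shell
# Point the built-in CSR signer at the cluster CA so that certificates it
# issues chain to the CA that kube-apiserver already trusts for client auth.
kube-controller-manager \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem
```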
--> ## kube-controller-manager 配置 {#kube-controller-manager-configuration} -API 服务器从 kubelet 收到证书请求并对这些请求执行身份认证, +尽管 API 服务器从 kubelet 收到证书请求并对这些请求执行身份认证, 但真正负责发放签名证书的是控制器管理器(controller-manager)。 由于这些被签名的证书反过来会被 kubelet 用来在 kube-apiserver 执行普通的 kubelet 身份认证,很重要的一点是为控制器管理器所提供的 CA 也被 kube-apiserver 信任用来执行身份认证。CA 密钥和证书是通过 kube-apiserver 的标志 -`--client-ca-file=FILENAME`(例如,`--client-ca-file=/var/lib/kubernetes/ca.pem`), -来设定的,正如 kube-apiserver 配置节所述。 +`--client-ca-file=FILENAME`(例如 `--client-ca-file=/var/lib/kubernetes/ca.pem`)来设定的, +正如 kube-apiserver 配置节所述。 要将 Kubernetes CA 密钥和证书提供给 kube-controller-manager,可使用以下标志: @@ -530,23 +549,30 @@ RBAC permissions to the correct group. 许可权限有两组: * `nodeclient`:如果节点在为节点创建新的证书,则该节点还没有证书。 - 该节点使用前文所列的令牌之一来执行身份认证,因此是组 `system:bootstrappers` 组的成员。 + 该节点使用前文所列的令牌之一来执行身份认证,因此是 `system:bootstrappers` 组的成员。 * `selfnodeclient`:如果节点在对证书执行续期操作,则该节点已经拥有一个证书。 节点持续使用现有的证书将自己认证为 `system:nodes` 组的成员。 要允许 kubelet 请求并接收新的证书,可以创建一个 `ClusterRoleBinding` 将启动引导节点所处的组 `system:bootstrappers` 绑定到为其赋予访问权限的 `ClusterRole` `system:certificates.k8s.io:certificatesigningrequests:nodeclient`: + ```yaml # 批复 "system:bootstrappers" 组的所有 CSR apiVersion: rbac.authorization.k8s.io/v1 @@ -564,13 +590,17 @@ roleRef: ``` 要允许 kubelet 对其客户端证书执行续期操作,可以创建一个 `ClusterRoleBinding` 将正常工作的节点所处的组 `system:nodes` 绑定到为其授予访问许可的 `ClusterRole` `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`: + ```yaml # 批复 "system:nodes" 组的 CSR 续约请求 apiVersion: rbac.authorization.k8s.io/v1 @@ -602,14 +632,14 @@ collection. 的一部分的 `csrapproving` 控制器是自动被启用的。 该控制器使用 [`SubjectAccessReview` API](/zh-cn/docs/reference/access-authn-authz/authorization/#checking-api-access) 来确定给定用户是否被授权请求 CSR,之后基于鉴权结果执行批复操作。 -为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSRs。 -该组件仅是忽略未被授权的请求。 -控制器也会作为垃圾收集的一部分清除已过期的证书。 +为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSR。 +该组件仅是忽略未被授权的请求。控制器也会作为垃圾收集的一部分清除已过期的证书。 ## kubelet 配置 {#kubelet-configuration} @@ -640,7 +670,7 @@ Its format is identical to a normal `kubeconfig` file. A sample file might look 启动引导 `kubeconfig` 文件应该放在一个 kubelet 可访问的路径下,例如 `/var/lib/kubelet/bootstrap-kubeconfig`。 -其格式与普通的 `kubeconfig` 文件完全相同。实例文件可能看起来像这样: +其格式与普通的 `kubeconfig` 文件完全相同。示例文件可能看起来像这样: ```yaml apiVersion: v1 @@ -721,12 +751,12 @@ directory specified by `--cert-dir`. 证书和密钥文件会被放到 `--cert-dir` 所指定的目录中。 -### 客户和服务证书 {#client-and-serving-certificates} +### 客户端和服务证书 {#client-and-serving-certificates} 前文所述的内容都与 kubelet **客户端**证书相关,尤其是 kubelet 用来向 kube-apiserver 认证自身身份的证书。 @@ -758,7 +788,7 @@ TLS 启动引导所提供的客户端证书默认被签名为仅用于 `client a 不过,你可以启用服务器证书,至少可以部分地通过证书轮换来实现这点。 @@ -818,9 +848,9 @@ A deployment-specific approval process for kubelet serving certificates should t 1. are requested by nodes (ensure the `spec.username` field is of the form `system:node:` and `spec.groups` contains `system:nodes`) -2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, +1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, optionally contains `digital signature` and `key encipherment`, and contains no other usages) -3. only have IP and DNS subjectAltNames that belong to the requesting node, +1. only have IP and DNS subjectAltNames that belong to the requesting node, and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request in `spec.request` to verify `subjectAltNames`) --> @@ -828,9 +858,9 @@ A deployment-specific approval process for kubelet serving certificates should t 1. 
由节点发出的请求(确保 `spec.username` 字段形式为 `system:node:` 且 `spec.groups` 包含 `system:nodes`) -2. 请求中包含服务证书用法(确保 `spec.usages` 中包含 `server auth`,可选地也可包含 +1. 请求中包含服务证书用法(确保 `spec.usages` 中包含 `server auth`,可选地也可包含 `digital signature` 和 `key encipherment`,且不包含其它用法) -3. 仅包含隶属于请求节点的 IP 和 DNS 的 `subjectAltNames`,没有 URI 和 Email +1. 仅包含隶属于请求节点的 IP 和 DNS 的 `subjectAltNames`,没有 URI 和 Email 形式的 `subjectAltNames`(解析 `spec.request` 中的 x509 证书签名请求可以检查 `subjectAltNames`) {{< /note >}} @@ -857,7 +887,11 @@ You have several options for generating these credentials: * 较老的方式:和 kubelet 在 TLS 启动引导之前所做的一样,用类似的方式创建和分发证书。 * DaemonSet:由于 kubelet 自身被加载到所有节点之上,并且有足够能力来启动基本服务, @@ -874,7 +908,7 @@ manager. --> ## kubectl 批复 {#kubectl-approval} -CSR 可以在编译进控制器内部的批复工作流之外被批复。 +CSR 可以在编译进控制器管理器内部的批复工作流之外被批复。 节点鉴权是一种特殊用途的鉴权模式,专门对 kubelet 发出的 API 请求进行授权。 - * services * endpoints @@ -57,8 +58,10 @@ Write operations: 写入操作: * 节点和节点状态(启用 `NodeRestriction` 准入插件以限制 kubelet 只能修改自己的节点) @@ -71,8 +74,11 @@ Auth-related operations: 身份认证与鉴权相关的操作: * 对于基于 TLS 的启动引导过程时使用的 [certificationsigningrequests API](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/) @@ -80,25 +86,33 @@ Auth-related operations: * 为委派的身份验证/鉴权检查创建 TokenReview 和 SubjectAccessReview 的能力 在将来的版本中,节点鉴权器可能会添加或删除权限,以确保 kubelet 具有正确操作所需的最小权限集。 -为了获得节点鉴权器的授权,kubelet 必须使用一个凭证以表示它在 `system:nodes` +为了获得节点鉴权器的授权,kubelet 必须使用一个凭据以表示它在 `system:nodes` 组中,用户名为 `system:node:`。上述的组名和用户名格式要与 [kubelet TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) 过程中为每个 kubelet 创建的标识相匹配。 `` 的值**必须**与 kubelet 注册的节点名称精确匹配。默认情况下,节点名称是由 `hostname` 提供的主机名,或者通过 kubelet `--hostname-override` @@ -114,7 +128,10 @@ To enable the Node authorizer, start the apiserver with `--authorization-mode=No 要启用节点鉴权器,请使用 `--authorization-mode=Node` 启动 API 服务器。 要限制 kubelet 可以写入的 API 对象,请使用 `--enable-admission-plugins=...,NodeRestriction,...` 启动 API 服务器,从而启用 @@ -132,8 +149,9 @@ To limit the API objects kubelets are able to write, enable the [NodeRestriction ### 在 `system:nodes` 组之外的 kubelet {#kubelets-outside-the-system-nodes-group} `system:nodes` 组之外的 kubelet 不会被 `Node` 鉴权模式授权,并且需要继续通过当前授权它们的机制来授权。 @@ -151,7 +169,7 @@ because they do not have a username in the `system:node:...` format. These kubelets would not be authorized by the `Node` authorization mode, and would need to continue to be authorized via whatever mechanism currently authorizes them. --> -在一些部署中,kubelet 具有 `system:nodes` 组的凭证, +在一些部署中,kubelet 具有 `system:nodes` 组的凭据, 但是无法给出它们所关联的节点的标识,因为它们没有 `system:node:...` 格式的用户名。 这些 kubelet 不会被 `Node` 鉴权模式授权,并且需要继续通过当前授权它们的任何机制来授权。 @@ -161,65 +179,3 @@ since the default node identifier implementation would not consider that a node --> 因为默认的节点标识符实现不会把它当作节点身份标识,`NodeRestriction` 准入插件会忽略来自这些 kubelet 的请求。 - - -### 相对于以前使用 RBAC 的版本的更新 {#upgrades-from-previous-versions-using-rbac} - - -升级的 1.7 之前的使用 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) -的集群将继续按原样运行,因为 `system:nodes` 组绑定已经存在。 - - -如果集群管理员希望开始使用 `Node` 鉴权器和 `NodeRestriction` 准入插件来限制节点对 -API 的访问,这一需求可以通过下列操作来完成且不会影响已部署的应用: - - -1. 启用 `Node` 鉴权模式 (`--authorization-mode=Node,RBAC`) 和 `NodeRestriction` 准入插件 -2. 确保所有 kubelet 的凭据符合组/用户名要求 -3. 审核 API 服务器日志以确保 `Node` 鉴权器不会拒绝来自 kubelet 的请求(日志中没有持续的 `NODE DENY` 消息) -4. 
删除 `system:node` 集群角色绑定 - - -### RBAC 节点权限 {#rbac-node-permissions} - - -在 1.6 版本中,当使用 [RBAC 鉴权模式](/zh-cn/docs/reference/access-authn-authz/rbac/) -时,`system:nodes` 集群角色会被自动绑定到 `system:node` 组。 - - -在 1.7 版本中,不再推荐将 `system:nodes` 组自动绑定到 `system:node` -角色,因为节点鉴权器通过对 Secret 和 ConfigMap 访问的额外限制完成了相同的任务。 -如果同时启用了 `Node` 和 `RBAC` 鉴权模式,1.7 版本则不会创建 `system:nodes` -组到 `system:node` 角色的自动绑定。 - - -在 1.8 版本中,绑定将根本不会被创建。 - - -使用 RBAC 时,将继续创建 `system:node` 集群角色,以便与将其他用户或组绑定到该角色的部署方法兼容。 diff --git a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md index 41346a49ccfb5..f51877dbf625c 100644 --- a/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md +++ b/content/zh-cn/docs/reference/access-authn-authz/service-accounts-admin.md @@ -244,11 +244,15 @@ to obtain short-lived API access tokens is recommended instead. ## 控制平面细节 {#control-plane-details} +### ServiceAccount 控制器 {#serviceaccount-controller} + ServiceAccount 控制器管理名字空间内的 ServiceAccount, 并确保每个活跃的名字空间中都存在名为 `default` 的 ServiceAccount。 diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md index ecc73066b9bf7..0bfba41b835f2 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager.md @@ -200,12 +200,12 @@ Path to the file containing Azure container registry configuration information. 针对 --secure-port 端口上请求执行监听操作的 IP 地址。 所对应的网络接口必须从集群中其它位置可访问(含命令行及 Web 客户端)。 如果此值为空或者设定为非特定地址(0.0.0.0::), -意味着所有网络接口都在监听范围。 +意味着所有网络接口和 IP 地址簇都在监听范围。 @@ -448,6 +448,19 @@ Filename containing a PEM-encoded RSA or ECDSA private key used to sign certific + +--concurrent-cron-job-syncs int32     默认值:5 + + +

+ +可以并发同步的 CronJob 对象个数。数值越大意味着对 CronJob 的响应越及时, +同时也意味着更大的 CPU(和网络带宽)压力。 +

+ + --concurrent-deployment-syncs int32     默认值:5 @@ -500,9 +513,10 @@ The number of garbage collector workers that are allowed to sync concurrently. - - - +--concurrent-horizontal-pod-autoscaler-syncs int32     默认值:5 + + + @@ -510,7 +524,20 @@ The number of horizontal pod autoscaler objects that are allowed to sync concurr 允许并发执行的、对水平 Pod 自动扩缩器对象进行同步的数量。 更大的数字 = 响应更快的水平 Pod 自动缩放器对象处理,但需要更高的 CPU(和网络)负载。

- +

+ + + +--concurrent-job-syncs int32     默认值:5 + + +

+ +可以并发同步的 Job 对象个数。较大的数值意味着更快的 Job 终结操作, +不过也意味着更多的 CPU (和网络)占用。 +

@@ -623,12 +650,24 @@ The number of statefulset objects that are allowed to sync concurrently. Larger -可以并发同步的 TTL-after-finished 控制器线程个数。 +可以并发同步的 ttl-after-finished-controller 线程个数。 + +--concurrent-validating-admission-policy-status-syncs int32     默认值:5 + + +

+ +可以并发同步的 ValidatingAdmissionPolicyStatusController 线程个数。 +

+ + --configure-cloud-routes     默认值:true @@ -671,13 +710,24 @@ Interval between starting controller managers. 要启用的控制器列表。* 表示启用所有默认启用的控制器; foo 启用名为 foo 的控制器; -foo 表示禁用名为 foo 的控制器。
-控制器的全集:attachdetach、bootstrapsigner、cloud-node-lifecycle、clusterrole-aggregation、cronjob、csrapproving、csrcleaner、csrsigning、daemonset、deployment、disruption、endpoint、endpointslice、endpointslicemirroring、ephemeral-volume、garbagecollector、horizontalpodautoscaling、job、namespace、nodeipam、nodelifecycle、persistentvolume-binder、persistentvolume-expander、podgc、pv-protection、pvc-protection、replicaset、replicationcontroller、resourcequota、root-ca-cert-publisher、route、service、serviceaccount、serviceaccount-token、statefulset、tokencleaner、ttl、ttl-after-finished
-默认禁用的控制器有:bootstrapsigner 和 tokencleaner。 +控制器的全集:bootstrap-signer-controller、certificatesigningrequest-approving-controller、 +certificatesigningrequest-cleaner-controller、certificatesigningrequest-signing-controller、 +cloud-node-lifecycle-controller、clusterrole-aggregation-controller、cronjob-controller、daemonset-controller、 +deployment-controller、disruption-controller、endpoints-controller、endpointslice-controller、 +endpointslice-mirroring-controller、ephemeral-volume-controller、garbage-collector-controller、 +horizontal-pod-autoscaler-controller、job-controller、namespace-controller、node-ipam-controller、 +node-lifecycle-controller、node-route-controller、persistentvolume-attach-detach-controller、 +persistentvolume-binder-controller、persistentvolume-expander-controller、persistentvolume-protection-controller、 +persistentvolumeclaim-protection-controller、pod-garbage-collector-controller、replicaset-controller、 +replicationcontroller-controller、resourcequota-controller、root-ca-certificate-publisher-controller、 +service-lb-controller、serviceaccount-controller、serviceaccount-token-controller、statefulset-controller、 +token-cleaner-controller、ttl-after-finished-controller、ttl-controller
+默认禁用的控制器有:bootstrap-signer-controller、token-cleaner-controller。 @@ -790,11 +840,13 @@ The length of endpoint slice updates batching period. Processing of pod changes 当云驱动程序设置为 external 时要使用的插件名称。此字符串可以为空。 只能在云驱动程序为 external 时设置。 -目前用来保证节点控制器和卷控制器能够在三种云驱动上正常工作。 +目前用来保证 node-ipam-controller、persistentvolume-binder-controller、persistentvolume-expander-controller +和 attach-detach-controller 能够在三种云驱动上正常工作。 + @@ -809,10 +861,9 @@ A set of key=value pairs that describe feature gates for alpha/experimental feat APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
-APISelfSubjectReview=true|false (BETA - default=true)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (BETA - default=true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - default=false)
+AdmissionWebhookMatchConditions=true|false (BETA - default=true)
AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
@@ -821,32 +872,32 @@ AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
+CRDValidationRatcheting=true|false (ALPHA - default=false)
CSIMigrationPortworx=true|false (BETA - default=false)
-CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (BETA - default=true)
CSIVolumeHealth=true|false (ALPHA - default=false)
CloudControllerManagerWebhook=true|false (ALPHA - default=false)
CloudDualStackNodeIPs=true|false (ALPHA - default=false)
ClusterTrustBundle=true|false (ALPHA - default=false)
ComponentSLIs=true|false (BETA - default=true)
+ConsistentListFromCache=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
+CronJobsScheduledAnnotation=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
+DevicePluginCDIDevices=true|false (ALPHA - default=false)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DynamicResourceAllocation=true|false (ALPHA - default=false)
ElasticIndexedJob=true|false (BETA - default=true)
EventedPLEG=true|false (BETA - default=false)
-ExpandedDNSConfig=true|false (BETA - default=true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (BETA - default=true)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
-IPTablesOwnershipCleanup=true|false (BETA - default=true)
InPlacePodVerticalScaling=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
@@ -854,18 +905,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
-InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
+JobBackoffLimitPerIndex=true|false (ALPHA - default=false)
JobPodFailurePolicy=true|false (BETA - default=true)
+JobPodReplacementPolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (BETA - default=true)
+KMSv2KDF=true|false (BETA - default=false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - default=false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
-KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)
KubeletPodResourcesGet=true|false (ALPHA - default=false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (BETA - default=true)
-LegacyServiceAccountTokenTracking=true|false (BETA - default=true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
@@ -875,35 +928,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=true)
-MinimizeIPTablesRestore=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
MultiCIDRServiceAllocator=true|false (ALPHA - default=false)
-NetworkPolicyStatus=true|false (ALPHA - default=false)
NewVolumeManagerReconstruction=true|false (BETA - default=true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeLogQuery=true|false (ALPHA - default=false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
-NodeSwap=true|false (ALPHA - default=false)
+NodeSwap=true|false (BETA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
-PodHasNetworkCondition=true|false (ALPHA - default=false)
+PodHostIPs=true|false (ALPHA - default=false)
+PodIndexLabel=true|false (BETA - default=true)
+PodReadyToStartContainersCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (BETA - default=true)
-ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
-ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (BETA - default=true)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
-RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (BETA - default=true)
+SchedulerQueueingHints=true|false (BETA - default=true)
SecurityContextDeny=true|false (ALPHA - default=false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - default=false)
+ServiceNodePortStaticSubrange=true|false (BETA - default=true)
+SidecarContainers=true|false (ALPHA - default=false)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - default=false)
StableLoadBalancerNodeSet=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (BETA - default=true)
StatefulSetStartOrdinal=true|false (BETA - default=true)
@@ -911,10 +964,11 @@ StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
-TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
-ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)
+TopologyManagerPolicyOptions=true|false (BETA - default=true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)
+UserNamespacesSupport=true|false (ALPHA - default=false)
+ValidatingAdmissionPolicy=true|false (BETA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WatchList=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
@@ -926,10 +980,9 @@ WindowsHostNetwork=true|false (ALPHA - default=true) APIListChunking=true|false (BETA - 默认值为 true)
APIPriorityAndFairness=true|false (BETA - 默认值为 true)
APIResponseCompression=true|false (BETA - 默认值为 true)
-APISelfSubjectReview=true|false (BETA - 默认值为 true)
APIServerIdentity=true|false (BETA - 默认值为 true)
APIServerTracing=true|false (BETA - 默认值为 true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - 默认值为 false)
+AdmissionWebhookMatchConditions=true|false (BETA - 默认值为 true)
AggregatedDiscoveryEndpoint=true|false (BETA - 默认值为 true)
AllAlpha=true|false (ALPHA - 默认值为 false)
AllBeta=true|false (BETA - 默认值为 false)
@@ -938,32 +991,32 @@ AppArmor=true|false (BETA - 默认值为 true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
CPUManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
CPUManagerPolicyOptions=true|false (BETA - 默认值为 true)
+CRDValidationRatcheting=true|false (ALPHA - 默认值为 false)
CSIMigrationPortworx=true|false (BETA - 默认值为 false)
-CSIMigrationRBD=true|false (ALPHA - 默认值为 false)
CSINodeExpandSecret=true|false (BETA - 默认值为 true)
CSIVolumeHealth=true|false (ALPHA - 默认值为 false)
CloudControllerManagerWebhook=true|false (ALPHA - 默认值为 false)
CloudDualStackNodeIPs=true|false (ALPHA - 默认值为 false)
ClusterTrustBundle=true|false (ALPHA - 默认值为 false)
ComponentSLIs=true|false (BETA - 默认值为 true)
+ConsistentListFromCache=true|false (ALPHA - 默认值为 false)
ContainerCheckpoint=true|false (ALPHA - 默认值为 false)
ContextualLogging=true|false (ALPHA - 默认值为 false)
+CronJobsScheduledAnnotation=true|false (BETA - 默认值为 true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - 默认值为 false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值为 false)
CustomResourceValidationExpressions=true|false (BETA - 默认值为 true)
+DevicePluginCDIDevices=true|false (ALPHA - 默认值为 false)
DisableCloudProviders=true|false (ALPHA - 默认值为 false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值为 false)
DynamicResourceAllocation=true|false (ALPHA - 默认值为 false)
ElasticIndexedJob=true|false (BETA - 默认值为 true)
EventedPLEG=true|false (BETA - 默认值为 false)
-ExpandedDNSConfig=true|false (BETA - 默认值为 true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值为 false)
GracefulNodeShutdown=true|false (BETA - 默认值为 true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值为 true)
HPAContainerMetrics=true|false (BETA - 默认值为 true)
HPAScaleToZero=true|false (ALPHA - 默认值为 false)
HonorPVReclaimPolicy=true|false (ALPHA - 默认值为 false)
-IPTablesOwnershipCleanup=true|false (BETA - 默认值为 true)
InPlacePodVerticalScaling=true|false (ALPHA - 默认值为 false)
InTreePluginAWSUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值为 false)
@@ -971,18 +1024,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginGCEUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginPortworxUnregister=true|false (ALPHA - 默认值为 false)
-InTreePluginRBDUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginvSphereUnregister=true|false (ALPHA - 默认值为 false)
+JobBackoffLimitPerIndex=true|false (ALPHA - 默认值为 false)
JobPodFailurePolicy=true|false (BETA - 默认值为 true)
+JobPodReplacementPolicy=true|false (ALPHA - 默认值为 false)
JobReadyPods=true|false (BETA - 默认值为 true)
KMSv2=true|false (BETA - 默认值为 true)
+KMSv2KDF=true|false (BETA - 默认值为 false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - 默认值为 false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - 默认值为 false)
KubeletInUserNamespace=true|false (ALPHA - 默认值为 false)
-KubeletPodResources=true|false (BETA - 默认值为 true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - 默认值为 false)
KubeletPodResourcesGet=true|false (ALPHA - 默认值为 false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值为 true)
KubeletTracing=true|false (BETA - 默认值为 true)
-LegacyServiceAccountTokenTracking=true|false (BETA - 默认值为 true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - 默认值为 false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值为 false)
LogarithmicScaleDown=true|false (BETA - 默认值为 true)
LoggingAlphaOptions=true|false (ALPHA - 默认值为 false)
@@ -992,35 +1047,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - 默认值为 false)
MemoryManager=true|false (BETA - 默认值为 true)
MemoryQoS=true|false (ALPHA - 默认值为 false)
MinDomainsInPodTopologySpread=true|false (BETA - 默认值为 true)
-MinimizeIPTablesRestore=true|false (BETA - 默认值为 true)
MultiCIDRRangeAllocator=true|false (ALPHA - 默认值为 false)
MultiCIDRServiceAllocator=true|false (ALPHA - 默认值为 false)
-NetworkPolicyStatus=true|false (ALPHA - 默认值为 false)
NewVolumeManagerReconstruction=true|false (BETA - 默认值为 true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - 默认值为 true)
NodeLogQuery=true|false (ALPHA - 默认值为 false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - 默认值为 true)
-NodeSwap=true|false (ALPHA - 默认值为 false)
+NodeSwap=true|false (BETA - 默认值为 false)
OpenAPIEnums=true|false (BETA - 默认值为 true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - 默认值为 true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - 默认值为 false)
PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值为 false)
PodDeletionCost=true|false (BETA - 默认值为 true)
PodDisruptionConditions=true|false (BETA - 默认值为 true)
-PodHasNetworkCondition=true|false (ALPHA - 默认值为 false)
+PodHostIPs=true|false (ALPHA - 默认值为 false)
+PodIndexLabel=true|false (BETA - 默认值为 true)
+PodReadyToStartContainersCondition=true|false (ALPHA - 默认值为 false)
PodSchedulingReadiness=true|false (BETA - 默认值为 true)
-ProbeTerminationGracePeriod=true|false (BETA - 默认值为 true)
ProcMountType=true|false (ALPHA - 默认值为 false)
-ProxyTerminatingEndpoints=true|false (BETA - 默认值为 true)
QOSReserved=true|false (ALPHA - 默认值为 false)
ReadWriteOncePod=true|false (BETA - 默认值为 true)
RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值为 false)
RemainingItemCount=true|false (BETA - 默认值为 true)
-RetroactiveDefaultStorageClass=true|false (BETA - 默认值为 true)
RotateKubeletServerCertificate=true|false (BETA - 默认值为 true)
SELinuxMountReadWriteOncePod=true|false (BETA - 默认值为 true)
+SchedulerQueueingHints=true|false (BETA - 默认值为 true)
SecurityContextDeny=true|false (ALPHA - 默认值为 false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - 默认值为 false)
+ServiceNodePortStaticSubrange=true|false (BETA - 默认值为 true)
+SidecarContainers=true|false (ALPHA - 默认值为 false)
SizeMemoryBackedVolumes=true|false (BETA - 默认值为 true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - 默认值为 false)
StableLoadBalancerNodeSet=true|false (BETA - 默认值为 true)
StatefulSetAutoDeletePVC=true|false (BETA - 默认值为 true)
StatefulSetStartOrdinal=true|false (BETA - 默认值为 true)
@@ -1028,10 +1083,11 @@ StorageVersionAPI=true|false (ALPHA - 默认值为 false)
StorageVersionHash=true|false (BETA - 默认值为 true)
TopologyAwareHints=true|false (BETA - 默认值为 true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 false)
-TopologyManagerPolicyOptions=true|false (ALPHA - 默认值为 false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - 默认值为 false)
-ValidatingAdmissionPolicy=true|false (ALPHA - 默认值为 false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
+TopologyManagerPolicyOptions=true|false (BETA - 默认值为 true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - 默认值为 false)
+UserNamespacesSupport=true|false (ALPHA - 默认值为 false)
+ValidatingAdmissionPolicy=true|false (BETA - 默认值为 false)
VolumeCapacityPriority=true|false (ALPHA - 默认值为 false)
WatchList=true|false (ALPHA - 默认值为 false)
WinDSR=true|false (ALPHA - 默认值为 false)
@@ -1194,9 +1250,9 @@ Path to kubeconfig file with authorization and master location information. -节点控制器在执行 Pod 驱逐操作逻辑时, +node-lifecycle-controller 在执行 Pod 驱逐操作逻辑时, 基于此标志所设置的节点个数阈值来判断所在集群是否为大规模集群。 当集群规模小于等于此规模时, --secondary-node-eviction-rate 会被隐式重设为 0。 @@ -1312,6 +1368,18 @@ Path to the config file for controller leader migration, or empty to use the val

+ +--legacy-service-account-token-clean-up-period duration     默认值:8760h0m0s + + +

+ +从最近一次使用某个旧的服务账号令牌计起,到该令牌可以被删除之前的时长。 +

+ + --log-flush-frequency duration     默认值:5s @@ -1386,9 +1454,9 @@ The resync period in reflectors will be random between MinResyncPeriod and 2*Min -EndpointSliceMirroring 控制器将同时执行的服务端点同步操作数。 +endpointslice-mirroring-controller 将同时执行的服务端点同步操作数。 较大的数量 = 更快的端点切片更新,但 CPU(和网络)负载更多。 默认为 5。 @@ -1399,9 +1467,9 @@ EndpointSliceMirroring 控制器将同时执行的服务端点同步操作数。 -EndpointSlice 的长度更新了 EndpointSliceMirroring 控制器的批处理周期。 +EndpointSlice 的长度会更新 endpointslice-mirroring-controller 的批处理周期。 EndpointSlice 更改的处理将延迟此持续时间, 以使它们与潜在的即将进行的更新结合在一起,并减少 EndpointSlice 更新的总数。 较大的数量 = 较高的端点编程延迟,但是生成的端点修订版本数量较少 @@ -1414,9 +1482,9 @@ EndpointSlice 更改的处理将延迟此持续时间, -EndpointSliceMirroring 控制器将添加到 EndpointSlice 的最大端点数。 +endpointslice-mirroring-controller 可添加到某 EndpointSlice 的端点个数上限。 每个分片的端点越多,端点分片越少,但资源越大。默认为 100。 @@ -1504,9 +1572,9 @@ Amount of time which we allow running Node to be unresponsive before marking it -节点控制器对节点状态进行同步的重复周期。 +cloud-node-lifecycle-controller 对节点状态进行同步的周期。 @@ -1949,9 +2017,10 @@ number for the log level verbosity--> -打印版本信息之后退出。 +--version, --version=raw 打印版本信息之后退出; +--version=vX.Y.Z... 设置报告的版本。 @@ -1967,31 +2036,6 @@ comma-separated list of pattern=N settings for file-filtered logging (only works - ---volume-host-allow-local-loopback     默认值:true - - - - -此标志为 false 时,禁止本地回路 IP 地址和 --volume-host-cidr-denylist -中所指定的 CIDR 范围。 - - - - ---volume-host-cidr-denylist strings - - - - -用逗号分隔的一个 CIDR 范围列表,禁止使用这些地址上的卷插件。 - - - diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md index 7df32035bee08..c358565a9a14a 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md @@ -9,17 +9,6 @@ content_type: tool-reference weight: 30 --> - - ## {{% heading "synopsis" %}} + 逗号分隔的文件列表,用于检查 boot-id。使用第一个存在的文件。

@@ -250,10 +240,9 @@ A set of key=value pairs that describe feature gates for alpha/experimental feat APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
-APISelfSubjectReview=true|false (BETA - default=true)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (BETA - default=true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - default=false)
+AdmissionWebhookMatchConditions=true|false (BETA - default=true)
AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
@@ -262,32 +251,32 @@ AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
+CRDValidationRatcheting=true|false (ALPHA - default=false)
CSIMigrationPortworx=true|false (BETA - default=false)
-CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (BETA - default=true)
CSIVolumeHealth=true|false (ALPHA - default=false)
CloudControllerManagerWebhook=true|false (ALPHA - default=false)
CloudDualStackNodeIPs=true|false (ALPHA - default=false)
ClusterTrustBundle=true|false (ALPHA - default=false)
ComponentSLIs=true|false (BETA - default=true)
+ConsistentListFromCache=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
+CronJobsScheduledAnnotation=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
+DevicePluginCDIDevices=true|false (ALPHA - default=false)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DynamicResourceAllocation=true|false (ALPHA - default=false)
ElasticIndexedJob=true|false (BETA - default=true)
EventedPLEG=true|false (BETA - default=false)
-ExpandedDNSConfig=true|false (BETA - default=true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (BETA - default=true)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
-IPTablesOwnershipCleanup=true|false (BETA - default=true)
InPlacePodVerticalScaling=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
@@ -295,18 +284,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
-InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
+JobBackoffLimitPerIndex=true|false (ALPHA - default=false)
JobPodFailurePolicy=true|false (BETA - default=true)
-JobReadyPods=true|false (BETA - default=true)
+JobPodReplacementPolicy=true|false (ALPHA - default=false)
+JobReadyPods=true|false (BETA - default=true)<br/>
KMSv2=true|false (BETA - default=true)
+KMSv2KDF=true|false (BETA - default=false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - default=false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
-KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)
KubeletPodResourcesGet=true|false (ALPHA - default=false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (BETA - default=true)
-LegacyServiceAccountTokenTracking=true|false (BETA - default=true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
@@ -316,35 +307,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=true)
-MinimizeIPTablesRestore=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
MultiCIDRServiceAllocator=true|false (ALPHA - default=false)
-NetworkPolicyStatus=true|false (ALPHA - default=false)
NewVolumeManagerReconstruction=true|false (BETA - default=true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeLogQuery=true|false (ALPHA - default=false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
-NodeSwap=true|false (ALPHA - default=false)
+NodeSwap=true|false (BETA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
-PodHasNetworkCondition=true|false (ALPHA - default=false)
+PodHostIPs=true|false (ALPHA - default=false)
+PodIndexLabel=true|false (BETA - default=true)
+PodReadyToStartContainersCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (BETA - default=true)
-ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
-ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (BETA - default=true)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
-RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (BETA - default=true)
+SchedulerQueueingHints=true|false (BETA - default=true)
SecurityContextDeny=true|false (ALPHA - default=false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - default=false)
+ServiceNodePortStaticSubrange=true|false (BETA - default=true)
+SidecarContainers=true|false (ALPHA - default=false)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - default=false)
StableLoadBalancerNodeSet=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (BETA - default=true)
StatefulSetStartOrdinal=true|false (BETA - default=true)
@@ -352,10 +343,11 @@ StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
-TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
-ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)
+TopologyManagerPolicyOptions=true|false (BETA - default=true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)
+UserNamespacesSupport=true|false (ALPHA - default=false)
+ValidatingAdmissionPolicy=true|false (BETA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WatchList=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
@@ -367,10 +359,9 @@ This parameter is ignored if a config file is specified by --config. APIListChunking=true|false (BETA - 默认值为 true)
APIPriorityAndFairness=true|false (BETA - 默认值为 true)
APIResponseCompression=true|false (BETA - 默认值为 true)
-APISelfSubjectReview=true|false (BETA - 默认值为 true)
APIServerIdentity=true|false (BETA - 默认值为 true)
APIServerTracing=true|false (BETA - 默认值为 true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - 默认值为 false)
+AdmissionWebhookMatchConditions=true|false (BETA - 默认值为 true)
AggregatedDiscoveryEndpoint=true|false (BETA - 默认值为 true)
AllAlpha=true|false (ALPHA - 默认值为 false)
AllBeta=true|false (BETA - 默认值为 false)
@@ -379,32 +370,32 @@ AppArmor=true|false (BETA - 默认值为 true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
CPUManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
CPUManagerPolicyOptions=true|false (BETA - 默认值为 true)
+CRDValidationRatcheting=true|false (ALPHA - 默认值为 false)
CSIMigrationPortworx=true|false (BETA - 默认值为 false)
-CSIMigrationRBD=true|false (ALPHA - 默认值为 false)
CSINodeExpandSecret=true|false (BETA - 默认值为 true)
CSIVolumeHealth=true|false (ALPHA - 默认值为 false)
CloudControllerManagerWebhook=true|false (ALPHA - 默认值为 false)
CloudDualStackNodeIPs=true|false (ALPHA - 默认值为 false)
ClusterTrustBundle=true|false (ALPHA - 默认值为 false)
ComponentSLIs=true|false (BETA - 默认值为 true)
+ConsistentListFromCache=true|false (ALPHA - 默认值为 false)
ContainerCheckpoint=true|false (ALPHA - 默认值为 false)
ContextualLogging=true|false (ALPHA - 默认值为 false)
+CronJobsScheduledAnnotation=true|false (BETA - 默认值为 true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - 默认值为 false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值为 false)
CustomResourceValidationExpressions=true|false (BETA - 默认值为 true)
+DevicePluginCDIDevices=true|false (ALPHA - 默认值为 false)
DisableCloudProviders=true|false (ALPHA - 默认值为 false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值为 false)
DynamicResourceAllocation=true|false (ALPHA - 默认值为 false)
ElasticIndexedJob=true|false (BETA - 默认值为 true)
EventedPLEG=true|false (BETA - 默认值为 false)
-ExpandedDNSConfig=true|false (BETA - 默认值为 true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值为 false)
GracefulNodeShutdown=true|false (BETA - 默认值为 true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值为 true)
HPAContainerMetrics=true|false (BETA - 默认值为 true)
HPAScaleToZero=true|false (ALPHA - 默认值为 false)
HonorPVReclaimPolicy=true|false (ALPHA - 默认值为 false)
-IPTablesOwnershipCleanup=true|false (BETA - 默认值为 true)
InPlacePodVerticalScaling=true|false (ALPHA - 默认值为 false)
InTreePluginAWSUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值为 false)
@@ -412,18 +403,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginGCEUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginPortworxUnregister=true|false (ALPHA - 默认值为 false)
-InTreePluginRBDUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginvSphereUnregister=true|false (ALPHA - 默认值为 false)
+JobBackoffLimitPerIndex=true|false (ALPHA - 默认值为 false)
JobPodFailurePolicy=true|false (BETA - 默认值为 true)
-JobReadyPods=true|false (BETA - 默认值为 true)
+JobPodReplacementPolicy=true|false (ALPHA - 默认值为 false)
+JobReadyPods=true|false (BETA - 默认值为 true)<br/>
KMSv2=true|false (BETA - 默认值为 true)
+KMSv2KDF=true|false (BETA - 默认值为 false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - 默认值为 false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - 默认值为 false)
KubeletInUserNamespace=true|false (ALPHA - 默认值为 false)
-KubeletPodResources=true|false (BETA - 默认值为 true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - 默认值为 false)
KubeletPodResourcesGet=true|false (ALPHA - 默认值为 false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值为 true)
KubeletTracing=true|false (BETA - 默认值为 true)
-LegacyServiceAccountTokenTracking=true|false (BETA - 默认值为 true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - 默认值为 false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值为 false)
LogarithmicScaleDown=true|false (BETA - 默认值为 true)
LoggingAlphaOptions=true|false (ALPHA - 默认值为 false)
@@ -433,35 +426,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - 默认值为 false)
MemoryManager=true|false (BETA - 默认值为 true)
MemoryQoS=true|false (ALPHA - 默认值为 false)
MinDomainsInPodTopologySpread=true|false (BETA - 默认值为 true)
-MinimizeIPTablesRestore=true|false (BETA - 默认值为 true)
MultiCIDRRangeAllocator=true|false (ALPHA - 默认值为 false)
MultiCIDRServiceAllocator=true|false (ALPHA - 默认值为 false)
-NetworkPolicyStatus=true|false (ALPHA - 默认值为 false)
NewVolumeManagerReconstruction=true|false (BETA - 默认值为 true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - 默认值为 true)
NodeLogQuery=true|false (ALPHA - 默认值为 false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - 默认值为 true)
-NodeSwap=true|false (ALPHA - 默认值为 false)
+NodeSwap=true|false (BETA - 默认值为 false)
OpenAPIEnums=true|false (BETA - 默认值为 true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - 默认值为 true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - 默认值为 false)
PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值为 false)
PodDeletionCost=true|false (BETA - 默认值为 true)
PodDisruptionConditions=true|false (BETA - 默认值为 true)
-PodHasNetworkCondition=true|false (ALPHA - 默认值为 false)
+PodHostIPs=true|false (ALPHA - 默认值为 false)
+PodIndexLabel=true|false (BETA - 默认值为 true)
+PodReadyToStartContainersCondition=true|false (ALPHA - 默认值为 false)
PodSchedulingReadiness=true|false (BETA - 默认值为 true)
-ProbeTerminationGracePeriod=true|false (BETA - 默认值为 true)
ProcMountType=true|false (ALPHA - 默认值为 false)
-ProxyTerminatingEndpoints=true|false (BETA - 默认值为 true)
QOSReserved=true|false (ALPHA - 默认值为 false)
ReadWriteOncePod=true|false (BETA - 默认值为 true)
RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值为 false)
RemainingItemCount=true|false (BETA - 默认值为 true)
-RetroactiveDefaultStorageClass=true|false (BETA - 默认值为 true)
RotateKubeletServerCertificate=true|false (BETA - 默认值为 true)
SELinuxMountReadWriteOncePod=true|false (BETA - 默认值为 true)
+SchedulerQueueingHints=true|false (BETA - 默认值为 true)
SecurityContextDeny=true|false (ALPHA - 默认值为 false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - 默认值为 false)
+ServiceNodePortStaticSubrange=true|false (BETA - 默认值为 true)
+SidecarContainers=true|false (ALPHA - 默认值为 false)
SizeMemoryBackedVolumes=true|false (BETA - 默认值为 true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - 默认值为 false)
StableLoadBalancerNodeSet=true|false (BETA - 默认值为 true)
StatefulSetAutoDeletePVC=true|false (BETA - 默认值为 true)
StatefulSetStartOrdinal=true|false (BETA - 默认值为 true)
@@ -469,10 +462,11 @@ StorageVersionAPI=true|false (ALPHA - 默认值为 false)
StorageVersionHash=true|false (BETA - 默认值为 true)
TopologyAwareHints=true|false (BETA - 默认值为 true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 false)
-TopologyManagerPolicyOptions=true|false (ALPHA - 默认值为 false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - 默认值为 false)
-ValidatingAdmissionPolicy=true|false (ALPHA - 默认值为 false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
+TopologyManagerPolicyOptions=true|false (BETA - 默认值为 true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - 默认值为 false)
+UserNamespacesSupport=true|false (ALPHA - 默认值为 false)
+ValidatingAdmissionPolicy=true|false (BETA - 默认值为 false)
VolumeCapacityPriority=true|false (ALPHA - 默认值为 false)
WatchList=true|false (ALPHA - 默认值为 false)
WinDSR=true|false (ALPHA - 默认值为 false)
@@ -740,6 +734,20 @@ Path to kubeconfig file with authorization information (the master location is s + +--log-flush-frequency duration     默认值:5s + + + +

+ +两次日志刷新之间的最大秒数 +

+ + + --log_backtrace_at <“file:N” 格式的字符串>     默认值:0 @@ -789,6 +797,20 @@ Defines the maximum size a log file can grow to (no effect when -logtostderr=tru

+ +--logging-format string     默认值:"text" + + + +

+ +设置日志格式。允许的格式为:"text"。 +

+ + + --logtostderr     默认值:true @@ -1043,25 +1065,29 @@ number for the log level verbosity --version version[=true] -

+ +

-打印版本信息并退出。 +--version, --version=raw 打印版本信息并退出; +--version=vX.Y.Z... 设置报告的版本。

---vmodule <逗号分割的 “pattern=N” 设置> +--vmodule pattern=N,... -

+ +

-以逗号分割的 pattern=N 设置的列表,用于文件过滤日志 -

+以逗号分割的 pattern=N 设置的列表,用于文件过滤日志(仅适用于文本日志格式) +

+ @@ -1079,4 +1105,3 @@ If set, write the default configuration values to this file and exit. - diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md index 6b1f3cc58eeb8..518d5e9fa9a6c 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -10,17 +10,6 @@ weight: 30 auto_generated: true --> - - ## {{% heading "synopsis" %}} 监听 --secure-port 端口的 IP 地址。 集群的其余部分以及 CLI/ Web 客户端必须可以访问关联的接口。 如果为空,将使用所有接口(0.0.0.0 表示使用所有 IPv4 接口,“::” 表示使用所有 IPv6 接口)。 -如果为空或未指定地址 (0.0.0.0 或 ::),所有接口将被使用。 +如果为空或未指定地址 (0.0.0.0 或 ::),所有接口和 IP 地址簇将被使用。 @@ -274,10 +262,9 @@ A set of key=value pairs that describe feature gates for alpha/experimental feat APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
-APISelfSubjectReview=true|false (BETA - default=true)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (BETA - default=true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - default=false)
+AdmissionWebhookMatchConditions=true|false (BETA - default=true)
AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
@@ -286,32 +273,32 @@ AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
+CRDValidationRatcheting=true|false (ALPHA - default=false)
CSIMigrationPortworx=true|false (BETA - default=false)
-CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (BETA - default=true)
CSIVolumeHealth=true|false (ALPHA - default=false)
CloudControllerManagerWebhook=true|false (ALPHA - default=false)
CloudDualStackNodeIPs=true|false (ALPHA - default=false)
ClusterTrustBundle=true|false (ALPHA - default=false)
ComponentSLIs=true|false (BETA - default=true)
+ConsistentListFromCache=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
+CronJobsScheduledAnnotation=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
+DevicePluginCDIDevices=true|false (ALPHA - default=false)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DynamicResourceAllocation=true|false (ALPHA - default=false)
ElasticIndexedJob=true|false (BETA - default=true)
EventedPLEG=true|false (BETA - default=false)
-ExpandedDNSConfig=true|false (BETA - default=true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (BETA - default=true)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
-IPTablesOwnershipCleanup=true|false (BETA - default=true)
InPlacePodVerticalScaling=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
@@ -319,18 +306,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
-InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
+JobBackoffLimitPerIndex=true|false (ALPHA - default=false)
JobPodFailurePolicy=true|false (BETA - default=true)
+JobPodReplacementPolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (BETA - default=true)
+KMSv2KDF=true|false (BETA - default=false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - default=false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
-KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)
KubeletPodResourcesGet=true|false (ALPHA - default=false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (BETA - default=true)
-LegacyServiceAccountTokenTracking=true|false (BETA - default=true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
@@ -340,35 +329,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=true)
-MinimizeIPTablesRestore=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
MultiCIDRServiceAllocator=true|false (ALPHA - default=false)
-NetworkPolicyStatus=true|false (ALPHA - default=false)
NewVolumeManagerReconstruction=true|false (BETA - default=true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeLogQuery=true|false (ALPHA - default=false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
-NodeSwap=true|false (ALPHA - default=false)
+NodeSwap=true|false (BETA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
-PodHasNetworkCondition=true|false (ALPHA - default=false)
+PodHostIPs=true|false (ALPHA - default=false)
+PodIndexLabel=true|false (BETA - default=true)
+PodReadyToStartContainersCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (BETA - default=true)
-ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
-ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (BETA - default=true)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
-RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (BETA - default=true)
+SchedulerQueueingHints=true|false (BETA - default=true)
SecurityContextDeny=true|false (ALPHA - default=false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - default=false)
+ServiceNodePortStaticSubrange=true|false (BETA - default=true)
+SidecarContainers=true|false (ALPHA - default=false)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - default=false)
StableLoadBalancerNodeSet=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (BETA - default=true)
StatefulSetStartOrdinal=true|false (BETA - default=true)
@@ -376,10 +365,11 @@ StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
-TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
-ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)
+TopologyManagerPolicyOptions=true|false (BETA - default=true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)
+UserNamespacesSupport=true|false (ALPHA - default=false)
+ValidatingAdmissionPolicy=true|false (BETA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WatchList=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
@@ -390,10 +380,9 @@ WindowsHostNetwork=true|false (ALPHA - default=true) APIListChunking=true|false (BETA - 默认值为 true)
APIPriorityAndFairness=true|false (BETA - 默认值为 true)
APIResponseCompression=true|false (BETA - 默认值为 true)
-APISelfSubjectReview=true|false (BETA - 默认值为 true)
APIServerIdentity=true|false (BETA - 默认值为 true)
APIServerTracing=true|false (BETA - 默认值为 true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - 默认值为 false)
+AdmissionWebhookMatchConditions=true|false (BETA - 默认值为 true)
AggregatedDiscoveryEndpoint=true|false (BETA - 默认值为 true)
AllAlpha=true|false (ALPHA - 默认值为 false)
AllBeta=true|false (BETA - 默认值为 false)
@@ -402,32 +391,32 @@ AppArmor=true|false (BETA - 默认值为 true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
CPUManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
CPUManagerPolicyOptions=true|false (BETA - 默认值为 true)
+CRDValidationRatcheting=true|false (ALPHA - 默认值为 false)
CSIMigrationPortworx=true|false (BETA - 默认值为 false)
-CSIMigrationRBD=true|false (ALPHA - 默认值为 false)
CSINodeExpandSecret=true|false (BETA - 默认值为 true)
CSIVolumeHealth=true|false (ALPHA - 默认值为 false)
CloudControllerManagerWebhook=true|false (ALPHA - 默认值为 false)
CloudDualStackNodeIPs=true|false (ALPHA - 默认值为 false)
ClusterTrustBundle=true|false (ALPHA - 默认值为 false)
ComponentSLIs=true|false (BETA - 默认值为 true)
+ConsistentListFromCache=true|false (ALPHA - 默认值为 false)
ContainerCheckpoint=true|false (ALPHA - 默认值为 false)
ContextualLogging=true|false (ALPHA - 默认值为 false)
+CronJobsScheduledAnnotation=true|false (BETA - 默认值为 true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - 默认值为 false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值为 false)
CustomResourceValidationExpressions=true|false (BETA - 默认值为 true)
+DevicePluginCDIDevices=true|false (ALPHA - 默认值为 false)
DisableCloudProviders=true|false (ALPHA - 默认值为 false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值为 false)
DynamicResourceAllocation=true|false (ALPHA - 默认值为 false)
ElasticIndexedJob=true|false (BETA - 默认值为 true)
EventedPLEG=true|false (BETA - 默认值为 false)
-ExpandedDNSConfig=true|false (BETA - 默认值为 true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值为 false)
GracefulNodeShutdown=true|false (BETA - 默认值为 true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值为 true)
HPAContainerMetrics=true|false (BETA - 默认值为 true)
HPAScaleToZero=true|false (ALPHA - 默认值为 false)
HonorPVReclaimPolicy=true|false (ALPHA - 默认值为 false)
-IPTablesOwnershipCleanup=true|false (BETA - 默认值为 true)
InPlacePodVerticalScaling=true|false (ALPHA - 默认值为 false)
InTreePluginAWSUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值为 false)
@@ -435,18 +424,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginGCEUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginPortworxUnregister=true|false (ALPHA - 默认值为 false)
-InTreePluginRBDUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginvSphereUnregister=true|false (ALPHA - 默认值为 false)
+JobBackoffLimitPerIndex=true|false (ALPHA - 默认值为 false)
JobPodFailurePolicy=true|false (BETA - 默认值为 true)
+JobPodReplacementPolicy=true|false (ALPHA - 默认值为 false)
JobReadyPods=true|false (BETA - 默认值为 true)
KMSv2=true|false (BETA - 默认值为 true)
+KMSv2KDF=true|false (BETA - 默认值为 false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - 默认值为 false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - 默认值为 false)
KubeletInUserNamespace=true|false (ALPHA - 默认值为 false)
-KubeletPodResources=true|false (BETA - 默认值为 true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - 默认值为 false)
KubeletPodResourcesGet=true|false (ALPHA - 默认值为 false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值为 true)
KubeletTracing=true|false (BETA - 默认值为 true)
-LegacyServiceAccountTokenTracking=true|false (BETA - 默认值为 true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - 默认值为 false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值为 false)
LogarithmicScaleDown=true|false (BETA - 默认值为 true)
LoggingAlphaOptions=true|false (ALPHA - 默认值为 false)
@@ -456,35 +447,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - 默认值为 false)
MemoryManager=true|false (BETA - 默认值为 true)
MemoryQoS=true|false (ALPHA - 默认值为 false)
MinDomainsInPodTopologySpread=true|false (BETA - 默认值为 true)
-MinimizeIPTablesRestore=true|false (BETA - 默认值为 true)
MultiCIDRRangeAllocator=true|false (ALPHA - 默认值为 false)
MultiCIDRServiceAllocator=true|false (ALPHA - 默认值为 false)
-NetworkPolicyStatus=true|false (ALPHA - 默认值为 false)
NewVolumeManagerReconstruction=true|false (BETA - 默认值为 true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - 默认值为 true)
NodeLogQuery=true|false (ALPHA - 默认值为 false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - 默认值为 true)
-NodeSwap=true|false (ALPHA - 默认值为 false)
+NodeSwap=true|false (BETA - 默认值为 false)
OpenAPIEnums=true|false (BETA - 默认值为 true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - 默认值为 true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - 默认值为 false)
PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值为 false)
PodDeletionCost=true|false (BETA - 默认值为 true)
PodDisruptionConditions=true|false (BETA - 默认值为 true)
-PodHasNetworkCondition=true|false (ALPHA - 默认值为 false)
+PodHostIPs=true|false (ALPHA - 默认值为 false)
+PodIndexLabel=true|false (BETA - 默认值为 true)
+PodReadyToStartContainersCondition=true|false (ALPHA - 默认值为 false)
PodSchedulingReadiness=true|false (BETA - 默认值为 true)
-ProbeTerminationGracePeriod=true|false (BETA - 默认值为 true)
ProcMountType=true|false (ALPHA - 默认值为 false)
-ProxyTerminatingEndpoints=true|false (BETA - 默认值为 true)
QOSReserved=true|false (ALPHA - 默认值为 false)
ReadWriteOncePod=true|false (BETA - 默认值为 true)
RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值为 false)
RemainingItemCount=true|false (BETA - 默认值为 true)
-RetroactiveDefaultStorageClass=true|false (BETA - 默认值为 true)
RotateKubeletServerCertificate=true|false (BETA - 默认值为 true)
SELinuxMountReadWriteOncePod=true|false (BETA - 默认值为 true)
+SchedulerQueueingHints=true|false (BETA - 默认值为 true)
SecurityContextDeny=true|false (ALPHA - 默认值为 false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - 默认值为 false)
+ServiceNodePortStaticSubrange=true|false (BETA - 默认值为 true)
+SidecarContainers=true|false (ALPHA - 默认值为 false)
SizeMemoryBackedVolumes=true|false (BETA - 默认值为 true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - 默认值为 false)
StableLoadBalancerNodeSet=true|false (BETA - 默认值为 true)
StatefulSetAutoDeletePVC=true|false (BETA - 默认值为 true)
StatefulSetStartOrdinal=true|false (BETA - 默认值为 true)
@@ -492,10 +483,11 @@ StorageVersionAPI=true|false (ALPHA - 默认值为 false)
StorageVersionHash=true|false (BETA - 默认值为 true)
TopologyAwareHints=true|false (BETA - 默认值为 true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 false)
-TopologyManagerPolicyOptions=true|false (ALPHA - 默认值为 false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - 默认值为 false)
-ValidatingAdmissionPolicy=true|false (ALPHA - 默认值为 false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
+TopologyManagerPolicyOptions=true|false (BETA - 默认值为 true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - 默认值为 false)
+UserNamespacesSupport=true|false (ALPHA - 默认值为 false)
+ValidatingAdmissionPolicy=true|false (BETA - 默认值为 false)
VolumeCapacityPriority=true|false (ALPHA - 默认值为 false)
WatchList=true|false (ALPHA - 默认值为 false)
WinDSR=true|false (ALPHA - 默认值为 false)
@@ -669,33 +661,6 @@ The duration the clients should wait between attempting acquisition and renewal - ---lock-object-name string     默认值:"kube-scheduler" - - - - -已弃用: 定义锁对象的名称。将被删除以便使用 --leader-elect-resource-name。 -如果 --config 指定了一个配置文件,那么这个参数将被忽略。 - - - - ---lock-object-namespace string     默认值:"kube-system" - - - - -已弃用: 定义锁对象的命名空间。将被删除以便使用 leader-elect-resource-namespace。 -如果 --config 指定了一个配置文件,那么这个参数将被忽略。 - - - - --log-flush-frequency duration     默认值:5s @@ -994,9 +959,10 @@ number for the log level verbosity -打印版本信息并退出。 +--version, --version=raw 打印版本信息并推出; +--version=vX.Y.Z... 设置报告的版本。 diff --git a/content/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1.md b/content/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1.md index 83ceea90d0a67..5ea667ab903eb 100644 --- a/content/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1.md +++ b/content/zh-cn/docs/reference/config-api/apiserver-config.v1alpha1.md @@ -24,6 +24,62 @@ Package v1alpha1 is the v1alpha1 version of the API. - [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration) - [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) +## `TracingConfiguration` {#TracingConfiguration} + + +**出现在:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + +

+ +TracingConfiguration 为 OpenTelemetry 跟踪客户端提供了不同版本的配置。 +

+ + + + + + + + + + + + +
字段描述
endpoint
+string +
+

+ + 采集器的端点,此组件将向其报告跟踪信息。 + 连接不安全,目前不支持 TLS。 + 推荐不设置,端点为 otlp grpc 默认值 localhost:4317。 +

+
samplingRatePerMillion
+int32 +
+

+ + SamplingRatePerMillion 是每百万 span 中采集的样本数。 + 推荐不设置。如果不设置,采集器将继承其父级 span 的采样率,否则不进行采样。 +

+
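For orientation, the two fields above are normally written into a small tracing configuration file that the API server consumes. This is a hypothetical sketch, not part of the generated reference; the endpoint and sampling rate are placeholder values.

```yaml
# Hypothetical tracing configuration file (placeholder values).
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# Collector endpoint; when unset, the OTLP gRPC default localhost:4317 is used.
endpoint: localhost:4317
# Sample 100 spans out of every million; when unset, the parent span's decision is inherited.
samplingRatePerMillion: 100
```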
+ ## `AdmissionConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration}

@@ -511,59 +567,3 @@ UDSTransport 设置通过 UDS 连接 konnectivity 服务器时需要的信息。 - -## `TracingConfiguration` {#TracingConfiguration} - - -**出现在:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) - -

- -TracingConfiguration 为 OpenTelemetry 跟踪客户端提供了不同版本的配置。 -

- - - - - - - - - - - - -
字段描述
endpoint
-string -
-

- - 采集器的端点,此组件将向其报告跟踪信息。 - 连接不安全,目前不支持 TLS。 - 推荐不设置,端点为 otlp grpc 默认值 localhost:4317。 -

-
samplingRatePerMillion
-int32 -
-

- - SamplingRatePerMillion 是每百万 span 中采集的样本数。 - 推荐不设置。如果不设置,采集器将继承其父级 span 的采样率,否则不进行采样。 -

-
diff --git a/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md b/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md index 1d35c64e2e081..b4610b736dc8a 100644 --- a/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md +++ b/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md @@ -52,6 +52,65 @@ EgressSelectorConfiguration 为 Egress 选择算符客户端提供版本化的 +## `TracingConfiguration` {#TracingConfiguration} + + +**出现在:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) + +

+ +TracingConfiguration 为 OpenTelemetry 跟踪客户端提供版本化的配置。 +

+ + + + + + + + + + + + + +
字段描述
endpoint
+string +
+

+ + 采集器的端点,此组件将向其报告跟踪信息。 + 连接不安全,目前不支持 TLS。 + 推荐不设置,端点为 otlp grpc 默认值 localhost:4317。 +

+
samplingRatePerMillion
+int32 +
+

+ + samplingRatePerMillion 是每百万 span 中采集的样本数。 + 推荐不设置。如果不设置,采集器将继承其父级 span 的采样率,否则不进行采样。 +

+
+ ## `TracingConfiguration` {#apiserver-k8s-io-v1beta1-TracingConfiguration}

@@ -412,61 +471,3 @@ UDSTransport 设置通过 UDS 连接 konnectivity 服务器时需要的信息。 - -## `TracingConfiguration` {#TracingConfiguration} - - -**出现在:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) - -

- -TracingConfiguration 为 OpenTelemetry 跟踪客户端提供了不同版本的配置。 -

- - - - - - - - - - - - -
FieldDescription
endpoint
-string -
-

- - 采集器的端点,此组件将向其报告跟踪信息。 - 连接不安全,目前不支持 TLS。 - 推荐不设置,端点为 otlp grpc 默认值 localhost:4317。 -

-
samplingRatePerMillion
-int32 -
-

- - samplingRatePerMillion 是每百万 span 中采集的样本数。 - 推荐不设置。如果不设置,采集器将继承其父级 span 的采样率,否则不进行采样。 -

-
\ No newline at end of file diff --git a/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1.md b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1.md index 3bad1b54515c7..636c76269af49 100644 --- a/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1.md +++ b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1.md @@ -509,7 +509,7 @@ with the "default-scheduler" profile, if present here. with the extender. These extenders are shared by all scheduler profiles. -->

extenders 字段为调度器扩展模块(Extender)的列表,每个元素包含如何与某扩展模块通信的配置信息。 - 所有调度器模仿会共享此扩展模块列表。

+ 所有调度器方案会共享此扩展模块列表。

delayCacheUntilActive [Required]
diff --git a/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md index 6db16ca75e85a..a20ddf418ef8e 100644 --- a/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md +++ b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md @@ -24,6 +24,248 @@ auto_generated: true - [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs) - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs) +## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

ClientConnectionConfiguration 中包含用来构造客户端所需的细节。

+ + + + + + + + + + + + + + + + + + + + + +
字段描述
kubeconfig [必需]
+string +
+ +

此字段为指向 KubeConfig 文件的路径。

+
acceptContentTypes [必需]
+string +
+ +

+ acceptContentTypes 定义的是客户端与服务器建立连接时要发送的 Accept 头部, + 这里的设置值会覆盖默认值 "application/json"。此字段会影响某特定客户端与服务器的所有连接。 +

+
contentType [必需]
+string +
+ +

+ contentType 包含的是此客户端向服务器发送数据时使用的内容类型(Content Type)。 +

+
qps [必需]
+float32 +
+ +

qps 控制此连接允许的每秒查询次数。

+
burst [必需]
+int32 +
+ +

burst 允许在客户端超出其速率限制时可以累积的额外查询个数。

+
+ +## `DebuggingConfiguration` {#DebuggingConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

DebuggingConfiguration 保存与调试功能相关的配置。

+ + + + + + + + + + + + +
字段描述
enableProfiling [必需]
+bool +
+ +

此字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。

+
enableContentionProfiling [必需]
+bool +
+ +

此字段在 enableProfiling 为 true 时允许执行阻塞性能分析。

+
+ +## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} + + +**出现在:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

+LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。 +

+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
leaderElect [必需]
+bool +
+ +

+ leaderElect 允许领导者选举客户端在进入主循环执行之前先获得领导者角色。 + 运行多副本组件时启用此功能有助于提高可用性。 +

+
leaseDuration [必需]
+meta/v1.Duration +
+ +

+ leaseDuration 是非领导角色候选者在观察到需要领导席位更新时要等待的时间; + 只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。 + 这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。 + 只有当启用了领导者选举时此字段有意义。 +

+
renewDeadline [必需]
+meta/v1.Duration +
+ +

+ renewDeadline 设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。 + 此值必须小于或等于租约期限的长度。只有到启用了领导者选举时此字段才有意义。 +

+
retryPeriod [必需]
+meta/v1.Duration +
+ +

+ retryPeriod 是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。 + 只有当启用了领导者选举时此字段才有意义。 +

+
resourceLock [必需]
+string +
+ +

resourceLock 给出在领导者选举期间要作为锁来使用的资源对象类型。

+
resourceName [必需]
+string +
+ +

resourceName 给出在领导者选举期间要作为锁来使用的资源对象名称。

+
resourceNamespace [必需]
+string +
+ +

resourceNamespace 给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。

+
+ ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta3-DefaultPreemptionArgs}

此字段为调度器扩展模块(Extender)的列表, 每个元素包含如何与某扩展模块通信的配置信息。 - 所有调度器模仿会共享此扩展模块列表。

+ 所有调度器方案会共享此扩展模块列表。

@@ -1450,245 +1692,3 @@ UtilizationShapePoint represents single point of priority function shape. - -## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} - - -**出现在:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

ClientConnectionConfiguration 中包含用来构造客户端所需的细节。

- - - - - - - - - - - - - - - - - - - - - -
字段描述
kubeconfig [必需]
-string -
- -

此字段为指向 KubeConfig 文件的路径。

-
acceptContentTypes [必需]
-string -
- -

- acceptContentTypes 定义的是客户端与服务器建立连接时要发送的 Accept 头部, - 这里的设置值会覆盖默认值 "application/json"。此字段会影响某特定客户端与服务器的所有连接。 -

-
contentType [必需]
-string -
- -

- contentType 包含的是此客户端向服务器发送数据时使用的内容类型(Content Type)。 -

-
qps [必需]
-float32 -
- -

qps 控制此连接允许的每秒查询次数。

-
burst [必需]
-int32 -
- -

burst 允许在客户端超出其速率限制时可以累积的额外查询个数。

-
- -## `DebuggingConfiguration` {#DebuggingConfiguration} - - -**出现在:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

DebuggingConfiguration 保存与调试功能相关的配置。

- - - - - - - - - - - - -
字段描述
enableProfiling [必需]
-bool -
- -

此字段允许通过 Web 接口 host:port/debug/pprof/ 执行性能分析。

-
enableContentionProfiling [必需]
-bool -
- -

此字段在 enableProfiling 为 true 时允许执行阻塞性能分析。

-
- -## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} - - -**出现在:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

-LeaderElectionConfiguration 为能够支持领导者选举的组件定义其领导者选举客户端的配置。 -

- - - - - - - - - - - - - - - - - - - - - - - - - - - -
字段描述
leaderElect [必需]
-bool -
- -

- leaderElect 允许领导者选举客户端在进入主循环执行之前先获得领导者角色。 - 运行多副本组件时启用此功能有助于提高可用性。 -

-
leaseDuration [必需]
-meta/v1.Duration -
- -

- leaseDuration 是非领导角色候选者在观察到需要领导席位更新时要等待的时间; - 只有经过所设置时长才可以尝试去获得一个仍处于领导状态但需要被刷新的席位。 - 这里的设置值本质上意味着某个领导者在被另一个候选者替换掉之前可以停止运行的最长时长。 - 只有当启用了领导者选举时此字段有意义。 -

-
renewDeadline [必需]
-meta/v1.Duration -
- -

- renewDeadline 设置的是当前领导者在停止扮演领导角色之前需要刷新领导状态的时间间隔。 - 此值必须小于或等于租约期限的长度。只有到启用了领导者选举时此字段才有意义。 -

-
retryPeriod [必需]
-meta/v1.Duration -
- -

- retryPeriod 是客户端在连续两次尝试获得或者刷新领导状态之间需要等待的时长。 - 只有当启用了领导者选举时此字段才有意义。 -

-
resourceLock [必需]
-string -
- -

resourceLock 给出在领导者选举期间要作为锁来使用的资源对象类型。

-
resourceName [必需]
-string -
- -

resourceName 给出在领导者选举期间要作为锁来使用的资源对象名称。

-
resourceNamespace [必需]
-string -
- -

resourceNamespace 给出在领导者选举期间要作为锁来使用的资源对象所在名字空间。

-
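The ClientConnectionConfiguration, DebuggingConfiguration, and LeaderElectionConfiguration types that this change relocates are all consumed as parts of a KubeSchedulerConfiguration file. A minimal sketch follows, assuming typical defaults; the kubeconfig path and tuning values are placeholders, and the debugging fields are embedded inline at the top level.

```yaml
# Hypothetical kube-scheduler configuration file (placeholder values).
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
clientConnection:            # ClientConnectionConfiguration
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
  burst: 100
enableProfiling: true        # DebuggingConfiguration fields appear inline
enableContentionProfiling: false
leaderElection:              # LeaderElectionConfiguration
  leaderElect: true
  leaseDuration: 15s
  renewDeadline: 10s
  retryPeriod: 2s
  resourceLock: leases
  resourceName: kube-scheduler
  resourceNamespace: kube-system
```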
diff --git a/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md index 87640bbce8e46..cdba0a0859166 100644 --- a/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -446,6 +446,140 @@ node only (e.g. the node ip).

- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration) +## `BootstrapToken` {#BootstrapToken} + + +**出现在:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) + + +

BootstrapToken 描述的是一个启动引导令牌,以 Secret 形式存储在集群中。

+ + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
token [必需]
+BootstrapTokenString +
+ +

token 用来在节点与控制面之间建立双向的信任关系。 +在向集群中添加节点时使用。

+
description
+string +
+ +

description 设置一个对人友好的消息, + 说明为什么此令牌会存在以及其目标用途,这样其他管理员能够知道其目的。

+
ttl
+meta/v1.Duration +
+ +

ttl 定义此令牌的生命周期。默认为 24h。 +expires 和 ttl 是互斥的。

+
expires
+meta/v1.Time +
+ +

expires 设置此令牌过期的时间戳。默认为在运行时基于 +ttl 来决定。 +expiresttl 是互斥的。

+
usages
+[]string +
+ +

usages 描述此令牌的可能使用方式。默认情况下, + 令牌可用于建立双向的信任关系;不过这里可以改变默认用途。

+
groups
+[]string +
+ +

groups 设定此令牌被用于身份认证时对应的附加用户组。

+
+ +## `BootstrapTokenString` {#BootstrapTokenString} + + +**出现在:** + +- [BootstrapToken](#BootstrapToken) + + +

BootstrapTokenString 形式为 abcdef.abcdef0123456789 的一个令牌, +用来从加入集群的节点角度验证 API 服务器的身份,或者 "kubeadm join" +在节点启动引导时作为一种身份认证方法。 +此令牌的生命期是短暂的,并且应该如此。

+ + + + + + + + + + + +
字段描述
- [必需]
+string +
+ + 无描述。 +
- [必需]
+string +
+ + 无描述。 +
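As a quick illustration of how the BootstrapToken and BootstrapTokenString types above appear in practice, a kubeadm InitConfiguration can carry them in its bootstrapTokens list. This is a hedged sketch; the token value, TTL, and groups are placeholders.

```yaml
# Hypothetical kubeadm InitConfiguration fragment (placeholder values).
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
  - token: "abcdef.0123456789abcdef"   # a BootstrapTokenString: <token-id>.<token-secret>
    description: "token for joining the first worker nodes"
    ttl: "24h"                         # mutually exclusive with expires
    usages:
      - signing
      - authentication
    groups:
      - system:bootstrappers:kubeadm:default-node-token
```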
## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta3-ClusterConfiguration} @@ -1830,139 +1964,3 @@ first alpha-numerically. - -## `BootstrapToken` {#BootstrapToken} - - -**出现在:** - -- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - - -

BootstrapToken 描述的是一个启动引导令牌,以 Secret 形式存储在集群中。

- - - - - - - - - - - - - - - - - - - - - - - - -
字段描述
token [必需]
-BootstrapTokenString -
- -

token 用来在节点与控制面之间建立双向的信任关系。 -在向集群中添加节点时使用。

-
description
-string -
- -

description 设置一个对人友好的消息,说明为什么此令牌 -会存在以及其目标用途,这样其他管理员能够知道其目的。

-
ttl
-meta/v1.Duration -
- -

ttl 定义此令牌的声明周期。默认为 24h。 -expiresttl 是互斥的。

-
expires
-meta/v1.Time -
- -

expires 设置此令牌过期的时间戳。默认为在运行时基于 -ttl 来决定。 -expiresttl 是互斥的。

-
usages
-[]string -
- -

usages 描述此令牌的可能使用方式。默认情况下,令牌可用于 -建立双向的信任关系;不过这里可以改变默认用途。

-
groups
-[]string -
- -

groups 设定此令牌被用于身份认证时对应的附加用户组。

-
- -## `BootstrapTokenString` {#BootstrapTokenString} - - -**出现在:** - -- [BootstrapToken](#BootstrapToken) - - -

BootstrapTokenString 形式为 abcdef.abcdef0123456789 的一个令牌, -用来从加入集群的节点角度验证 API 服务器的身份,或者 "kubeadm join" -在节点启动引导是作为一种身份认证方法。 -此令牌的生命期是短暂的,并且应该如此。

- - - - - - - - - - - -
字段描述
- [必需]
-string -
- - 无描述。 -
- [必需]
-string -
- - 无描述。 -
- \ No newline at end of file diff --git a/content/zh-cn/docs/reference/config-api/kubeconfig.v1.md b/content/zh-cn/docs/reference/config-api/kubeconfig.v1.md new file mode 100644 index 0000000000000..4640d12fac704 --- /dev/null +++ b/content/zh-cn/docs/reference/config-api/kubeconfig.v1.md @@ -0,0 +1,910 @@ +--- +title: kube 配置 (v1) +content_type: tool-reference +package: v1 +--- + + + + +## 资源类型 + +- [Config](#Config) + +## `Config` {#Config} + + +

Config 保存以给定用户身份构建连接到远程 Kubernetes 集群所需的信息

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
apiVersion
string
/v1
kind
string
Config
kind
+string +
+ +

来自 pkg/api/types.go TypeMeta 的遗留字段。

+
apiVersion
+string +
+ +

来自 pkg/api/types.go TypeMeta 的遗留字段。

+
preferences[必需]
+Preferences +
+ +

preferences 保存用于 CLI 交互的一般信息。

+
clusters[必需]
+[]NamedCluster +
+ +

clusters 是从可引用名称到集群配置的映射。

+
users[必需]
+[]NamedAuthInfo +
+ +

users 是一个从可引用名称到用户配置的映射。

+
contexts[必需]
+[]NamedContext +
+ +

contexts 是从可引用名称到上下文配置的映射。

+
current-context[必需]
+string +
+ +

current-context 是默认情况下你想使用的上下文的名称。

+
extensions
+[]NamedExtension +
+ +

extensions 保存额外信息。这对于扩展程序是有用的,目的是使读写操作不会破解未知字段。

+
+ +## `AuthInfo` {#AuthInfo} + + +**出现在:** + +- [NamedAuthInfo](#NamedAuthInfo) + + +

AuthInfo 包含描述身份信息的信息。这一信息用来告诉 Kubernetes 集群你是谁。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
client-certificate
+string +
+ +

client-certificate 是 TLS 客户端证书文件的路径。

+
client-certificate-data
+[]byte +
+ +

client-certificate-data 包含用于 TLS 连接的、来自客户端证书的 PEM 编码的数据。 + 此字段值会覆盖 client-certificate 内容。

+
client-key
+string +
+ +

client-key 是用于 TLS 连接的客户端密钥文件的路径。

+
client-key-data
+[]byte +
+ +

client-key-data 包含用于 TLS 连接的、来自客户端密钥文件的 + PEM 编码数据。此数据会覆盖 client-key 的内容。

+
token
+string +
+ +

token 是用于向 kubernetes 集群进行身份验证的持有者令牌。

+
tokenFile
+string +
+ +

tokenFile 是一个指针,指向包含有持有者令牌(如上所述)的文件。 + 如果 tokentokenFile 都存在,token 优先。

+
as
+string +
+ +

as 是要冒充的用户名。名字与命令行标志相匹配。

+
as-uid
+string +
+ +

as-uid 是要冒充的 UID。

+
as-groups
+[]string +
+ +

as-groups 是要冒充的用户组。

+
as-user-extra
+map[string][]string +
+ +

as-user-extra 包含与要冒充的用户相关的额外信息。

+
username
+string +
+ +

username 是向 Kubernetes 集群进行基本认证的用户名。

+
password
+string +
+ +

password 是向 Kubernetes 集群进行基本认证的密码。

+
auth-provider
+AuthProviderConfig +
+ +

auth-provider 给出用于给定 Kubernetes 集群的自定义身份验证插件。

+
exec
+ExecConfig +
+ +

exec 指定了针对某 Kubernetes 集群的基于 exec + 的自定义身份认证插件。

+
extensions
+[]NamedExtension +
+ +

extensions 保存一些额外信息。这些信息对于扩展程序是有用的,目的是确保读写操作不会破坏未知字段。

+
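To make the AuthInfo fields above concrete, here is a hypothetical sketch of two user entries in a kubeconfig file: one using a client certificate, one using a bearer token together with impersonation. All names, paths, and the token are placeholders.

```yaml
# Hypothetical kubeconfig "users" entries (placeholder values).
users:
  - name: cert-user
    user:
      client-certificate: /var/lib/kubelet/pki/client.crt   # TLS client certificate
      client-key: /var/lib/kubelet/pki/client.key
  - name: token-user
    user:
      token: "REPLACE-WITH-BEARER-TOKEN"
      as: system:serviceaccount:dev:ci-runner               # impersonated user
      as-groups:
        - system:serviceaccounts
```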
+ +## `AuthProviderConfig` {#AuthProviderConfig} + + +**出现在:** + +- [AuthInfo](#AuthInfo) + + +

AuthProviderConfig 保存特定于某认证提供机制的配置。

+ + + + + + + + + + + + +
字段描述
name[必需]
+string +
+

配置选项名称。

+
config[必需]
+map[string]string +
+

配置选项取值映射。

+
+ +## `Cluster` {#Cluster} + + +**出现在:** + +- [NamedCluster](#NamedCluster) + + +

Cluster 包含有关如何与 Kubernetes 集群通信的信息。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
server[必需]
+string +
+ +

server 是 Kubernetes 集群的地址(形式为 https://hostname:port)。

+
tls-server-name
+string +
+ +

tls-server-name 用于检查服务器证书。如果 tls-server-name + 是空的,则使用用于联系服务器的主机名。

+
insecure-skip-tls-verify
+bool +
+ +

insecure-skip-tls-verify 跳过服务器证书的有效性检查。 + 这样做将使你的 HTTPS 连接不安全。

+
certificate-authority
+string +
+ +

certificate-authority 是证书机构的证书文件的路径。

+
certificate-authority-data
+[]byte +
+ +

certificate-authority-data 包含 PEM 编码的证书机构证书。 + 覆盖 certificate-authority 的设置值。

+
proxy-url
+string +
+ +

proxy-url 是代理的 URL,该代理用于此客户端的所有请求。 + 带有 "http"、"https" 和 "socks5" 的 URL 是被支持的。 + 如果未提供此配置或为空字符串,客户端尝试使用 http_proxy 和 + https_proxy 环境变量构建代理配置。 + 如果这些环境变量也没有设置, 客户端不会尝试代理请求。 +

+ + +

socks5 代理当前不支持 SPDY 流式端点 + (execattachport-forward)。 +

+
disable-compression
+bool +
+ +

disable-compression 允许客户端选择不对发往服务器的所有请求进行响应压缩。 + 当客户端与服务器之间的网络带宽充足时,这对于加快请求(尤其是 list 操作)非常有用, + 能够节省进行(服务器端)压缩和(客户端)解压缩的时间。参见 + https://github.com/kubernetes/kubernetes/issues/112296。

+
extensions
+[]NamedExtension +
+ +

extensions 保存一些额外信息。 + 这些信息对于扩展程序是有用的,目的是确保读写操作不会破坏未知字段。

+
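The optional Cluster fields described above combine as in the following hypothetical snippet; the server and proxy URLs are placeholders.

```yaml
# Hypothetical kubeconfig "clusters" entry (placeholder values).
clusters:
  - name: proxied-cluster
    cluster:
      server: https://cluster.internal.example:6443
      tls-server-name: cluster.internal.example   # name checked against the serving certificate
      proxy-url: socks5://proxy.internal.example:1080
      disable-compression: true
```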
+ +## `Context` {#Context} + + +**出现在:** + +- [NamedContext](#NamedContext) + + +

Context 是一个元组,包含对集群 (我如何与某 Kubernetes 集群通信)、用户 (我如何标识自己) +和名字空间(我想要使用哪些资源子集)的引用。

+ + + + + + + + + + + + + + + + + + +
字段描述
cluster[必需]
+string +
+ +

cluster 是此上下文中的集群名称。

+
user[必需]
+string +
+ +

user 是此上下文的 authInfo 名称。

+
namespace
+string +
+ +

namespace 是在请求中未明确指定时使用的默认名字空间。

+
extensions
+[]NamedExtension +
+ +

extensions 保存一些额外信息。 + 这些信息对于扩展程序是有用的,目的是确保读写操作不会破坏未知字段。

+
+ +## `ExecConfig` {#ExecConfig} + + +**出现在:** + +- [AuthInfo](#AuthInfo) + + + +

ExecConfig 指定提供客户端凭证的命令。这个命令被执行(以 exec 方式) +并输出结构化的标准输出(stdout),其中包含了凭据。

+ + +

查看 client.authentication.k8s.io API +组以获取输入和输出的确切格式规范。

+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
command[必需]
+string +
+ +

command 是要执行的命令。

+
args
+[]string +
+ +

args 是执行命令时要传递的参数。

+
env
+[]ExecEnvVar +
+ +

env 定义了要暴露给进程的额外的环境变量。这些与主机的环境变量以及 + client-go 使用的变量一起,用于传递参数给插件。

+
apiVersion[必需]
+string +
+ +

ExecInfo 的首选输入版本。返回的 ExecCredentials + 必须使用与输入相同的编码版本。

+
installHint[必需]
+string +
+ +

当似乎找不到可执行文件时,将向用户显示此文本。 + 例如,对于在 Mac OS 系统上安装 foo-cli 插件而言, + brew install foo-cli 这可能是不错的 installHint。

+
provideClusterInfo[必需]
+bool +
+ +

ProvideClusterInfo 决定是否提供集群信息。 + 这些信息可能包含非常大的 CA 数据,用来作为 KUBERNETES_EXEC_INFO + 环境变量的一部分提供给这个 exec 插件。 + 默认情况下,它被设置为 false。 + k8s.io/client-go/tools/auth/exec 包提供了用于读取这个环境变量的辅助方法。

+
interactiveMode
+ExecInteractiveMode +
+ +

interactiveMode 确定此插件与标准输入之间的关系。有效值为:

+
    +
  • "Never":这个 exec 插件从不使用标准输入;
  • +
  • "IfAvailable":这个 exec 插件希望使用标准输入,如果可用的话;
  • +
  • "Always":这个 exec 插件需要标准输入以正常运行。
  • +
+

查看 ExecInteractiveMode 值以了解更多详情。

+ + +

如果 apiVersionclient.authentication.k8s.io/v1alpha1 + 或 client.authentication.k8s.io/v1beta1, 则此字段是可选的, + 且当未设置时默认为 "IfAvailable"。否则,此字段是必需的。

+
+ +## `ExecEnvVar` {#ExecEnvVar} + + +**出现在:** + +- [ExecConfig](#ExecConfig) + + +

ExecEnvVar 用于在执行基于 exec 的凭据插件时要设置的环境变量。

+ + + + + + + + + + + + +
字段描述
name
[必需] +string +
+

环境变量名称。

value
[必需]
+string +
+

环境变量取值。

+ +## `ExecInteractiveMode` {#ExecInteractiveMode} + + +(`string` 的别名) + + +**出现在:** + +- [ExecConfig](#ExecConfig) + + + +

ExecInteractiveMode 是一个描述 exec 插件与标准输入间关系的字符串。

+ +## `NamedAuthInfo` {#NamedAuthInfo} + + +**出现在:** + +- [Config](#Config) + + +

NamedAuthInfo 将昵称与身份认证信息关联起来。

+ + + + + + + + + + + + +
字段描述
name[必需] +string + + +

name 是该 AuthInfo 的昵称。

+
user[必需]
+AuthInfo +
+ +

user 保存身份认证信息。

+
+ +## `NamedCluster` {#NamedCluster} + + +**出现在:** + +- [Config](#Config) + + +

NamedCluster 将昵称与集群信息关联起来。

+ + + + + + + + + + + + + +
字段描述
name
[必需]

+string +
+ +

name 是此集群的昵称。

+
cluster
[必需]
+Cluster +
+ +

cluster 保存集群的信息。

+
+ +## `NamedContext` {#NamedContext} + + +**出现在:** + +- [Config](#Config) + + +

NamedContext 将昵称与上下文信息关联起来。

+ + + + + + + + + + + + +
字段描述
name
[必需]

+string +
+ +

name 是此上下文的昵称。

+
context
[必需] +Context +
+ +

context 保存上下文信息。

+
+ +## `NamedExtension` {#NamedExtension} + + +**出现在:** + +- [Config](#Config) +- [AuthInfo](#AuthInfo) +- [Cluster](#Cluster) +- [Context](#Context) +- [Preferences](#Preferences) + + +

NamedExtension 将昵称与扩展信息关联起来。

+ + + + + + + + + + + + +
字段描述
name[必需]
+string +
+ +

name 是此扩展的昵称。

+
extension[必需]
+k8s.io/apimachinery/pkg/runtime.RawExtension +
+ +

extension 保存扩展信息。

+
+ +## `Preferences` {#Preferences} + + +**出现在:** + +- [Config](#Config) + + + + + + + + + + + + +
字段描述
colors
+bool +
+

是否采用彩色字符编码。

extensions
+[]NamedExtension +
+ +

extensions 保存一些额外信息。 + 这些信息对于扩展程序是有用的,目的是确保读写操作不会破坏未知字段。

+
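Putting the types on this page together, a complete kubeconfig that ties a Cluster, an exec-based AuthInfo, and a Context into a Config might look like the sketch below. This is a hypothetical example: every name, path, URL, and the credential plugin binary are placeholders.

```yaml
# Hypothetical kubeconfig file (placeholder values throughout).
apiVersion: v1
kind: Config
preferences: {}
clusters:
  - name: example-cluster            # NamedCluster
    cluster:                         # Cluster
      server: https://example-cluster.example:6443
      certificate-authority: /etc/kubernetes/pki/ca.crt
users:
  - name: example-user               # NamedAuthInfo
    user:                            # AuthInfo
      exec:                          # ExecConfig: external credential plugin
        apiVersion: client.authentication.k8s.io/v1beta1
        command: example-credential-plugin     # hypothetical plugin binary
        args: ["get-token"]
        env:
          - name: EXAMPLE_PLUGIN_REGION        # ExecEnvVar
            value: "dev"
        installHint: "install example-credential-plugin from your distributor"
        provideClusterInfo: false
        interactiveMode: IfAvailable
contexts:
  - name: example-context            # NamedContext
    context:                         # Context
      cluster: example-cluster
      user: example-user
      namespace: default
current-context: example-context
```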
diff --git a/content/zh-cn/docs/reference/kubectl/_index.md b/content/zh-cn/docs/reference/kubectl/_index.md index 7e80378ffc39b..42446e5a20193 100644 --- a/content/zh-cn/docs/reference/kubectl/_index.md +++ b/content/zh-cn/docs/reference/kubectl/_index.md @@ -52,7 +52,8 @@ For details about each command, including all the supported flags and subcommand 有关安装说明,请参见[安装 kubectl](/zh-cn/docs/tasks/tools/#kubectl); @@ -81,8 +82,8 @@ where `command`, `TYPE`, `NAME`, and `flags` are: 其中 `command`、`TYPE`、`NAME` 和 `flags` 分别是: * `NAME`:指定资源的名称。名称区分大小写。 如果省略名称,则显示所有资源的详细信息。例如:`kubectl get pods`。 @@ -112,16 +115,17 @@ for example `create`, `get`, `describe`, `delete`. * 要按类型和名称指定资源: @@ -138,7 +142,8 @@ for example `create`, `get`, `describe`, `delete`. 例子:`kubectl get -f ./pod.yaml` * `flags`: 指定可选的参数。例如,可以使用 `-s` 或 `--server` 参数指定 Kubernetes API 服务器的地址和端口。 @@ -161,7 +166,10 @@ If you need help, run `kubectl help` from the terminal window. ## 集群内身份验证和命名空间覆盖 {#in-cluster-authentication-and-namespace-overrides} 默认情况下,`kubectl` 命令首先确定它是否在 Pod 中运行,从而被视为在集群中运行。 它首先检查 `KUBERNETES_SERVICE_HOST` 和 `KUBERNETES_SERVICE_PORT` 环境变量以及 @@ -169,7 +177,9 @@ By default `kubectl` will first determine if it is running within a pod, and thu 如果三个条件都被满足,则假定在集群内进行身份验证。 为保持向后兼容性,如果在集群内身份验证期间设置了 `POD_NAMESPACE` 环境变量,它将覆盖服务帐户令牌中的默认命名空间。 @@ -181,7 +191,11 @@ To maintain backwards compatibility, if the `POD_NAMESPACE` environment variable **`POD_NAMESPACE` 环境变量** 如果设置了 `POD_NAMESPACE` 环境变量,对命名空间资源的 CLI 操作对象将使用该变量值作为默认值。 例如,如果该变量设置为 `seattle`,`kubectl get pods` 将返回 `seattle` 命名空间中的 Pod。 @@ -269,7 +283,7 @@ Operation | Syntax | Description `edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | Edit and update the definition of one or more resources on the server by using the default editor. `events` | `kubectl events` | List events `exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod. -`explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc. +`explain` | `kubectl explain TYPE [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc. `expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | Expose a replication controller, service, or pod as a new Kubernetes service. `get` | kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags] | List one or more resources. `kustomize` | `kubectl kustomize [flags] [options]` | List a set of API resources generated from instructions in a kustomization.yaml file. The argument must be the path to the directory containing the file, or a git repository URL with a path suffix specifying same with respect to the repository root. @@ -286,7 +300,7 @@ Operation | Syntax | Description `scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | Update the size of the specified replication controller. `set` | `kubectl set SUBCOMMAND [options]` | Configure application resources. `taint` | `kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]` | Update the taints on one or more nodes. 
-`top` | `kubectl top [flags] [options]` | Display Resource (CPU/Memory/Storage) usage. +`top` | `kubectl top (POD | NODE) [flags] [options]` | Display Resource (CPU/Memory/Storage) usage. `uncordon` | `kubectl uncordon NODE [options]` | Mark node as schedulable. `version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server. `wait` | kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available] [options] | Experimental: Wait for a specific condition on one or many resources. @@ -316,8 +330,8 @@ Operation | Syntax | Description `edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | 使用默认编辑器编辑和更新服务器上一个或多个资源的定义。 `events` | `kubectl events` | 列举事件。 `exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | 对 Pod 中的容器执行命令。 -`explain` | `kubectl explain [--recursive=false] [flags]` | 获取多种资源的文档。例如 Pod、Node、Service 等。 -`expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | 将副本控制器、服务或 Pod 作为新的 Kubernetes 服务暴露。 +`explain` | `kubectl explain TYPE [--recursive=false] [flags]` | 获取多种资源的文档。例如 Pod、Node、Service 等。 +`expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | 将副本控制器、Service 或 Pod 作为新的 Kubernetes 服务暴露。 `get` | kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags] | 列出一个或多个资源。 `kustomize` | kubectl kustomize [flags] [options]` | 列出从 kustomization.yaml 文件中的指令生成的一组 API 资源。参数必须是包含文件的目录的路径,或者是 git 存储库 URL,其路径后缀相对于存储库根目录指定了相同的路径。 `label` | kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | 添加或更新一个或多个资源的标签。 @@ -333,7 +347,7 @@ Operation | Syntax | Description `scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | 更新指定副本控制器的大小。 `set` | `kubectl set SUBCOMMAND [options]` | 配置应用资源。 `taint` | `kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]` | 更新一个或多个节点上的污点。 -`top` | `kubectl top [flags] [options]` | 显示资源(CPU、内存、存储)的使用情况。 +`top` | `kubectl top (POD | NODE) [flags] [options]` | 显示资源(CPU、内存、存储)的使用情况。 `uncordon` | `kubectl uncordon NODE [options]` | 将节点标记为可调度。 `version` | `kubectl version [--client] [flags]` | 显示运行在客户端和服务器上的 Kubernetes 版本。 `wait` | kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available] [options] | 实验特性:等待一种或多种资源的特定状况。 @@ -428,7 +442,9 @@ The following table includes a list of all the supported resource types and thei ## 输出选项 {#output-options} 有关如何格式化或排序某些命令的输出的信息,请参阅以下章节。有关哪些命令支持不同输出选项的详细信息, 请参阅 [kubectl](/zh-cn/docs/reference/kubectl/kubectl/) 参考文档。 @@ -439,7 +455,9 @@ Use the following sections for information about how you can format or sort the ### 格式化输出 {#formatting-output} 所有 `kubectl` 命令的默认输出格式都是人类可读的纯文本格式。要以特定格式在终端窗口输出详细信息, 可以将 `-o` 或 `--output` 参数添加到受支持的 `kubectl` 命令中。 @@ -470,16 +488,16 @@ Output format | Description `-o wide` | Output in the plain-text format with any additional information. For pods, the node name is included. 
`-o yaml` | Output a YAML formatted API object. --> -输出格式 | 描述 ---------------| ----------- -`-o custom-columns=` | 使用逗号分隔的[自定义列](#custom-columns)列表打印表。 +输出格式 | 描述 +------------------------------------| ----------- +`-o custom-columns=` | 使用逗号分隔的[自定义列](#custom-columns)列表打印表。 `-o custom-columns-file=` | 使用 `` 文件中的[自定义列](#custom-columns)模板打印表。 -`-o json` | 输出 JSON 格式的 API 对象 -`-o jsonpath=