From 006788d13b4356ef76ccebcac9de89af99fd80a1 Mon Sep 17 00:00:00 2001 From: utkarsh-singh1 Date: Mon, 27 Mar 2023 16:48:42 +0530 Subject: [PATCH 001/229] Updated kubectl cheatsheet documentation Signed-off-by: utkarsh-singh1 --- content/en/docs/reference/kubectl/cheatsheet.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 54a0e439547be..9033fb2b22099 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -42,7 +42,7 @@ echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc ### FISH -Require kubectl version 1.23 or more. +Requires kubectl version 1.23 or above. ```bash echo 'kubectl completion fish | source' >> ~/.config/fish/config.fish # add autocomplete permanently to your fish shell From 024aa7aa0c4bb3240b925363f1b4ac0545221fd5 Mon Sep 17 00:00:00 2001 From: Milas Bowman Date: Tue, 25 Apr 2023 08:42:48 -0400 Subject: [PATCH 002/229] editorconfig: preserve final newline in YAML I'm not sure why this is defaulted to `false` for all file types, so this just enables it for YAML for now.
Signed-off-by: Milas Bowman --- .editorconfig | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.editorconfig b/.editorconfig index bc1dfe40c8faa..1a235f9c90853 100644 --- a/.editorconfig +++ b/.editorconfig @@ -16,5 +16,8 @@ indent_size = 2 indent_style = space indent_size = 4 +[*.{yaml}] +insert_final_newline = true + [Makefile] indent_style = tab From d9ef7b3849a85533258d8d2c878cba26854e8f87 Mon Sep 17 00:00:00 2001 From: iyear Date: Tue, 25 Apr 2023 20:52:57 +0800 Subject: [PATCH 003/229] kubectl/jsonpath: add example of escaping termination character Signed-off-by: iyear --- content/en/docs/reference/kubectl/jsonpath.md | 33 +++++++++++-------- 1 file changed, 20 insertions(+), 13 deletions(-) diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md index f1aa5f8ec4946..7477430efffb3 100644 --- a/content/en/docs/reference/kubectl/jsonpath.md +++ b/content/en/docs/reference/kubectl/jsonpath.md @@ -34,7 +34,12 @@ Given the JSON input: "items":[ { "kind":"None", - "metadata":{"name":"127.0.0.1"}, + "metadata":{ + "name":"127.0.0.1", + "labels":{ + "kubernetes.io/hostname":"127.0.0.1" + } + }, "status":{ "capacity":{"cpu":"4"}, "addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}] @@ -65,18 +70,19 @@ Given the JSON input: } ``` -Function | Description | Example | Result ---------------------|---------------------------|-----------------------------------------------------------------|------------------ -`text` | the plain text | `kind is {.kind}` | `kind is List` -`@` | the current object | `{@}` | the same as input -`.` or `[]` | child operator | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List` -`..` | recursive descent | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e` -`*` | wildcard. 
Get all objects | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]` -`[start:end:step]` | subscript operator | `{.users[0].name}` | `myself` -`[,]` | union operator | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]` -`?()` | filter | `{.users[?(@.name=="e2e")].user.password}` | `secret` -`range`, `end` | iterate list | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]` -`''` | quote interpreted string | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2` +Function | Description | Example | Result +--------------------|------------------------------|-----------------------------------------------------------------|------------------ +`text` | the plain text | `kind is {.kind}` | `kind is List` +`@` | the current object | `{@}` | the same as input +`.` or `[]` | child operator | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List` +`..` | recursive descent | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e` +`*` | wildcard. 
Get all objects | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]` +`[start:end:step]` | subscript operator | `{.users[0].name}` | `myself` +`[,]` | union operator | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]` +`?()` | filter | `{.users[?(@.name=="e2e")].user.password}` | `secret` +`range`, `end` | iterate list | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]` +`''` | quote interpreted string | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2` +`\` | escape termination character | `{.items[0].metadata.labels.kubernetes\.io/hostname}` | `127.0.0.1` Examples using `kubectl` and JSONPath expressions: @@ -87,6 +93,7 @@ kubectl get pods -o=jsonpath='{.items[0]}' kubectl get pods -o=jsonpath='{.items[0].metadata.name}' kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}" kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}' +kubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\.io/hostname}' ``` {{< note >}} From 9afdd652c3b6ba1779a4cd31c906863f6c5432a8 Mon Sep 17 00:00:00 2001 From: NitishKumar06 Date: Wed, 10 May 2023 10:28:17 +0530 Subject: [PATCH 004/229] Added secret keyword to kubectl-command.html file Signed-off-by: NitishKumar06 --- static/docs/reference/generated/kubectl/kubectl-commands.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/static/docs/reference/generated/kubectl/kubectl-commands.html b/static/docs/reference/generated/kubectl/kubectl-commands.html index 4fff80b171a7d..474fcb6879daf 100644 --- a/static/docs/reference/generated/kubectl/kubectl-commands.html +++ b/static/docs/reference/generated/kubectl/kubectl-commands.html @@ -1735,7 +1735,7 @@

secret tls

Create a TLS secret from the given public/private key pair.

The public/private key pair must exist beforehand. The public key certificate must be .PEM encoded and match the given private key.

Usage

-

$ kubectl create tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]

+

$ kubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]

Flags

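The patch above corrects the `Usage` line for `kubectl create secret tls`. The corrected command can be tried locally; this is a minimal sketch in which the certificate subject, secret name, and file names are illustrative placeholders, and `--dry-run=client` renders the Secret without contacting a cluster:

```shell
# Generate a throwaway self-signed key pair for the example
# (the CN and file names are placeholders, not required values).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 1 -subj "/CN=example.test"

# The subcommand is "create secret tls", not "create tls".
if command -v kubectl >/dev/null 2>&1; then
  kubectl create secret tls my-tls-secret \
    --cert=tls.crt --key=tls.key --dry-run=client -o yaml
fi
```

The guard around `kubectl` only keeps the snippet runnable on machines without the CLI; in practice you would run the `kubectl create secret tls` line directly.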
From 25bb30bac1fade2c63f16e0362bfe94700e7af52 Mon Sep 17 00:00:00 2001 From: hatfieldbrian <81722870+hatfieldbrian@users.noreply.github.com> Date: Sat, 13 May 2023 04:23:57 -0700 Subject: [PATCH 005/229] Update api-concepts.md Correct collection definition: a list of instances of a resource _type_ --- content/en/docs/reference/using-api/api-concepts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md index 6e6b0fe9ae563..5fb289977ed93 100644 --- a/content/en/docs/reference/using-api/api-concepts.md +++ b/content/en/docs/reference/using-api/api-concepts.md @@ -34,7 +34,7 @@ API concepts: * A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`) * All resource types have a concrete representation (their object schema) which is called a *kind* -* A list of instances of a resource is known as a *collection* +* A list of instances of a resource type is known as a *collection* * A single instance of a resource type is called a *resource*, and also usually represents an *object* * For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource From 948dec77a865a5d98becd57bfd342325fdadf874 Mon Sep 17 00:00:00 2001 From: utkarsh-singh1 Date: Mon, 5 Jun 2023 15:41:17 +0530 Subject: [PATCH 006/229] Updated outdated links in prerequisites-ref-docs.md Signed-off-by: utkarsh-singh1 --- .../contribute/generate-ref-docs/prerequisites-ref-docs.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md index c7199208130be..ff87f63aa1271 100644 --- a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md +++ b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md @@ -5,18 +5,17 @@ - You
need to have these tools installed: - - [Python](https://www.python.org/downloads/) v3.7.x + - [Python](https://www.python.org/downloads/) v3.7.x+ - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - - [Golang](https://golang.org/doc/install) version 1.13+ + - [Golang](https://go.dev/dl/) version 1.13+ - [Pip](https://pypi.org/project/pip/) used to install PyYAML - [PyYAML](https://pyyaml.org/) v5.1.2 - [make](https://www.gnu.org/software/make/) - [gcc compiler/linker](https://gcc.gnu.org/) - - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference) + - [Docker](https://docs.docker.com/engine/installation/) (Required only for the `kubectl` command reference; note that Kubernetes is [moving on from dockershim](https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)) - Your `PATH` environment variable must include the required build tools, such as the `Go` binary and `python`. - You need to know how to create a pull request to a GitHub repository. This involves creating your own fork of the repository. For more information, see [Work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo). - From d873f03e78f93c81b847f1d389141a7138479297 Mon Sep 17 00:00:00 2001 From: shubham82 Date: Mon, 19 Jun 2023 15:38:54 +0530 Subject: [PATCH 007/229] Add -subj Command Option. --- .../access-authn-authz/certificate-signing-requests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md index a6513853a7991..8dd6337e6c235 100644 --- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md +++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md @@ -488,7 +488,7 @@ O is the group that this user will belong to.
You can refer to ```shell openssl genrsa -out myuser.key 2048 -openssl req -new -key myuser.key -out myuser.csr +openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser" ``` ### Create a CertificateSigningRequest {#create-certificatessigningrequest} From 512b71177c95f631cc8d9b376ab8f0c56c76e1da Mon Sep 17 00:00:00 2001 From: Morgan Rowse Date: Mon, 19 Jun 2023 15:16:49 +0200 Subject: [PATCH 008/229] Update list-all-running-container-images.md also include initContainers --- .../list-all-running-container-images.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md index 0ae940296241b..5a5f41008e913 100644 --- a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md +++ b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md @@ -23,7 +23,7 @@ of Containers for each. - Fetch all Pods in all namespaces using `kubectl get pods --all-namespaces` - Format the output to include only the list of Container image names - using `-o jsonpath={.items[*].spec.containers[*].image}`. This will recursively parse out the + using `-o jsonpath={.items[*].spec['initContainers', 'containers'][*].image}`. This will recursively parse out the `image` field from the returned json. - See the [jsonpath reference](/docs/reference/kubectl/jsonpath/) for further information on how to use jsonpath. @@ -33,7 +33,7 @@ of Containers for each. 
- Use `uniq` to aggregate image counts ```shell -kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\ +kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" |\ tr -s '[[:space:]]' '\n' |\ sort |\ uniq -c @@ -42,7 +42,7 @@ The jsonpath is interpreted as follows: - `.items[*]`: for each returned value - `.spec`: get the spec -- `.containers[*]`: for each container +- `['initContainers', 'containers'][*]`: for each container - `.image`: get the image {{< note >}} From 810a7cc2c0111f0e26db2f448df57f1ec20a4cf8 Mon Sep 17 00:00:00 2001 From: Kevin Grigorenko Date: Mon, 26 Jun 2023 13:41:50 -0500 Subject: [PATCH 009/229] Clarify IBM Java and IBM Semeru Runtimes cgroupsV2 support Signed-off-by: Kevin Grigorenko --- content/en/blog/_posts/2022-08-31-cgroupv2-ga.md | 4 ++-- content/en/docs/concepts/architecture/cgroups.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md b/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md index d4345195746b8..4071d4458160d 100644 --- a/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md +++ b/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md @@ -118,8 +118,8 @@ Scenarios in which you might need to update to cgroup v2 include the following: DaemonSet for monitoring pods and containers, update it to v0.43.0 or later. 
* If you deploy Java applications, prefer to use versions which fully support cgroup v2: * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later - * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later - * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later + * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later + * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later ## Learn more diff --git a/content/en/docs/concepts/architecture/cgroups.md b/content/en/docs/concepts/architecture/cgroups.md index b0a98af6604b0..b96d89e0d6dd4 100644 --- a/content/en/docs/concepts/architecture/cgroups.md +++ b/content/en/docs/concepts/architecture/cgroups.md @@ -104,8 +104,8 @@ updated to newer versions that support cgroup v2. For example: DaemonSet for monitoring pods and containers, update it to v0.43.0 or later. * If you deploy Java applications, prefer to use versions which fully support cgroup v2: * [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later - * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later - * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later + * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later + * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later * If you are using the [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs) package, make sure the version you use is v1.5.1 or higher. 
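The patch above pins down which IBM Java and IBM Semeru releases fully support cgroup v2. If you are unsure which cgroup version a node is running, the filesystem type of `/sys/fs/cgroup` tells you — a Linux-only sketch, where `cgroup2fs` indicates cgroup v2 and `tmpfs` indicates v1:

```shell
# Identify the cgroup version on a Linux host; on systems without
# /sys/fs/cgroup (e.g. macOS), fall back to a notice instead.
if [ -d /sys/fs/cgroup ]; then
  stat -fc %T /sys/fs/cgroup/
else
  echo "no /sys/fs/cgroup on this host"
fi
```

On a cgroup v2 node this prints `cgroup2fs`; that is the configuration in which the Java runtime versions listed above matter.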
From cfb6309c56eeb0ea8d56c0eb6d153cff2c043de7 Mon Sep 17 00:00:00 2001 From: Simon Engmann Date: Mon, 17 Jul 2023 12:37:02 +0200 Subject: [PATCH 010/229] Fix example errors for CrossNamespacePodAffinity Remove references to CrossNamespaceAffinity The scope CrossNamespaceAffinity does not exist. Attempting to feed the example YAML to `kubectl create` results in the following error: > The ResourceQuota "disable-cross-namespace-affinity" is invalid: > * spec.scopeSelector.matchExpressions.scopeName: Invalid value: > "CrossNamespaceAffinity": unsupported scope Add missing operator for CrossNamespacePodAffinity Trying to create the example ResourceQuotas without an operator results in the following error from `kubectl create`: > The ResourceQuota "disable-cross-namespace-affinity" is invalid: > * spec.scopeSelector.matchExpressions.operator: Invalid value: "": must be > 'Exist' when scope is any of ResourceQuotaScopeTerminating, > ResourceQuotaScopeNotTerminating, ResourceQuotaScopeBestEffort, > ResourceQuotaScopeNotBestEffort or > ResourceQuotaScopeCrossNamespacePodAffinity > * spec.scopeSelector.matchExpressions.operator: Invalid value: "": not a valid > selector operator The error message itself has another bug, as the operator is Exist*s*, not Exist. Signed-off-by: Simon Engmann --- content/en/docs/concepts/policy/resource-quotas.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index c67880458a355..2186c071d1602 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -465,7 +465,7 @@ from getting scheduled in a failure domain. 
Using this scope operators can prevent certain namespaces (`foo-ns` in the example below) from having pods that use cross-namespace pod affinity by creating a resource quota object in -that namespace with `CrossNamespaceAffinity` scope and hard limit of 0: +that namespace with `CrossNamespacePodAffinity` scope and hard limit of 0: ```yaml apiVersion: v1 @@ -478,11 +478,12 @@ spec: pods: "0" scopeSelector: matchExpressions: - - scopeName: CrossNamespaceAffinity + - scopeName: CrossNamespacePodAffinity + operator: Exists ``` If operators want to disallow using `namespaces` and `namespaceSelector` by default, and -only allow it for specific namespaces, they could configure `CrossNamespaceAffinity` +only allow it for specific namespaces, they could configure `CrossNamespacePodAffinity` as a limited resource by setting the kube-apiserver flag --admission-control-config-file to the path of the following configuration file: @@ -497,12 +498,13 @@ plugins: limitedResources: - resource: pods matchScopes: - - scopeName: CrossNamespaceAffinity + - scopeName: CrossNamespacePodAffinity + operator: Exists ``` With the above configuration, pods can use `namespaces` and `namespaceSelector` in pod affinity only if the namespace where they are created have a resource quota object with -`CrossNamespaceAffinity` scope and a hard limit greater than or equal to the number of pods using those fields. +`CrossNamespacePodAffinity` scope and a hard limit greater than or equal to the number of pods using those fields. ## Requests compared to Limits {#requests-vs-limits} From 8face3d0085c8dd9a5eb02c644464d6be9fb3a88 Mon Sep 17 00:00:00 2001 From: Alex Serbul <22218473+AlexanderSerbul@users.noreply.github.com> Date: Wed, 19 Jul 2023 13:16:49 +0300 Subject: [PATCH 011/229] Update connect-applications-service.md We are talking about pods here, not about services yet. 
--- .../en/docs/tutorials/services/connect-applications-service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md index 7425fa119bc67..24245584fafb8 100644 --- a/content/en/docs/tutorials/services/connect-applications-service.md +++ b/content/en/docs/tutorials/services/connect-applications-service.md @@ -59,7 +59,7 @@ to make queries against both IPs. Note that the containers are *not* using port the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same `containerPort`, and access them from any other pod or node in your cluster using the assigned IP -address for the Service. If you want to arrange for a specific port on the host +address for the pod. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can - but the networking model should mean that you do not need to do so. From 9ad322dabc4f9f915ff1fcbc99a84a179bcca6ac Mon Sep 17 00:00:00 2001 From: Alex Serbul <22218473+AlexanderSerbul@users.noreply.github.com> Date: Wed, 19 Jul 2023 13:50:34 +0300 Subject: [PATCH 012/229] Update connect-applications-service.md Semantic fix --- .../en/docs/tutorials/services/connect-applications-service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md index 24245584fafb8..c1aaf87411c71 100644 --- a/content/en/docs/tutorials/services/connect-applications-service.md +++ b/content/en/docs/tutorials/services/connect-applications-service.md @@ -189,7 +189,7 @@ Note there's no mention of your Service. This is because you created the replica before the Service. 
Another disadvantage of doing this is that the scheduler might put both Pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 Pods and waiting for the -Deployment to recreate them. This time around the Service exists *before* the +Deployment to recreate them. This time the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your Pods (provided all your nodes have equal capacity), as well as the right environment variables: From b1a61e5916ad381725d1328d3ae7737dbe40b991 Mon Sep 17 00:00:00 2001 From: Lino Ngando <12659036+lngando@users.noreply.github.com> Date: Thu, 20 Jul 2023 10:37:06 +0200 Subject: [PATCH 013/229] Update connect-applications-service.md It is the ReplicaSet who recreates dead pods and not the deployment --- .../en/docs/tutorials/services/connect-applications-service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md index 7425fa119bc67..bd14faf47f7a7 100644 --- a/content/en/docs/tutorials/services/connect-applications-service.md +++ b/content/en/docs/tutorials/services/connect-applications-service.md @@ -71,7 +71,7 @@ if you're curious. So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods -die with it, and the Deployment will create new ones, with different IPs. This is +die with it, and the ReplicaSet inside the Deployment will create new ones, with different IPs. This is the problem a Service solves. 
A Kubernetes Service is an abstraction which defines a logical set of Pods running From d263d8c938c9e679d879b4fc10fed04f2e2fce0d Mon Sep 17 00:00:00 2001 From: Marcelo Giles Date: Fri, 21 Jul 2023 00:20:34 -0700 Subject: [PATCH 014/229] Add note for Linux distros that don't include glibc Reorder sentence --- .../tools/kubeadm/install-kubeadm.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md index 6fbe04549d777..b93e23d80b63e 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md @@ -29,6 +29,14 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too * Swap disabled. You **MUST** disable swap in order for the kubelet to work properly. * For example, `sudo swapoff -a` will disable swapping temporarily. To make this change persistent across reboots, make sure swap is disabled in config files like `/etc/fstab`, `systemd.swap`, depending how it was configured on your system. +{{< note >}} +The `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`. +This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.) +but it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux. +The expectation is that the distribution either includes `glibc` or a [compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs) +that provides the expected symbols. 
+{{< /note >}} + ## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address} @@ -259,6 +267,10 @@ sudo mkdir -p /etc/systemd/system/kubelet.service.d curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ``` +{{< note >}} +Please refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions that do not include `glibc` by default. +{{< /note >}} + Install `kubectl` by following the instructions on [Install Tools page](/docs/tasks/tools/#kubectl). Enable and start `kubelet`: From 59eff873d88bbc9f64fad9a2f14c53d144f52c91 Mon Sep 17 00:00:00 2001 From: Marcelo Giles Date: Fri, 21 Jul 2023 22:51:38 -0700 Subject: [PATCH 015/229] Add note after restore cmd to specify that data-dir will be (re)created --- .../configure-upgrade-etcd.md | 22 ++++++++++++------- 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 3b4f90770bf84..f0b0786c73a25 100644 --- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -271,16 +271,16 @@ that is not currently used by an etcd process. Taking the snapshot will not affect the performance of the member. 
Below is an example for taking a snapshot of the keyspace served by -`$ENDPOINT` to the file `snapshotdb`: +`$ENDPOINT` to the file `snapshot.db`: ```shell -ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb +ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db ``` Verify the snapshot: ```shell -ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb +ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db ``` ```console @@ -339,19 +339,25 @@ employed to recover the data of a failed cluster. Before starting the restore operation, a snapshot file must be present. It can either be a snapshot file from a previous backup operation, or from a remaining [data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir). + Here is an example: ```shell -ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb +ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshot.db ``` -Another example for restoring using etcdctl options: + +Another example for restoring using `etcdctl` options: + ```shell -ETCDCTL_API=3 etcdctl snapshot restore --data-dir snapshotdb +ETCDCTL_API=3 etcdctl --data-dir snapshot restore snapshot.db ``` -Yet another example would be to first export the environment variable +where `` is a directory that will be created during the restore process. 
+ +Yet another example would be to first export the `ETCDCTL_API` environment variable: + ```shell export ETCDCTL_API=3 -etcdctl snapshot restore --data-dir snapshotdb +etcdctl --data-dir snapshot restore snapshot.db ``` For more information and examples on restoring a cluster from a snapshot file, see From 448c2a53b6d1dccc3cc04346a99afeea3913c42b Mon Sep 17 00:00:00 2001 From: Kundan Kumar Date: Fri, 21 Jul 2023 22:45:08 +0530 Subject: [PATCH 016/229] what's next for network plugin incorporated review comments --- .../compute-storage-net/network-plugins.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md index 5c6cfa7fc5842..a58f41abde860 100644 --- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md +++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md @@ -172,3 +172,9 @@ metadata: ## {{% heading "whatsnext" %}} +* Learn about [Network Policies](/docs/concepts/services-networking/network-policies/) using network + plugins +* Learn about [Cluster Networking](/docs/concepts/cluster-administration/networking/) + with network plugins +* Learn about the [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/) + From f2374598b5c65871aa312193e383d734033d6b0b Mon Sep 17 00:00:00 2001 From: Andrey Goran Date: Tue, 25 Jul 2023 12:32:16 +0400 Subject: [PATCH 017/229] ID localization: replaced {{< codenew ... >}} with {{% codenew ... 
%}} in all files
---
 .../concepts/cluster-administration/logging.md     | 10 +++++-----
 .../manage-deployment.md                           |  2 +-
 .../working-with-objects/kubernetes-objects.md     |  2 +-
 .../concepts/policy/pod-security-policy.md         |  6 +++---
 .../scheduling-eviction/assign-pod-node.md         |  6 +++---
 ...tries-to-pod-etc-hosts-with-host-aliases.md     |  2 +-
 .../connect-applications-service.md                |  8 ++++----
 .../services-networking/dns-pod-service.md         |  2 +-
 .../concepts/services-networking/dual-stack.md     |  6 +++---
 .../concepts/services-networking/ingress.md        |  2 +-
 .../workloads/controllers/daemonset.md             |  2 +-
 .../workloads/controllers/deployment.md            |  2 +-
 .../controllers/garbage-collection.md              |  2 +-
 .../docs/concepts/workloads/controllers/job.md     |  2 +-
 .../workloads/controllers/replicaset.md            |  6 +++---
 .../controllers/replicationcontroller.md           |  2 +-
 .../pods/pod-topology-spread-constraints.md        |  6 +++---
 .../dns-debugging-resolution.md                    |  2 +-
 .../memory-constraint-namespace.md                 | 10 +++++-----
 .../assign-memory-resource.md                      |  6 +++---
 .../assign-pods-nodes-using-node-affinity.md       |  4 ++--
 ...figure-liveness-readiness-startup-probes.md     |  6 +++---
 .../configure-persistent-volume-storage.md         |  6 +++---
 .../configure-pod-configmap.md                     | 18 +++++++++---------
 .../configure-service-account.md                   |  2 +-
 .../configure-volume-storage.md                    |  2 +-
 .../pull-image-private-registry.md                 |  2 +-
 .../quality-service-pod.md                         |  8 ++++----
 .../security-context.md                            |  8 ++++----
 .../share-process-namespace.md                     |  2 +-
 .../debug-application-introspection.md             |  2 +-
 .../get-shell-running-container.md                 |  2 +-
 .../define-command-argument-container.md           |  2 +-
 .../define-environment-variable-container.md       |  2 +-
 .../distribute-credentials-secure.md               | 10 +++++-----
 .../job/automated-tasks-with-cron-jobs.md          |  2 +-
 .../declarative-config.md                          | 10 +++++-----
 .../horizontal-pod-autoscale-walkthrough.md        |  4 ++--
 .../run-stateless-application-deployment.md        |  6 +++---
 content/id/docs/tutorials/hello-minikube.md        |  4 ++--
 .../stateful-application/basic-stateful-set.md     |  4 ++--
 .../expose-external-ip-address.md                  |  2 +-
 42 files changed, 97 insertions(+), 97 deletions(-)

diff --git a/content/id/docs/concepts/cluster-administration/logging.md b/content/id/docs/concepts/cluster-administration/logging.md
index e00745d1d979f..33266635c333d 100644
--- a/content/id/docs/concepts/cluster-administration/logging.md
+++ b/content/id/docs/concepts/cluster-administration/logging.md
@@ -21,7 +21,7 @@ Arsitektur _logging_ pada level klaster yang akan dijelaskan berikut mengasumsik

 Pada bagian ini, kamu dapat melihat contoh tentang dasar _logging_ pada Kubernetes yang mengeluarkan data pada _standard output_. Demonstrasi berikut ini menggunakan sebuah [spesifikasi pod](/examples/debug/counter-pod.yaml) dengan kontainer yang akan menuliskan beberapa teks ke _standard output_ tiap detik.

-{{< codenew file="debug/counter-pod.yaml" >}}
+{{% codenew file="debug/counter-pod.yaml" %}}

 Untuk menjalankan pod ini, gunakan perintah berikut:

@@ -126,13 +126,13 @@ Dengan menggunakan cara ini kamu dapat memisahkan aliran log dari bagian-bagian

 Sebagai contoh, sebuah pod berjalan pada satu kontainer tunggal, dan kontainer menuliskan ke dua berkas log yang berbeda, dengan dua format yang berbeda pula. Berikut ini _file_ konfigurasi untuk Pod:

-{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}}

 Hal ini akan menyulitkan untuk mengeluarkan log dalam format yang berbeda pada aliran log yang sama, meskipun kamu dapat me-_redirect_ keduanya ke `stdout` dari kontainer. Sebagai gantinya, kamu dapat menggunakan dua buah kontainer _sidecar_. Tiap kontainer _sidecar_ dapat membaca suatu berkas log tertentu dari _shared volume_ kemudian mengarahkan log ke `stdout`-nya sendiri.

 Berikut _file_ konfigurasi untuk pod yang memiliki dua buah kontainer _sidecard_:

-{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}

 Saat kamu menjalankan pod ini, kamu dapat mengakses tiap aliran log secara terpisah dengan menjalankan perintah berikut:

@@ -175,7 +175,7 @@ Menggunakan agen _logging_ di dalam kontainer _sidecar_ dapat berakibat pengguna
 Sebagai contoh, kamu dapat menggunakan [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/), yang menggunakan fluentd sebagai agen _logging_. Berikut ini dua _file_ konfigurasi yang dapat kamu pakai untuk mengimplementasikan cara ini. _File_ yang pertama berisi sebuah [ConfigMap](/id/docs/tasks/configure-pod-container/configure-pod-configmap/) untuk mengonfigurasi fluentd.

-{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
+{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}}

 {{< note >}}
 Konfigurasi fluentd berada diluar cakupan artikel ini. Untuk informasi lebih lanjut tentang cara mengonfigurasi fluentd, silakan lihat [dokumentasi resmi fluentd ](http://docs.fluentd.org/).
@@ -183,7 +183,7 @@ Konfigurasi fluentd berada diluar cakupan artikel ini. Untuk informasi lebih lan

 _File_ yang kedua mendeskripsikan sebuah pod yang memiliki kontainer _sidecar_ yang menjalankan fluentd. Pod ini melakukan _mount_ sebuah volume yang akan digunakan fluentd untuk mengambil data konfigurasinya.

-{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}

 Setelah beberapa saat, kamu akan mendapati pesan log pada _interface_ Stackdriver.

diff --git a/content/id/docs/concepts/cluster-administration/manage-deployment.md b/content/id/docs/concepts/cluster-administration/manage-deployment.md
index 4bdc7f790e6dc..12525e39e1754 100644
--- a/content/id/docs/concepts/cluster-administration/manage-deployment.md
+++ b/content/id/docs/concepts/cluster-administration/manage-deployment.md
@@ -17,7 +17,7 @@ Kamu telah melakukan _deploy_ pada aplikasimu dan mengeksposnya melalui sebuah _

 Banyak aplikasi memerlukan beberapa _resource_, seperti Deployment dan Service. Pengelolaan beberapa _resource_ dapat disederhanakan dengan mengelompokkannya dalam berkas yang sama (dengan pemisah `---` pada YAML). Contohnya:

-{{< codenew file="application/nginx-app.yaml" >}}
+{{% codenew file="application/nginx-app.yaml" %}}

 Beberapa _resource_ dapat dibuat seolah-olah satu _resource_:

diff --git a/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md
index aa702827b9ad4..5195243acb70a 100644
--- a/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md
+++ b/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md
@@ -64,7 +64,7 @@ akan mengubah informasi yang kamu berikan ke dalam format JSON ketika melakukan

 Berikut merupakan contoh _file_ `.yaml` yang menunjukkan _field_ dan _spec_ objek untuk _Deployment_:

-{{< codenew file="application/deployment.yaml" >}}
+{{% codenew file="application/deployment.yaml" %}}

 Salah satu cara untuk membuat _Deployment_ menggunakan _file_ `.yaml` seperti yang dijabarkan di atas adalah dengan menggunakan perintah

diff --git a/content/id/docs/concepts/policy/pod-security-policy.md b/content/id/docs/concepts/policy/pod-security-policy.md
index 3646246150e83..d89e6ca7398f3 100644
--- a/content/id/docs/concepts/policy/pod-security-policy.md
+++ b/content/id/docs/concepts/policy/pod-security-policy.md
@@ -146,7 +146,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n

 Beri definisi objek contoh PodSecurityPolicy dalam sebuah berkas. Ini adalah kebijakan yang mencegah pembuatan Pod-Pod yang _privileged_.

-{{< codenew file="policy/example-psp.yaml" >}}
+{{% codenew file="policy/example-psp.yaml" %}}

 Dan buatlah PodSecurityPolicy tersebut dengan `kubectl`:

@@ -297,11 +297,11 @@ podsecuritypolicy "example" deleted

 Berikut adalah kebijakan dengan batasan paling sedikit yang dapat kamu buat, ekuivalen dengan tidak menggunakan _admission controller_ Pod Security Policy:

-{{< codenew file="policy/privileged-psp.yaml" >}}
+{{% codenew file="policy/privileged-psp.yaml" %}}

 Berikut adalah sebuah contoh kebijakan yang restriktif yang mengharuskan pengguna-pengguna untuk berjalan sebagai pengguna yang _unprivileged_, memblokir kemungkinan eskalasi menjadi _root_, dan mengharuskan penggunaan beberapa mekanisme keamanan.

-{{< codenew file="policy/restricted-psp.yaml" >}}
+{{% codenew file="policy/restricted-psp.yaml" %}}

 ## Referensi Kebijakan

diff --git a/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md
index 4f0f838db5204..6139b0b2d00f0 100644
--- a/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -52,7 +52,7 @@ spec:

 Kemudian tambahkan sebuah `nodeSelector` seperti berikut:

-{{< codenew file="pods/pod-nginx.yaml" >}}
+{{% codenew file="pods/pod-nginx.yaml" %}}

 Ketika kamu menjalankan perintah `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`, pod tersebut akan dijadwalkan pada node yang memiliki label yang dirinci. Kamu dapat memastikan penambahan nodeSelector berhasil dengan menjalankan `kubectl get pods -o wide` dan melihat "NODE" tempat Pod ditugaskan.

@@ -110,7 +110,7 @@ Afinitas node dinyatakan sebagai _field_ `nodeAffinity` dari _field_ `affinity`

 Berikut ini contoh dari pod yang menggunakan afinitas node:

-{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
+{{% codenew file="pods/pod-with-node-affinity.yaml" %}}

 Aturan afinitas node tersebut menyatakan pod hanya bisa ditugaskan pada node dengan label yang memiliki kunci `kubernetes.io/e2e-az-name` dan bernilai `e2e-az1` atau `e2e-az2`. Selain itu, dari semua node yang memenuhi kriteria tersebut, mode dengan label dengan kunci `another-node-label-key` and bernilai `another-node-label-value` harus lebih diutamakan.

@@ -151,7 +151,7 @@ Afinitas antar pod dinyatakan sebagai _field_ `podAffinity` dari _field_ `affini

 #### Contoh pod yang menggunakan pod affinity:

-{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
+{{% codenew file="pods/pod-with-pod-affinity.yaml" %}}

 Afinitas pada pod tersebut menetapkan sebuah aturan afinitas pod dan aturan anti-afinitas pod. Pada contoh ini, `podAffinity` adalah `requiredDuringSchedulingIgnoredDuringExecution` sementara `podAntiAffinity` adalah `preferredDuringSchedulingIgnoredDuringExecution`. Aturan afinitas pod menyatakan bahwa pod dapat dijadwalkan pada node hanya jika node tersebut berada pada zona yang sama dengan minimal satu pod yang sudah berjalan yang memiliki label dengan kunci "security" dan bernilai "S1". (Lebih detail, pod dapat berjalan pada node N jika node N memiliki label dengan kunci `failure-domain.beta.kubernetes.io/zone`dan nilai V sehingga ada minimal satu node dalam klaster dengan kunci `failure-domain.beta.kubernetes.io/zone` dan bernilai V yang menjalankan pod yang memiliki label dengan kunci "security" dan bernilai "S1".) Aturan anti-afinitas pod menyatakan bahwa pod memilih untuk tidak dijadwalkan pada sebuah node jika node tersebut sudah menjalankan pod yang memiliki label dengan kunci "security" dan bernilai "S2".
 (Jika `topologyKey` adalah `failure-domain.beta.kubernetes.io/zone` maka dapat diartikan bahwa pod tidak dapat dijadwalkan pada node jika node berada pada zona yang sama dengan pod yang memiliki label dengan kunci "security" dan bernilai "S2".)
 Lihat [design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) untuk lebih banyak contoh afinitas dan anti-afinitas pod, baik `requiredDuringSchedulingIgnoredDuringExecution`

diff --git a/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md
index 26a2473f460c9..bd0f5339c8013 100644
--- a/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md
+++ b/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md
@@ -68,7 +68,7 @@ Selain _boilerplate default_, kita dapat menambahkan entri pada berkas
 `bar.remote` pada `10.1.2.3`, kita dapat melakukannya dengan cara menambahkan HostAliases pada Pod di bawah _field_ `.spec.hostAliases`:

-{{< codenew file="service/networking/hostaliases-pod.yaml" >}}
+{{% codenew file="service/networking/hostaliases-pod.yaml" %}}

 Pod ini kemudian dapat dihidupkan dengan perintah berikut:

diff --git a/content/id/docs/concepts/services-networking/connect-applications-service.md b/content/id/docs/concepts/services-networking/connect-applications-service.md
index 545f5a76e129a..b4fee74d27dae 100644
--- a/content/id/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/id/docs/concepts/services-networking/connect-applications-service.md
@@ -25,7 +25,7 @@ Panduan ini menggunakan server *nginx* sederhana untuk mendemonstrasikan konsepn
 Kita melakukan ini di beberapa contoh sebelumnya, tetapi mari kita lakukan sekali lagi dan berfokus pada prespektif jaringannya.
 Buat sebuah *nginx Pod*, dan perhatikan bahwa templat tersebut mempunyai spesifikasi *port* kontainer:

-{{< codenew file="service/networking/run-my-nginx.yaml" >}}
+{{% codenew file="service/networking/run-my-nginx.yaml" %}}

 Ini membuat aplikasi tersebut dapat diakses dari *node* manapun di dalam klaster kamu. Cek lokasi *node* dimana *Pod* tersebut berjalan:
 ```shell
@@ -66,7 +66,7 @@ service/my-nginx exposed

 Perintah di atas sama dengan `kubectl apply -f` dengan *yaml* sebagai berikut:

-{{< codenew file="service/networking/nginx-svc.yaml" >}}
+{{% codenew file="service/networking/nginx-svc.yaml" %}}

 Spesifikasi ini akan membuat *Service* yang membuka *TCP port 80* di setiap *Pod* dengan label `run: my-nginx` dan mengeksposnya ke dalam *port Service* (`targetPort`: adalah port kontainer yang menerima trafik, `port` adalah *service port* yang dapat berupa *port* apapun yang digunakan *Pod* lain untuk mengakses *Service*).

@@ -253,7 +253,7 @@ nginxsecret Opaque 2 1m

 Sekarang modifikasi replika *nginx* untuk menjalankan server *https* menggunakan *certificate* di dalam *secret* dan *Service* untuk mengekspos semua *port* (80 dan 443):

-{{< codenew file="service/networking/nginx-secure-app.yaml" >}}
+{{% codenew file="service/networking/nginx-secure-app.yaml" %}}

 Berikut catatan penting tentang manifes *nginx-secure-app*:

@@ -281,7 +281,7 @@ node $ curl -k https://10.244.3.5
 Perlu dicatat bahwa kita menggunakan parameter `-k` saat menggunakan *curl*, ini karena kita tidak tau apapun tentang *Pod* yang menjalankan *nginx* saat pembuatan seritifikat, jadi kita harus memberitahu *curl* untuk mengabaikan ketidakcocokan *CName*. Dengan membuat *Service*, kita menghubungkan *CName* yang digunakan pada *certificate* dengan nama pada *DNS* yang digunakan *Pod*.
 Lakukan pengujian dari sebuah *Pod* (*secret* yang sama digunakan untuk agar mudah, *Pod* tersebut hanya membutuhkan *nginx.crt* untuk mengakses *Service*)

-{{< codenew file="service/networking/curlpod.yaml" >}}
+{{% codenew file="service/networking/curlpod.yaml" %}}

 ```shell
 kubectl apply -f ./curlpod.yaml
diff --git a/content/id/docs/concepts/services-networking/dns-pod-service.md b/content/id/docs/concepts/services-networking/dns-pod-service.md
index efdba8d7a13be..a7dc17f96f4cd 100644
--- a/content/id/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/id/docs/concepts/services-networking/dns-pod-service.md
@@ -225,7 +225,7 @@ pada _field_ `dnsConfig`:

 Di bawah ini merupakan contoh sebuah Pod dengan pengaturan DNS kustom:

-{{< codenew file="service/networking/custom-dns.yaml" >}}
+{{% codenew file="service/networking/custom-dns.yaml" %}}

 Ketika Pod diatas dibuat, maka Container `test` memiliki isi berkas `/etc/resolv.conf` sebagai berikut:

diff --git a/content/id/docs/concepts/services-networking/dual-stack.md b/content/id/docs/concepts/services-networking/dual-stack.md
index 52e892f4f4704..6faed791617ff 100644
--- a/content/id/docs/concepts/services-networking/dual-stack.md
+++ b/content/id/docs/concepts/services-networking/dual-stack.md
@@ -96,19 +96,19 @@ Kubernetes akan mengalokasikan alamat IP (atau yang dikenal juga sebagai
 "_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi pertama
 kali untuk Service ini.

-{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
+{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}

 Spesifikasi Service berikut memasukkan bagian `ipFamily`. Sehingga Kubernetes akan mengalokasikan alamat IPv6 (atau yang dikenal juga sebagai "_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi untuk Service ini.

-{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}}
+{{% codenew file="service/networking/dual-stack-ipv6-svc.yaml" %}}

 Sebagai perbandingan, spesifikasi Service berikut ini akan dialokasikan sebuah alamat IPv4 (atau yang dikenal juga sebagai "_cluster IP_") dari `service-cluster-ip-range` yang dikonfigurasi untuk Service ini.

-{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}}
+{{% codenew file="service/networking/dual-stack-ipv4-svc.yaml" %}}

 ### Tipe _LoadBalancer_

diff --git a/content/id/docs/concepts/services-networking/ingress.md b/content/id/docs/concepts/services-networking/ingress.md
index 84db01b37e827..8e129ff223cf7 100644
--- a/content/id/docs/concepts/services-networking/ingress.md
+++ b/content/id/docs/concepts/services-networking/ingress.md
@@ -132,7 +132,7 @@ akan diarahkan pada *backend default*.
 Terdapat konsep Kubernetes yang memungkinkan kamu untuk mengekspos sebuah Service, lihat [alternatif lain](#alternatif-lain). Kamu juga bisa membuat spesifikasi Ingress dengan *backend default* yang tidak memiliki *rules*.

-{{< codenew file="service/networking/ingress.yaml" >}}
+{{% codenew file="service/networking/ingress.yaml" %}}

 Jika kamu menggunakan `kubectl apply -f` kamu dapat melihat:

diff --git a/content/id/docs/concepts/workloads/controllers/daemonset.md b/content/id/docs/concepts/workloads/controllers/daemonset.md
index ea21a7b268fc7..4edafc7f444af 100644
--- a/content/id/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/id/docs/concepts/workloads/controllers/daemonset.md
@@ -37,7 +37,7 @@ Kamu bisa definisikan DaemonSet dalam berkas YAML.
 Contohnya, berkas `daemonset.yaml` di bawah mendefinisikan DaemonSet yang menjalankan _image_ Docker fluentd-elasticsearch:

-{{< codenew file="controllers/daemonset.yaml" >}}
+{{% codenew file="controllers/daemonset.yaml" %}}

 * Buat DaemonSet berdasarkan berkas YAML:
   ```
diff --git a/content/id/docs/concepts/workloads/controllers/deployment.md b/content/id/docs/concepts/workloads/controllers/deployment.md
index 18f1542418e33..f6a3244174fe0 100644
--- a/content/id/docs/concepts/workloads/controllers/deployment.md
+++ b/content/id/docs/concepts/workloads/controllers/deployment.md
@@ -41,7 +41,7 @@ Berikut adalah penggunaan yang umum pada Deployment:

 Berikut adalah contoh Deployment. Dia membuat ReplicaSet untuk membangkitkan tiga Pod `nginx`:

-{{< codenew file="controllers/nginx-deployment.yaml" >}}
+{{% codenew file="controllers/nginx-deployment.yaml" %}}

 Dalam contoh ini:

diff --git a/content/id/docs/concepts/workloads/controllers/garbage-collection.md b/content/id/docs/concepts/workloads/controllers/garbage-collection.md
index 5eb00cf987caa..121d148b2f20b 100644
--- a/content/id/docs/concepts/workloads/controllers/garbage-collection.md
+++ b/content/id/docs/concepts/workloads/controllers/garbage-collection.md
@@ -22,7 +22,7 @@ Kamu juga bisa menspesifikasikan hubungan antara pemilik dan dependen dengan car

 Berikut adalah berkas untuk sebuah ReplicaSet yang memiliki tiga Pod:

-{{< codenew file="controllers/replicaset.yaml" >}}
+{{% codenew file="controllers/replicaset.yaml" %}}

 Jika kamu membuat ReplicaSet tersebut dan kemudian melihat metadata Pod, kamu akan melihat kolom OwnerReferences:

diff --git a/content/id/docs/concepts/workloads/controllers/job.md b/content/id/docs/concepts/workloads/controllers/job.md
index 4a7cce3f2a4a3..03a58ea21223b 100644
--- a/content/id/docs/concepts/workloads/controllers/job.md
+++ b/content/id/docs/concepts/workloads/controllers/job.md
@@ -33,7 +33,7 @@ Berikut merupakan contoh konfigurasi Job.
 Job ini melakukan komputasi π hingga digit ke 2000 kemudian memberikan hasilnya sebagai keluaran. Job tersebut memerlukan waktu 10 detik untuk dapat diselesaikan.

-{{< codenew file="controllers/job.yaml" >}}
+{{% codenew file="controllers/job.yaml" %}}

 Kamu dapat menjalankan contoh tersebut dengan menjalankan perintah berikut:

diff --git a/content/id/docs/concepts/workloads/controllers/replicaset.md b/content/id/docs/concepts/workloads/controllers/replicaset.md
index 57b1124208a91..e43ccc57c0ac6 100644
--- a/content/id/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/id/docs/concepts/workloads/controllers/replicaset.md
@@ -29,7 +29,7 @@ Hal ini berarti kamu boleh jadi tidak akan membutuhkan manipulasi objek ReplicaS

 ## Contoh

-{{< codenew file="controllers/frontend.yaml" >}}
+{{% codenew file="controllers/frontend.yaml" %}}

 Menyimpan _manifest_ ini dalam `frontend.yaml` dan mengirimkannya ke klaster Kubernetes akan membuat ReplicaSet yang telah didefinisikan beserta dengan Pod yang dikelola.

@@ -131,7 +131,7 @@ Walaupun kamu bisa membuat Pod biasa tanpa masalah, sangat direkomendasikan untu

 Mengambil contoh ReplicaSet _frontend_ sebelumnya, dan Pod yang ditentukan pada _manifest_ berikut:

-{{< codenew file="pods/pod-rs.yaml" >}}
+{{% codenew file="pods/pod-rs.yaml" %}}

 Karena Pod tersebut tidak memiliki Controller (atau objek lain) sebagai referensi pemilik yang sesuai dengan selektor dari ReplicaSet _frontend_, Pod tersebut akan langsung diakuisisi oleh ReplicaSet.

@@ -257,7 +257,7 @@ Jumlah Pod pada ReplicaSet dapat diatur dengan mengubah nilai dari _field_ `.spe

 Pengaturan jumlah Pod pada ReplicaSet juga dapat dilakukan mengunakan [Horizontal Pod Autoscalers (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/). Berikut adalah contoh HPA terhadap ReplicaSet yang telah dibuat pada contoh sebelumnya.

-{{< codenew file="controllers/hpa-rs.yaml" >}}
+{{% codenew file="controllers/hpa-rs.yaml" %}}

 Menyimpan _manifest_ ini dalam `hpa-rs.yaml` dan mengirimkannya ke klaster Kubernetes akan membuat HPA tersebut yang akan mengatur jumlah Pod pada ReplicaSet yang telah didefinisikan bergantung terhadap penggunaan CPU dari Pod yang direplikasi.

diff --git a/content/id/docs/concepts/workloads/controllers/replicationcontroller.md b/content/id/docs/concepts/workloads/controllers/replicationcontroller.md
index 48ec718a6df67..f53cac7f290c9 100644
--- a/content/id/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/id/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -36,7 +36,7 @@ Sebuah contoh sederhana adalah membuat sebuah objek ReplicationController untuk

 Contoh ReplicationController ini mengonfigurasi tiga salinan dari peladen web nginx.

-{{< codenew file="controllers/replication.yaml" >}}
+{{% codenew file="controllers/replication.yaml" %}}

 Jalankan contoh di atas dengan mengunduh berkas contoh dan menjalankan perintah ini:

diff --git a/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
index 6f27244ef6dea..188080abb5802 100644
--- a/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
+++ b/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
@@ -114,7 +114,7 @@ node2 dan node3 (`P` merepresentasikan Pod):

 Jika kita ingin Pod baru akan disebar secara merata berdasarkan Pod yang telah ada pada semua zona, maka _spec_ bernilai sebagai berikut:

-{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}}

 `topologyKey: zone` berarti persebaran merata hanya akan digunakan pada Node dengan pasangan label "zone: ".
 `whenUnsatisfiable: DoNotSchedule` memberitahukan penjadwal untuk membiarkan

@@ -161,7 +161,7 @@ Ini dibuat berdasarkan contoh sebelumnya. Misalkan kamu memiliki klaster dengan

 Kamu dapat menggunakan 2 TopologySpreadConstraint untuk mengatur persebaran Pod pada zona dan Node:

-{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}}

 Dalam contoh ini, untuk memenuhi batasan pertama, Pod yang baru hanya akan ditempatkan pada "zoneB", sedangkan untuk batasan kedua, Pod yang baru hanya akan ditempatkan pada "node4". Maka hasil dari

@@ -224,7 +224,7 @@ sesuai dengan nilai tersebut akan dilewatkan.
 berkas yaml seperti di bawah, jadi "mypod" akan ditempatkan pada "zoneB", bukan "zoneC". Demikian juga `spec.nodeSelector` akan digunakan.

-  {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+  {{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}

 ### Batasan _default_ pada tingkat klaster

diff --git a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 67b451db52b34..fdc31a1147e9c 100644
--- a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -21,7 +21,7 @@ kube-dns.

 ### Membuat Pod sederhana yang digunakan sebagai lingkungan pengujian

-{{< codenew file="admin/dns/dnsutils.yaml" >}}
+{{% codenew file="admin/dns/dnsutils.yaml" %}}

 Gunakan manifes berikut untuk membuat sebuah Pod:

diff --git a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
index 1aae3e38f009b..3e24947edb5e3 100644
--- a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
+++ b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
@@ -40,7 +40,7 @@ kubectl create namespace constraints-mem-example

 Berikut berkas konfigurasi untuk sebuah LimitRange:

-{{< codenew file="admin/resource/memory-constraints.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints.yaml" %}}

 Membuat LimitRange:

@@ -85,7 +85,7 @@ Berikut berkas konfigurasi Pod yang memiliki satu Container. Manifes Container
 menentukan permintaan memori 600 MiB dan limit memori 800 MiB. Nilai tersebut memenuhi batasan minimum dan maksimum memori yang ditentukan oleh LimitRange.

-{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod.yaml" %}}

 Membuat Pod:

@@ -127,7 +127,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example

 Berikut berkas konfigurasi untuk sebuah Pod yang memiliki satu Container. Container tersebut menentukan permintaan memori 800 MiB dan batas memori 1.5 GiB.

-{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod-2.yaml" %}}

 Mencoba membuat Pod:

@@ -148,7 +148,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i

 Berikut berkas konfigurasi untuk sebuah Pod yang memiliki satu Container. Container tersebut menentukan permintaan memori 100 MiB dan limit memori 800 MiB.

-{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod-3.yaml" %}}

 Mencoba membuat Pod:

@@ -171,7 +171,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i

 Berikut berkas konfigurasi untuk sebuah Pod yang memiliki satu Container. Container tersebut tidak menentukan permintaan memori dan juga limit memori.

-{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod-4.yaml" %}}

 Mencoba membuat Pod:

diff --git a/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md
index bb092e86a58c6..4a50fb84159c2 100644
--- a/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md
+++ b/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md
@@ -69,7 +69,7 @@ Dalam latihan ini, kamu akan membuat Pod yang memiliki satu Container. Container
 sebesar 100 MiB dan batasan memori sebesar 200 MiB. Berikut berkas konfigurasi untuk Pod:

-{{< codenew file="pods/resource/memory-request-limit.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit.yaml" %}}

 Bagian `args` dalam berkas konfigurasi memberikan argumen untuk Container pada saat dimulai. Argumen`"--vm-bytes", "150M"` memberi tahu Container agar mencoba mengalokasikan memori sebesar 150 MiB.

@@ -139,7 +139,7 @@ Dalam latihan ini, kamu membuat Pod yang mencoba mengalokasikan lebih banyak mem

 Berikut adalah berkas konfigurasi untuk Pod yang memiliki satu Container dengan berkas permintaan memori sebesar 50 MiB dan batasan memori sebesar 100 MiB:

-{{< codenew file="pods/resource/memory-request-limit-2.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit-2.yaml" %}}

 Dalam bagian `args` dari berkas konfigurasi, kamu dapat melihat bahwa Container tersebut akan mencoba mengalokasikan memori sebesar 250 MiB, yang jauh di atas batas yaitu 100 MiB.

@@ -250,7 +250,7 @@ kapasitas dari Node mana pun dalam klaster kamu. Berikut adalah berkas konfigura
 Container dengan permintaan memori 1000 GiB, yang kemungkinan besar melebihi kapasitas dari setiap Node dalam klaster kamu.

-{{< codenew file="pods/resource/memory-request-limit-3.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit-3.yaml" %}}

 Buatlah Pod:

diff --git a/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
index a60d862fd26c2..3d4a5d079b8d1 100644
--- a/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
+++ b/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
@@ -64,7 +64,7 @@ Afinitas Node di dalam klaster Kubernetes.
 Konfigurasi ini menunjukkan sebuah Pod yang memiliki afinitas node `requiredDuringSchedulingIgnoredDuringExecution`, `disktype: ssd`.
 Dengan kata lain, Pod hanya akan dijadwalkan hanya pada Node yang memiliki label `disktype=ssd`.

-{{< codenew file="pods/pod-nginx-required-affinity.yaml" >}}
+{{% codenew file="pods/pod-nginx-required-affinity.yaml" %}}

 1. Terapkan konfigurasi berikut untuk membuat sebuah Pod yang akan dijadwalkan pada Node yang kamu pilih:

@@ -90,7 +90,7 @@ Dengan kata lain, Pod hanya akan dijadwalkan hanya pada Node yang memiliki label

 Konfigurasi ini memberikan deskripsi sebuah Pod yang memiliki afinitas Node `preferredDuringSchedulingIgnoredDuringExecution`,`disktype: ssd`.
 Artinya Pod akan diutamakan dijalankan pada Node yang memiliki label `disktype=ssd`.

-{{< codenew file="pods/pod-nginx-preferred-affinity.yaml" >}}
+{{% codenew file="pods/pod-nginx-preferred-affinity.yaml" %}}

 1. Terapkan konfigurasi berikut untuk membuat sebuah Pod yang akan dijadwalkan pada Node yang kamu pilih:

diff --git a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index d56f59a09a646..e03a9d97a331a 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -46,7 +46,7 @@ Kubernetes menyediakan _probe liveness_ untuk mendeteksi dan memperbaiki situasi

 Pada latihan ini, kamu akan membuat Pod yang menjalankan Container dari image `registry.k8s.io/busybox`. Berikut ini adalah berkas konfigurasi untuk Pod tersebut:

-{{< codenew file="pods/probe/exec-liveness.yaml" >}}
+{{% codenew file="pods/probe/exec-liveness.yaml" %}}

 Pada berkas konfigurasi di atas, kamu dapat melihat bahwa Pod memiliki satu `Container`.
 _Field_ `periodSeconds` menentukan bahwa kubelet harus melakukan _probe liveness_ setiap 5 detik.

@@ -128,7 +128,7 @@ liveness-exec 1/1 Running 1 1m

 Jenis kedua dari _probe liveness_ menggunakan sebuah permintaan GET HTTP. Berikut ini berkas konfigurasi untuk Pod yang menjalankan Container dari image `registry.k8s.io/liveness`.

-{{< codenew file="pods/probe/http-liveness.yaml" >}}
+{{% codenew file="pods/probe/http-liveness.yaml" %}}

 Pada berkas konfigurasi tersebut, kamu dapat melihat Pod memiliki sebuah Container.
 _Field_ `periodSeconds` menentukan bahwa kubelet harus mengerjakan _probe liveness_ setiap 3 detik.

@@ -190,7 +190,7 @@ kubelet akan mencoba untuk membuka soket pada Container kamu dengan porta terten
 Jika koneksi dapat terbentuk dengan sukses, maka Container dianggap dalam kondisi sehat. Namun jika tidak berhasil terbentuk, maka Container dianggap gagal.

-{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
+{{% codenew file="pods/probe/tcp-liveness-readiness.yaml" %}}

 Seperti yang terlihat, konfigurasi untuk pemeriksaan TCP cukup mirip dengan pemeriksaan HTTP.
 Contoh ini menggunakan _probe readiness_ dan _liveness_.

diff --git a/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
index 858342880e7e1..79db4e848a754 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
@@ -93,7 +93,7 @@ untuk mengatur

 Berikut berkas konfigurasi untuk hostPath PersistentVolume:

-{{< codenew file="pods/storage/pv-volume.yaml" >}}
+{{% codenew file="pods/storage/pv-volume.yaml" %}}

 Berkas konfigurasi tersebut menentukan bahwa volume berada di `/mnt/data` pada
 klaster Node. Konfigurasi tersebut juga menentukan ukuran dari 10 gibibytes dan

@@ -129,7 +129,7 @@ setidaknya untuk satu Node.

 Berikut berkas konfigurasi untuk PersistentVolumeClaim:

-{{< codenew file="pods/storage/pv-claim.yaml" >}}
+{{% codenew file="pods/storage/pv-claim.yaml" %}}

 Membuat sebuah PersistentVolumeClaim:

@@ -169,7 +169,7 @@ Langkah selanjutnya adalah membuat sebuah Pod yang akan menggunakan PersistentVo

 Berikut berkas konfigurasi untuk Pod:

-{{< codenew file="pods/storage/pv-pod.yaml" >}}
+{{% codenew file="pods/storage/pv-pod.yaml" %}}

 Perhatikan bahwa berkas konfigurasi Pod menentukan sebuah PersistentVolumeClaim, tetapi tidak menentukan PeristentVolume. Dari sudut pandang Pod, _claim_ adalah volume.

diff --git a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
index bfdad56610635..201aef5fedcf9 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -467,7 +467,7 @@ configmap/special-config-2-c92b5mmcf2 created

 2. Memberikan nilai `special.how` yang sudah terdapat pada ConfigMap pada variabel _environment_ `SPECIAL_LEVEL_KEY` di spesifikasi Pod.

-   {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}}
+   {{% codenew file="pods/pod-single-configmap-env-variable.yaml" %}}

   Buat Pod:

@@ -481,7 +481,7 @@ configmap/special-config-2-c92b5mmcf2 created

 * Seperti pada contoh sebelumnya, buat ConfigMap terlebih dahulu.

-  {{< codenew file="configmap/configmaps.yaml" >}}
+  {{% codenew file="configmap/configmaps.yaml" %}}

   Buat ConfigMap:

@@ -491,7 +491,7 @@ configmap/special-config-2-c92b5mmcf2 created

 * Tentukan variabel _environment_ pada spesifikasi Pod.

-  {{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}}
+  {{% codenew file="pods/pod-multiple-configmap-env-variable.yaml" %}}

   Buat Pod:

@@ -509,7 +509,7 @@ Fungsi ini tersedia pada Kubernetes v1.6 dan selanjutnya.

 * Buat ConfigMap yang berisi beberapa pasangan kunci-nilai.

-  {{< codenew file="configmap/configmap-multikeys.yaml" >}}
+  {{% codenew file="configmap/configmap-multikeys.yaml" %}}

   Buat ConfigMap:

@@ -519,7 +519,7 @@ Fungsi ini tersedia pada Kubernetes v1.6 dan selanjutnya.

 * Gunakan `envFrom` untuk menentukan seluruh data pada ConfigMap sebagai variabel _environment_ kontainer. Kunci dari ConfigMap akan menjadi nama variabel _environment_ di dalam Pod.

-  {{< codenew file="pods/pod-configmap-envFrom.yaml" >}}
+  {{% codenew file="pods/pod-configmap-envFrom.yaml" %}}

   Buat Pod:

@@ -536,7 +536,7 @@ Kamu dapat menggunakan variabel _environment_ yang ditentukan ConfigMap pada bag

 Sebagai contoh, spesifikasi Pod berikut

-{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}}
+{{% codenew file="pods/pod-configmap-env-var-valueFrom.yaml" %}}

 dibuat dengan menjalankan

@@ -556,7 +556,7 @@ Seperti yang sudah dijelaskan pada [Membuat ConfigMap dari berkas](#membuat-conf

 Contoh pada bagian ini merujuk pada ConfigMap bernama `special-config`, Seperti berikut.

-{{< codenew file="configmap/configmap-multikeys.yaml" >}}
+{{% codenew file="configmap/configmap-multikeys.yaml" %}}

 Buat ConfigMap:

@@ -570,7 +570,7 @@ Tambahkan nama ConfigMap di bawah bagian `volumes` pada spesifikasi Pod.

 Hal ini akan menambahkan data ConfigMap pada direktori yang ditentukan oleh `volumeMounts.mountPath` (pada kasus ini, `/etc/config`). Bagian `command` berisi daftar berkas pada direktori dengan nama-nama yang sesuai dengan kunci-kunci pada ConfigMap.

-{{< codenew file="pods/pod-configmap-volume.yaml" >}}
+{{% codenew file="pods/pod-configmap-volume.yaml" %}}

 Buat Pod:

@@ -594,7 +594,7 @@ Jika ada beberapa berkas pada direktori `/etc/config/`, berkas-berkas tersebut a

 Gunakan kolom `path` untuk menentukan jalur berkas yang diinginkan untuk butir tertentu pada ConfigMap (butir ConfigMap tertentu). Pada kasus ini, butir `SPECIAL_LEVEL` akan akan dipasangkan sebagai `config-volume` pada `/etc/config/keys`.

-{{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}}
+{{% codenew file="pods/pod-configmap-volume-specific-key.yaml" %}}

 Buat Pod:

diff --git a/content/id/docs/tasks/configure-pod-container/configure-service-account.md b/content/id/docs/tasks/configure-pod-container/configure-service-account.md
index e53812d65a8b3..f469b257d85cd 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-service-account.md
@@ -282,7 +282,7 @@ Kubelet juga dapat memproyeksikan _token_ ServiceAccount ke Pod. Kamu dapat mene

 Perilaku ini diatur pada PodSpec menggunakan tipe ProjectedVolume yaitu [ServiceAccountToken](/id/docs/concepts/storage/volumes/#projected). Untuk memungkinkan Pod dengan _token_ dengan pengguna bertipe _"vault"_ dan durasi validitas selama dua jam, kamu harus mengubah bagian ini pada PodSpec:

-{{< codenew file="pods/pod-projected-svc-token.yaml" >}}
+{{% codenew file="pods/pod-projected-svc-token.yaml" %}}

 Buat Pod:

diff --git a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md
index 02d664d530457..e6b6f365a45c0 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md
@@ -25,7 +25,7 @@ _Filesystem_ dari sebuah Container hanya hidup selama Container itu juga hidup.

 Pada latihan ini, kamu membuat sebuah Pod yang menjalankan sebuah Container. Pod ini memiliki sebuah Volume dengan tipe [emptyDir](/id/docs/concepts/storage/volumes/#emptydir) yang tetap bertahan, meski Container berakhir dan dimulai ulang. Berikut berkas konfigurasi untuk Pod:

-{{< codenew file="pods/storage/redis.yaml" >}}
+{{% codenew file="pods/storage/redis.yaml" %}}

 1.
Membuat Pod: diff --git a/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md index 50aad8de9a15a..3fe2ce8407c3b 100644 --- a/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -176,7 +176,7 @@ Kamu telah berhasil menetapkan kredensial Docker kamu sebagai sebuah Secret yang Berikut ini adalah berkas konfigurasi untuk Pod yang memerlukan akses ke kredensial Docker kamu pada `regcred`: -{{< codenew file="pods/private-reg-pod.yaml" >}} +{{% codenew file="pods/private-reg-pod.yaml" %}} Unduh berkas diatas: diff --git a/content/id/docs/tasks/configure-pod-container/quality-service-pod.md b/content/id/docs/tasks/configure-pod-container/quality-service-pod.md index c5337c8854a75..5ced04b84f1fb 100644 --- a/content/id/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/id/docs/tasks/configure-pod-container/quality-service-pod.md @@ -41,7 +41,7 @@ Agar sebuah Pod memiliki kelas QoS Guaranteed: Berikut adalah berkas konfigurasi untuk sebuah Pod dengan satu Container. Container tersebut memiliki sebuah batasan memori dan sebuah permintaan memori, keduanya sama dengan 200MiB. Container itu juga mempunyai batasan CPU dan permintaan CPU yang sama sebesar 700 milliCPU: -{{< codenew file="pods/qos/qos-pod.yaml" >}} +{{% codenew file="pods/qos/qos-pod.yaml" %}} Buatlah Pod: @@ -100,7 +100,7 @@ Sebuah Pod akan mendapatkan kelas QoS Burstable apabila: Berikut adalah berkas konfigurasi untuk Pod dengan satu Container. Container yang dimaksud memiliki batasan memori sebesar 200MiB dan permintaan memori sebesar 100MiB. 
-{{< codenew file="pods/qos/qos-pod-2.yaml" >}} +{{% codenew file="pods/qos/qos-pod-2.yaml" %}} Buatlah Pod: @@ -147,7 +147,7 @@ Agar Pod mendapatkan kelas QoS BestEffort, Container dalam pod tidak boleh memiliki batasan atau permintaan memori atau CPU. Berikut adalah berkas konfigurasi untuk Pod dengan satu Container. Container yang dimaksud tidak memiliki batasan atau permintaan memori atau CPU apapun. -{{< codenew file="pods/qos/qos-pod-3.yaml" >}} +{{% codenew file="pods/qos/qos-pod-3.yaml" %}} Buatlah Pod: @@ -183,7 +183,7 @@ kubectl delete pod qos-demo-3 --namespace=qos-example Berikut adalah konfigurasi berkas untuk Pod yang memiliki dua Container. Satu Container menentukan permintaan memori sebesar 200MiB. Container yang lain tidak menentukan permintaan atau batasan apapun. -{{< codenew file="pods/qos/qos-pod-4.yaml" >}} +{{% codenew file="pods/qos/qos-pod-4.yaml" %}} Perhatikan bahwa Pod ini memenuhi kriteria untuk kelas QoS Burstable. Maksudnya, Container tersebut tidak memenuhi kriteria untuk kelas QoS Guaranteed, dan satu dari Container tersebut memiliki permintaan memori. diff --git a/content/id/docs/tasks/configure-pod-container/security-context.md b/content/id/docs/tasks/configure-pod-container/security-context.md index d190468399cf1..a8bd1bfdf9620 100644 --- a/content/id/docs/tasks/configure-pod-container/security-context.md +++ b/content/id/docs/tasks/configure-pod-container/security-context.md @@ -50,7 +50,7 @@ dalam spesifikasi Pod. Bagian `securityContext` adalah sebuah objek Aturan keamanan yang kamu tetapkan untuk Pod akan berlaku untuk semua Container dalam Pod tersebut. Berikut sebuah berkas konfigurasi untuk Pod yang memiliki volume `securityContext` dan `emptyDir`: -{{< codenew file="pods/security/security-context.yaml" >}} +{{% codenew file="pods/security/security-context.yaml" %}} Dalam berkas konfigurasi ini, bagian `runAsUser` menentukan bahwa dalam setiap Container pada Pod, semua proses dijalankan oleh ID pengguna 1000. 
Bagian `runAsGroup` menentukan grup utama dengan ID 3000 untuk @@ -191,7 +191,7 @@ ada aturan yang tumpang tindih. Aturan pada Container mempengaruhi volume pada P Berikut berkas konfigurasi untuk Pod yang hanya memiliki satu Container. Keduanya, baik Pod dan Container memiliki bagian `securityContext` sebagai berikut: -{{< codenew file="pods/security/security-context-2.yaml" >}} +{{% codenew file="pods/security/security-context-2.yaml" %}} Buatlah Pod tersebut: @@ -244,7 +244,7 @@ bagian `capabilities` pada `securityContext` di manifes Container-nya. Pertama-tama, mari melihat apa yang terjadi ketika kamu tidak menyertakan bagian `capabilities`. Berikut ini adalah berkas konfigurasi yang tidak menambah atau mengurangi kemampuan apa pun dari Container: -{{< codenew file="pods/security/security-context-3.yaml" >}} +{{% codenew file="pods/security/security-context-3.yaml" %}} Buatlah Pod tersebut: @@ -306,7 +306,7 @@ Container ini memiliki kapabilitas tambahan yang sudah ditentukan. Berikut ini adalah berkas konfigurasi untuk Pod yang hanya menjalankan satu Container. Konfigurasi ini menambahkan kapabilitas `CAP_NET_ADMIN` dan `CAP_SYS_TIME`: -{{< codenew file="pods/security/security-context-4.yaml" >}} +{{% codenew file="pods/security/security-context-4.yaml" %}} Buatlah Pod tersebut: diff --git a/content/id/docs/tasks/configure-pod-container/share-process-namespace.md b/content/id/docs/tasks/configure-pod-container/share-process-namespace.md index 9b32d74b3cdf6..c764bd8df3eaa 100644 --- a/content/id/docs/tasks/configure-pod-container/share-process-namespace.md +++ b/content/id/docs/tasks/configure-pod-container/share-process-namespace.md @@ -34,7 +34,7 @@ proses pemecahan masalah (_troubleshoot_) image kontainer yang tidak memiliki ut Pembagian _namespace_ proses (_Process Namespace Sharing_) diaktifkan menggunakan _field_ `shareProcessNamespace` `v1.PodSpec`. 
Sebagai contoh: -{{< codenew file="pods/share-process-namespace.yaml" >}} +{{% codenew file="pods/share-process-namespace.yaml" %}} 1. Buatlah sebuah Pod `nginx` di dalam klaster kamu: diff --git a/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md index 746c46f045a09..a2c7b2f318610 100644 --- a/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md +++ b/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md @@ -18,7 +18,7 @@ Pod kamu. Namun ada sejumlah cara untuk mendapatkan lebih banyak informasi tenta Dalam contoh ini, kamu menggunakan Deployment untuk membuat dua buah Pod, yang hampir sama dengan contoh sebelumnya. -{{< codenew file="application/nginx-with-request.yaml" >}} +{{% codenew file="application/nginx-with-request.yaml" %}} Buat Deployment dengan menjalankan perintah ini: diff --git a/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md b/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md index e15a8a4df6532..2c9e5fe38e781 100644 --- a/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md +++ b/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md @@ -26,7 +26,7 @@ mendapatkan _shell_ untuk masuk ke dalam Container yang sedang berjalan. Dalam latihan ini, kamu perlu membuat Pod yang hanya memiliki satu Container saja. Container tersebut menjalankan _image_ nginx. 
Berikut ini adalah berkas konfigurasi untuk Pod tersebut: -{{< codenew file="application/shell-demo.yaml" >}} +{{% codenew file="application/shell-demo.yaml" %}} Buatlah Pod tersebut: diff --git a/content/id/docs/tasks/inject-data-application/define-command-argument-container.md b/content/id/docs/tasks/inject-data-application/define-command-argument-container.md index 9f2cd7a7aefc8..f2d248232e004 100644 --- a/content/id/docs/tasks/inject-data-application/define-command-argument-container.md +++ b/content/id/docs/tasks/inject-data-application/define-command-argument-container.md @@ -44,7 +44,7 @@ Merujuk pada [catatan](#catatan) di bawah. Pada latihan ini, kamu akan membuat sebuah Pod baru yang menjalankan sebuah Container. Berkas konfigurasi untuk Pod mendefinisikan sebuah perintah dan dua argumen: -{{< codenew file="pods/commands.yaml" >}} +{{% codenew file="pods/commands.yaml" %}} 1. Buat sebuah Pod dengan berkas konfigurasi YAML: diff --git a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md index 0f35ef27f7188..584866d4c4d12 100644 --- a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md +++ b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md @@ -30,7 +30,7 @@ Dalam latihan ini, kamu membuat sebuah Pod yang menjalankan satu buah Container. Berkas konfigurasi untuk Pod tersebut mendefinisikan sebuah variabel lingkungan dengan nama `DEMO_GREETING` yang bernilai `"Hello from the environment"`. Berikut berkas konfigurasi untuk Pod tersebut: -{{< codenew file="pods/inject/envars.yaml" >}} +{{% codenew file="pods/inject/envars.yaml" %}} 1. 
Buatlah sebuah Pod berdasarkan berkas konfigurasi YAML tersebut: diff --git a/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md index c08db9484f6cc..5d4c3633fac96 100644 --- a/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md +++ b/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md @@ -37,7 +37,7 @@ Gunakan alat yang telah dipercayai oleh OS kamu untuk menghindari risiko dari pe Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Secret yang akan menampung nama pengguna dan kata sandi kamu: -{{< codenew file="pods/inject/secret.yaml" >}} +{{% codenew file="pods/inject/secret.yaml" %}} 1. Membuat Secret @@ -95,7 +95,7 @@ Tentu saja ini lebih mudah. Pendekatan yang mendetil setiap langkah di atas bert Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod: -{{< codenew file="pods/inject/secret-pod.yaml" >}} +{{% codenew file="pods/inject/secret-pod.yaml" %}} 1. Membuat Pod: @@ -157,7 +157,7 @@ Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod: * Tentukan nilai `backend-username` yang didefinisikan di Secret ke variabel lingkungan `SECRET_USERNAME` di dalam spesifikasi Pod. - {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}} + {{% codenew file="pods/inject/pod-single-secret-env-variable.yaml" %}} * Membuat Pod: @@ -187,7 +187,7 @@ Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod: * Definisikan variabel lingkungan di dalam spesifikasi Pod. - {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}} + {{% codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" %}} * Membuat Pod: @@ -221,7 +221,7 @@ Fitur ini tersedia mulai dari Kubernetes v1.6 dan yang lebih baru. 
* Gunakan envFrom untuk mendefinisikan semua data Secret sebagai variabel lingkungan Container. _Key_ dari Secret akan mennjadi nama variabel lingkungan di dalam Pod. - {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}} + {{% codenew file="pods/inject/pod-secret-envFrom.yaml" %}} * Membuat Pod: diff --git a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md index 349a283d7a01a..6bf0f53532aa8 100644 --- a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md +++ b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md @@ -34,7 +34,7 @@ Untuk informasi lanjut mengenai keterbatasan, lihat [CronJob](/id/docs/concepts/ CronJob membutuhkan sebuah berkas konfigurasi. Ini adalah contoh dari berkas konfigurasi CronJob `.spec` yang akan mencetak waktu sekarang dan pesan "hello" setiap menit: -{{< codenew file="application/job/cronjob.yaml" >}} +{{% codenew file="application/job/cronjob.yaml" %}} Jalankan contoh CronJob menggunakan perintah berikut: diff --git a/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md index 88eeaf38d3079..073937e189409 100644 --- a/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md +++ b/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md @@ -52,7 +52,7 @@ Tambahkan parameter `-R` untuk memproses seluruh direktori secara rekursif. Berikut sebuah contoh *file* konfigurasi objek: -{{< codenew file="application/simple_deployment.yaml" >}} +{{% codenew file="application/simple_deployment.yaml" %}} Jalankan perintah `kubectl diff` untuk menampilkan objek yang akan dibuat: @@ -135,7 +135,7 @@ Tambahkan argumen `-R` untuk memproses seluruh direktori secara rekursif. 
Berikut sebuah contoh *file* konfigurasi: -{{< codenew file="application/simple_deployment.yaml" >}} +{{% codenew file="application/simple_deployment.yaml" %}} Buat objek dengan perintah `kubectl apply`:: @@ -248,7 +248,7 @@ spec: Perbarui *file* konfigurasi `simple_deployment.yaml`, ubah *image* dari `nginx:1.7.9` ke `nginx:1.11.9`, dan hapus *field* `minReadySeconds`: -{{< codenew file="application/update_deployment.yaml" >}} +{{% codenew file="application/update_deployment.yaml" %}} Terapkan perubahan yang telah dibuat di *file* konfigurasi: @@ -379,7 +379,7 @@ Perintah `kubectl apply` menulis konten dari berkas konfigurasi ke anotasi `kube Agar lebih jelas, simak contoh berikut. Misalkan, berikut adalah *file* konfigurasi untuk sebuah objek Deployment: -{{< codenew file="application/update_deployment.yaml" >}} +{{% codenew file="application/update_deployment.yaml" %}} Juga, misalkan, berikut adalah konfigurasi *live* dari objek Deployment yang sama: @@ -627,7 +627,7 @@ TODO(pwittrock): *Uncomment* ini untuk versi 1.6 Berikut adalah sebuah *file* konfigurasi untuk sebuah Deployment. 
Berkas berikut tidak menspesifikasikan `strategy`: -{{< codenew file="application/simple_deployment.yaml" >}} +{{% codenew file="application/simple_deployment.yaml" %}} Buat objek dengan perintah `kubectl apply`: diff --git a/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 7a23efa6ff3c4..1c16b087b79db 100644 --- a/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -57,7 +57,7 @@ Bagian ini mendefinisikan laman index.php yang melakukan beberapa komputasi inte Pertama, kita akan memulai Deployment yang menjalankan _image_ dan mengeksposnya sebagai Service menggunakan konfigurasi berikut: -{{< codenew file="application/php-apache.yaml" >}} +{{% codenew file="application/php-apache.yaml" %}} Jalankan perintah berikut: @@ -434,7 +434,7 @@ Semua metrik di HorizontalPodAutoscaler dan metrik API ditentukan menggunakan no Daripada menggunakan perintah `kubectl autoscale` untuk membuat HorizontalPodAutoscaler secara imperatif, kita dapat menggunakan berkas berikut untuk membuatnya secara deklaratif: -{{< codenew file="application/hpa/php-apache.yaml" >}} +{{% codenew file="application/hpa/php-apache.yaml" %}} Kita akan membuat _autoscaler_ dengan menjalankan perintah berikut: diff --git a/content/id/docs/tasks/run-application/run-stateless-application-deployment.md b/content/id/docs/tasks/run-application/run-stateless-application-deployment.md index 74e76c827be57..3e96eb1fda1e6 100644 --- a/content/id/docs/tasks/run-application/run-stateless-application-deployment.md +++ b/content/id/docs/tasks/run-application/run-stateless-application-deployment.md @@ -38,7 +38,7 @@ Kamu dapat menjalankan aplikasi dengan membuat sebuah objek Deployment Kubernete dapat mendeskripsikan sebuah Deployment di dalam berkas YAML. 
Sebagai contohnya, berkas YAML berikut mendeskripsikan sebuah Deployment yang menjalankan _image_ Docker nginx:1.14.2: -{{< codenew file="application/deployment.yaml" >}} +{{% codenew file="application/deployment.yaml" %}} 1. Buatlah sebuah Deployment berdasarkan berkas YAML: @@ -100,7 +100,7 @@ YAML berikut mendeskripsikan sebuah Deployment yang menjalankan _image_ Docker n Kamu dapat mengubah Deployment dengan cara mengaplikasikan berkas YAML yang baru. Berkas YAML ini memberikan spesifikasi Deployment untuk menggunakan Nginx versi 1.16.1. -{{< codenew file="application/deployment-update.yaml" >}} +{{% codenew file="application/deployment-update.yaml" %}} 1. Terapkan berkas YAML yang baru: @@ -116,7 +116,7 @@ Kamu dapat meningkatkan jumlah Pod di dalam Deployment dengan menerapkan berkas YAML baru. Berkas YAML ini akan meningkatkan jumlah replika menjadi 4, yang nantinya memberikan spesifikasi agar Deployment memiliki 4 buah Pod. -{{< codenew file="application/deployment-scale.yaml" >}} +{{% codenew file="application/deployment-scale.yaml" %}} 1. Terapkan berkas YAML: diff --git a/content/id/docs/tutorials/hello-minikube.md b/content/id/docs/tutorials/hello-minikube.md index 6790dbf47fba5..d2e4a5de76677 100644 --- a/content/id/docs/tutorials/hello-minikube.md +++ b/content/id/docs/tutorials/hello-minikube.md @@ -38,9 +38,9 @@ Kamupun bisa mengikuti tutorial ini kalau sudah instalasi minikube di lokal. Sil Tutorial ini menyediakan image Kontainer yang dibuat melalui barisan kode berikut: -{{< codenew language="js" file="minikube/server.js" >}} +{{% codenew language="js" file="minikube/server.js" %}} -{{< codenew language="conf" file="minikube/Dockerfile" >}} +{{% codenew language="conf" file="minikube/Dockerfile" %}} Untuk info lebih lanjut tentang perintah `docker build`, baca [dokumentasi Docker](https://docs.docker.com/engine/reference/commandline/build/). 
diff --git a/content/id/docs/tutorials/stateful-application/basic-stateful-set.md b/content/id/docs/tutorials/stateful-application/basic-stateful-set.md index b664a3bb8abf1..7ce5437d61bfc 100644 --- a/content/id/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/id/docs/tutorials/stateful-application/basic-stateful-set.md @@ -59,7 +59,7 @@ Contoh ini menciptakan sebuah [Service _headless_](/id/docs/concepts/services-networking/service/#service-headless), `nginx`, untuk mempublikasikan alamat IP Pod di dalam StatefulSet, `web`. -{{< codenew file="application/web/web.yaml" >}} +{{% codenew file="application/web/web.yaml" %}} Unduh contoh di atas, dan simpan ke dalam berkas dengan nama `web.yaml`. @@ -1075,7 +1075,7 @@ menjalankan atau mengakhiri semua Pod secara bersamaan (paralel), dan tidak menu suatu Pod menjadi Running dan Ready atau benar-benar berakhir sebelum menjalankan atau mengakhiri Pod yang lain. -{{< codenew file="application/web/web-parallel.yaml" >}} +{{% codenew file="application/web/web-parallel.yaml" %}} Unduh contoh di atas, dan simpan ke sebuah berkas dengan nama `web-parallel.yaml`. diff --git a/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md index df297f4c634b7..2152c8e0e3621 100644 --- a/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md +++ b/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md @@ -42,7 +42,7 @@ yang mengekspos alamat IP eksternal. 1. 
Jalankan sebuah aplikasi Hello World pada klaster kamu: -{{< codenew file="service/load-balancer-example.yaml" >}} +{{% codenew file="service/load-balancer-example.yaml" %}} ```shell kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml From 67b931ea1482821dabfb7938362ee531beeea307 Mon Sep 17 00:00:00 2001 From: Haripriya Date: Fri, 28 Jul 2023 18:36:05 +0530 Subject: [PATCH 018/229] created issue-wrangler.md --- .../contribute/participate/issue-wrangler.md | 40 +++++++++++++++++++ 1 file changed, 40 insertions(+) create mode 100644 content/en/docs/contribute/participate/issue-wrangler.md diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md new file mode 100644 index 0000000000000..7788bd012418c --- /dev/null +++ b/content/en/docs/contribute/participate/issue-wrangler.md @@ -0,0 +1,40 @@ +--- +title: Issue Wranglers +content_type: concept +weight: 20 +--- + + + +There are many issues that need triage, and in order to reduce our reliance on formal approvers or reviewers, we have introduced a new role to wrangle issues every week.The main responsibility of this role is to bridge the gap between organizational contributor and reviewer. +This section covers the duties of a PR wrangler. + + + +## Duties + +Each day in a week-long shift as Issue Wrangler: + +- Making sure the issue is worded and titled correctly to provide contributors with adequate information. +- Identifying whether the issue falls under the support category and assigning a "triage/accepted" status. +- Assuring the issue is tagged with the appropriate sig/area/kind labels. +- Keeping an eye on stale & rotten issues within the kubernetes/website repository. +- [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1) maintenance would be nice + +### Requirements + +- Must be an active member of the Kubernetes organization. 
+- A minimum of 15 quality contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website). +- Performing the role in an informal capacity already + +### What is its place in the contributor hierarchy? + +- In between a contributor and a reviewer. +- For someone who assumes the role and demonstrates ability, the next step is to shadow a PR Wrangler and review PRs informally. + +### Process Implementation + +- Identify people who are already triaging issues and put them on a roster. +- Pilot a shadow program and gauge interest +- The mantel may be passed on if there is interest. +- Keep repeating. From 5cc83a83634b875e31e34bfbae6f58cd2222a211 Mon Sep 17 00:00:00 2001 From: Haripriya Date: Sat, 29 Jul 2023 22:31:56 +0530 Subject: [PATCH 019/229] updated as per suggestions --- content/en/docs/contribute/participate/issue-wrangler.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md index 7788bd012418c..65e3514890fc8 100644 --- a/content/en/docs/contribute/participate/issue-wrangler.md +++ b/content/en/docs/contribute/participate/issue-wrangler.md @@ -6,8 +6,7 @@ weight: 20 -There are many issues that need triage, and in order to reduce our reliance on formal approvers or reviewers, we have introduced a new role to wrangle issues every week.The main responsibility of this role is to bridge the gap between organizational contributor and reviewer. -This section covers the duties of a PR wrangler. +In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wrangler), formal approvers, and reviewers, members of SIG Docs take week long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository. @@ -15,7 +14,7 @@ This section covers the duties of a PR wrangler. 
Each day in a week-long shift as Issue Wrangler: -- Making sure the issue is worded and titled correctly to provide contributors with adequate information. +- Triage and tag incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. - Identifying whether the issue falls under the support category and assigning a "triage/accepted" status. - Assuring the issue is tagged with the appropriate sig/area/kind labels. - Keeping an eye on stale & rotten issues within the kubernetes/website repository. From dcee41b6e50a8ad014f95332c1c64dfcd8a532d8 Mon Sep 17 00:00:00 2001 From: Haripriya Date: Sun, 30 Jul 2023 00:54:45 +0530 Subject: [PATCH 020/229] added prow commands and when to close issues --- .../contribute/participate/issue-wrangler.md | 35 ++++++++++++++----- 1 file changed, 27 insertions(+), 8 deletions(-) diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md index 65e3514890fc8..099d6701185a8 100644 --- a/content/en/docs/contribute/participate/issue-wrangler.md +++ b/content/en/docs/contribute/participate/issue-wrangler.md @@ -26,14 +26,33 @@ Each day in a week-long shift as Issue Wrangler: - A minimum of 15 quality contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website). - Performing the role in an informal capacity already -### What is its place in the contributor hierarchy? +### Helpful Prow commands for wranglers -- In between a contributor and a reviewer. -- For someone who assumes the role and demonstrates ability, the next step is to shadow a PR Wrangler and review PRs informally. 
+``` +# reopen an issue +/reopen -### Process Implementation +# transfer issues that don't fit in k/website to another repository +/transfer[-issue] -- Identify people who are already triaging issues and put them on a roster. -- Pilot a shadow program and gauge interest -- The mantel may be passed on if there is interest. -- Keep repeating. +# change the state of rotten issues +/remove-lifecycle rotten +``` + +### When to close Issues + +For an open source project to succeed, good issue management is crucial. But it is also critical to resolve issues in order to maintain the repository and communicate clearly with contributors and users. + +Close issues when: + +- A similar issue has been reported more than once. It is also advisable to direct the users to the original issue. +- It is very difficult to understand and address the issue presented by the author with the information provided. + However, encourage the user to provide more details or reopen the issue if they can reproduce it later. +- Having implemented the same functionality elsewhere. One can close this issue and direct user to the appropriate place. +- Feature requests that are not currently planned or aligned with the project's goals. +- If the assignee has not responded to comments or feedback in more than two weeks + The issue can be assigned to someone who is highly motivated to contribute. +- In cases where an issue appears to be spam and is clearly unrelated. +- If the issue is related to an external limitation or dependency and is beyond the control of the project. + +To close an issue, leave a `/close` comment on the issue. 
From a62c27cbdcfaee81bdf7c1a4a7c7ed455adbd5b2 Mon Sep 17 00:00:00 2001 From: Dipankar Das Date: Tue, 1 Aug 2023 14:40:52 +0530 Subject: [PATCH 021/229] added note for secret to be in same namespace as workloads secret is used to have authenticate with the private Container registry it should be in the same namespace which container the workloads Signed-off-by: Dipankar Das --- .../configure-pod-container/pull-image-private-registry.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index e871d9bb810b6..549fd8f6802c8 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -211,7 +211,9 @@ kubectl get pod private-reg ``` {{< note >}} -In case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events: +Please ensure that the Pod or Deployment, and so on, created within a particular namespace contains the necessary secret in that same namespace. + +Also, in case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events: ```shell kubectl describe pod private-reg ``` @@ -242,4 +244,4 @@ Events: * Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry). * Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). * See [kubectl create secret docker-registry](/docs/reference/generated/kubectl/kubectl-commands/#-em-secret-docker-registry-em-). 
-* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod +* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod \ No newline at end of file From aa6e4639349a45309846ccf77af44a48dda4cb7e Mon Sep 17 00:00:00 2001 From: Dipankar Das <65275144+dipankardas011@users.noreply.github.com> Date: Wed, 2 Aug 2023 12:44:18 +0530 Subject: [PATCH 022/229] Update content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md Co-authored-by: Tim Bannister --- .../configure-pod-container/pull-image-private-registry.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 549fd8f6802c8..312cdb6248536 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -211,7 +211,10 @@ kubectl get pod private-reg ``` {{< note >}} -Please ensure that the Pod or Deployment, and so on, created within a particular namespace contains the necessary secret in that same namespace. +To use image pull secrets for a Pod (or a Deployment, or other object that +has a pod template that you are using), you need to make sure that the appropriate +Secret does exist in the right namespace. The namespace to use is the same +namespace where you defined the Pod. 
Also, in case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events: ```shell From 25e953f521e65edb2fcab3c9ab783c56cb615058 Mon Sep 17 00:00:00 2001 From: utkarsh-singh1 Date: Sat, 12 Aug 2023 23:21:10 +0530 Subject: [PATCH 023/229] Updated /docs/reference/kubectl/cheatsheet.md Signed-off-by: utkarsh-singh1 --- content/en/docs/reference/kubectl/cheatsheet.md | 9 --------- 1 file changed, 9 deletions(-) diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index df49986712c5f..3cb201634670a 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -39,15 +39,6 @@ complete -o default -F __start_kubectl k source <(kubectl completion zsh) # set up autocomplete in zsh into the current shell echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell ``` - -### FISH - -Require kubectl version 1.23 or above. 
-
-```bash
-echo 'kubectl completion fish | source' >> ~/.config/fish/config.fish # add autocomplete permanently to your fish shell
-```
-
 ### A note on `--all-namespaces`
 
 Appending `--all-namespaces` happens frequently enough that you should be aware of the shorthand for `--all-namespaces`:
 
From 7ed368d0be2f5b31bb38655e19f7899db167f3d1 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Mon, 7 Aug 2023 10:05:47 +0800
Subject: [PATCH 024/229] Clean up /releases

---
 content/en/releases/_index.md              |  13 ++-
 content/en/releases/download.md            |   6 +-
 content/en/releases/notes.md               |   8 +-
 content/en/releases/release-managers.md    |   5 +-
 content/en/releases/version-skew-policy.md | 114 ++++++++++++++------
 5 files changed, 100 insertions(+), 46 deletions(-)

diff --git a/content/en/releases/_index.md b/content/en/releases/_index.md
index 5b62f95d82add..092729a2f477f 100644
--- a/content/en/releases/_index.md
+++ b/content/en/releases/_index.md
@@ -4,13 +4,17 @@ title: Releases
 type: docs
 ---
 
-
-The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support.
+The Kubernetes project maintains release branches for the most recent three minor releases
+({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}).
+Kubernetes 1.19 and newer receive
+[approximately 1 year of patch support](/releases/patch-releases/#support-period).
+Kubernetes 1.18 and older received approximately 9 months of patch support.
 
 Kubernetes versions are expressed as **x.y.z**,
-where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.
+where **x** is the major version, **y** is the minor version, and **z** is the patch version, +following [Semantic Versioning](https://semver.org/) terminology. More information in the [version skew policy](/releases/version-skew-policy/) document. @@ -22,6 +26,7 @@ More information in the [version skew policy](/releases/version-skew-policy/) do ## Upcoming Release -Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release! +Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) +for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release! ## Helpful Resources diff --git a/content/en/releases/download.md b/content/en/releases/download.md index 0cee6e3556afb..c728ec015f9a8 100644 --- a/content/en/releases/download.md +++ b/content/en/releases/download.md @@ -43,6 +43,7 @@ You can fetch that list using: ```shell curl -Ls "https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release" | grep "SPDXID: SPDXRef-Package-registry.k8s.io" | grep -v sha256 | cut -d- -f3- | sed 's/-/\//' | sed 's/-v1/:v1/' ``` + For Kubernetes v{{< skew currentVersion >}}, the only kind of code artifact that you can verify integrity for is a container image, using the experimental signing support. @@ -50,11 +51,10 @@ signing support. To manually verify signed container images of Kubernetes core components, refer to [Verify Signed Container Images](/docs/tasks/administer-cluster/verify-signed-artifacts). - - ## Binaries -Find links to download Kubernetes components (and their checksums) in the [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files. +Find links to download Kubernetes components (and their checksums) in the +[CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files. 
Alternately, use [downloadkubernetes.com](https://www.downloadkubernetes.com/) to filter by version and architecture. diff --git a/content/en/releases/notes.md b/content/en/releases/notes.md index 1bb60c810627c..bcda7d0a04437 100644 --- a/content/en/releases/notes.md +++ b/content/en/releases/notes.md @@ -8,6 +8,10 @@ sitemap: priority: 0.5 --- -Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) that matches your Kubernetes version. View the changelog for {{< skew currentVersionAddMinor 0 >}} on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-{{< skew currentVersionAddMinor 0 >}}.md). +Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) +that matches your Kubernetes version. View the changelog for {{< skew currentVersionAddMinor 0 >}} on +[GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-{{< skew currentVersionAddMinor 0 >}}.md). -Alternately, release notes can be searched and filtered online at: [relnotes.k8s.io](https://relnotes.k8s.io). View filtered release notes for {{< skew currentVersionAddMinor 0 >}} on [relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions={{< skew currentVersionAddMinor 0 >}}.0). +Alternately, release notes can be searched and filtered online at: [relnotes.k8s.io](https://relnotes.k8s.io). +View filtered release notes for {{< skew currentVersionAddMinor 0 >}} on +[relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions={{< skew currentVersionAddMinor 0 >}}.0). diff --git a/content/en/releases/release-managers.md b/content/en/releases/release-managers.md index 34fab5552f9b5..cbfd77b66e597 100644 --- a/content/en/releases/release-managers.md +++ b/content/en/releases/release-managers.md @@ -31,7 +31,10 @@ The responsibilities of each role are described below. 
### Security Embargo Policy -Some information about releases is subject to embargo and we have defined policy about how those embargoes are set. Please refer to the [Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy) for more information. +Some information about releases is subject to embargo and we have defined policy about +how those embargoes are set. Please refer to the +[Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy) +for more information. ## Handbooks diff --git a/content/en/releases/version-skew-policy.md b/content/en/releases/version-skew-policy.md index 7a9f1c753a1b2..7031402e5c356 100644 --- a/content/en/releases/version-skew-policy.md +++ b/content/en/releases/version-skew-policy.md @@ -20,13 +20,19 @@ Specific cluster deployment tools may place additional restrictions on version s ## Supported versions -Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology. -For more information, see [Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning). +Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, +**y** is the minor version, and **z** is the patch version, following +[Semantic Versioning](https://semver.org/) terminology. For more information, see +[Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning). -The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). 
Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support. +The Kubernetes project maintains release branches for the most recent three minor releases +({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). +Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). +Kubernetes 1.18 and older received approximately 9 months of patch support. -Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility. -Patch releases are cut from those branches at a [regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required. +Applicable fixes, including security fixes, may be backported to those three release branches, +depending on severity and feasibility. Patch releases are cut from those branches at a +[regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required. The [Release Managers](/releases/release-managers/) group owns this decision. @@ -36,7 +42,8 @@ For more information, see the Kubernetes [patch releases](/releases/patch-releas ### kube-apiserver -In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/), the newest and oldest `kube-apiserver` instances must be within one minor version. +In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/), +the newest and oldest `kube-apiserver` instances must be within one minor version. 
Example: @@ -51,7 +58,8 @@ Example: Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** +* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, + **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kubelet` versions. @@ -60,18 +68,24 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) +* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, + and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that + would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kube-proxy * `kube-proxy` must not be newer than `kube-apiserver`. -* `kube-proxy` may be up to three minor versions older than `kube-apiserver` (`kube-proxy` < 1.25 may only be up to two minor versions older than `kube-apiserver`). -* `kube-proxy` may be up to three minor versions older or newer than the `kubelet` instance it runs alongside (`kube-proxy` < 1.25 may only be up to two minor versions older or newer than the `kubelet` instance it runs alongside). 
+* `kube-proxy` may be up to three minor versions older than `kube-apiserver` + (`kube-proxy` < 1.25 may only be up to two minor versions older than `kube-apiserver`). +* `kube-proxy` may be up to three minor versions older or newer than the `kubelet` instance + it runs alongside (`kube-proxy` < 1.25 may only be up to two minor versions older or newer + than the `kubelet` instance it runs alongside). Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kube-proxy` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** +* `kube-proxy` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, + **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kube-proxy` versions. 
@@ -80,26 +94,36 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kube-proxy` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) +* `kube-proxy` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, + and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would + be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) ### kube-controller-manager, kube-scheduler, and cloud-controller-manager -`kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the `kube-apiserver` instances they communicate with. They are expected to match the `kube-apiserver` minor version, but may be up to one minor version older (to allow live upgrades). +`kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the +`kube-apiserver` instances they communicate with. They are expected to match the `kube-apiserver` minor version, +but may be up to one minor version older (to allow live upgrades). 
Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported + at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** {{< note >}} -If version skew exists between `kube-apiserver` instances in an HA cluster, and these components can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), this narrows the allowed versions of these components. +If version skew exists between `kube-apiserver` instances in an HA cluster, and these components +can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), +this narrows the allowed versions of these components. {{< /note >}} Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer that can route to any `kube-apiserver` instance -* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**) +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer + that can route to any `kube-apiserver` instance +* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at + **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported + because that would be newer than the `kube-apiserver` instance at version **{{< skew 
currentVersionAddMinor -1 >}}**) ### kubectl @@ -108,7 +132,8 @@ Example: Example: * `kube-apiserver` is at **{{< skew currentVersion >}}** -* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**, and **{{< skew currentVersionAddMinor -1 >}}** +* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**, + and **{{< skew currentVersionAddMinor -1 >}}** {{< note >}} If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the supported `kubectl` versions. @@ -117,21 +142,24 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this Example: * `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** -* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) +* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** + (other versions would be more than one minor version skewed from one of the `kube-apiserver` components) ## Supported component upgrade order -The supported version skew between components has implications on the order in which components must be upgraded. -This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**. +The supported version skew between components has implications on the order +in which components must be upgraded. This section describes the order in +which components must be upgraded to transition an existing cluster from version +**{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**. 
Optionally, when preparing to upgrade, the Kubernetes project recommends that you do the following to benefit from as many regression and bug fixes as -possible during your upgrade: +possible during your upgrade: -* Ensure that components are on the most recent patch version of your current - minor version. -* Upgrade components to the most recent patch version of the target minor - version. +* Ensure that components are on the most recent patch version of your current + minor version. +* Upgrade components to the most recent patch version of the target minor + version. For example, if you're running version {{}}, ensure that you're on the most recent patch version. Then, upgrade to the most @@ -142,12 +170,19 @@ recent patch version of {{}}. Pre-requisites: * In a single-instance cluster, the existing `kube-apiserver` instance is **{{< skew currentVersionAddMinor -1 >}}** -* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) -* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) -* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) +* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or + **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance) +* The `kube-controller-manager`, 
`kube-scheduler`, and `cloud-controller-manager` instances that + communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}** + (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version) +* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}** + (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version) * Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them: - * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew currentVersion >}}** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) - * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **{{< skew currentVersion >}}** + * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include + any new versions of REST resources added in **{{< skew currentVersion >}}** + (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+) + * The webhooks are able to handle any new versions of REST resources that will be sent to them, + and any new fields added to existing versions in **{{< skew currentVersion >}}** Upgrade `kube-apiserver` to **{{< skew currentVersion >}}** @@ -161,7 +196,9 @@ require `kube-apiserver` to not skip minor versions when upgrading, even in sing Pre-requisites: -* The `kube-apiserver` instances these components communicate with are at **{{< skew 
currentVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) +* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}** + (in HA clusters in which these control plane components can communicate with any `kube-apiserver` + instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components) Upgrade `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` to **{{< skew currentVersion >}}**. There is no @@ -175,7 +212,8 @@ Pre-requisites: * The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew currentVersion >}}** -Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**) +Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at +**{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**) {{< note >}} Before performing a minor version `kubelet` upgrade, [drain](/docs/tasks/administer-cluster/safely-drain-node/) pods from that node. @@ -183,7 +221,8 @@ In-place minor version `kubelet` upgrades are not supported. {{}} {{< warning >}} -Running a cluster with `kubelet` instances that are persistently three minor versions behind `kube-apiserver` means they must be upgraded before the control plane can be upgraded. +Running a cluster with `kubelet` instances that are persistently three minor versions behind +`kube-apiserver` means they must be upgraded before the control plane can be upgraded. 
{{}} ### kube-proxy @@ -192,8 +231,11 @@ Pre-requisites: * The `kube-apiserver` instances `kube-proxy` communicates with are at **{{< skew currentVersion >}}** -Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**) +Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** +(or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, +or **{{< skew currentVersionAddMinor -3 >}}**) {{< warning >}} -Running a cluster with `kube-proxy` instances that are persistently three minor versions behind `kube-apiserver` means they must be upgraded before the control plane can be upgraded. +Running a cluster with `kube-proxy` instances that are persistently three minor versions behind +`kube-apiserver` means they must be upgraded before the control plane can be upgraded. {{}} From 68de69d5e38c9fed19d1b093c11c4ab2c8a8fc13 Mon Sep 17 00:00:00 2001 From: Haripriya Date: Fri, 18 Aug 2023 23:29:35 +0530 Subject: [PATCH 025/229] added commits as per suggestions --- .../contribute/participate/issue-wrangler.md | 37 ++++++++++++++----- 1 file changed, 28 insertions(+), 9 deletions(-) diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md index 099d6701185a8..0100de6ada76d 100644 --- a/content/en/docs/contribute/participate/issue-wrangler.md +++ b/content/en/docs/contribute/participate/issue-wrangler.md @@ -6,7 +6,7 @@ weight: 20 -In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wrangler), formal approvers, and reviewers, members of SIG Docs take week long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository. 
+In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wranglers),and formal approvers, and reviewers, members of SIG Docs take week long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository. @@ -14,19 +14,19 @@ In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/ Each day in a week-long shift as Issue Wrangler: -- Triage and tag incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. +- Triage and tag incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/content/en/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. - Identifying whether the issue falls under the support category and assigning a "triage/accepted" status. -- Assuring the issue is tagged with the appropriate sig/area/kind labels. +- Ensuring the issue is tagged with the appropriate sig/area/kind labels. - Keeping an eye on stale & rotten issues within the kubernetes/website repository. -- [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1) maintenance would be nice +- Maintenance of [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1) would be nice ### Requirements - Must be an active member of the Kubernetes organization. -- A minimum of 15 quality contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website). +- A minimum of 15 [non-trivial](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits) contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website). 
 - Performing the role in an informal capacity already
 
-### Helpful Prow commands for wranglers
+### Helpful [Prow commands](https://prow.k8s.io/command-help) for wranglers
 
 ```
 # reopen an issue
@@ -37,6 +37,27 @@ Each day in a week-long shift as Issue Wrangler:
 
 # change the state of rotten issues
 /remove-lifecycle rotten
+
+# change the state of stale issues
+/remove-lifecycle stale
+
+# assign sig to an issue
+/sig <sig_name>
+
+# add specific area
+/area <area_name>
+
+# for beginner friendly issues
+/good-first-issue
+
+# issues that needs help
+/help wanted
+
+# tagging issue as support specific
+/kind support
+
+# to accept triaging for an issue
+/triage accepted
 ```
 
 ### When to close Issues
@@ -45,13 +66,11 @@ For an open source project to succeed, good issue management is crucial. But it
 
 Close issues when:
 
-- A similar issue has been reported more than once. It is also advisable to direct the users to the original issue.
+- When a similar issue is reported more than once, you'll first tag it as /triage duplicate; link it to the main issue & then close it. It is also advisable to direct the users to the original issue.
 - It is very difficult to understand and address the issue presented by the author with the information provided.
   However, encourage the user to provide more details or reopen the issue if they can reproduce it later.
 - Having implemented the same functionality elsewhere. One can close this issue and direct user to the appropriate place.
 - Feature requests that are not currently planned or aligned with the project's goals.
-- If the assignee has not responded to comments or feedback in more than two weeks
-  The issue can be assigned to someone who is highly motivated to contribute.
 - In cases where an issue appears to be spam and is clearly unrelated.
 - If the issue is related to an external limitation or dependency and is beyond the control of the project.
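The expanded command list above is essentially a mapping from triage outcomes to Prow comments. As a minimal shell sketch of that mapping (the function name and the set of outcomes are illustrative, not part of the contributor docs):

```shell
# Sketch: pick the Prow comment a wrangler would post for a given
# triage outcome. The cases mirror the command list above; extend as needed.
triage_comment() {
  case "$1" in
    accepted) echo "/triage accepted" ;;
    support)  echo "/kind support" ;;
    stale)    echo "/remove-lifecycle stale" ;;
    rotten)   echo "/remove-lifecycle rotten" ;;
    reopen)   echo "/reopen" ;;
    *)        echo "unknown triage outcome: $1" >&2; return 1 ;;
  esac
}

triage_comment accepted   # prints: /triage accepted
triage_comment stale      # prints: /remove-lifecycle stale
```

The returned string is what the wrangler would paste as an issue comment for Prow to act on.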
From 1cf3b648f643acfa1575d7837b788c02e4d95cca Mon Sep 17 00:00:00 2001
From: Haripriya 
Date: Fri, 1 Sep 2023 00:51:23 +0530
Subject: [PATCH 026/229] Update issue-wrangler.md

---
 content/en/docs/contribute/participate/issue-wrangler.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
index 0100de6ada76d..1150c838da352 100644
--- a/content/en/docs/contribute/participate/issue-wrangler.md
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -6,7 +6,7 @@ weight: 20



-In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wranglers),and formal approvers, and reviewers, members of SIG Docs take week long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository.
+Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers), formal approvers, and reviewers, members of SIG Docs take week-long shifts [triaging and categorizing issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for the repository.
@@ -58,6 +58,9 @@ Each day in a week-long shift as Issue Wrangler: # to accept triaging for an issue /triage accepted + +# closing an issue we won't be working on and haven't fixed yet +/close not-planned ``` ### When to close Issues From 70f29136d26b672e755048290c9586552ef785f2 Mon Sep 17 00:00:00 2001 From: Lucifergene Date: Wed, 6 Sep 2023 22:16:39 +0530 Subject: [PATCH 027/229] Added Tabs to mention RollingUpdate Deployment Strategy YAMLs --- .../workloads/controllers/deployment.md | 99 +++++++++++++++++++ 1 file changed, 99 insertions(+) diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index 17b8b9f221b25..9b1e5f065bd06 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -1197,6 +1197,105 @@ rolling update starts, such that the total number of old and new Pods does not e Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the total number of Pods running at any time during the update is at most 130% of desired Pods. 
+Here are some Rolling Update Deployment examples that use the `maxUnavailable` and `maxSurge`: + +{{< tabs name="tab_with_md" >}} +{{% tab name="Max Unavailable" %}} + + ```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + ``` + +{{% /tab %}} +{{% tab name="Max Surge" %}} + + ```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + ``` + +{{% /tab %}} +{{% tab name="Hybrid" %}} + + ```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nginx-deployment + labels: + app: nginx +spec: + replicas: 3 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - name: nginx + image: nginx:1.14.2 + ports: + - containerPort: 80 + strategy: + type: RollingUpdate + rollingUpdate: + maxSurge: 1 + maxUnavailable: 1 + ``` + +{{% /tab %}} +{{< /tabs >}} + ### Progress Deadline Seconds `.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want From 285b6fa176ee4ae1683cb9136ede11779a3a278c Mon Sep 17 00:00:00 2001 From: pegasas <616672335@qq.com> Date: Fri, 18 Aug 2023 22:01:50 +0800 Subject: [PATCH 028/229] Document snag with stringData and server-side apply --- content/en/docs/concepts/configuration/secret.md | 8 ++++++++ .../configmap-secret/managing-secret-using-config-file.md | 8 ++++++++ 2 files changed, 16 insertions(+) diff --git 
a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index 8d26c463f0bf8..071e0ed8361cc 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -387,6 +387,10 @@ stringData: password: t0p-Secret # required field for kubernetes.io/basic-auth ``` +{{< note >}} +`stringData` for a Secret does not work well with server-side apply +{{< /note >}} + The basic authentication Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for basic authentication. However, using the defined and public Secret type (`kubernetes.io/basic-auth`) helps other @@ -545,6 +549,10 @@ stringData: usage-bootstrap-signing: "true" ``` +{{< note >}} +`stringData` for a Secret does not work well with server-side apply +{{< /note >}} + ## Working with Secrets ### Creating a Secret diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md index 7245624bf8349..17696ae16e6c4 100644 --- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md +++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md @@ -109,6 +109,10 @@ stringData: password: ``` +{{< note >}} +`stringData` for a Secret does not work well with server-side apply +{{< /note >}} + When you retrieve the Secret data, the command returns the encoded values, and not the plaintext values you provided in `stringData`. 
@@ -152,6 +156,10 @@ stringData: username: administrator ``` +{{< note >}} +`stringData` for a Secret does not work well with server-side apply +{{< /note >}} + The `Secret` object is created as follows: ```yaml From d96bb79c5a8fe65cc7b28955f8bc70850487dc83 Mon Sep 17 00:00:00 2001 From: Arhell Date: Sat, 9 Sep 2023 03:06:59 +0300 Subject: [PATCH 029/229] [id] Update configure-pod-configmap.md --- .../tasks/configure-pod-container/configure-pod-configmap.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md index bfdad56610635..4449862b772bd 100644 --- a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -545,6 +545,9 @@ kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valu ``` menghasilkan keluaran pada kontainer `test-container` seperti berikut: +```shell +kubectl logs dapi-test-pod +``` ```shell very charm From 21652e34e5cbfb8f81a988916ea113934a7c0df1 Mon Sep 17 00:00:00 2001 From: Edith Puclla Date: Tue, 12 Sep 2023 16:00:19 -0500 Subject: [PATCH 030/229] [es] Add concepts/storage/projected-volumes.md --- .../concepts/storage/projected-volumes.md | 115 ++++++++++++++++++ ...rojected-secret-downwardapi-configmap.yaml | 35 ++++++ ...ed-secrets-nondefault-permission-mode.yaml | 27 ++++ .../projected-service-account-token.yaml | 21 ++++ 4 files changed, 198 insertions(+) create mode 100644 content/es/docs/concepts/storage/projected-volumes.md create mode 100644 content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml create mode 100644 content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml create mode 100644 content/es/examples/pods/storage/projected-service-account-token.yaml diff --git a/content/es/docs/concepts/storage/projected-volumes.md 
b/content/es/docs/concepts/storage/projected-volumes.md new file mode 100644 index 0000000000000..b59d1afd465ba --- /dev/null +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -0,0 +1,115 @@ +--- +reviewers: + - ramrodo + - raelga + - electrocucaracha +title: Volúmenes proyectados +content_type: concept +weight: 21 # just after persistent volumes +--- + + + +Este documento describe los _projected volumes_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/). + + + +## Introducción + +Un volumen `projected` asigna varias fuentes de volúmenes existentes al mismo directorio. + +Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen: + +- [`secret`](/docs/concepts/storage/volumes/#secret) +- [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi) +- [`configMap`](/docs/concepts/storage/volumes/#configmap) +- [`serviceAccountToken`](#serviceaccounttoken) + +Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles, +vea [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) documento de diseño. + +### Configuración de ejemplo con un secreto, una downwardAPI, y una configMap {#example-configuration-secret-downwardapi-configmap} + +{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} + +### Configuración de ejemplo: secretos con un modo de permiso no predeterminado establecido {#example-configuration-secrets-nondefault-permission-mode} + +{{% code_sample file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}} + +Cada fuente de volumen proyectada aparece en la especificación bajo `sources`. Los parámetros son casi los mismos con dos excepciones: + +- Para los secretos, el campo `secretName` se ha cambiado a `name` para que sea coherente con el nombre de ConfigMap. +- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. 
However, Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual.
+
+## volúmenes proyectados de serviceAccountToken {#serviceaccounttoken}
+
+Puede inyectar el token para la [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) actual
+en un Pod en una ruta especificada. Por ejemplo:
+
+{{% code_sample file="pods/storage/projected-service-account-token.yaml" %}}
+
+El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado. Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [la ServiceAccount del Pod](/docs/tasks/configure-pod-container/configure-service-account/).
+
+El campo `audience` contiene la audiencia prevista del
+token. Un destinatario del token debe identificarse con un identificador especificado en la audiencia del token y, de lo contrario, debe rechazar el token. Este campo es opcional y de forma predeterminada es el identificador del servidor API.
+
+El campo `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos). Un administrador
+también puede limitar su valor máximo especificando la opción `--service-account-max-token-expiration`
+para el servidor API. El campo `path` especifica una ruta relativa al punto de montaje del volumen proyectado.
+
+{{< note >}}
+Un contenedor que utiliza una fuente de volumen proyectada como montaje de volumen [`subPath`](/docs/concepts/storage/volumes/#using-subpath)
+no recibirá actualizaciones para esas fuentes de volumen.
+{{< /note >}} + +## Interacciones SecurityContext + +La [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos. + +### Linux + +En los pods de Linux que tienen un volumen proyectado y `RunAsUser` configurado en el Pod +[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context), +los archivos proyectados tienen la conjunto de propiedad correcto, incluida la propiedad del usuario del contenedor. + +Cuando todos los contenedores en un pod tienen el mismo `runAsUser` configurado en su +[`PodSecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +or container +[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1), +entonces el kubelet garantiza que el contenido del volumen `serviceAccountToken` sea propiedad de ese usuario y que el archivo token tenga su modo de permiso establecido en `0600`. + +{{< note >}} +{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}} +agregado a un pod después de su creación _not_ cambia los permisos de volumen que se establecieron cuando se creó el pod. + +Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en `0600` porque todos los demás contenedores en el Pod tienen el mismo `runAsUser`, los contenedores efímeros deben usar el mismo `runAsUser` para poder leer el token. +{{< /note >}} + +### Windows + +En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows. 
Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM). Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando. Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual. Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`. + +De forma predeterminada, los archivos proyectados tendrán la siguiente propiedad, como se muestra en un archivo de volumen proyectado de ejemplo: + +```powershell +PS C:\> Get-Acl C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt | Format-List + +Path : Microsoft.PowerShell.Core\FileSystem::C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt +Owner : BUILTIN\Administrators +Group : NT AUTHORITY\SYSTEM +Access : NT AUTHORITY\SYSTEM Allow FullControl + BUILTIN\Administrators Allow FullControl + BUILTIN\Users Allow ReadAndExecute, Synchronize +Audit : +Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU) +``` + +Esto implica que todos los usuarios administradores como `ContainerAdministrator` tendrán acceso de lectura, escritura y ejecución, mientras que los usuarios que no sean administradores tendrán acceso de lectura y ejecución. + +{{< note >}} +En general, se desaconseja otorgar acceso al contenedor al host, ya que puede abrir la puerta a posibles vulnerabilidades de seguridad. 
+ +Creating a Windows Pod with `RunAsUser` in it's `SecurityContext` will result in +the Pod being stuck at `ContainerCreating` forever. So it is advised to not use +the Linux only `RunAsUser` option with Windows Pods. +{{< /note >}} diff --git a/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml b/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml new file mode 100644 index 0000000000000..453dc08c0c7d9 --- /dev/null +++ b/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml @@ -0,0 +1,35 @@ +apiVersion: v1 +kind: Pod +metadata: + name: volume-test +spec: + containers: + - name: container-test + image: busybox:1.28 + volumeMounts: + - name: all-in-one + mountPath: "/projected-volume" + readOnly: true + volumes: + - name: all-in-one + projected: + sources: + - secret: + name: mysecret + items: + - key: username + path: my-group/my-username + - downwardAPI: + items: + - path: "labels" + fieldRef: + fieldPath: metadata.labels + - path: "cpu_limit" + resourceFieldRef: + containerName: container-test + resource: limits.cpu + - configMap: + name: myconfigmap + items: + - key: config + path: my-group/my-config diff --git a/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml b/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml new file mode 100644 index 0000000000000..b921fd93c5833 --- /dev/null +++ b/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: Pod +metadata: + name: volume-test +spec: + containers: + - name: container-test + image: busybox:1.28 + volumeMounts: + - name: all-in-one + mountPath: "/projected-volume" + readOnly: true + volumes: + - name: all-in-one + projected: + sources: + - secret: + name: mysecret + items: + - key: username + path: my-group/my-username + - secret: + name: mysecret2 + items: + - key: password + path: my-group/my-password + 
mode: 511 diff --git a/content/es/examples/pods/storage/projected-service-account-token.yaml b/content/es/examples/pods/storage/projected-service-account-token.yaml new file mode 100644 index 0000000000000..cc307659a78ef --- /dev/null +++ b/content/es/examples/pods/storage/projected-service-account-token.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Pod +metadata: + name: sa-token-test +spec: + containers: + - name: container-test + image: busybox:1.28 + volumeMounts: + - name: token-vol + mountPath: "/service-account" + readOnly: true + serviceAccountName: default + volumes: + - name: token-vol + projected: + sources: + - serviceAccountToken: + audience: api + expirationSeconds: 3600 + path: token From fe9e053b727d8e35d27af4806ac151123414ff97 Mon Sep 17 00:00:00 2001 From: Edith Puclla Date: Tue, 12 Sep 2023 16:24:25 -0500 Subject: [PATCH 031/229] [es] Modify concepts/storage/projected-volumes.md --- .../concepts/storage/projected-volumes.md | 20 +++++++++---------- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md index b59d1afd465ba..ac1a6935f37a2 100644 --- a/content/es/docs/concepts/storage/projected-volumes.md +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -10,13 +10,13 @@ weight: 21 # just after persistent volumes -Este documento describe los _projected volumes_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/). +Este documento describe los _volúmenes proyectados_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/). ## Introducción -Un volumen `projected` asigna varias fuentes de volúmenes existentes al mismo directorio. +Un volumen `proyectado` asigna varias fuentes de volúmenes existentes al mismo directorio. 
Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen: @@ -26,7 +26,7 @@ Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen: - [`serviceAccountToken`](#serviceaccounttoken) Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles, -vea [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) documento de diseño. +vea el documento de diseño [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md). ### Configuración de ejemplo con un secreto, una downwardAPI, y una configMap {#example-configuration-secret-downwardapi-configmap} @@ -39,9 +39,9 @@ vea [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in- Cada fuente de volumen proyectada aparece en la especificación bajo `sources`. Los parámetros son casi los mismos con dos excepciones: - Para los secretos, el campo `secretName` se ha cambiado a `name` para que sea coherente con el nombre de ConfigMap. -- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. However, Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual. +- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual. -## volúmenes proyectados de serviceAccountToken {#serviceaccounttoken} +## Volúmenes proyectados de serviceAccountToken {#serviceaccounttoken} Puede inyectar el token para la [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) actual en un Pod en una ruta especificada. Por ejemplo: @@ -64,7 +64,7 @@ no recibirá actualizaciones para esas fuentes de volumen. 
## Interacciones SecurityContext -La [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos. +La [propuesta](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos. ### Linux @@ -74,13 +74,13 @@ los archivos proyectados tienen la conjunto de propiedad correcto, incluida la p Cuando todos los contenedores en un pod tienen el mismo `runAsUser` configurado en su [`PodSecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) -or container +o el contenedor [`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1), entonces el kubelet garantiza que el contenido del volumen `serviceAccountToken` sea propiedad de ese usuario y que el archivo token tenga su modo de permiso establecido en `0600`. {{< note >}} {{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}} -agregado a un pod después de su creación _not_ cambia los permisos de volumen que se establecieron cuando se creó el pod. +agregado a un pod después de su creación _no_ cambia los permisos de volumen que se establecieron cuando se creó el pod. Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en `0600` porque todos los demás contenedores en el Pod tienen el mismo `runAsUser`, los contenedores efímeros deben usar el mismo `runAsUser` para poder leer el token. 
{{< /note >}} @@ -109,7 +109,5 @@ Esto implica que todos los usuarios administradores como `ContainerAdministrator {{< note >}} En general, se desaconseja otorgar acceso al contenedor al host, ya que puede abrir la puerta a posibles vulnerabilidades de seguridad. -Creating a Windows Pod with `RunAsUser` in it's `SecurityContext` will result in -the Pod being stuck at `ContainerCreating` forever. So it is advised to not use -the Linux only `RunAsUser` option with Windows Pods. +Crear un Pod de Windows con `RunAsUser` en su `SecurityContext` dará como resultado que el Pod quede atascado en `ContainerCreating` para siempre. Por lo tanto, se recomienda no utilizar la opción `RunAsUser` exclusiva de Linux con Windows Pods. {{< /note >}} From 30aac26f6bbdc04df3eb52bfb5e31c724ffc8713 Mon Sep 17 00:00:00 2001 From: Ayush Gupta Date: Tue, 19 Sep 2023 14:30:23 +0530 Subject: [PATCH 032/229] Feature addition in section why do you need Kubernetes --- content/en/docs/concepts/overview/_index.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md index 200b3e2ea337e..12c150c6cafa8 100644 --- a/content/en/docs/concepts/overview/_index.md +++ b/content/en/docs/concepts/overview/_index.md @@ -129,6 +129,14 @@ Kubernetes provides you with: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. +* **Batch execution** + In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired. +* **Horizontal scaling** + Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage. 
+* **IPv4/IPv6 dual-stack** + Allocation of IPv4 and IPv6 addresses to Pods and Services +* **Designed for extensibility** + Add features to your Kubernetes cluster without changing upstream source code. ## What Kubernetes is not From eb2a2807d6058e3043d5d0cb96820ea288767fb3 Mon Sep 17 00:00:00 2001 From: satyampsoni Date: Wed, 20 Sep 2023 01:06:31 +0530 Subject: [PATCH 033/229] subscribe to kubeweelkly enchanced #42961 --- layouts/index.html | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/layouts/index.html b/layouts/index.html index 1e14ba92281ea..142743eea3a2e 100644 --- a/layouts/index.html +++ b/layouts/index.html @@ -18,13 +18,12 @@ /* Add your own MailChimp form style overrides in your site stylesheet or in this style block. We recommend moving this block and the preceding CSS link to the HEAD of your HTML file. */ -

{{ T "main_kubeweekly_baseline" }}

-
+ @@ -33,8 +32,7 @@
{{ T "main_kubeweekly_past_link" }}
-
-
+
From c54ec49f258968a59df187ecbb1e005b2e118994 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Wed, 20 Sep 2023 09:56:01 +0800 Subject: [PATCH 034/229] [zh] Sync /command-line-tools-reference/kube-proxy.md --- .../kube-proxy.md | 165 ++++++++++-------- 1 file changed, 95 insertions(+), 70 deletions(-) diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md index 7df32035bee08..c358565a9a14a 100644 --- a/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-proxy.md @@ -9,17 +9,6 @@ content_type: tool-reference weight: 30 --> - - ## {{% heading "synopsis" %}} + 逗号分隔的文件列表,用于检查 boot-id。使用第一个存在的文件。

@@ -250,10 +240,9 @@ A set of key=value pairs that describe feature gates for alpha/experimental feat APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
-APISelfSubjectReview=true|false (BETA - default=true)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (BETA - default=true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - default=false)
+AdmissionWebhookMatchConditions=true|false (BETA - default=true)
AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
@@ -262,32 +251,32 @@ AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
+CRDValidationRatcheting=true|false (ALPHA - default=false)
CSIMigrationPortworx=true|false (BETA - default=false)
-CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (BETA - default=true)
CSIVolumeHealth=true|false (ALPHA - default=false)
CloudControllerManagerWebhook=true|false (ALPHA - default=false)
CloudDualStackNodeIPs=true|false (ALPHA - default=false)
ClusterTrustBundle=true|false (ALPHA - default=false)
ComponentSLIs=true|false (BETA - default=true)
+ConsistentListFromCache=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
+CronJobsScheduledAnnotation=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
+DevicePluginCDIDevices=true|false (ALPHA - default=false)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DynamicResourceAllocation=true|false (ALPHA - default=false)
ElasticIndexedJob=true|false (BETA - default=true)
EventedPLEG=true|false (BETA - default=false)
-ExpandedDNSConfig=true|false (BETA - default=true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (BETA - default=true)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
-IPTablesOwnershipCleanup=true|false (BETA - default=true)
InPlacePodVerticalScaling=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
@@ -295,18 +284,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
-InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
+JobBackoffLimitPerIndex=true|false (ALPHA - default=false)
JobPodFailurePolicy=true|false (BETA - default=true)
-JobReadyPods=true|false (BETA - default=true)
+JobPodReplacementPolicy=true|false (ALPHA - default=false)
J +obReadyPods=true|false (BETA - default=true)
KMSv2=true|false (BETA - default=true)
+KMSv2KDF=true|false (BETA - default=false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - default=false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
-KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)
KubeletPodResourcesGet=true|false (ALPHA - default=false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (BETA - default=true)
-LegacyServiceAccountTokenTracking=true|false (BETA - default=true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
@@ -316,35 +307,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=true)
-MinimizeIPTablesRestore=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
MultiCIDRServiceAllocator=true|false (ALPHA - default=false)
-NetworkPolicyStatus=true|false (ALPHA - default=false)
NewVolumeManagerReconstruction=true|false (BETA - default=true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeLogQuery=true|false (ALPHA - default=false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
-NodeSwap=true|false (ALPHA - default=false)
+NodeSwap=true|false (BETA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
-PodHasNetworkCondition=true|false (ALPHA - default=false)
+PodHostIPs=true|false (ALPHA - default=false)
+PodIndexLabel=true|false (BETA - default=true)
+PodReadyToStartContainersCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (BETA - default=true)
-ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
-ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (BETA - default=true)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
-RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (BETA - default=true)
+SchedulerQueueingHints=true|false (BETA - default=true)
SecurityContextDeny=true|false (ALPHA - default=false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - default=false)
+ServiceNodePortStaticSubrange=true|false (BETA - default=true)
+SidecarContainers=true|false (ALPHA - default=false)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - default=false)
StableLoadBalancerNodeSet=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (BETA - default=true)
StatefulSetStartOrdinal=true|false (BETA - default=true)
@@ -352,10 +343,11 @@ StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
-TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
-ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)
+TopologyManagerPolicyOptions=true|false (BETA - default=true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)
+UserNamespacesSupport=true|false (ALPHA - default=false)
+ValidatingAdmissionPolicy=true|false (BETA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WatchList=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
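Each entry in the list above follows the `Name=true|false` form, and a component receives them as one comma-separated `--feature-gates` value. As a rough illustration of that format — a hypothetical parser, not the Kubernetes implementation, which also validates gate names against its registry:

```python
def parse_feature_gates(value: str) -> dict[str, bool]:
    """Parse a --feature-gates style "Name=true,Other=false" string.

    Illustrative only; real components reject unknown gate names and
    locked stages before applying the settings.
    """
    gates: dict[str, bool] = {}
    if not value:
        return gates
    for pair in value.split(","):
        name, sep, raw = pair.partition("=")
        if not sep or raw.lower() not in ("true", "false"):
            raise ValueError(f"invalid feature gate setting: {pair!r}")
        gates[name.strip()] = raw.lower() == "true"
    return gates

print(parse_feature_gates("SidecarContainers=true,WatchList=false"))
# → {'SidecarContainers': True, 'WatchList': False}
```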
@@ -367,10 +359,9 @@ This parameter is ignored if a config file is specified by --config. APIListChunking=true|false (BETA - 默认值为 true)
APIPriorityAndFairness=true|false (BETA - 默认值为 true)
APIResponseCompression=true|false (BETA - 默认值为 true)
-APISelfSubjectReview=true|false (BETA - 默认值为 true)
APIServerIdentity=true|false (BETA - 默认值为 true)
APIServerTracing=true|false (BETA - 默认值为 true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - 默认值为 false)
+AdmissionWebhookMatchConditions=true|false (BETA - 默认值为 true)
AggregatedDiscoveryEndpoint=true|false (BETA - 默认值为 true)
AllAlpha=true|false (ALPHA - 默认值为 false)
AllBeta=true|false (BETA - 默认值为 false)
@@ -379,32 +370,32 @@ AppArmor=true|false (BETA - 默认值为 true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
CPUManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
CPUManagerPolicyOptions=true|false (BETA - 默认值为 true)
+CRDValidationRatcheting=true|false (ALPHA - 默认值为 false)
CSIMigrationPortworx=true|false (BETA - 默认值为 false)
-CSIMigrationRBD=true|false (ALPHA - 默认值为 false)
CSINodeExpandSecret=true|false (BETA - 默认值为 true)
CSIVolumeHealth=true|false (ALPHA - 默认值为 false)
CloudControllerManagerWebhook=true|false (ALPHA - 默认值为 false)
CloudDualStackNodeIPs=true|false (ALPHA - 默认值为 false)
ClusterTrustBundle=true|false (ALPHA - 默认值为 false)
ComponentSLIs=true|false (BETA - 默认值为 true)
+ConsistentListFromCache=true|false (ALPHA - 默认值为 false)
ContainerCheckpoint=true|false (ALPHA - 默认值为 false)
ContextualLogging=true|false (ALPHA - 默认值为 false)
+CronJobsScheduledAnnotation=true|false (BETA - 默认值为 true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - 默认值为 false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值为 false)
CustomResourceValidationExpressions=true|false (BETA - 默认值为 true)
+DevicePluginCDIDevices=true|false (ALPHA - 默认值为 false)
DisableCloudProviders=true|false (ALPHA - 默认值为 false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值为 false)
DynamicResourceAllocation=true|false (ALPHA - 默认值为 false)
ElasticIndexedJob=true|false (BETA - 默认值为 true)
EventedPLEG=true|false (BETA - 默认值为 false)
-ExpandedDNSConfig=true|false (BETA - 默认值为 true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值为 false)
GracefulNodeShutdown=true|false (BETA - 默认值为 true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值为 true)
HPAContainerMetrics=true|false (BETA - 默认值为 true)
HPAScaleToZero=true|false (ALPHA - 默认值为 false)
HonorPVReclaimPolicy=true|false (ALPHA - 默认值为 false)
-IPTablesOwnershipCleanup=true|false (BETA - 默认值为 true)
InPlacePodVerticalScaling=true|false (ALPHA - 默认值为 false)
InTreePluginAWSUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值为 false)
@@ -412,18 +403,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginGCEUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginPortworxUnregister=true|false (ALPHA - 默认值为 false)
-InTreePluginRBDUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginvSphereUnregister=true|false (ALPHA - 默认值为 false)
+JobBackoffLimitPerIndex=true|false (ALPHA - 默认值为 false)
JobPodFailurePolicy=true|false (BETA - 默认值为 true)
-JobReadyPods=true|false (BETA - 默认值为 true)
+JobPodReplacementPolicy=true|false (ALPHA - 默认值为 false)
+JobReadyPods=true|false (BETA - 默认值为 true)
KMSv2=true|false (BETA - 默认值为 true)
+KMSv2KDF=true|false (BETA - 默认值为 false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - 默认值为 false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - 默认值为 false)
KubeletInUserNamespace=true|false (ALPHA - 默认值为 false)
-KubeletPodResources=true|false (BETA - 默认值为 true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - 默认值为 false)
KubeletPodResourcesGet=true|false (ALPHA - 默认值为 false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值为 true)
KubeletTracing=true|false (BETA - 默认值为 true)
-LegacyServiceAccountTokenTracking=true|false (BETA - 默认值为 true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - 默认值为 false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值为 false)
LogarithmicScaleDown=true|false (BETA - 默认值为 true)
LoggingAlphaOptions=true|false (ALPHA - 默认值为 false)
@@ -433,35 +426,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - 默认值为 false)
MemoryManager=true|false (BETA - 默认值为 true)
MemoryQoS=true|false (ALPHA - 默认值为 false)
MinDomainsInPodTopologySpread=true|false (BETA - 默认值为 true)
-MinimizeIPTablesRestore=true|false (BETA - 默认值为 true)
MultiCIDRRangeAllocator=true|false (ALPHA - 默认值为 false)
MultiCIDRServiceAllocator=true|false (ALPHA - 默认值为 false)
-NetworkPolicyStatus=true|false (ALPHA - 默认值为 false)
NewVolumeManagerReconstruction=true|false (BETA - 默认值为 true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - 默认值为 true)
NodeLogQuery=true|false (ALPHA - 默认值为 false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - 默认值为 true)
-NodeSwap=true|false (ALPHA - 默认值为 false)
+NodeSwap=true|false (BETA - 默认值为 false)
OpenAPIEnums=true|false (BETA - 默认值为 true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - 默认值为 true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - 默认值为 false)
PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值为 false)
PodDeletionCost=true|false (BETA - 默认值为 true)
PodDisruptionConditions=true|false (BETA - 默认值为 true)
-PodHasNetworkCondition=true|false (ALPHA - 默认值为 false)
+PodHostIPs=true|false (ALPHA - 默认值为 false)
+PodIndexLabel=true|false (BETA - 默认值为 true)
+PodReadyToStartContainersCondition=true|false (ALPHA - 默认值为 false)
PodSchedulingReadiness=true|false (BETA - 默认值为 true)
-ProbeTerminationGracePeriod=true|false (BETA - 默认值为 true)
ProcMountType=true|false (ALPHA - 默认值为 false)
-ProxyTerminatingEndpoints=true|false (BETA - 默认值为 true)
QOSReserved=true|false (ALPHA - 默认值为 false)
ReadWriteOncePod=true|false (BETA - 默认值为 true)
RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值为 false)
RemainingItemCount=true|false (BETA - 默认值为 true)
-RetroactiveDefaultStorageClass=true|false (BETA - 默认值为 true)
RotateKubeletServerCertificate=true|false (BETA - 默认值为 true)
SELinuxMountReadWriteOncePod=true|false (BETA - 默认值为 true)
+SchedulerQueueingHints=true|false (BETA - 默认值为 true)
SecurityContextDeny=true|false (ALPHA - 默认值为 false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - 默认值为 false)
+ServiceNodePortStaticSubrange=true|false (BETA - 默认值为 true)
+SidecarContainers=true|false (ALPHA - 默认值为 false)
SizeMemoryBackedVolumes=true|false (BETA - 默认值为 true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - 默认值为 false)
StableLoadBalancerNodeSet=true|false (BETA - 默认值为 true)
StatefulSetAutoDeletePVC=true|false (BETA - 默认值为 true)
StatefulSetStartOrdinal=true|false (BETA - 默认值为 true)
@@ -469,10 +462,11 @@ StorageVersionAPI=true|false (ALPHA - 默认值为 false)
StorageVersionHash=true|false (BETA - 默认值为 true)
TopologyAwareHints=true|false (BETA - 默认值为 true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 false)
-TopologyManagerPolicyOptions=true|false (ALPHA - 默认值为 false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - 默认值为 false)
-ValidatingAdmissionPolicy=true|false (ALPHA - 默认值为 false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
+TopologyManagerPolicyOptions=true|false (BETA - 默认值为 true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - 默认值为 false)
+UserNamespacesSupport=true|false (ALPHA - 默认值为 false)
+ValidatingAdmissionPolicy=true|false (BETA - 默认值为 false)
VolumeCapacityPriority=true|false (ALPHA - 默认值为 false)
WatchList=true|false (ALPHA - 默认值为 false)
WinDSR=true|false (ALPHA - 默认值为 false)
@@ -740,6 +734,20 @@ Path to kubeconfig file with authorization information (the master location is s
[HTML table markup elided]
@@ -789,6 +797,20 @@ Defines the maximum size a log file can grow to (no effect when -logtostderr=tru
[HTML table markup elided]
@@ -1043,25 +1065,29 @@ number for the log level verbosity
[HTML table markup elided]
@@ -1079,4 +1105,3 @@ If set, write the default configuration values to this file and exit.
--log-flush-frequency duration     默认值:5s
+日志清洗之间的最大秒数
--log_backtrace_at <“file:N” 格式的字符串>     默认值:0
--logging-format string     默认值:"text"
+设置日志格式。允许的格式为:"text"。
--logtostderr     默认值:true
--version version[=true]
-打印版本信息并退出。
+--version, --version=raw 打印版本信息并退出;
+--version=vX.Y.Z... 设置报告的版本。
---vmodule <逗号分割的 “pattern=N” 设置>
+--vmodule pattern=N,...
-以逗号分割的 pattern=N 设置的列表,用于文件过滤日志
+以逗号分割的 pattern=N 设置的列表,用于文件过滤日志(仅适用于文本日志格式)
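The `--vmodule` flag takes a comma-separated list of `pattern=N` pairs that raise the log verbosity to `N` for source files whose base name matches the pattern (text log format only). A toy lookup with hypothetical helper names — not klog's implementation:

```python
import fnmatch

def vmodule_level(vmodule: str, filename: str, default: int = 0) -> int:
    """Return the verbosity in effect for a source file under a
    pattern=N,... spec. Patterns match the file's base name with the
    .go suffix stripped. Sketch only, not klog's actual matcher."""
    base = filename.rsplit("/", 1)[-1].removesuffix(".go")
    for entry in vmodule.split(","):
        if not entry:
            continue
        pattern, _, level = entry.partition("=")
        if fnmatch.fnmatchcase(base, pattern):
            return int(level)
    return default

print(vmodule_level("gc*=3,scheduler=4", "pkg/controller/gc_controller.go"))
# → 3
```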
- From c9c9a3349fd2f8083a18bb27e7f05af46d38695e Mon Sep 17 00:00:00 2001 From: Edith Puclla <58795858+edithturn@users.noreply.github.com> Date: Wed, 20 Sep 2023 14:56:30 -0500 Subject: [PATCH 035/229] Update content/es/docs/concepts/storage/projected-volumes.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Rodolfo Martínez Vega --- content/es/docs/concepts/storage/projected-volumes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md index ac1a6935f37a2..478beb232a1bc 100644 --- a/content/es/docs/concepts/storage/projected-volumes.md +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -1,7 +1,7 @@ --- reviewers: - ramrodo - - raelga + - krol3 - electrocucaracha title: Volúmenes proyectados content_type: concept From d7cc2d810a113c5b5170286cb8ae961e7ad08b67 Mon Sep 17 00:00:00 2001 From: Edith Puclla <58795858+edithturn@users.noreply.github.com> Date: Wed, 20 Sep 2023 14:57:41 -0500 Subject: [PATCH 036/229] Update content/es/docs/concepts/storage/projected-volumes.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Rodolfo Martínez Vega --- content/es/docs/concepts/storage/projected-volumes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md index 478beb232a1bc..d7014c3a8251b 100644 --- a/content/es/docs/concepts/storage/projected-volumes.md +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -10,7 +10,7 @@ weight: 21 # just after persistent volumes -Este documento describe los _volúmenes proyectados_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/). +Este documento describe los _volúmenes proyectados_ en Kubernetes. 
Necesita estar familiarizado con [volúmenes](/es/docs/concepts/storage/volumes/). From adf78b96f0a1e0b9577d5578483d011ca600079b Mon Sep 17 00:00:00 2001 From: Edith Puclla <58795858+edithturn@users.noreply.github.com> Date: Wed, 20 Sep 2023 15:01:50 -0500 Subject: [PATCH 037/229] Update content/es/docs/concepts/storage/projected-volumes.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Gracias por esto! Co-authored-by: Rodolfo Martínez Vega --- content/es/docs/concepts/storage/projected-volumes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md index d7014c3a8251b..e496094a1d90c 100644 --- a/content/es/docs/concepts/storage/projected-volumes.md +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -28,7 +28,7 @@ Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen: Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles, vea el documento de diseño [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md). 
-### Configuración de ejemplo con un secreto, una downwardAPI, y una configMap {#example-configuration-secret-downwardapi-configmap} +### Configuración de ejemplo con un secreto, una downwardAPI y una configMap {#example-configuration-secret-downwardapi-configmap} {{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} From cab22c412d92b79914bb3709c3a115276dbf2502 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Wed, 20 Sep 2023 18:18:47 +0100 Subject: [PATCH 038/229] Revise tutorial introduction --- .../stateful-application/basic-stateful-set.md | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md index eefaaa18e0019..2d9354e61e58a 100644 --- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md +++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md @@ -27,14 +27,25 @@ following Kubernetes concepts: * [Headless Services](/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) * [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/) -* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/) * The [kubectl](/docs/reference/kubectl/kubectl/) command line tool +{{% include "task-tutorial-prereqs.md" %}} +You should configure `kubectl` to use a context that uses the `default` +namespace. +If you are using an existing cluster, make sure that it's OK to use that +cluster's default namespace to practice. Ideally, practice in a cluster +that doesn't run any real workloads. + +It's also useful to read the concept page about [StatefulSets](/docs/concepts/workloads/controllers/statefulset/). 
+ {{< note >}} This tutorial assumes that your cluster is configured to dynamically provision -PersistentVolumes. If your cluster is not configured to do so, you +PersistentVolumes. You'll also need to have a [default StorageClass](/docs/concepts/storage/storage-classes/#default-storageclass). +If your cluster is not configured to provision storage dynamically, you will have to manually provision two 1 GiB volumes prior to starting this -tutorial. +tutorial and +set up your cluster so that those PersistentVolumes map to the +PersistentVolumeClaim templates that the StatefulSet defines. {{< /note >}} ## {{% heading "objectives" %}} From 648e2ba33385209dd777cb66bdf02893840a41cb Mon Sep 17 00:00:00 2001 From: Richa Banker Date: Mon, 24 Jul 2023 16:44:55 -0700 Subject: [PATCH 039/229] Add an entry in glossary for GVR Co-authored-by: Tim Bannister --- .../glossary/group-version-resource.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) create mode 100644 content/en/docs/reference/glossary/group-version-resource.md diff --git a/content/en/docs/reference/glossary/group-version-resource.md b/content/en/docs/reference/glossary/group-version-resource.md new file mode 100644 index 0000000000000..cdd208fd5ed8e --- /dev/null +++ b/content/en/docs/reference/glossary/group-version-resource.md @@ -0,0 +1,18 @@ +--- +title: Group Version Resource +id: gvr +date: 2023-07-24 +short_description: > + The API group, API version and name of a Kubernetes API. + +aka: ["GVR"] +tags: +- architecture +--- +Means of representing unique Kubernetes API resource. + + + +Group Version Resources (GVRs) specify the API group, API version, and resource (name for the object kind as it appears in the URI) associated with accessing a particular id of object in Kubernetes. +GVRs let you define and distinguish different Kubernetes objects, and to specify a way of accessing +objects that is stable even as APIs change. 
\ No newline at end of file From 7dcb1c4cb5f4576863847840e122105414a6e181 Mon Sep 17 00:00:00 2001 From: Arhell Date: Sat, 23 Sep 2023 02:32:37 +0300 Subject: [PATCH 040/229] [ja] remove "O=system:masters" from "kube-apiserver-etcd-client".md --- content/ja/docs/setup/best-practices/certificates.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/setup/best-practices/certificates.md b/content/ja/docs/setup/best-practices/certificates.md index a782b52fee1e9..a499631875a05 100644 --- a/content/ja/docs/setup/best-practices/certificates.md +++ b/content/ja/docs/setup/best-practices/certificates.md @@ -67,7 +67,7 @@ CAの秘密鍵をクラスターにコピーしたくない場合、自身で全 | kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | | kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` | | kube-etcd-healthcheck-client | etcd-ca | | client | | -| kube-apiserver-etcd-client | etcd-ca | system:masters | client | | +| kube-apiserver-etcd-client | etcd-ca | | client | | | kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` | | kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | | | front-proxy-client | kubernetes-front-proxy-ca | | client | | From da06ff06c8ff8b87b3315d524b906e71455ccc65 Mon Sep 17 00:00:00 2001 From: MeenuyD Date: Sun, 24 Sep 2023 20:54:02 +0530 Subject: [PATCH 041/229] Fix: secret missing in docker-registry secret command in kubectl-command docs --- static/docs/reference/generated/kubectl/kubectl-commands.html | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/static/docs/reference/generated/kubectl/kubectl-commands.html b/static/docs/reference/generated/kubectl/kubectl-commands.html index 4fff80b171a7d..cd4388a1f1171 100644 --- a/static/docs/reference/generated/kubectl/kubectl-commands.html +++ b/static/docs/reference/generated/kubectl/kubectl-commands.html @@ -1503,7 +1503,8 @@

secret docker-registry
[HTML table markup elided]
nodes to pull images on your behalf, they must have the credentials. You can provide this information by creating a dockercfg secret and attaching it to your service account.

Usage

-$ kubectl create docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]
+$ kubectl create secret docker-registry NAME --docker-username=user --docker-password=password --docker-email=email [--docker-server=string] [--from-file=[key=]source] [--dry-run=server|client|none]

Flags
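The `kubectl create secret docker-registry` usage shown above produces a Secret whose `.dockerconfigjson` data embeds the registry credentials. A sketch of that payload's assumed shape (illustrative helper, not kubectl source):

```python
import base64
import json

def docker_config_json(server: str, username: str, password: str, email: str) -> str:
    """Build a .dockerconfigjson-style payload for one registry entry.

    The `auth` field is the base64 of "username:password"; the overall
    structure here is an assumption for illustration.
    """
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({
        "auths": {
            server: {
                "username": username,
                "password": password,
                "email": email,
                "auth": auth,
            }
        }
    })

payload = docker_config_json("https://index.docker.io/v1/", "user", "password", "email@example.com")
print(json.loads(payload)["auths"]["https://index.docker.io/v1/"]["auth"])
# → dXNlcjpwYXNzd29yZA==
```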
From b4412b123655b91c75c790d1fd9950955e2ce548 Mon Sep 17 00:00:00 2001 From: niranjandarshann Date: Mon, 25 Sep 2023 10:52:03 +0530 Subject: [PATCH 042/229] Added glossary of CSI --- content/en/docs/concepts/storage/ephemeral-volumes.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md index 77844874348d7..f92f544768fd0 100644 --- a/content/en/docs/concepts/storage/ephemeral-volumes.md +++ b/content/en/docs/concepts/storage/ephemeral-volumes.md @@ -47,8 +47,7 @@ different purposes: [secret](/docs/concepts/storage/volumes/#secret): inject different kinds of Kubernetes data into a Pod - [CSI ephemeral volumes](#csi-ephemeral-volumes): - similar to the previous volume kinds, but provided by special - [CSI drivers](https://github.com/container-storage-interface/spec/blob/master/spec.md) + similar to the previous volume kinds, but provided by special {{< glossary_tooltip text="CSI" term_id="csi" >}} drivers which specifically [support this feature](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html) - [generic ephemeral volumes](#generic-ephemeral-volumes), which can be provided by all storage drivers that also support persistent volumes From 27473c3381f5bc56f4ccca7e2dcd20d2cb63ac33 Mon Sep 17 00:00:00 2001 From: Mohammed Affan Date: Thu, 24 Aug 2023 15:52:36 +0530 Subject: [PATCH 043/229] Add eviction thresholds parameters Update content/en/docs/tasks/administer-cluster/kubelet-config-file.md Co-authored-by: Qiming Teng --- .../administer-cluster/kubelet-config-file.md | 23 ++++++++++++++----- 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md index 506dd2723e00b..815f9f70c3aec 100644 --- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md +++ 
b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md @@ -35,14 +35,22 @@ address: "192.168.0.8" port: 20250 serializeImagePulls: false evictionHard: - memory.available: "200Mi" + memory.available: "100Mi" + nodefs.available: "10%" + nodefs.inodesFree: "5%" + imagefs.available: "15%" ``` -In the example, the kubelet is configured to serve on IP address 192.168.0.8 and port 20250, pull images in parallel, -and evict Pods when available memory drops below 200Mi. Since only one of the four evictionHard thresholds is configured, -other evictionHard thresholds are reset to 0 from their built-in defaults. -All other kubelet configuration values are left at their built-in defaults, unless overridden -by flags. Command line flags which target the same value as a config file will override that value. +In this example, the kubelet is configured with the following settings: + +1. `address`: The kubelet will serve on IP address `192.168.0.8`. +2. `port`: The kubelet will serve on port `20250`. +3. `serializeImagePulls`: Image pulls will be done in parallel. +4. `evictionHard`: The kubelet will evict Pods under one of the following conditions: + - When the node's available memory drops below 100MiB. + - When the node's main filesystem's available space is less than 10%. + - When the image filesystem's available space is less than 15%. + - When more than 95% of the node's main filesystem's inodes are in use. {{< note >}} In the example, by changing the default value of only one parameter for @@ -51,6 +59,9 @@ will be set to zero. In order to provide custom values, you should provide all the threshold values respectively. {{< /note >}} +The `imagefs` is an optional filesystem that container runtimes use to store container +images and container writable layers. 
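Each hard-eviction signal above boils down to a threshold comparison against node capacity. A toy evaluation under the example thresholds — hypothetical helper, not kubelet code:

```python
def threshold_exceeded(threshold: str, available: float, capacity: float) -> bool:
    """True when an eviction signal crosses a hard threshold.

    Percent thresholds compare available/capacity; "Mi" quantities
    compare absolute bytes. Toy logic for illustration only.
    """
    if threshold.endswith("%"):
        return available / capacity < float(threshold[:-1]) / 100
    if threshold.endswith("Mi"):
        return available < float(threshold[:-2]) * 1024 ** 2
    raise ValueError(f"unsupported quantity: {threshold}")

eviction_hard = {
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%",
    "imagefs.available": "15%",
}

# 90 MiB of free memory crosses the 100Mi hard threshold:
print(threshold_exceeded(eviction_hard["memory.available"], 90 * 1024**2, 8 * 1024**3))
# → True
```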
+ ## Start a kubelet process configured via the config file {{< note >}} From f2635821d773ef56f433c6938a8642ec74471a7b Mon Sep 17 00:00:00 2001 From: kujiraitakahiro Date: Mon, 25 Sep 2023 20:53:03 +0900 Subject: [PATCH 044/229] Update content/ja/docs/reference/glossary/kubeadm.md add "ja" link page Update content/ja/docs/reference/glossary/addons.md add "ja" link page Update content/ja/docs/reference/glossary/addons.md add "ja" link page Create kubeadm.md Update addons.md Create addons.md Co-Authored-By: atoato88 --- content/ja/docs/reference/glossary/addons.md | 16 ++++++++++++++++ content/ja/docs/reference/glossary/kubeadm.md | 18 ++++++++++++++++++ 2 files changed, 34 insertions(+) create mode 100644 content/ja/docs/reference/glossary/addons.md create mode 100644 content/ja/docs/reference/glossary/kubeadm.md diff --git a/content/ja/docs/reference/glossary/addons.md b/content/ja/docs/reference/glossary/addons.md new file mode 100644 index 0000000000000..199aa23e6b1b1 --- /dev/null +++ b/content/ja/docs/reference/glossary/addons.md @@ -0,0 +1,16 @@ +--- +title: Add-ons +id: addons +date: 2019-12-15 +full_link: /ja/docs/concepts/cluster-administration/addons/ +short_description: > + Kubernetesの機能を拡張するリソース。 + +aka: +tags: +- tool +--- + Kubernetesの機能を拡張するリソース。 + + +[Installing addons](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。 diff --git a/content/ja/docs/reference/glossary/kubeadm.md b/content/ja/docs/reference/glossary/kubeadm.md new file mode 100644 index 0000000000000..f84150c9790e2 --- /dev/null +++ b/content/ja/docs/reference/glossary/kubeadm.md @@ -0,0 +1,18 @@ +--- +title: Kubeadm +id: kubeadm +date: 2018-04-12 +full_link: /ja/docs/reference/setup-tools/kubeadm/ +short_description: > + Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。 + +aka: +tags: +- tool +- operation +--- + Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。 + + + +kubeadmを使用して、コントロールプレーンとワーカーノード{{< glossary_tooltip text="ワーカーノード" 
term_id="node" >}}コンポーネントの両方をインストールできます。 From 0f1a7a1b7b155a34bedebc64a0cd38244a06674d Mon Sep 17 00:00:00 2001 From: Sascha Grunert Date: Tue, 15 Aug 2023 12:11:14 +0200 Subject: [PATCH 045/229] Fix `config.json` interpretation As outlined in https://github.com/kubernetes/kubernetes/issues/119941, the implementation is more specific than a regular glob match. Updating the docs to reflect that. Signed-off-by: Sascha Grunert --- content/en/docs/concepts/containers/images.md | 42 ++++++++----------- 1 file changed, 17 insertions(+), 25 deletions(-) diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index b01b2fd112eef..b4b837ae32fc6 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -265,38 +265,26 @@ See [Configure a kubelet image credential provider](/docs/tasks/administer-clust The interpretation of `config.json` varies between the original Docker implementation and the Kubernetes interpretation. In Docker, the `auths` keys can only specify root URLs, whereas Kubernetes allows glob URLs as well as -prefix-matched paths. This means that a `config.json` like this is valid: +prefix-matched paths. The only limitation is that glob patterns (`*`) have to +include the dot (`.`) for each subdomain. 
The amount of matched subdomains has +to be equal to the amount of glob patterns (`*.`), for example: + +- `*.kubernetes.io` will *not* match `kubernetes.io`, but `abc.kubernetes.io` +- `*.*.kubernetes.io` will *not* match `abc.kubernetes.io`, but `abc.def.kubernetes.io` +- `prefix.*.io` will match `prefix.kubernetes.io` +- `*-good.kubernetes.io` will match `prefix-good.kubernetes.io` + +This means that a `config.json` like this is valid: ```json { "auths": { - "*my-registry.io/images": { - "auth": "…" - } + "my-registry.io/images": { "auth": "…" }, + "*.my-registry.io/images": { "auth": "…" } } } ``` -The root URL (`*my-registry.io`) is matched by using the following syntax: - -``` -pattern: - { term } - -term: - '*' matches any sequence of non-Separator characters - '?' matches any single non-Separator character - '[' [ '^' ] { character-range } ']' - character class (must be non-empty) - c matches character c (c != '*', '?', '\\', '[') - '\\' c matches character c - -character-range: - c matches character c (c != '\\', '-', ']') - '\\' c matches character c - lo '-' hi matches character c for lo <= c <= hi -``` - Image pull operations would now pass the credentials to the CRI container runtime for every valid pattern. For example the following container image names would match successfully: @@ -305,10 +293,14 @@ would match successfully: - `my-registry.io/images/my-image` - `my-registry.io/images/another-image` - `sub.my-registry.io/images/my-image` + +But not: + - `a.sub.my-registry.io/images/my-image` +- `a.b.sub.my-registry.io/images/my-image` The kubelet performs image pulls sequentially for every found credential. 
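The subdomain-counting rule can be sketched by matching the pattern label-by-label against the registry host — an illustration of the documented behavior, not the kubelet's actual matcher:

```python
import fnmatch

def host_matches(pattern: str, host: str) -> bool:
    """Label-by-label glob match: each dot-separated component of the
    pattern must match the corresponding component of the host, so the
    number of subdomains has to line up. Sketch of the documented rule."""
    p_labels, h_labels = pattern.split("."), host.split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(fnmatch.fnmatchcase(h, p) for p, h in zip(p_labels, h_labels))

# The examples from the text behave as documented:
assert not host_matches("*.kubernetes.io", "kubernetes.io")
assert host_matches("*.kubernetes.io", "abc.kubernetes.io")
assert not host_matches("*.*.kubernetes.io", "abc.kubernetes.io")
assert host_matches("*.*.kubernetes.io", "abc.def.kubernetes.io")
assert host_matches("prefix.*.io", "prefix.kubernetes.io")
assert host_matches("*-good.kubernetes.io", "prefix-good.kubernetes.io")
print("all examples behave as documented")
```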
This -means, that multiple entries in `config.json` are possible, too: +means, that multiple entries in `config.json` for different paths are possible, too: ```json { From 0ce4025b7071d6308a249a66d22225562f9a1300 Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 28 Sep 2023 22:41:29 +0800 Subject: [PATCH 046/229] Revert SVG images and fix typos in memory qos cgroup v2 blog --- .../container-memory-high-best-effort.svg | 480 +++++- .../container-memory-high-limit.svg | 1296 ++++++++++++--- .../container-memory-high-no-limits.svg | 1152 ++++++++++--- .../container-memory-high.svg | 1445 ++++++++++++++--- .../2023-05-05-memory-qos-cgroups-v2/index.md | 27 +- 5 files changed, 3615 insertions(+), 785 deletions(-) diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg index cf9283885855e..e35b2f39509bb 100644 --- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg +++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg @@ -1,87 +1,395 @@ - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg index 3a545f20dd85f..a2ba00c58fd4e 100644 --- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg +++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg @@ -1,226 +1,1072 @@ 
[SVG markup elided]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg
index 845f5d0d07bb2..57b207b80a0be 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg
@@ -1,203 +1,951 @@
[SVG markup elided]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg
index 02357ef901582..4ba0b15957a28 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg
@@ -1,252 +1,1195 @@
[SVG markup elided]
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md
index 26b1d626fd171..ff72afe083322 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md
+++ 
b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md @@ -128,18 +128,14 @@ enforces the limit to prevent the container from using more than the configured resource limit. If a process in a container tries to consume more than the specified limit, kernel terminates a process(es) with an Out of Memory (OOM) error. -```formula -memory.max = pod.spec.containers[i].resources.limits[memory] -``` +{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-max.svg" title="memory.max maps to limits.memory" alt="memory.max maps to limits.memory" >}} `memory.min` is mapped to `requests.memory`, which results in reservation of memory resources that should never be reclaimed by the kernel. This is how Memory QoS ensures the availability of memory for Kubernetes pods. If there's no unprotected reclaimable memory available, the OOM killer is invoked to make more memory available. -```formula -memory.min = pod.spec.containers[i].resources.requests[memory] -``` +{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-min.svg" title="memory.min maps to requests.memory" alt="memory.min maps to requests.memory" >}} For memory protection, in addition to the original way of limiting memory usage, Memory QoS throttles workload approaching its memory limit, ensuring that the system is not overwhelmed @@ -149,10 +145,7 @@ the KubeletConfiguration when you enable MemoryQoS feature. 
It is set to 0.9 by `requests.memory` and `limits.memory` as in the formula below, and rounding down the value to the nearest page size: -```formula -memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor * -{(pod.spec.containers[i].resources.limits[memory] or NodeAllocatableMemory) - pod.spec.containers[i].resources.requests[memory]} -``` +{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high.svg" title="memory.high formula" alt="memory.high formula" >}} {{< note >}} If a container has no memory limits specified, `limits.memory` is substituted for node allocatable memory. @@ -256,26 +249,18 @@ as per QOS classes: * When requests.memory and limits.memory are set, the formula is used as-is: - ```formula - memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor * - {(pod.spec.containers[i].resources.limits[memory]) - pod.spec.containers[i].resources.requests[memory]} - ``` + {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-limit.svg" title="memory.high when requests and limits are set" alt="memory.high when requests and limits are set" >}} * When requests.memory is set and limits.memory is not set, limits.memory is substituted for node allocatable memory in the formula: - ```formula - memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor * - {(NodeAllocatableMemory) - pod.spec.containers[i].resources.requests[memory]} - ``` + {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-no-limits.svg" title="memory.high when requests and limits are not set" alt="memory.high when requests and limits are not set" >}} 1. **BestEffort** by their QoS definition do not require any memory or CPU limits or requests. 
For this case, kubernetes sets requests.memory = 0 and substitute limits.memory for node allocatable memory in the formula: - ```formula - memory.high = MemoryThrottlingFactor * NodeAllocatableMemory - ``` + {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-best-effort.svg" title="memory.high for BestEffort Pod" alt="memory.high for BestEffort Pod" >}} **Summary**: Only Pods in Burstable and BestEffort QoS classes will set `memory.high`. Guaranteed QoS pods do not set `memory.high` as their memory is guaranteed. From 053b689f63298dfa96cd17e1acb90ce1aaee7b2c Mon Sep 17 00:00:00 2001 From: Akihito INOH Date: Fri, 29 Sep 2023 06:32:43 +0900 Subject: [PATCH 047/229] Update ja glossary about secret This commit update glossary about secret on ja content. The current content is a bit old, that's why I create this update. --- content/ja/docs/reference/glossary/secret.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/content/ja/docs/reference/glossary/secret.md b/content/ja/docs/reference/glossary/secret.md index 3324279bd65f9..8f7196f0c6d9c 100644 --- a/content/ja/docs/reference/glossary/secret.md +++ b/content/ja/docs/reference/glossary/secret.md @@ -15,4 +15,6 @@ tags: -機密情報の取り扱い方法を細かく制御することができ、保存時には[暗号化](/ja/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するなど、誤って公開してしまうリスクを減らすことができます。{{< glossary_tooltip text="Pod" term_id="pod" >}}は、ボリュームマウントされたファイルとして、またはPodのイメージをPullするkubeletによって、Secretを参照します。Secretは機密情報を扱うのに最適で、機密でない情報には[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)が適しています。 +Secretは、機密情報の使用方法をより管理しやすくし、偶発的な漏洩のリスクを減らすことができます。Secretの値はbase64文字列としてエンコードされ、デフォルトでは暗号化されずに保存されますが、[保存時に暗号化](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するように設定することもできます。 + +{{< glossary_tooltip text="Pod" term_id="pod" 
>}}は、ボリュームマウントや環境変数など、さまざまな方法でSecretを参照できます。Secretは機密データ用に設計されており、[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)は非機密データ用に設計されています。 \ No newline at end of file From f8f5a5c7a67dee2eab29e52c63a6453cb93ee8d7 Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 29 Sep 2023 11:34:06 +0800 Subject: [PATCH 048/229] [zh] Sync /concepts/configuration/secret.md --- .../docs/concepts/configuration/secret.md | 337 ++++++++++-------- 1 file changed, 185 insertions(+), 152 deletions(-) diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md index bcace1d3f74ac..a93cd9f2b9d0b 100644 --- a/content/zh-cn/docs/concepts/configuration/secret.md +++ b/content/zh-cn/docs/concepts/configuration/secret.md @@ -330,8 +330,8 @@ Kubernetes 并不对类型的名称作任何限制。不过,如果你要使用 则你必须满足为该类型所定义的所有要求。 如果你要定义一种公开使用的 Secret 类型,请遵守 Secret 类型的约定和结构, @@ -339,19 +339,19 @@ by a `/`. For example: `cloud-hosting.example.net/cloud-api-credentials`. 例如:`cloud-hosting.example.net/cloud-api-credentials`。 ### Opaque Secret -当 Secret 配置文件中未作显式设定时,默认的 Secret 类型是 `Opaque`。 -当你使用 `kubectl` 来创建一个 Secret 时,你会使用 `generic` -子命令来标明要创建的是一个 `Opaque` 类型 Secret。 -例如,下面的命令会创建一个空的 `Opaque` 类型 Secret 对象: +当你未在 Secret 清单中显式指定类型时,默认的 Secret 类型是 `Opaque`。 +当你使用 `kubectl` 来创建一个 Secret 时,你必须使用 `generic` +子命令来标明要创建的是一个 `Opaque` 类型的 Secret。 +例如,下面的命令会创建一个空的 `Opaque` 类型的 Secret: ```shell kubectl create secret generic empty-secret @@ -361,7 +361,7 @@ kubectl get secret empty-secret -输出类似于 +输出类似于: ``` NAME TYPE DATA AGE @@ -376,89 +376,87 @@ In this case, `0` means you have created an empty Secret. 在这个例子中,`0` 意味着你刚刚创建了一个空的 Secret。 -### 服务账号令牌 Secret {#service-account-token-secrets} - -类型为 `kubernetes.io/service-account-token` 的 Secret -用来存放标识某{{< glossary_tooltip text="服务账号" term_id="service-account" >}}的令牌凭据。 +{{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}}. 
This +is a legacy mechanism that provides long-lived ServiceAccount credentials to +Pods. +--> +### ServiceAccount 令牌 Secret {#service-account-token-secrets} + +类型为 `kubernetes.io/service-account-token` 的 Secret 用来存放标识某 +{{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}} 的令牌凭据。 +这是为 Pod 提供长期有效 ServiceAccount 凭据的传统机制。 + + +在 Kubernetes v1.22 及更高版本中,推荐的方法是通过使用 +[`TokenRequest`](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API +来获取短期自动轮换的 ServiceAccount 令牌。你可以使用以下方法获取这些短期令牌: + + +- 直接调用 `TokenRequest` API,或者使用像 `kubectl` 这样的 API 客户端。 + 例如,你可以使用 + [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) 命令。 +- 在 Pod 清单中请求使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)挂载的令牌。 + Kubernetes 会创建令牌并将其挂载到 Pod 中。 + 当挂载令牌的 Pod 被删除时,此令牌会自动失效。 + 更多细节参阅[启动使用服务账号令牌投射的 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#launch-a-pod-using-service-account-token-projection)。 {{< note >}} -Kubernetes 在 v1.22 版本之前都会自动创建用来访问 Kubernetes API 的凭据。 -这一老的机制是基于创建可被挂载到运行中 Pod 内的令牌 Secret 来实现的。 -在最近的版本中,包括 Kubernetes v{{< skew currentVersion >}} 中,API 凭据是直接通过 -[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) -API 来获得的,这一凭据会使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)挂载到 -Pod 中。使用这种方式获得的令牌有确定的生命期,并且在挂载它们的 Pod 被删除时自动作废。 - - -你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token) -服务账号令牌。例如,当你需要一个永远都不过期的令牌时。 -不过,仍然建议使用 [TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) -子资源来获得访问 API 服务器的令牌。 -你可以使用 [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) -命令调用 `TokenRequest` API 获得令牌。 -{{< /note >}} - - 只有在你无法使用 `TokenRequest` API 来获取令牌, 并且你能够接受因为将永不过期的令牌凭据写入到可读取的 API 
对象而带来的安全风险时, -才应该创建服务账号令牌 Secret 对象。 +才应该创建 ServiceAccount 令牌 Secret。 +更多细节参阅[为 ServiceAccount 手动创建长期有效的 API 令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token)。 +{{< /note >}} 使用这种 Secret 类型时,你需要确保对象的注解 `kubernetes.io/service-account-name` -被设置为某个已有的服务账号名称。 -如果你同时负责 ServiceAccount 和 Secret 对象的创建,应该先创建 ServiceAccount 对象。 +被设置为某个已有的 ServiceAccount 名称。 +如果你同时创建 ServiceAccount 和 Secret 对象,应该先创建 ServiceAccount 对象。 -当 Secret 对象被创建之后,某个 Kubernetes{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写 -Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解以及 `data` 字段中的 -`token` 键值,使之包含实际的令牌内容。 +当 Secret 对象被创建之后,某个 Kubernetes +{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写 +Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解和 +`data` 字段中的 `token` 键值(该键包含一个身份认证令牌)。 -下面的配置实例声明了一个服务账号令牌 Secret: +下面的配置实例声明了一个 ServiceAccount 令牌 Secret: -参考 [ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/) -文档了解服务账号的工作原理。你也可以查看 +参考 [ServiceAccount](/zh-cn/docs/concepts/security/service-accounts/) +文档了解 ServiceAccount 的工作原理。你也可以查看 [`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) 资源中的 `automountServiceAccountToken` 和 `serviceAccountName` 字段文档, -进一步了解从 Pod 中引用服务账号凭据。 +进一步了解从 Pod 中引用 ServiceAccount 凭据。 -`kubernetes.io/dockercfg` 是一种保留类型,用来存放 `~/.dockercfg` 文件的序列化形式。 -该文件是配置 Docker 命令行的一种老旧形式。使用此 Secret 类型时,你需要确保 -Secret 的 `data` 字段中包含名为 `.dockercfg` 的主键,其对应键值是用 base64 -编码的某 `~/.dockercfg` 文件的内容。 +- `kubernetes.io/dockercfg`:存放 `~/.dockercfg` 文件的序列化形式,它是配置 Docker + 命令行的一种老旧形式。Secret 的 `data` 字段包含名为 `.dockercfg` 的主键, + 其值是用 base64 编码的某 `~/.dockercfg` 文件的内容。 +- `kubernetes.io/dockerconfigjson`:存放 JSON 数据的序列化形式, + 该 JSON 也遵从 `~/.docker/config.json` 文件的格式规则,而后者是 `~/.dockercfg` + 的新版本格式。使用此 Secret 类型时,Secret 对象的 `data` 字段必须包含 + `.dockerconfigjson` 键,其键值为 base64 编码的字符串包含 `~/.docker/config.json` + 文件的内容。 -类型 `kubernetes.io/dockerconfigjson` 
被设计用来保存 JSON 数据的序列化形式, -该 JSON 也遵从 `~/.docker/config.json` 文件的格式规则,而后者是 `~/.dockercfg` -的新版本格式。使用此 Secret 类型时,Secret 对象的 `data` 字段必须包含 -`.dockerconfigjson` 键,其键值为 base64 编码的字符串包含 `~/.docker/config.json` -文件的内容。 - 下面是一个 `kubernetes.io/dockercfg` 类型 Secret 的示例: ```yaml @@ -570,20 +560,20 @@ If you do not want to perform the base64 encoding, you can choose to use the {{< /note >}} -当你使用清单文件来创建这两类 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键, +当你使用清单文件通过 Docker 配置来创建 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键, 并且验证其中所提供的键值是否是合法的 JSON 数据。 不过,API 服务器不会检查 JSON 数据本身是否是一个合法的 Docker 配置文件内容。 -当你没有 Docker 配置文件,或者你想使用 `kubectl` 创建一个 Secret -来访问容器仓库时,你可以这样做: +你还可以使用 `kubectl` 创建一个 Secret 来访问容器仓库时, +当你没有 Docker 配置文件时你可以这样做: ```shell kubectl create secret docker-registry secret-tiger-docker \ @@ -594,22 +584,24 @@ kubectl create secret docker-registry secret-tiger-docker \ ``` -上面的命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。 -如果你对 `.data.dockerconfigjson` 内容进行转储并执行 base64 解码: +此命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。 + +从这个新的 Secret 中获取 `.data.dockerconfigjson` 字段并执行数据解码: ```shell kubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d ``` -那么输出等价于这个 JSON 文档(这也是一个有效的 Docker 配置文件): +输出等价于以下 JSON 文档(这也是一个有效的 Docker 配置文件): ```json { @@ -657,16 +649,29 @@ Secret must contain one of the following two keys: - `password`: 用于身份认证的密码或令牌。 以上两个键的键值都是 base64 编码的字符串。 -当然你也可以在创建 Secret 时使用 `stringData` 字段来提供明文形式的内容。 +当然你也可以在 Secret 清单中的使用 `stringData` 字段来提供明文形式的内容。 + 以下清单是基本身份验证 Secret 的示例: + ```yaml apiVersion: v1 kind: Secret @@ -693,7 +698,7 @@ The Kubernetes API verifies that the required keys are set for a Secret of this API 服务器会检查 Secret 配置中是否提供了所需要的主键。 ```yaml apiVersion: v1 kind: Secret @@ -724,18 +742,18 @@ data: ``` -提供 SSH 身份认证类型的 Secret 仅仅是出于用户方便性考虑。 -你也可以使用 `Opaque` 类型来保存用于 SSH 身份认证的凭据。 +提供 SSH 身份认证类型的 Secret 仅仅是出于方便性考虑。 +你可以使用 `Opaque` 类型来保存用于 SSH 身份认证的凭据。 不过,使用预定义的、公开的 Secret 类型(`kubernetes.io/ssh-auth`) 有助于其他人理解你的 Secret 的用途,也可以就其中包含的主键名形成约定。 
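The hunks above repeatedly contrast base64-encoded `data` values with plaintext `stringData` values. The encoding they describe can be checked from a shell (a sketch; the `admin` value is a placeholder, not from the patch):

```shell
# Values under a Secret's `data` field are base64-encoded strings;
# `stringData` accepts the same values as plain text.
username_b64="$(printf '%s' 'admin' | base64)"
echo "$username_b64"                       # YWRtaW4=

# Decoding recovers the original plaintext.
printf '%s' "$username_b64" | base64 -d    # prints: admin
```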
-API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。 +Kubernetes API 会验证这种类型的 Secret 中是否设定了所需的主键。 {{< caution >}} ### TLS Secret -Kubernetes 提供一种内置的 `kubernetes.io/tls` Secret 类型,用来存放 TLS -场合通常要使用的证书及其相关密钥。 +`kubernetes.io/tls` Secret 类型用来存放 TLS 场合通常要使用的证书及其相关密钥。 + TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 资源配置传输过程中的数据加密,不过也可以用于其他资源或者直接在负载中使用。 当使用此类型的 Secret 时,Secret 配置中的 `data` (或 `stringData`)字段必须包含 @@ -779,6 +799,23 @@ TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-n 下面的 YAML 包含一个 TLS Secret 的配置示例: + ```yaml apiVersion: v1 kind: Secret @@ -796,21 +833,20 @@ stringData: ``` -提供 TLS 类型的 Secret 仅仅是出于用户方便性考虑。 -你也可以使用 `Opaque` 类型来保存用于 TLS 服务器与/或客户端的凭据。 -不过,使用内置的 Secret 类型的有助于对凭据格式进行归一化处理,并且 -API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。 +提供 TLS 类型的 Secret 仅仅是出于方便性考虑。 +你可以创建 `Opaque` 类型的 Secret 来保存用于 TLS 身份认证的凭据。 +不过,使用已定义和公开的 Secret 类型有助于确保你自己项目中的 Secret 格式的一致性。 +API 服务器会验证这种类型的 Secret 是否设定了所需的主键。 -当使用 `kubectl` 来创建 TLS Secret 时,你可以像下面的例子一样使用 `tls` -子命令: +要使用 `kubectl` 创建 TLS Secret,你可以使用 `tls` 子命令: ```shell kubectl create secret tls my-tls-secret \ @@ -828,15 +864,13 @@ and must match the given private key for `--key`. 
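The `kubectl create secret tls` command shown in this hunk needs an existing certificate/key pair where the certificate matches the private key. One way to produce a throwaway self-signed pair for testing is sketched below (assumes `openssl` is available; the file names and CN are placeholders, and the final `kubectl` step is left as a comment because it requires a live cluster):

```shell
# Generate a self-signed certificate and its matching private key.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt -subj "/CN=example.com"

# Then create the Secret as the document shows:
# kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
ls -l tls.crt tls.key
```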
### 启动引导令牌 Secret {#bootstrap-token-secrets} -通过将 Secret 的 `type` 设置为 `bootstrap.kubernetes.io/token` -可以创建启动引导令牌类型的 Secret。这种类型的 Secret 被设计用来支持节点的启动引导过程。 +`bootstrap.kubernetes.io/token` Secret 类型针对的是节点启动引导过程所用的令牌。 其中包含用来为周知的 ConfigMap 签名的令牌。 -上面的 YAML 文件可能看起来令人费解,因为其中的数值均为 base64 编码的字符串。 -实际上,你完全可以使用下面的 YAML 来创建一个一模一样的 Secret: +你也可以在 Secret 的 `stringData` 字段中提供值,而无需对其进行 base64 编码: ```yaml apiVersion: v1 From 15133c0c6459321d5f5e1fdf9e29098ce7da84e4 Mon Sep 17 00:00:00 2001 From: kujiraitakahiro Date: Sat, 30 Sep 2023 13:41:51 +0900 Subject: [PATCH 049/229] Update content/ja/docs/reference/glossary/kubeadm.md Co-authored-by: inukai <82919057+t-inu@users.noreply.github.com> --- content/ja/docs/reference/glossary/kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/reference/glossary/kubeadm.md b/content/ja/docs/reference/glossary/kubeadm.md index f84150c9790e2..eecc3db14591f 100644 --- a/content/ja/docs/reference/glossary/kubeadm.md +++ b/content/ja/docs/reference/glossary/kubeadm.md @@ -4,7 +4,7 @@ id: kubeadm date: 2018-04-12 full_link: /ja/docs/reference/setup-tools/kubeadm/ short_description: > - Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。 + Kubernetesを迅速にインストールし、安全なクラスターをセットアップするためのツール。 aka: tags: From b2468531f82c361831ade5e61268c08cdb1d26c7 Mon Sep 17 00:00:00 2001 From: Arhell Date: Sun, 1 Oct 2023 00:43:43 +0300 Subject: [PATCH 050/229] [ja] Updated link referring to Portworx CSI Drivers --- content/ja/docs/concepts/storage/volumes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/concepts/storage/volumes.md b/content/ja/docs/concepts/storage/volumes.md index c4445d49c49f7..df385712b7a7b 100644 --- a/content/ja/docs/concepts/storage/volumes.md +++ b/content/ja/docs/concepts/storage/volumes.md @@ -899,7 +899,7 @@ spec: Portworxの`CSIMigration`機能が追加されましたが、Kubernetes 1.23ではAlpha状態であるため、デフォルトで無効になっています。 すべてのプラグイン操作を既存のツリー内プラグインから`pxd.portworx.com`Container Storage 
Interface(CSI)ドライバーにリダイレクトします。 -[Portworx CSIドライバー](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/csi/)をクラスターにインストールする必要があります。 +[Portworx CSIドライバー](https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/csi)をクラスターにインストールする必要があります。 この機能を有効にするには、kube-controller-managerとkubeletで`CSIMigrationPortworx=true`を設定します。 ## subPathの使用 {#using-subpath} From 18d86031f4bf9fa5ef7be38e1101b010df81c4a6 Mon Sep 17 00:00:00 2001 From: kujiraitakahiro Date: Sun, 1 Oct 2023 16:42:52 +0900 Subject: [PATCH 051/229] Update content/ja/docs/reference/glossary/addons.md Co-authored-by: inukai <82919057+t-inu@users.noreply.github.com> --- content/ja/docs/reference/glossary/addons.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/reference/glossary/addons.md b/content/ja/docs/reference/glossary/addons.md index 199aa23e6b1b1..ee781366e0231 100644 --- a/content/ja/docs/reference/glossary/addons.md +++ b/content/ja/docs/reference/glossary/addons.md @@ -13,4 +13,4 @@ tags: Kubernetesの機能を拡張するリソース。 -[Installing addons](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。 +[アドオンのインストール](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。 From 12e4c487dda0aad0bc2a90b3718d044e7d0829d4 Mon Sep 17 00:00:00 2001 From: Gauravpadam <1032201077@tcetmumbai.in> Date: Tue, 26 Sep 2023 23:11:30 +0530 Subject: [PATCH 052/229] Defined i18nDir in hugo.toml add back symbolic links resetting branch defaults add back symlinks Added current symbolic links to gitignore The changes are limited to windows now Add back symbolic links .gitignore modified reverse .gitignore Add .env to .gitignore configurations for i18nDir Modify README.md Re-structure directory Restore commit defined i18nDir to ./data/i18n Restore readme.md --- hugo.toml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/hugo.toml b/hugo.toml index 
ef2cb6a4aca1d..58c4a1aa0ff1e 100644 --- a/hugo.toml +++ b/hugo.toml @@ -301,6 +301,7 @@ languageName = "English" # Weight used for sorting. weight = 1 languagedirection = "ltr" +i18nDir = "./data/i18n" [languages.zh-cn] title = "Kubernetes" @@ -503,4 +504,4 @@ languagedirection = "ltr" [languages.uk.params] time_format_blog = "02.01.2006" # A list of language codes to look for untranslated content, ordered from left to right. -language_alternatives = ["en"] +language_alternatives = ["en"] \ No newline at end of file From 6296f46c8792d4551a33cefd39557250aa75015e Mon Sep 17 00:00:00 2001 From: Gauravpadam <1032201077@tcetmumbai.in> Date: Sun, 1 Oct 2023 11:28:06 +0530 Subject: [PATCH 053/229] Added github repo link to feedback Limited changes to en.toml Multiline string changed html to markdown --- data/i18n/en/en.toml | 2 +- layouts/partials/feedback.html | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/data/i18n/en/en.toml b/data/i18n/en/en.toml index 8ec60129fa5a6..f5ca7d43f422c 100644 --- a/data/i18n/en/en.toml +++ b/data/i18n/en/en.toml @@ -265,7 +265,7 @@ other = "Workload" other = "suggest an improvement" [layouts_docs_partials_feedback_issue] -other = "Open an issue in the GitHub repo if you want to " +other = """Open an issue in the [GitHub Repository](https://www.github.com/kubernetes/website/) if you want to """ [layouts_docs_partials_feedback_or] other = "or" diff --git a/layouts/partials/feedback.html b/layouts/partials/feedback.html index f062cd847240c..1be98c1915de9 100644 --- a/layouts/partials/feedback.html +++ b/layouts/partials/feedback.html @@ -9,7 +9,7 @@

{{ T "feedback_heading" }}

Stack Overflow. - {{ T "layouts_docs_partials_feedback_issue" }} + {{ T "layouts_docs_partials_feedback_issue" | markdownify }} From 51e7852be31651d766df1153932b656d3b71159e Mon Sep 17 00:00:00 2001 From: kujiraitakahiro Date: Tue, 3 Oct 2023 07:52:45 +0900 Subject: [PATCH 054/229] Update content/ja/docs/reference/glossary/kubeadm.md Co-authored-by: inukai <82919057+t-inu@users.noreply.github.com> --- content/ja/docs/reference/glossary/kubeadm.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/reference/glossary/kubeadm.md b/content/ja/docs/reference/glossary/kubeadm.md index eecc3db14591f..535f26183dc5a 100644 --- a/content/ja/docs/reference/glossary/kubeadm.md +++ b/content/ja/docs/reference/glossary/kubeadm.md @@ -11,7 +11,7 @@ tags: - tool - operation --- - Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。 + Kubernetesを迅速にインストールし、安全なクラスターをセットアップするためのツール。 From 8b94250cc97fb2459e88422fab1d9311ac07e88a Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Tue, 3 Oct 2023 14:09:48 +0800 Subject: [PATCH 055/229] Update config API reference --- .../config-api/apiserver-admission.v1.md | 1 - .../config-api/apiserver-audit.v1.md | 1 - .../config-api/apiserver-config.v1.md | 1 - .../config-api/apiserver-config.v1alpha1.md | 84 +- .../config-api/apiserver-config.v1beta1.md | 88 +- .../config-api/apiserver-encryption.v1.md | 3 +- .../apiserver-eventratelimit.v1alpha1.md | 1 - .../apiserver-webhookadmission.v1.md | 1 - .../config-api/client-authentication.v1.md | 1 - .../client-authentication.v1beta1.md | 1 - .../config-api/imagepolicy.v1alpha1.md | 1 - ...kube-controller-manager-config.v1alpha1.md | 1954 ++++++++--------- .../config-api/kube-proxy-config.v1alpha1.md | 6 +- .../config-api/kube-scheduler-config.v1.md | 6 +- .../kube-scheduler-config.v1beta3.md | 354 +-- .../config-api/kubeadm-config.v1beta3.md | 208 +- .../config-api/kubeadm-config.v1beta4.md | 212 +- .../reference/config-api/kubeconfig.v1.md | 77 + 
.../reference/config-api/kubelet-config.v1.md | 3 +- .../config-api/kubelet-config.v1alpha1.md | 1 - .../config-api/kubelet-config.v1beta1.md | 1039 ++++----- .../kubelet-credentialprovider.v1.md | 1 - .../kubelet-credentialprovider.v1alpha1.md | 1 - .../kubelet-credentialprovider.v1beta1.md | 3 +- 24 files changed, 2056 insertions(+), 1992 deletions(-) diff --git a/content/en/docs/reference/config-api/apiserver-admission.v1.md b/content/en/docs/reference/config-api/apiserver-admission.v1.md index 5555e6f5c12b4..0423f38cf2d53 100644 --- a/content/en/docs/reference/config-api/apiserver-admission.v1.md +++ b/content/en/docs/reference/config-api/apiserver-admission.v1.md @@ -11,7 +11,6 @@ auto_generated: true - [AdmissionReview](#admission-k8s-io-v1-AdmissionReview) - ## `AdmissionReview` {#admission-k8s-io-v1-AdmissionReview} diff --git a/content/en/docs/reference/config-api/apiserver-audit.v1.md b/content/en/docs/reference/config-api/apiserver-audit.v1.md index abab04f1bd2e1..b874126a28716 100644 --- a/content/en/docs/reference/config-api/apiserver-audit.v1.md +++ b/content/en/docs/reference/config-api/apiserver-audit.v1.md @@ -14,7 +14,6 @@ auto_generated: true - [Policy](#audit-k8s-io-v1-Policy) - [PolicyList](#audit-k8s-io-v1-PolicyList) - ## `Event` {#audit-k8s-io-v1-Event} diff --git a/content/en/docs/reference/config-api/apiserver-config.v1.md b/content/en/docs/reference/config-api/apiserver-config.v1.md index ec78a45da1a51..c133724ec70bd 100644 --- a/content/en/docs/reference/config-api/apiserver-config.v1.md +++ b/content/en/docs/reference/config-api/apiserver-config.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [AdmissionConfiguration](#apiserver-config-k8s-io-v1-AdmissionConfiguration) - ## `AdmissionConfiguration` {#apiserver-config-k8s-io-v1-AdmissionConfiguration} diff --git a/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md b/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md index 0c85b397f61f7..47899f794e7fe 100644 
--- a/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/apiserver-config.v1alpha1.md @@ -15,6 +15,47 @@ auto_generated: true - [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + + +

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

+ + +
+ + + + + + + + + + + +
FieldDescription
endpoint
+string +
+

Endpoint of the collector this component will report traces to. +The connection is insecure, and does not currently support TLS. +Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

+
samplingRatePerMillion
+int32 +
+

SamplingRatePerMillion is the number of samples to collect per million spans. +Recommended is unset. If unset, sampler respects its parent span's sampling +rate, but otherwise never samples.

+
+ ## `AdmissionConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration} @@ -360,45 +401,4 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server - - - - -## `TracingConfiguration` {#TracingConfiguration} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) - - -

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

- - - - - - - - - - - - - - -
FieldDescription
endpoint
-string -
-

Endpoint of the collector this component will report traces to. -The connection is insecure, and does not currently support TLS. -Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

-
samplingRatePerMillion
-int32 -
-

SamplingRatePerMillion is the number of samples to collect per million spans. -Recommended is unset. If unset, sampler respects its parent span's sampling -rate, but otherwise never samples.

-
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/apiserver-config.v1beta1.md b/content/en/docs/reference/config-api/apiserver-config.v1beta1.md index 6acb3540cd06f..06dfaab72291e 100644 --- a/content/en/docs/reference/config-api/apiserver-config.v1beta1.md +++ b/content/en/docs/reference/config-api/apiserver-config.v1beta1.md @@ -14,6 +14,49 @@ auto_generated: true - [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) + + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + +- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) + + +

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

+ + + + + + + + + + + + + + +
FieldDescription
endpoint
+string +
+

Endpoint of the collector this component will report traces to. +The connection is insecure, and does not currently support TLS. +Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

+
samplingRatePerMillion
+int32 +
+

SamplingRatePerMillion is the number of samples to collect per million spans. +Recommended is unset. If unset, sampler respects its parent span's sampling +rate, but otherwise never samples.

+
+ ## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration} @@ -291,47 +334,4 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server - - - - -## `TracingConfiguration` {#TracingConfiguration} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) - -- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration) - - -

TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

- - - - - - - - - - - - - - -
FieldDescription
endpoint
-string -
-

Endpoint of the collector this component will report traces to. -The connection is insecure, and does not currently support TLS. -Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

-
samplingRatePerMillion
-int32 -
-

SamplingRatePerMillion is the number of samples to collect per million spans. -Recommended is unset. If unset, sampler respects its parent span's sampling -rate, but otherwise never samples.

-
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/apiserver-encryption.v1.md b/content/en/docs/reference/config-api/apiserver-encryption.v1.md index 148dc374e8cad..49e7695dc5062 100644 --- a/content/en/docs/reference/config-api/apiserver-encryption.v1.md +++ b/content/en/docs/reference/config-api/apiserver-encryption.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration) - ## `EncryptionConfiguration` {#apiserver-config-k8s-io-v1-EncryptionConfiguration} @@ -20,7 +19,7 @@ auto_generated: true

EncryptionConfiguration stores the complete configuration for encryption providers. It also allows the use of wildcards to specify the resources that should be encrypted. -Use '*.<group>' to encrypt all resources within a group or '*.*' to encrypt all resources. +Use '*<group>o encrypt all resources within a group or '*.*' to encrypt all resources. '*.' can be used to encrypt all resource in the core group. '*.*' will encrypt all resources, even custom resources that are added after API server start. Use of wildcards that overlap within the same resource list or across multiple diff --git a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md index 2189c4910d277..60a5bcbedf9d2 100644 --- a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md +++ b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md @@ -11,7 +11,6 @@ auto_generated: true - [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration) - ## `Configuration` {#eventratelimit-admission-k8s-io-v1alpha1-Configuration} diff --git a/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md b/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md index b806f3b6c6075..9520d2ce53768 100644 --- a/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md +++ b/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [WebhookAdmission](#apiserver-config-k8s-io-v1-WebhookAdmission) - ## `WebhookAdmission` {#apiserver-config-k8s-io-v1-WebhookAdmission} diff --git a/content/en/docs/reference/config-api/client-authentication.v1.md b/content/en/docs/reference/config-api/client-authentication.v1.md index 53e602d0f22a2..33150093d9488 100644 --- a/content/en/docs/reference/config-api/client-authentication.v1.md +++ b/content/en/docs/reference/config-api/client-authentication.v1.md 
@@ -11,7 +11,6 @@ auto_generated: true
 
 - [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
 
-
 ## `ExecCredential` {#client-authentication-k8s-io-v1-ExecCredential}
diff --git a/content/en/docs/reference/config-api/client-authentication.v1beta1.md b/content/en/docs/reference/config-api/client-authentication.v1beta1.md
index d9e55d0ee2beb..95f65e4bbd597 100644
--- a/content/en/docs/reference/config-api/client-authentication.v1beta1.md
+++ b/content/en/docs/reference/config-api/client-authentication.v1beta1.md
@@ -11,7 +11,6 @@ auto_generated: true
 
 - [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
 
-
 ## `ExecCredential` {#client-authentication-k8s-io-v1beta1-ExecCredential}
diff --git a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md
index f6eaa915a8b41..e3ffcf0b73e2b 100644
--- a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md
+++ b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md
@@ -11,7 +11,6 @@ auto_generated: true
 
 - [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview)
 
-
 ## `ImageReview` {#imagepolicy-k8s-io-v1alpha1-ImageReview}
diff --git a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md
index 348c557807eed..d63e35f68a973 100644
--- a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md
+++ b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md
@@ -9,301 +9,366 @@ auto_generated: true
 
 ## Resource Types
 
-- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
 - [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
 - [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)
+- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
 
-## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration}
+## `NodeControllerConfiguration` {#NodeControllerConfiguration}
+
+**Appears in:**
-

KubeControllerManagerConfiguration contains elements describing kube-controller manager.

+- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + + +

NodeControllerConfiguration contains elements describing NodeController.
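As an illustration only, the field described here might appear in a serialized configuration like this; the v1alpha1 controller-manager types serialize Go field names directly, so the key casing and the value are assumptions, not authoritative defaults:

```yaml
# Hypothetical NodeController fragment of a cloud-controller-manager
# configuration file; the value is an example only.
NodeController:
  ConcurrentNodeSyncs: 2   # workers synchronizing nodes concurrently
```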

- - - - - +
FieldDescription
apiVersion
string
kubecontrollermanager.config.k8s.io/v1alpha1
kind
string
KubeControllerManagerConfiguration
Generic [Required]
-GenericControllerManagerConfiguration +
ConcurrentNodeSyncs [Required]
+int32
-

Generic holds configuration for a generic controller-manager

+

ConcurrentNodeSyncs is the number of workers +concurrently synchronizing nodes

KubeCloudShared [Required]
-KubeCloudSharedConfiguration +
+ +## `ServiceControllerConfiguration` {#ServiceControllerConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ServiceControllerConfiguration contains elements describing ServiceController.

+ + + + + + + + - +
FieldDescription
ConcurrentServiceSyncs [Required]
+int32
-

KubeCloudSharedConfiguration holds configuration for shared related features -both in cloud controller manager and kube-controller manager.

+

concurrentServiceSyncs is the number of services that are +allowed to sync concurrently. Larger number = more responsive service +management, but more CPU (and network) load.

AttachDetachController [Required]
-AttachDetachControllerConfiguration +
+ + +## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration} + + + +

CloudControllerManagerConfiguration contains elements describing cloud-controller manager.
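A hedged sketch of a complete CloudControllerManagerConfiguration file using the fields listed in this section; the provider name, values, and key casing are illustrative assumptions for this alpha API, not verified defaults:

```yaml
apiVersion: cloudcontrollermanager.config.k8s.io/v1alpha1
kind: CloudControllerManagerConfiguration
KubeCloudShared:
  CloudProvider:
    Name: external          # example provider name
NodeController:
  ConcurrentNodeSyncs: 2
ServiceController:
  ConcurrentServiceSyncs: 1
NodeStatusUpdateFrequency: 5m0s
Webhook:
  Webhooks: ["*"]
```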

+ + + + + + + + + + + - - - - - - +
FieldDescription
apiVersion
string
cloudcontrollermanager.config.k8s.io/v1alpha1
kind
string
CloudControllerManagerConfiguration
Generic [Required]
+GenericControllerManagerConfiguration
-

AttachDetachControllerConfiguration holds configuration for -AttachDetachController related features.

+

Generic holds configuration for a generic controller-manager

CSRSigningController [Required]
-CSRSigningControllerConfiguration +
KubeCloudShared [Required]
+KubeCloudSharedConfiguration
-

CSRSigningControllerConfiguration holds configuration for -CSRSigningController related features.

+

KubeCloudSharedConfiguration holds configuration for shared related features +both in cloud controller manager and kube-controller manager.

DaemonSetController [Required]
-DaemonSetControllerConfiguration +
NodeController [Required]
+NodeControllerConfiguration
-

DaemonSetControllerConfiguration holds configuration for DaemonSetController +

NodeController holds configuration for node controller related features.

DeploymentController [Required]
-DeploymentControllerConfiguration +
ServiceController [Required]
+ServiceControllerConfiguration
-

DeploymentControllerConfiguration holds configuration for -DeploymentController related features.

+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

StatefulSetController [Required]
-StatefulSetControllerConfiguration +
NodeStatusUpdateFrequency [Required]
+meta/v1.Duration
-

StatefulSetControllerConfiguration holds configuration for -StatefulSetController related features.

+

NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status

DeprecatedController [Required]
-DeprecatedControllerConfiguration +
Webhook [Required]
+WebhookConfiguration
-

DeprecatedControllerConfiguration holds configuration for some deprecated -features.

+

Webhook is the configuration for cloud-controller-manager hosted webhooks

EndpointController [Required]
-EndpointControllerConfiguration +
+ +## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration} + + +**Appears in:** + +- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration) + + +

CloudProviderConfiguration contains basic elements about the cloud provider.

+ + + + + + + + - - +
FieldDescription
Name [Required]
+string
-

EndpointControllerConfiguration holds configuration for EndpointController -related features.

+

Name is the provider for cloud services.

EndpointSliceController [Required]
-EndpointSliceControllerConfiguration +
CloudConfigFile [Required]
+string
-

EndpointSliceControllerConfiguration holds configuration for -EndpointSliceController related features.

+

cloudConfigFile is the path to the cloud provider configuration file.

EndpointSliceMirroringController [Required]
-EndpointSliceMirroringControllerConfiguration +
+ +## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

KubeCloudSharedConfiguration contains elements shared by both kube-controller manager +and cloud-controller manager, but not genericconfig.
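For orientation, a minimal KubeCloudShared fragment combining several of the fields described in this section; paths, CIDRs, and durations are example values, and the key casing assumes direct Go-field serialization:

```yaml
KubeCloudShared:
  CloudProvider:
    Name: external
    CloudConfigFile: /etc/kubernetes/cloud.conf   # example path
  ClusterName: kubernetes
  ClusterCIDR: 10.244.0.0/16
  AllocateNodeCIDRs: true
  ConfigureCloudRoutes: true
  RouteReconciliationPeriod: 10s
  NodeMonitorPeriod: 5s
```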

+ + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
FieldDescription
CloudProvider [Required]
+CloudProviderConfiguration
-

EndpointSliceMirroringControllerConfiguration holds configuration for -EndpointSliceMirroringController related features.

+

CloudProviderConfiguration holds configuration for CloudProvider related features.

EphemeralVolumeController [Required]
-EphemeralVolumeControllerConfiguration +
ExternalCloudVolumePlugin [Required]
+string
-

EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController -related features.

+

externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external". +It is currently used by the in repo cloud providers to handle node and volume control in the KCM.

GarbageCollectorController [Required]
-GarbageCollectorControllerConfiguration +
UseServiceAccountCredentials [Required]
+bool
-

GarbageCollectorControllerConfiguration holds configuration for -GarbageCollectorController related features.

+

useServiceAccountCredentials indicates whether controllers should be run with +individual service account credentials.

HPAController [Required]
-HPAControllerConfiguration +
AllowUntaggedCloud [Required]
+bool
-

HPAControllerConfiguration holds configuration for HPAController related features.

+

AllowUntaggedCloud allows the controller manager to run with untagged cloud instances.

JobController [Required]
-JobControllerConfiguration +
RouteReconciliationPeriod [Required]
+meta/v1.Duration
-

JobControllerConfiguration holds configuration for JobController related features.

+

routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider.

CronJobController [Required]
-CronJobControllerConfiguration +
NodeMonitorPeriod [Required]
+meta/v1.Duration
-

CronJobControllerConfiguration holds configuration for CronJobController related features.

+

nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.

LegacySATokenCleaner [Required]
-LegacySATokenCleanerConfiguration +
ClusterName [Required]
+string
-

LegacySATokenCleanerConfiguration holds configuration for LegacySATokenCleaner related features.

+

clusterName is the instance prefix for the cluster.

NamespaceController [Required]
-NamespaceControllerConfiguration +
ClusterCIDR [Required]
+string
-

NamespaceControllerConfiguration holds configuration for NamespaceController -related features.

+

clusterCIDR is the CIDR range for Pods in the cluster.

NodeIPAMController [Required]
-NodeIPAMControllerConfiguration +
AllocateNodeCIDRs [Required]
+bool
-

NodeIPAMControllerConfiguration holds configuration for NodeIPAMController -related features.

+

AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if +ConfigureCloudRoutes is true, to be set on the cloud provider.

NodeLifecycleController [Required]
-NodeLifecycleControllerConfiguration +
CIDRAllocatorType [Required]
+string
-

NodeLifecycleControllerConfiguration holds configuration for -NodeLifecycleController related features.

+

CIDRAllocatorType determines what kind of pod CIDR allocator will be used.

PersistentVolumeBinderController [Required]
-PersistentVolumeBinderControllerConfiguration +
ConfigureCloudRoutes [Required]
+bool
-

PersistentVolumeBinderControllerConfiguration holds configuration for -PersistentVolumeBinderController related features.

+

configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs +to be configured on the cloud provider.

PodGCController [Required]
-PodGCControllerConfiguration +
NodeSyncPeriod [Required]
+meta/v1.Duration
-

PodGCControllerConfiguration holds configuration for PodGCController -related features.

+

nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer +periods will result in fewer calls to cloud provider, but may delay addition +of new nodes to cluster.

ReplicaSetController [Required]
-ReplicaSetControllerConfiguration -
-

ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.

-
ReplicationController [Required]
-ReplicationControllerConfiguration -
-

ReplicationControllerConfiguration holds configuration for -ReplicationController related features.

-
ResourceQuotaController [Required]
-ResourceQuotaControllerConfiguration -
-

ResourceQuotaControllerConfiguration holds configuration for -ResourceQuotaController related features.

-
SAController [Required]
-SAControllerConfiguration -
-

SAControllerConfiguration holds configuration for ServiceAccountController -related features.

-
ServiceController [Required]
-ServiceControllerConfiguration -
-

ServiceControllerConfiguration holds configuration for ServiceController -related features.

-
TTLAfterFinishedController [Required]
-TTLAfterFinishedControllerConfiguration -
-

TTLAfterFinishedControllerConfiguration holds configuration for -TTLAfterFinishedController related features.

-
ValidatingAdmissionPolicyStatusController [Required]
-ValidatingAdmissionPolicyStatusControllerConfiguration +
+ +## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + + +

WebhookConfiguration contains configuration related to +cloud-controller-manager hosted webhooks
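The enable/disable list syntax described here can be sketched as follows; the webhook name is invented purely for illustration:

```yaml
Webhook:
  Webhooks:
    - "*"             # start from all default-enabled webhooks
    - "-examplehook"  # disable one webhook by name (hypothetical name)
```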

+ + + + + + + +
FieldDescription
Webhooks [Required]
+[]string
-

ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for -ValidatingAdmissionPolicyStatusController related features.

+

Webhooks is the list of webhooks to enable or disable +'*' means "all enabled by default webhooks" +'foo' means "enable 'foo'" +'-foo' means "disable 'foo'" +first item for a particular name wins

+ + -## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration} +## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration} **Appears in:** -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) -

AttachDetachControllerConfiguration contains elements describing AttachDetachController.

+

LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.
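A sketch of a complete LeaderMigrationConfiguration, following the lowerCamelCase field names shown in the table below; the leader name and controller list are examples:

```yaml
apiVersion: controllermanager.config.k8s.io/v1alpha1
kind: LeaderMigrationConfiguration
leaderName: cloud-provider-extraction-migration   # example lock name
resourceLock: leases
controllerLeaders:
  - name: route-controller
    component: "*"    # may run under any participating component
  - name: service-controller
    component: "*"
```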

+ + + - - + + +
FieldDescription
apiVersion
string
controllermanager.config.k8s.io/v1alpha1
kind
string
LeaderMigrationConfiguration
DisableAttachDetachReconcilerSync [Required]
-bool +
leaderName [Required]
+string
-

Reconciler runs a periodic loop to reconcile the desired state with
-the actual state of the world by triggering attach/detach operations.
-This flag enables or disables the reconciler. It is false by default, and thus reconciliation is enabled.

+

LeaderName is the name of the leader election resource that protects the migration,
+e.g. 1-20-KCM-to-1-21-CCM.

ReconcilerSyncLoopPeriod [Required]
-meta/v1.Duration +
resourceLock [Required]
+string
-

ReconcilerSyncLoopPeriod is the amount of time the reconciler sync states loop
-waits between successive executions. It is set to 5 sec by default.

+

ResourceLock indicates the resource object type that will be used to lock.
+Should be "leases" or "endpoints".

+
controllerLeaders [Required]
+[]ControllerLeaderConfiguration +
+

ControllerLeaders contains a list of migrating leader lock configurations

-## `CSRSigningConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration} +## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration} **Appears in:** -- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration) +- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) -

CSRSigningConfiguration holds information about a particular CSR signer

+

ControllerLeaderConfiguration provides the configuration for a migrating leader lock.

@@ -311,34 +376,37 @@ wait between successive executions. Is set to 5 sec by default.

- -
CertFile [Required]
+
name [Required]
string
-

certFile is the filename containing a PEM-encoded -X509 CA certificate used to issue certificates

+

Name is the name of the controller being migrated,
+e.g. service-controller, route-controller, cloud-node-controller, etc.

KeyFile [Required]
+
component [Required]
string
-

keyFile is the filename containing a PEM-encoded -RSA or ECDSA private key used to issue certificates

+

Component is the name of the component in which the controller should be running. +E.g. kube-controller-manager, cloud-controller-manager, etc +Or '*' meaning the controller can be run under any component that participates in the migration

-## `CSRSigningControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration} +## `GenericControllerManagerConfiguration` {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration} **Appears in:** +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

CSRSigningControllerConfiguration contains elements describing CSRSigningController.

+

GenericControllerManagerConfiguration holds configuration for a generic controller-manager.
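To make the controllers enable/disable syntax concrete, a hedged fragment; the port, durations, and disabled controller names are illustrative, and the key casing assumes direct Go-field serialization:

```yaml
Generic:
  Port: 10257
  Address: 0.0.0.0
  MinResyncPeriod: 12h0m0s
  ControllerStartInterval: 0s
  Controllers: ["*", "-bootstrapsigner", "-tokencleaner"]  # all defaults, minus two
```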

@@ -346,534 +414,332 @@ RSA or ECDSA private key used to issue certificates

- - - - - - - - -
ClusterSigningCertFile [Required]
-string +
Port [Required]
+int32
-

clusterSigningCertFile is the filename containing a PEM-encoded -X509 CA certificate used to issue cluster-scoped certificates

+

port is the port that the controller-manager's http service runs on.

ClusterSigningKeyFile [Required]
+
Address [Required]
string
-

clusterSigningKeyFile is the filename containing a PEM-encoded
-RSA or ECDSA private key used to issue cluster-scoped certificates

+

address is the IP address to serve on (set to 0.0.0.0 for all interfaces).

KubeletServingSignerConfiguration [Required]
-CSRSigningConfiguration +
MinResyncPeriod [Required]
+meta/v1.Duration
-

kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kubelet-serving signer

+

minResyncPeriod is the resync period in reflectors; will be random between +minResyncPeriod and 2*minResyncPeriod.

KubeletClientSignerConfiguration [Required]
-CSRSigningConfiguration +
ClientConnection [Required]
+ClientConnectionConfiguration
-

kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet

+

ClientConnection specifies the kubeconfig file and client connection +settings for the proxy server to use when communicating with the apiserver.

KubeAPIServerClientSignerConfiguration [Required]
-CSRSigningConfiguration +
ControllerStartInterval [Required]
+meta/v1.Duration
-

kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client

+

How long to wait between starting controller managers

LegacyUnknownSignerConfiguration [Required]
-CSRSigningConfiguration +
LeaderElection [Required]
+LeaderElectionConfiguration
-

legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/legacy-unknown

+

leaderElection defines the configuration of leader election client.

ClusterSigningDuration [Required]
-meta/v1.Duration +
Controllers [Required]
+[]string
-

clusterSigningDuration is the max length of duration signed certificates will be given. -Individual CSRs may request shorter certs by setting spec.expirationSeconds.

+

Controllers is the list of controllers to enable or disable +'*' means "all enabled by default controllers" +'foo' means "enable 'foo'" +'-foo' means "disable 'foo'" +first item for a particular name wins

- -## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

CronJobControllerConfiguration contains elements describing CronJobController.

- - - - - - - - + + + + + +
FieldDescription
ConcurrentCronJobSyncs [Required]
-int32 +
Debugging [Required]
+DebuggingConfiguration
-

concurrentCronJobSyncs is the number of job objects that are -allowed to sync concurrently. Larger number = more responsive jobs, -but more CPU (and network) load.

+

DebuggingConfiguration holds configuration for Debugging related features.

+
LeaderMigrationEnabled [Required]
+bool +
+

LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.

+
LeaderMigration [Required]
+LeaderMigrationConfiguration +
+

LeaderMigration holds the configuration for Leader Migration.

+ + -## `DaemonSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration} +## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration} -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - -

DaemonSetControllerConfiguration contains elements describing DaemonSetController.

+

KubeControllerManagerConfiguration contains elements describing kube-controller manager.
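As a rough sketch, a KubeControllerManagerConfiguration file combining a few of the sections listed below; the kubeconfig path and all values are hypothetical examples, and key casing assumes direct Go-field serialization in this alpha API:

```yaml
apiVersion: kubecontrollermanager.config.k8s.io/v1alpha1
kind: KubeControllerManagerConfiguration
Generic:
  ClientConnection:
    kubeconfig: /etc/kubernetes/controller-manager.conf   # example path
KubeCloudShared:
  ClusterName: kubernetes
GarbageCollectorController:
  ConcurrentGCSyncs: 20
```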

+ + + - - -
FieldDescription
apiVersion
string
kubecontrollermanager.config.k8s.io/v1alpha1
kind
string
KubeControllerManagerConfiguration
ConcurrentDaemonSetSyncs [Required]
-int32 +
Generic [Required]
+GenericControllerManagerConfiguration
-

concurrentDaemonSetSyncs is the number of daemonset objects that are -allowed to sync concurrently. Larger number = more responsive daemonset, -but more CPU (and network) load.

+

Generic holds configuration for a generic controller-manager

- -## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

DeploymentControllerConfiguration contains elements describing DeploymentController.

- - - - - - - - - -
FieldDescription
ConcurrentDeploymentSyncs [Required]
-int32 +
KubeCloudShared [Required]
+KubeCloudSharedConfiguration
-

concurrentDeploymentSyncs is the number of deployment objects that are -allowed to sync concurrently. Larger number = more responsive deployments, -but more CPU (and network) load.

+

KubeCloudSharedConfiguration holds configuration for shared related features +both in cloud controller manager and kube-controller manager.

- -## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

DeprecatedControllerConfiguration contains elements to be deprecated.

- - - - -## `EndpointControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EndpointControllerConfiguration contains elements describing EndpointController.

- - - - - - - - - - -
FieldDescription
ConcurrentEndpointSyncs [Required]
-int32 +
AttachDetachController [Required]
+AttachDetachControllerConfiguration
-

concurrentEndpointSyncs is the number of endpoint syncing operations -that will be done concurrently. Larger number = faster endpoint updating, -but more CPU (and network) load.

+

AttachDetachControllerConfiguration holds configuration for +AttachDetachController related features.

EndpointUpdatesBatchPeriod [Required]
-meta/v1.Duration +
CSRSigningController [Required]
+CSRSigningControllerConfiguration
-

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. -Processing of pod changes will be delayed by this duration to join them with potential -upcoming updates and reduce the overall number of endpoints updates.

+

CSRSigningControllerConfiguration holds configuration for +CSRSigningController related features.

- -## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EndpointSliceControllerConfiguration contains elements describing -EndpointSliceController.
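For example, a hedged EndpointSliceController fragment; the values are illustrative, not authoritative defaults:

```yaml
EndpointSliceController:
  ConcurrentServiceEndpointSyncs: 5
  MaxEndpointsPerSlice: 100
  EndpointUpdatesBatchPeriod: 0s   # 0 means updates are not batched
```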

- - - - - - - - - - - -
FieldDescription
ConcurrentServiceEndpointSyncs [Required]
-int32 +
DaemonSetController [Required]
+DaemonSetControllerConfiguration
-

concurrentServiceEndpointSyncs is the number of service endpoint syncing -operations that will be done concurrently. Larger number = faster -endpoint slice updating, but more CPU (and network) load.

+

DaemonSetControllerConfiguration holds configuration for DaemonSetController +related features.

MaxEndpointsPerSlice [Required]
-int32 +
DeploymentController [Required]
+DeploymentControllerConfiguration
-

maxEndpointsPerSlice is the maximum number of endpoints that will be -added to an EndpointSlice. More endpoints per slice will result in fewer -and larger endpoint slices, but larger resources.

+

DeploymentControllerConfiguration holds configuration for +DeploymentController related features.

EndpointUpdatesBatchPeriod [Required]
-meta/v1.Duration +
StatefulSetController [Required]
+StatefulSetControllerConfiguration
-

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. -Processing of pod changes will be delayed by this duration to join them with potential -upcoming updates and reduce the overall number of endpoints updates.

+

StatefulSetControllerConfiguration holds configuration for +StatefulSetController related features.

- -## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EndpointSliceMirroringControllerConfiguration contains elements describing -EndpointSliceMirroringController.

- - - - - - - - - - - -
FieldDescription
MirroringConcurrentServiceEndpointSyncs [Required]
-int32 +
DeprecatedController [Required]
+DeprecatedControllerConfiguration
-

mirroringConcurrentServiceEndpointSyncs is the number of service endpoint -syncing operations that will be done concurrently. Larger number = faster -endpoint slice updating, but more CPU (and network) load.

+

DeprecatedControllerConfiguration holds configuration for some deprecated +features.

MirroringMaxEndpointsPerSubset [Required]
-int32 +
EndpointController [Required]
+EndpointControllerConfiguration
-

mirroringMaxEndpointsPerSubset is the maximum number of endpoints that -will be mirrored to an EndpointSlice for an EndpointSubset.

+

EndpointControllerConfiguration holds configuration for EndpointController +related features.

MirroringEndpointUpdatesBatchPeriod [Required]
-meta/v1.Duration +
EndpointSliceController [Required]
+EndpointSliceControllerConfiguration
-

mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice -updates. All updates triggered by EndpointSlice changes will be delayed -by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the -same Endpoints resource change in that period, they will be batched to a -single EndpointSlice update. Default 0 value means that each Endpoints -update triggers an EndpointSlice update.

+

EndpointSliceControllerConfiguration holds configuration for +EndpointSliceController related features.

- -## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.

- - - - - - - - - -
FieldDescription
ConcurrentEphemeralVolumeSyncs [Required]
-int32 +
EndpointSliceMirroringController [Required]
+EndpointSliceMirroringControllerConfiguration
-

ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations
-that will be done concurrently. Larger number = faster ephemeral volume updating,
-but more CPU (and network) load.

+

EndpointSliceMirroringControllerConfiguration holds configuration for +EndpointSliceMirroringController related features.

- -## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.

- - - - - - - - - - - -
FieldDescription
EnableGarbageCollector [Required]
-bool +
EphemeralVolumeController [Required]
+EphemeralVolumeControllerConfiguration
-

enables the generic garbage collector. MUST be synced with the -corresponding flag of the kube-apiserver. WARNING: the generic garbage -collector is an alpha feature.

+

EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController +related features.

ConcurrentGCSyncs [Required]
-int32 +
GarbageCollectorController [Required]
+GarbageCollectorControllerConfiguration
-

concurrentGCSyncs is the number of garbage collector workers that are -allowed to sync concurrently.

+

GarbageCollectorControllerConfiguration holds configuration for +GarbageCollectorController related features.

GCIgnoredResources [Required]
-[]GroupResource +
HPAController [Required]
+HPAControllerConfiguration
-

gcIgnoredResources is the list of GroupResources that garbage collection should ignore.

+

HPAControllerConfiguration holds configuration for HPAController related features.

- -## `GroupResource` {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource} - - -**Appears in:** - -- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration) - - -

GroupResource describes a group resource.
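The group/resource pair is most often seen in the garbage collector's ignore list; a sketch follows, where the entry shown is an example, not a guaranteed default:

```yaml
GarbageCollectorController:
  GCIgnoredResources:
    - Group: ""        # empty string denotes the core API group
      Resource: events
```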

- - - - - - - - - - -
FieldDescription
Group [Required]
-string +
JobController [Required]
+JobControllerConfiguration
-

group is the group portion of the GroupResource.

+

JobControllerConfiguration holds configuration for JobController related features.

Resource [Required]
-string +
CronJobController [Required]
+CronJobControllerConfiguration
-

resource is the resource portion of the GroupResource.

+

CronJobControllerConfiguration holds configuration for CronJobController related features.

- -## `HPAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

HPAControllerConfiguration contains elements describing HPAController.
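A hedged HPAController fragment using some of the fields below; the durations and tolerance are example values, and the key casing assumes direct Go-field serialization:

```yaml
HPAController:
  HorizontalPodAutoscalerSyncPeriod: 15s
  HorizontalPodAutoscalerTolerance: 0.1
  HorizontalPodAutoscalerDownscaleStabilizationWindow: 5m0s
```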

- - - - - - - - - - - - - - - - -
FieldDescription
ConcurrentHorizontalPodAutoscalerSyncs [Required]
-int32 +
LegacySATokenCleaner [Required]
+LegacySATokenCleanerConfiguration
-

ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently. -Larger number = more responsive HPA processing, but more CPU (and network) load.

+

LegacySATokenCleanerConfiguration holds configuration for LegacySATokenCleaner related features.

HorizontalPodAutoscalerSyncPeriod [Required]
-meta/v1.Duration +
NamespaceController [Required]
+NamespaceControllerConfiguration
-

HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of -pods in horizontal pod autoscaler.

+

NamespaceControllerConfiguration holds configuration for NamespaceController +related features.

HorizontalPodAutoscalerUpscaleForbiddenWindow [Required]
-meta/v1.Duration +
NodeIPAMController [Required]
+NodeIPAMControllerConfiguration
-

HorizontalPodAutoscalerUpscaleForbiddenWindow is a period after which next upscale allowed.

+

NodeIPAMControllerConfiguration holds configuration for NodeIPAMController +related features.

HorizontalPodAutoscalerDownscaleStabilizationWindow [Required]
-meta/v1.Duration +
NodeLifecycleController [Required]
+NodeLifecycleControllerConfiguration
-

HorizontalPodAutoscalerDowncaleStabilizationWindow is a period for which autoscaler will look -backwards and not scale down below any recommendation it made during that period.

+

NodeLifecycleControllerConfiguration holds configuration for +NodeLifecycleController related features.

HorizontalPodAutoscalerDownscaleForbiddenWindow [Required]
-meta/v1.Duration +
PersistentVolumeBinderController [Required]
+PersistentVolumeBinderControllerConfiguration
-

HorizontalPodAutoscalerDownscaleForbiddenWindow is a period after which next downscale allowed.

+

PersistentVolumeBinderControllerConfiguration holds configuration for +PersistentVolumeBinderController related features.

HorizontalPodAutoscalerTolerance [Required]
-float64 +
PodGCController [Required]
+PodGCControllerConfiguration
-

HorizontalPodAutoscalerTolerance is the tolerance for when -resource usage suggests upscaling/downscaling

+

PodGCControllerConfiguration holds configuration for PodGCController +related features.

HorizontalPodAutoscalerCPUInitializationPeriod [Required]
-meta/v1.Duration +
ReplicaSetController [Required]
+ReplicaSetControllerConfiguration
-

HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples -might be skipped.

+

ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.

HorizontalPodAutoscalerInitialReadinessDelay [Required]
-meta/v1.Duration +
ReplicationController [Required]
+ReplicationControllerConfiguration
-

HorizontalPodAutoscalerInitialReadinessDelay is period after pod start during which readiness -changes are treated as readiness being set for the first time. The only effect of this is that -HPA will disregard CPU samples from unready pods that had last readiness change during that -period.

+

ReplicationControllerConfiguration holds configuration for +ReplicationController related features.

- -## `JobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

JobControllerConfiguration contains elements describing JobController.

- - - - - - - - - -
FieldDescription
ConcurrentJobSyncs [Required]
-int32 +
ResourceQuotaController [Required]
+ResourceQuotaControllerConfiguration
-

concurrentJobSyncs is the number of job objects that are -allowed to sync concurrently. Larger number = more responsive jobs, -but more CPU (and network) load.

+

ResourceQuotaControllerConfiguration holds configuration for +ResourceQuotaController related features.

- -## `LegacySATokenCleanerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-LegacySATokenCleanerConfiguration} - - -**Appears in:** - -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) - - -

LegacySATokenCleanerConfiguration contains elements describing LegacySATokenCleaner

- - - - - - - - + + + + + + + + +
FieldDescription
CleanUpPeriod [Required]
-meta/v1.Duration +
SAController [Required]
+SAControllerConfiguration +
+

SAControllerConfiguration holds configuration for ServiceAccountController +related features.

+
ServiceController [Required]
+ServiceControllerConfiguration +
+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

+
TTLAfterFinishedController [Required]
+TTLAfterFinishedControllerConfiguration +
+

TTLAfterFinishedControllerConfiguration holds configuration for +TTLAfterFinishedController related features.

+
ValidatingAdmissionPolicyStatusController [Required]
+ValidatingAdmissionPolicyStatusControllerConfiguration
-

CleanUpPeriod is the period of time since the last usage of an -auto-generated service account token before it can be deleted.

+

ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for +ValidatingAdmissionPolicyStatusController related features.

-## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration} +## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration} **Appears in:** @@ -881,7 +747,7 @@ auto-generated service account token before it can be deleted.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

NamespaceControllerConfiguration contains elements describing NamespaceController.

+

AttachDetachControllerConfiguration contains elements describing AttachDetachController.

@@ -889,34 +755,35 @@ auto-generated service account token before it can be deleted.

- -
NamespaceSyncPeriod [Required]
-meta/v1.Duration +
DisableAttachDetachReconcilerSync [Required]
+bool
-

namespaceSyncPeriod is the period for syncing namespace life-cycle -updates.

+

Reconciler runs a periodic loop to reconcile the desired state of the world +with the actual state of the world by triggering attach-detach operations. +This flag enables or disables the reconciler. It is false by default, which means the reconciler is enabled.

ConcurrentNamespaceSyncs [Required]
-int32 +
ReconcilerSyncLoopPeriod [Required]
+meta/v1.Duration
-

concurrentNamespaceSyncs is the number of namespace objects that are -allowed to sync concurrently.

+

ReconcilerSyncLoopPeriod is the amount of time the reconciler sync states loop +waits between successive executions. It is set to 5 seconds by default.

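As an aside for readers of the rendered page (not part of this patch itself): the two fields above sit under the `AttachDetachController` section of a `KubeControllerManagerConfiguration`. A minimal, hypothetical sketch of such a fragment follows; field casing mirrors the names in the tables and should be verified against the generated reference before use:

```yaml
# Hypothetical KubeControllerManagerConfiguration fragment (illustrative only;
# verify field names and casing against the generated reference before use).
apiVersion: kubecontrollermanager.config.k8s.io/v1alpha1
kind: KubeControllerManagerConfiguration
AttachDetachController:
  # false keeps the reconciler enabled; true disables the periodic
  # attach/detach reconcile loop
  DisableAttachDetachReconcilerSync: false
  # wait between successive reconciler executions (default 5s)
  ReconcilerSyncLoopPeriod: 5s
```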
-## `NodeIPAMControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration} +## `CSRSigningConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration} **Appears in:** -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration) -

NodeIPAMControllerConfiguration contains elements describing NodeIpamController.

+

CSRSigningConfiguration holds information about a particular CSR signer

@@ -924,45 +791,26 @@ allowed to sync concurrently.

- - - - - - - - - - -
ServiceCIDR [Required]
+
CertFile [Required]
string
-

serviceCIDR is CIDR Range for Services in cluster.

+

certFile is the filename containing a PEM-encoded +X509 CA certificate used to issue certificates

SecondaryServiceCIDR [Required]
+
KeyFile [Required]
string
-

secondaryServiceCIDR is CIDR Range for Services in cluster. This is used in dual stack clusters. SecondaryServiceCIDR must be of different IP family than ServiceCIDR

-
NodeCIDRMaskSize [Required]
-int32 -
-

NodeCIDRMaskSize is the mask size for node cidr in cluster.

-
NodeCIDRMaskSizeIPv4 [Required]
-int32 -
-

NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual-stack cluster.

-
NodeCIDRMaskSizeIPv6 [Required]
-int32 -
-

NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual-stack cluster.

+

keyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue certificates

-## `NodeLifecycleControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration} +## `CSRSigningControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration} **Appears in:** @@ -970,7 +818,7 @@ allowed to sync concurrently.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.

+

CSRSigningControllerConfiguration contains elements describing CSRSigningController.

@@ -978,64 +826,62 @@ allowed to sync concurrently.

- - - - - - -
NodeEvictionRate [Required]
-float32 +
ClusterSigningCertFile [Required]
+string
-

nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy

+

clusterSigningCertFile is the filename containing a PEM-encoded +X509 CA certificate used to issue cluster-scoped certificates

SecondaryNodeEvictionRate [Required]
-float32 +
ClusterSigningKeyFile [Required]
+string
-

secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy

+

clusterSigningKeyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue cluster-scoped certificates

NodeStartupGracePeriod [Required]
-meta/v1.Duration +
KubeletServingSignerConfiguration [Required]
+CSRSigningConfiguration
-

nodeStartupGracePeriod is the amount of time which we allow starting a node to -be unresponsive before marking it unhealthy.

+

kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kubelet-serving signer

NodeMonitorGracePeriod [Required]
-meta/v1.Duration +
KubeletClientSignerConfiguration [Required]
+CSRSigningConfiguration
-

nodeMontiorGracePeriod is the amount of time which we allow a running node to be -unresponsive before marking it unhealthy. Must be N times more than kubelet's -nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet -to post node status.

+

kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer

PodEvictionTimeout [Required]
-meta/v1.Duration +
KubeAPIServerClientSignerConfiguration [Required]
+CSRSigningConfiguration
-

podEvictionTimeout is the grace period for deleting pods on failed nodes.

+

kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client signer

LargeClusterSizeThreshold [Required]
-int32 +
LegacyUnknownSignerConfiguration [Required]
+CSRSigningConfiguration
-

secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold

+

legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/legacy-unknown signer

UnhealthyZoneThreshold [Required]
-float32 +
ClusterSigningDuration [Required]
+meta/v1.Duration
-

Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least -unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady

+

clusterSigningDuration is the maximum duration for which signed certificates will be issued. +Individual CSRs may request shorter certificates by setting spec.expirationSeconds.

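To make the signer fields above concrete, here is a hedged, hypothetical sketch of a `CSRSigningController` fragment; the file paths are placeholders and the field casing mirrors the tables above rather than any verified config schema:

```yaml
# Hypothetical CSRSigningController fragment (illustrative only; paths and
# field casing are assumptions, not verified config keys).
CSRSigningController:
  # default cert/key pair used when a signer has no dedicated pair
  ClusterSigningCertFile: /etc/kubernetes/pki/ca.crt
  ClusterSigningKeyFile: /etc/kubernetes/pki/ca.key
  # optional per-signer override, e.g. for kubernetes.io/kubelet-serving
  KubeletServingSignerConfiguration:
    CertFile: /etc/kubernetes/pki/kubelet-serving-ca.crt
    KeyFile: /etc/kubernetes/pki/kubelet-serving-ca.key
  # maximum lifetime of issued certificates; individual CSRs may request
  # shorter ones via spec.expirationSeconds
  ClusterSigningDuration: 8760h
```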
-## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration} +## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration} **Appears in:** @@ -1043,8 +889,7 @@ unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

PersistentVolumeBinderControllerConfiguration contains elements describing -PersistentVolumeBinderController.

+

CronJobControllerConfiguration contains elements describing CronJobController.

@@ -1052,49 +897,27 @@ PersistentVolumeBinderController.

- - - - - - - - - -
PVClaimBinderSyncPeriod [Required]
-meta/v1.Duration -
-

pvClaimBinderSyncPeriod is the period for syncing persistent volumes -and persistent volume claims.

-
VolumeConfiguration [Required]
-VolumeConfiguration -
-

volumeConfiguration holds configuration for volume related features.

-
VolumeHostCIDRDenylist [Required]
-[]string -
-

DEPRECATED: VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the -controller from plugins.

-
VolumeHostAllowLocalLoopback [Required]
-bool +
ConcurrentCronJobSyncs [Required]
+int32
-

DEPRECATED: VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc) -should be allowed from plugins.

+

concurrentCronJobSyncs is the number of CronJob objects that are +allowed to sync concurrently. Larger number = more responsive cron jobs, +but more CPU (and network) load.

-## `PersistentVolumeRecyclerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration} +## `DaemonSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration} **Appears in:** -- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.

+

DaemonSetControllerConfiguration contains elements describing DaemonSetController.

@@ -1102,69 +925,19 @@ should be allowed from plugins.

- - - - - - - - - - - - - - - - - - -
MaximumRetry [Required]
-int32 -
-

maximumRetry is number of retries the PV recycler will execute on failure to recycle -PV.

-
MinimumTimeoutNFS [Required]
-int32 -
-

minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler -pod.

-
PodTemplateFilePathNFS [Required]
-string -
-

podTemplateFilePathNFS is the file path to a pod definition used as a template for -NFS persistent volume recycling

-
IncrementTimeoutNFS [Required]
-int32 -
-

incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds -for an NFS scrubber pod.

-
PodTemplateFilePathHostPath [Required]
-string -
-

podTemplateFilePathHostPath is the file path to a pod definition used as a template for -HostPath persistent volume recycling. This is for development and testing only and -will not work in a multi-node cluster.

-
MinimumTimeoutHostPath [Required]
-int32 -
-

minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath -Recycler pod. This is for development and testing only and will not work in a multi-node -cluster.

-
IncrementTimeoutHostPath [Required]
+
ConcurrentDaemonSetSyncs [Required]
int32
-

incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds -for a HostPath scrubber pod. This is for development and testing only and will not work -in a multi-node cluster.

+

concurrentDaemonSetSyncs is the number of daemonset objects that are +allowed to sync concurrently. Larger number = more responsive daemonsets, +but more CPU (and network) load.

-## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration} +## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration} **Appears in:** @@ -1172,7 +945,7 @@ in a multi-node cluster.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

PodGCControllerConfiguration contains elements describing PodGCController.

+

DeploymentControllerConfiguration contains elements describing DeploymentController.

@@ -1180,19 +953,19 @@ in a multi-node cluster.

-
TerminatedPodGCThreshold [Required]
+
ConcurrentDeploymentSyncs [Required]
int32
-

terminatedPodGCThreshold is the number of terminated pods that can exist -before the terminated pod garbage collector starts deleting terminated pods. -If <= 0, the terminated pod garbage collector is disabled.

+

concurrentDeploymentSyncs is the number of deployment objects that are +allowed to sync concurrently. Larger number = more responsive deployments, +but more CPU (and network) load.

-## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration} +## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration} **Appears in:** @@ -1200,27 +973,12 @@ If <= 0, the terminated pod garbage collector is disabled.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.

+

DeprecatedControllerConfiguration contains elements that are deprecated.

- - - - - - - - - -
FieldDescription
ConcurrentRSSyncs [Required]
-int32 -
-

concurrentRSSyncs is the number of replica sets that are allowed to sync -concurrently. Larger number = more responsive replica management, but more -CPU (and network) load.

-
-## `ReplicationControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration} + +## `EndpointControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration} **Appears in:** @@ -1228,7 +986,7 @@ CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ReplicationControllerConfiguration contains elements describing ReplicationController.

+

EndpointControllerConfiguration contains elements describing EndpointController.

@@ -1236,19 +994,28 @@ CPU (and network) load.

- + + +
ConcurrentRCSyncs [Required]
+
ConcurrentEndpointSyncs [Required]
int32
-

concurrentRCSyncs is the number of replication controllers that are -allowed to sync concurrently. Larger number = more responsive replica -management, but more CPU (and network) load.

+

concurrentEndpointSyncs is the number of endpoint syncing operations +that will be done concurrently. Larger number = faster endpoint updating, +but more CPU (and network) load.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

-## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration} +## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration} **Appears in:** @@ -1256,7 +1023,8 @@ management, but more CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.

+

EndpointSliceControllerConfiguration contains elements describing +EndpointSliceController.

@@ -1264,27 +1032,37 @@ management, but more CPU (and network) load.

- - + + +
ResourceQuotaSyncPeriod [Required]
-meta/v1.Duration +
ConcurrentServiceEndpointSyncs [Required]
+int32
-

resourceQuotaSyncPeriod is the period for syncing quota usage status -in the system.

+

concurrentServiceEndpointSyncs is the number of service endpoint syncing +operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

ConcurrentResourceQuotaSyncs [Required]
+
MaxEndpointsPerSlice [Required]
int32
-

concurrentResourceQuotaSyncs is the number of resource quotas that are -allowed to sync concurrently. Larger number = more responsive quota -management, but more CPU (and network) load.

+

maxEndpointsPerSlice is the maximum number of endpoints that will be +added to an EndpointSlice. More endpoints per slice will result in fewer +and larger endpoint slices, but larger resources.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

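The three EndpointSlice knobs above interact: the concurrency setting controls throughput, maxEndpointsPerSlice controls object size, and the batch period trades update latency for fewer writes. A hypothetical sketch (field casing assumed from the tables, values are examples only):

```yaml
# Hypothetical EndpointSliceController fragment (illustrative only).
EndpointSliceController:
  ConcurrentServiceEndpointSyncs: 5
  # more endpoints per slice -> fewer but larger EndpointSlice objects
  MaxEndpointsPerSlice: 100
  # delay processing of pod changes to coalesce them into fewer updates
  EndpointUpdatesBatchPeriod: 500ms
```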
-## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration} +## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration} **Appears in:** @@ -1292,7 +1070,8 @@ management, but more CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

SAControllerConfiguration contains elements describing ServiceAccountController.

+

EndpointSliceMirroringControllerConfiguration contains elements describing +EndpointSliceMirroringController.

@@ -1300,34 +1079,39 @@ management, but more CPU (and network) load.

- - -
ServiceAccountKeyFile [Required]
-string +
MirroringConcurrentServiceEndpointSyncs [Required]
+int32
-

serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key -used to sign service account tokens.

+

mirroringConcurrentServiceEndpointSyncs is the number of service endpoint +syncing operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

ConcurrentSATokenSyncs [Required]
+
MirroringMaxEndpointsPerSubset [Required]
int32
-

concurrentSATokenSyncs is the number of service account token syncing operations -that will be done concurrently.

+

mirroringMaxEndpointsPerSubset is the maximum number of endpoints that +will be mirrored to an EndpointSlice for an EndpointSubset.

RootCAFile [Required]
-string +
MirroringEndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration
-

rootCAFile is the root certificate authority will be included in service -account's token secret. This must be a valid PEM-encoded CA bundle.

+

mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice +updates. All updates triggered by EndpointSlice changes will be delayed +by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the +same Endpoints resource change in that period, they will be batched to a +single EndpointSlice update. Default 0 value means that each Endpoints +update triggers an EndpointSlice update.

-## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration} +## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration} **Appears in:** @@ -1335,7 +1119,7 @@ account's token secret. This must be a valid PEM-encoded CA bundle.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

+

EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.

@@ -1343,19 +1127,19 @@ account's token secret. This must be a valid PEM-encoded CA bundle.

-
ConcurrentStatefulSetSyncs [Required]
+
ConcurrentEphemeralVolumeSyncs [Required]
int32
-

concurrentStatefulSetSyncs is the number of statefulset objects that are -allowed to sync concurrently. Larger number = more responsive statefulsets, +

ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations +that will be done concurrently. Larger number = faster ephemeral volume updating, but more CPU (and network) load.

-## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration} +## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration} **Appears in:** @@ -1363,7 +1147,7 @@ but more CPU (and network) load.

- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

+

GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.

@@ -1371,26 +1155,42 @@ but more CPU (and network) load.

- + + + + + +
ConcurrentTTLSyncs [Required]
+
EnableGarbageCollector [Required]
+bool +
+

enables the generic garbage collector. MUST be synced with the +corresponding flag of the kube-apiserver. WARNING: the generic garbage +collector is an alpha feature.

+
ConcurrentGCSyncs [Required]
int32
-

concurrentTTLSyncs is the number of TTL-after-finished collector workers that are +

concurrentGCSyncs is the number of garbage collector workers that are allowed to sync concurrently.

GCIgnoredResources [Required]
+[]GroupResource +
+

gcIgnoredResources is the list of GroupResources that garbage collection should ignore.

+
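The gcIgnoredResources field is a list of GroupResource entries (described in the next section). A hypothetical sketch, assuming field casing matches the tables; ignoring events is a common example:

```yaml
# Hypothetical GarbageCollectorController fragment (illustrative only).
GarbageCollectorController:
  # must be kept in sync with the corresponding kube-apiserver flag
  EnableGarbageCollector: true
  ConcurrentGCSyncs: 20
  GCIgnoredResources:
    - Group: ""          # core API group
      Resource: events
```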
-## `ValidatingAdmissionPolicyStatusControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration} +## `GroupResource` {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource} **Appears in:** -- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration) -

ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.

+

GroupResource describes a group resource.

@@ -1398,32 +1198,32 @@ allowed to sync concurrently.

- + + +
ConcurrentPolicySyncs [Required]
-int32 +
Group [Required]
+string
-

ConcurrentPolicySyncs is the number of policy objects that are -allowed to sync concurrently. Larger number = quicker type checking, -but more CPU (and network) load. -The default value is 5.

+

group is the group portion of the GroupResource.

+
Resource [Required]
+string +
+

resource is the resource portion of the GroupResource.

-## `VolumeConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration} +## `HPAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration} **Appears in:** -- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

VolumeConfiguration contains all enumerated flags meant to configure all volume -plugins. From this config, the controller-manager binary will create many instances of -volume.VolumeConfig, each containing only the configuration needed for that plugin which -are then passed to the appropriate plugin. The ControllerManager binary is the only part -of the code which knows what plugins are supported and which flags correspond to each plugin.

+

HPAControllerConfiguration contains elements describing HPAController.

@@ -1431,54 +1231,82 @@ of the code which knows what plugins are supported and which flags correspond to - - - - + + + + + + + + + + + +
EnableHostPathProvisioning [Required]
-bool +
ConcurrentHorizontalPodAutoscalerSyncs [Required]
+int32
-

enableHostPathProvisioning enables HostPath PV provisioning when running without a -cloud provider. This allows testing and development of provisioning features. HostPath -provisioning is not supported in any way, won't work in a multi-node cluster, and -should not be used for anything other than testing or development.

+

ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently. +Larger number = more responsive HPA processing, but more CPU (and network) load.

EnableDynamicProvisioning [Required]
-bool +
HorizontalPodAutoscalerSyncPeriod [Required]
+meta/v1.Duration
-

enableDynamicProvisioning enables the provisioning of volumes when running within an environment -that supports dynamic provisioning. Defaults to true.

+

HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of +pods in horizontal pod autoscaler.

PersistentVolumeRecyclerConfiguration [Required]
-PersistentVolumeRecyclerConfiguration +
HorizontalPodAutoscalerUpscaleForbiddenWindow [Required]
+meta/v1.Duration
-

persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.

+

HorizontalPodAutoscalerUpscaleForbiddenWindow is a period after which the next upscale is allowed.

FlexVolumePluginDir [Required]
-string +
HorizontalPodAutoscalerDownscaleStabilizationWindow [Required]
+meta/v1.Duration
-

volumePluginDir is the full path of the directory in which the flex -volume plugin should search for additional third party volume plugins

+

HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which the autoscaler will look +backwards and not scale down below any recommendation it made during that period.

+
HorizontalPodAutoscalerDownscaleForbiddenWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerDownscaleForbiddenWindow is a period after which the next downscale is allowed.

+
HorizontalPodAutoscalerTolerance [Required]
+float64 +
+

HorizontalPodAutoscalerTolerance is the tolerance for when +resource usage suggests upscaling/downscaling

+
HorizontalPodAutoscalerCPUInitializationPeriod [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples +might be skipped.

+
HorizontalPodAutoscalerInitialReadinessDelay [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerInitialReadinessDelay is the period after pod start during which readiness +changes are treated as readiness being set for the first time. The only effect of this is that +HPA will disregard CPU samples from unready pods that had their last readiness change during that +period.

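The HPA timing fields above can be sketched together as one config fragment. The sketch below is hypothetical: field casing is assumed from the tables and the values are examples, not documented defaults:

```yaml
# Hypothetical HPAController fragment (illustrative only; values shown are
# examples, not documented defaults).
HPAController:
  ConcurrentHorizontalPodAutoscalerSyncs: 5
  HorizontalPodAutoscalerSyncPeriod: 15s
  HorizontalPodAutoscalerDownscaleStabilizationWindow: 5m0s
  # relative tolerance around the desired/current metric ratio within
  # which no scaling happens
  HorizontalPodAutoscalerTolerance: 0.1
  HorizontalPodAutoscalerCPUInitializationPeriod: 5m0s
  HorizontalPodAutoscalerInitialReadinessDelay: 30s
```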
- - - -## `NodeControllerConfiguration` {#NodeControllerConfiguration} +## `JobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

NodeControllerConfiguration contains elements describing NodeController.

+

JobControllerConfiguration contains elements describing JobController.

@@ -1486,28 +1314,54 @@ volume plugin should search for additional third party volume plugins

- + + +
ConcurrentNodeSyncs [Required]
+
ConcurrentJobSyncs [Required]
int32
-

ConcurrentNodeSyncs is the number of workers -concurrently synchronizing nodes

+

concurrentJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

+
+ +## `LegacySATokenCleanerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-LegacySATokenCleanerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

LegacySATokenCleanerConfiguration contains elements describing LegacySATokenCleaner

+ + + + + + + + +
FieldDescription
CleanUpPeriod [Required]
+meta/v1.Duration +
+

CleanUpPeriod is the period of time since the last usage of an +auto-generated service account token before it can be deleted.

-## `ServiceControllerConfiguration` {#ServiceControllerConfiguration} +## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ServiceControllerConfiguration contains elements describing ServiceController.

+

NamespaceControllerConfiguration contains elements describing NamespaceController.

@@ -1515,92 +1369,88 @@ concurrently synchronizing nodes

- + + +
ConcurrentServiceSyncs [Required]
+
NamespaceSyncPeriod [Required]
+meta/v1.Duration +
+

namespaceSyncPeriod is the period for syncing namespace life-cycle +updates.

+
ConcurrentNamespaceSyncs [Required]
int32
-

concurrentServiceSyncs is the number of services that are -allowed to sync concurrently. Larger number = more responsive service -management, but more CPU (and network) load.

+

concurrentNamespaceSyncs is the number of namespace objects that are +allowed to sync concurrently.

- - -## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration} +## `NodeIPAMControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration} +**Appears in:** -

CloudControllerManagerConfiguration contains elements describing cloud-controller manager.

+- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NodeIPAMControllerConfiguration contains elements describing NodeIpamController.

- - - - - - - - - - -
FieldDescription
apiVersion
string
cloudcontrollermanager.config.k8s.io/v1alpha1
kind
string
CloudControllerManagerConfiguration
Generic [Required]
-GenericControllerManagerConfiguration -
-

Generic holds configuration for a generic controller-manager

-
KubeCloudShared [Required]
-KubeCloudSharedConfiguration +
ServiceCIDR [Required]
+string
-

KubeCloudSharedConfiguration holds configuration for shared related features -both in cloud controller manager and kube-controller manager.

+

serviceCIDR is CIDR Range for Services in cluster.

NodeController [Required]
-NodeControllerConfiguration +
SecondaryServiceCIDR [Required]
+string
-

NodeController holds configuration for node controller -related features.

+

secondaryServiceCIDR is CIDR Range for Services in cluster. This is used in dual stack clusters. SecondaryServiceCIDR must be of a different IP family than ServiceCIDR.

ServiceController [Required]
-ServiceControllerConfiguration +
NodeCIDRMaskSize [Required]
+int32
-

ServiceControllerConfiguration holds configuration for ServiceController -related features.

+

NodeCIDRMaskSize is the mask size for node cidr in cluster.

NodeStatusUpdateFrequency [Required]
-meta/v1.Duration +
NodeCIDRMaskSizeIPv4 [Required]
+int32
-

NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status

+

NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual-stack cluster.

Webhook [Required]
-WebhookConfiguration +
NodeCIDRMaskSizeIPv6 [Required]
+int32
-

Webhook is the configuration for cloud-controller-manager hosted webhooks

+

NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual-stack cluster.

-## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration} +## `NodeLifecycleControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration} **Appears in:** -- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

CloudProviderConfiguration contains basically elements about cloud provider.

+

NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.

@@ -1608,35 +1458,73 @@ related features.

- - + + + + + + + + + + + + + + +
Name [Required]
-string +
NodeEvictionRate [Required]
+float32
-

Name is the provider for cloud services.

+

nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy

CloudConfigFile [Required]
-string +
SecondaryNodeEvictionRate [Required]
+float32
-

cloudConfigFile is the path to the cloud provider configuration file.

+

secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy

+
NodeStartupGracePeriod [Required]
+meta/v1.Duration +
+

nodeStartupGracePeriod is the amount of time which we allow starting a node to +be unresponsive before marking it unhealthy.

+
NodeMonitorGracePeriod [Required]
+meta/v1.Duration +
+

nodeMonitorGracePeriod is the amount of time which we allow a running node to be
+unresponsive before marking it unhealthy. Must be N times more than kubelet's
+nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet
+to post node status.

+
PodEvictionTimeout [Required]
+meta/v1.Duration +
+

podEvictionTimeout is the grace period for deleting pods on failed nodes.

+
LargeClusterSizeThreshold [Required]
+int32 +
+

secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold

+
UnhealthyZoneThreshold [Required]
+float32 +
+

Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least +unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady

-## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration} +## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

KubeCloudSharedConfiguration contains elements shared by both kube-controller manager -and cloud-controller manager, but not genericconfig.

+

PersistentVolumeBinderControllerConfiguration contains elements describing +PersistentVolumeBinderController.

@@ -1644,109 +1532,155 @@ and cloud-controller manager, but not genericconfig.

- - - - - +
CloudProvider [Required]
-CloudProviderConfiguration +
PVClaimBinderSyncPeriod [Required]
+meta/v1.Duration
-

CloudProviderConfiguration holds configuration for CloudProvider related features.

+

pvClaimBinderSyncPeriod is the period for syncing persistent volumes +and persistent volume claims.

ExternalCloudVolumePlugin [Required]
-string +
VolumeConfiguration [Required]
+VolumeConfiguration
-

externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external". -It is currently used by the in repo cloud providers to handle node and volume control in the KCM.

+

volumeConfiguration holds configuration for volume related features.

UseServiceAccountCredentials [Required]
-bool +
VolumeHostCIDRDenylist [Required]
+[]string
-

useServiceAccountCredentials indicates whether controllers should be run with -individual service account credentials.

+

DEPRECATED: VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the +controller from plugins.

AllowUntaggedCloud [Required]
+
VolumeHostAllowLocalLoopback [Required]
bool
-

run with untagged cloud instances

+

DEPRECATED: VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc) +should be allowed from plugins.

RouteReconciliationPeriod [Required]
-meta/v1.Duration +
+ +## `PersistentVolumeRecyclerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration} + + +**Appears in:** + +- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration) + + +

PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.

+ + + + + + + + - - - - - - - +
FieldDescription
MaximumRetry [Required]
+int32
-

routeReconciliationPeriod is the period for reconciling routes created for Nodes by cloud provider..

+

maximumRetry is number of retries the PV recycler will execute on failure to recycle +PV.

NodeMonitorPeriod [Required]
-meta/v1.Duration +
MinimumTimeoutNFS [Required]
+int32
-

nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.

+

minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler +pod.

ClusterName [Required]
+
PodTemplateFilePathNFS [Required]
string
-

clusterName is the instance prefix for the cluster.

+

podTemplateFilePathNFS is the file path to a pod definition used as a template for
+NFS persistent volume recycling.

ClusterCIDR [Required]
-string +
IncrementTimeoutNFS [Required]
+int32
-

clusterCIDR is CIDR Range for Pods in cluster.

+

incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds +for an NFS scrubber pod.

AllocateNodeCIDRs [Required]
-bool +
PodTemplateFilePathHostPath [Required]
+string
-

AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if -ConfigureCloudRoutes is true, to be set on the cloud provider.

+

podTemplateFilePathHostPath is the file path to a pod definition used as a template for +HostPath persistent volume recycling. This is for development and testing only and +will not work in a multi-node cluster.

CIDRAllocatorType [Required]
-string +
MinimumTimeoutHostPath [Required]
+int32
-

CIDRAllocatorType determines what kind of pod CIDR allocator will be used.

+

minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath +Recycler pod. This is for development and testing only and will not work in a multi-node +cluster.

ConfigureCloudRoutes [Required]
-bool +
IncrementTimeoutHostPath [Required]
+int32
-

configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs -to be configured on the cloud provider.

+

incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds +for a HostPath scrubber pod. This is for development and testing only and will not work +in a multi-node cluster.

NodeSyncPeriod [Required]
-meta/v1.Duration +
+ +## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

PodGCControllerConfiguration contains elements describing PodGCController.

+ + + + + + + +
FieldDescription
TerminatedPodGCThreshold [Required]
+int32
-

nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer -periods will result in fewer calls to cloud provider, but may delay addition -of new nodes to cluster.

+

terminatedPodGCThreshold is the number of terminated pods that can exist +before the terminated pod garbage collector starts deleting terminated pods. +If <= 0, the terminated pod garbage collector is disabled.

-## `WebhookConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-WebhookConfiguration} +## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

WebhookConfiguration contains configuration related to -cloud-controller-manager hosted webhooks

+

ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.

@@ -1754,77 +1688,55 @@ cloud-controller-manager hosted webhooks

-
Webhooks [Required]
-[]string +
ConcurrentRSSyncs [Required]
+int32
-

Webhooks is the list of webhooks to enable or disable -'*' means "all enabled by default webhooks" -'foo' means "enable 'foo'" -'-foo' means "disable 'foo'" -first item for a particular name wins

+

concurrentRSSyncs is the number of replica sets that are allowed to sync +concurrently. Larger number = more responsive replica management, but more +CPU (and network) load.

- - - -## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration} +## `ReplicationControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration} **Appears in:** -- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.

+

ReplicationControllerConfiguration contains elements describing ReplicationController.

- - - - - - - - - -
FieldDescription
apiVersion
string
controllermanager.config.k8s.io/v1alpha1
kind
string
LeaderMigrationConfiguration
leaderName [Required]
-string -
-

LeaderName is the name of the leader election resource that protects the migration -E.g. 1-20-KCM-to-1-21-CCM

-
resourceLock [Required]
-string -
-

ResourceLock indicates the resource object type that will be used to lock -Should be "leases" or "endpoints"

-
controllerLeaders [Required]
-[]ControllerLeaderConfiguration +
ConcurrentRCSyncs [Required]
+int32
-

ControllerLeaders contains a list of migrating leader lock configurations

+

concurrentRCSyncs is the number of replication controllers that are +allowed to sync concurrently. Larger number = more responsive replica +management, but more CPU (and network) load.

-## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration} +## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration} **Appears in:** -- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

ControllerLeaderConfiguration provides the configuration for a migrating leader lock.

+

ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.

@@ -1832,37 +1744,35 @@ Should be "leases" or "endpoints"

- -
name [Required]
-string +
ResourceQuotaSyncPeriod [Required]
+meta/v1.Duration
-

Name is the name of the controller being migrated -E.g. service-controller, route-controller, cloud-node-controller, etc

+

resourceQuotaSyncPeriod is the period for syncing quota usage status +in the system.

component [Required]
-string +
ConcurrentResourceQuotaSyncs [Required]
+int32
-

Component is the name of the component in which the controller should be running. -E.g. kube-controller-manager, cloud-controller-manager, etc -Or '*' meaning the controller can be run under any component that participates in the migration

+

concurrentResourceQuotaSyncs is the number of resource quotas that are +allowed to sync concurrently. Larger number = more responsive quota +management, but more CPU (and network) load.

-## `GenericControllerManagerConfiguration` {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration} +## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration} **Appears in:** -- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) - - [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) -

GenericControllerManagerConfiguration holds configuration for a generic controller-manager.

+

SAControllerConfiguration contains elements describing ServiceAccountController.

@@ -1870,80 +1780,168 @@ Or '*' meaning the controller can be run under any component that participates i - - - - +
Port [Required]
-int32 +
ServiceAccountKeyFile [Required]
+string
-

port is the port that the controller-manager's http service runs on.

+

serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key +used to sign service account tokens.

Address [Required]
-string +
ConcurrentSATokenSyncs [Required]
+int32
-

address is the IP address to serve on (set to 0.0.0.0 for all interfaces).

+

concurrentSATokenSyncs is the number of service account token syncing operations +that will be done concurrently.

MinResyncPeriod [Required]
-meta/v1.Duration +
RootCAFile [Required]
+string
-

minResyncPeriod is the resync period in reflectors; will be random between -minResyncPeriod and 2*minResyncPeriod.

+

rootCAFile is the root certificate authority that will be included in the service
+account's token secret. This must be a valid PEM-encoded CA bundle.

ClientConnection [Required]
-ClientConnectionConfiguration +
+ +## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

+ + + + + + + + - +
FieldDescription
ConcurrentStatefulSetSyncs [Required]
+int32
-

ClientConnection specifies the kubeconfig file and client connection -settings for the proxy server to use when communicating with the apiserver.

+

concurrentStatefulSetSyncs is the number of statefulset objects that are +allowed to sync concurrently. Larger number = more responsive statefulsets, +but more CPU (and network) load.

ControllerStartInterval [Required]
-meta/v1.Duration +
+ +## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

+ + + + + + + + - +
FieldDescription
ConcurrentTTLSyncs [Required]
+int32
-

How long to wait between starting controller managers

+

concurrentTTLSyncs is the number of TTL-after-finished collector workers that are +allowed to sync concurrently.

LeaderElection [Required]
-LeaderElectionConfiguration +
+ +## `ValidatingAdmissionPolicyStatusControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ValidatingAdmissionPolicyStatusControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.

+ + + + + + + + - +
FieldDescription
ConcurrentPolicySyncs [Required]
+int32
-

leaderElection defines the configuration of leader election client.

+

ConcurrentPolicySyncs is the number of policy objects that are +allowed to sync concurrently. Larger number = quicker type checking, +but more CPU (and network) load. +The default value is 5.

Controllers [Required]
-[]string +
+ +## `VolumeConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration} + + +**Appears in:** + +- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration) + + +

VolumeConfiguration contains all enumerated flags meant to configure all volume +plugins. From this config, the controller-manager binary will create many instances of +volume.VolumeConfig, each containing only the configuration needed for that plugin which +are then passed to the appropriate plugin. The ControllerManager binary is the only part +of the code which knows what plugins are supported and which flags correspond to each plugin.

+ + + + + + + + - - - diff --git a/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md index 328655b5d117e..ddc65f29800e7 100644 --- a/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/kube-proxy-config.v1alpha1.md @@ -12,6 +12,7 @@ auto_generated: true - [KubeProxyConfiguration](#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration) + ## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} @@ -80,10 +81,10 @@ client.

**Appears in:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) + - [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) @@ -201,7 +202,6 @@ during leader election cycles.

FieldDescription
EnableHostPathProvisioning [Required]
+bool
-

Controllers is the list of controllers to enable or disable -'*' means "all enabled by default controllers" -'foo' means "enable 'foo'" -'-foo' means "disable 'foo'" -first item for a particular name wins

+

enableHostPathProvisioning enables HostPath PV provisioning when running without a +cloud provider. This allows testing and development of provisioning features. HostPath +provisioning is not supported in any way, won't work in a multi-node cluster, and +should not be used for anything other than testing or development.

Debugging [Required]
-DebuggingConfiguration +
EnableDynamicProvisioning [Required]
+bool
-

DebuggingConfiguration holds configuration for Debugging related features.

+

enableDynamicProvisioning enables the provisioning of volumes when running within an environment +that supports dynamic provisioning. Defaults to true.

LeaderMigrationEnabled [Required]
-bool +
PersistentVolumeRecyclerConfiguration [Required]
+PersistentVolumeRecyclerConfiguration
-

LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.

+

persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.

LeaderMigration [Required]
-LeaderMigrationConfiguration +
FlexVolumePluginDir [Required]
+string
-

LeaderMigration holds the configuration for Leader Migration.

+

volumePluginDir is the full path of the directory in which the flex +volume plugin should search for additional third party volume plugins

- ## `KubeProxyConfiguration` {#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration} diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md index cb07bc0654b23..d2159b93e1b55 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1.md @@ -19,6 +19,7 @@ auto_generated: true - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1-VolumeBindingArgs) + ## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} @@ -119,10 +120,10 @@ enableProfiling is true.

**Appears in:** -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) - - [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1-KubeSchedulerConfiguration) +

LeaderElectionConfiguration defines the configuration of leader election clients for components that can run with leader election enabled.

@@ -200,7 +201,6 @@ during leader election cycles.

- ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1-DefaultPreemptionArgs} diff --git a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md index 6fc64f5bba2d4..7060addcd1119 100644 --- a/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kube-scheduler-config.v1beta3.md @@ -19,6 +19,182 @@ auto_generated: true - [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs) + + +## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} + + +**Appears in:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

ClientConnectionConfiguration contains details for constructing a client.

+ + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
kubeconfig [Required]
+string +
+

kubeconfig is the path to a KubeConfig file.

+
acceptContentTypes [Required]
+string +
+

acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the +default value of 'application/json'. This field will control all connections to the server used by a particular +client.

+
contentType [Required]
+string +
+

contentType is the content type used when sending data to the server from this client.

+
qps [Required]
+float32 +
+

qps controls the number of queries per second allowed for this connection.

+
burst [Required]
+int32 +
+

burst allows extra queries to accumulate when a client is exceeding its rate.

+
+ +## `DebuggingConfiguration` {#DebuggingConfiguration} + + +**Appears in:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

DebuggingConfiguration holds configuration for Debugging related features.

+ + + + + + + + + + + + + + +
FieldDescription
enableProfiling [Required]
+bool +
+

enableProfiling enables profiling via web interface host:port/debug/pprof/

+
enableContentionProfiling [Required]
+bool +
+

enableContentionProfiling enables block profiling, if +enableProfiling is true.

+
+ +## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} + + +**Appears in:** + +- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) + + +

LeaderElectionConfiguration defines the configuration of leader election +clients for components that can run with leader election enabled.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
leaderElect [Required]
+bool +
+

leaderElect enables a leader election client to gain leadership +before executing the main loop. Enable this when running replicated +components for high availability.

+
leaseDuration [Required]
+meta/v1.Duration +
+

leaseDuration is the duration that non-leader candidates will wait +after observing a leadership renewal until attempting to acquire +leadership of a led but unrenewed leader slot. This is effectively the +maximum duration that a leader can be stopped before it is replaced +by another candidate. This is only applicable if leader election is +enabled.

+
renewDeadline [Required]
+meta/v1.Duration +
+

renewDeadline is the interval between attempts by the acting master to +renew a leadership slot before it stops leading. This must be less +than or equal to the lease duration. This is only applicable if leader +election is enabled.

+
retryPeriod [Required]
+meta/v1.Duration +
+

retryPeriod is the duration the clients should wait between attempting +acquisition and renewal of a leadership. This is only applicable if +leader election is enabled.

+
resourceLock [Required]
+string +
+

resourceLock indicates the resource object type that will be used to lock +during leader election cycles.

+
resourceName [Required]
+string +
+

resourceName indicates the name of resource object that will be used to lock +during leader election cycles.

+
resourceNamespace [Required]
+string +
+

resourceNamespace indicates the namespace of the resource object that will be used to lock
+during leader election cycles.

+
+ ## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta3-DefaultPreemptionArgs} @@ -1074,180 +1250,4 @@ Weight defaults to 1 if not specified or explicitly set to 0.

- - - - -## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

ClientConnectionConfiguration contains details for constructing a client.

- - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
kubeconfig [Required]
-string -
-

kubeconfig is the path to a KubeConfig file.

-
acceptContentTypes [Required]
-string -
-

acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the -default value of 'application/json'. This field will control all connections to the server used by a particular -client.

-
contentType [Required]
-string -
-

contentType is the content type used when sending data to the server from this client.

-
qps [Required]
-float32 -
-

qps controls the number of queries per second allowed for this connection.

-
burst [Required]
-int32 -
-

burst allows extra queries to accumulate when a client is exceeding its rate.

-
- -## `DebuggingConfiguration` {#DebuggingConfiguration} - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

DebuggingConfiguration holds configuration for Debugging related features.

- - - - - - - - - - - - - - -
FieldDescription
enableProfiling [Required]
-bool -
-

enableProfiling enables profiling via web interface host:port/debug/pprof/

-
enableContentionProfiling [Required]
-bool -
-

enableContentionProfiling enables block profiling, if -enableProfiling is true.

-
- -## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration) - - -

LeaderElectionConfiguration defines the configuration of leader election -clients for components that can run with leader election enabled.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
leaderElect [Required]
-bool -
-

leaderElect enables a leader election client to gain leadership -before executing the main loop. Enable this when running replicated -components for high availability.

-
leaseDuration [Required]
-meta/v1.Duration -
-

leaseDuration is the duration that non-leader candidates will wait -after observing a leadership renewal until attempting to acquire -leadership of a led but unrenewed leader slot. This is effectively the -maximum duration that a leader can be stopped before it is replaced -by another candidate. This is only applicable if leader election is -enabled.

-
renewDeadline [Required]
-meta/v1.Duration -
-

renewDeadline is the interval between attempts by the acting master to -renew a leadership slot before it stops leading. This must be less -than or equal to the lease duration. This is only applicable if leader -election is enabled.

-
retryPeriod [Required]
-meta/v1.Duration -
-

retryPeriod is the duration the clients should wait between attempting -acquisition and renewal of a leadership. This is only applicable if -leader election is enabled.

-
resourceLock [Required]
-string -
-

resourceLock indicates the resource object type that will be used to lock -during leader election cycles.

-
resourceName [Required]
-string -
-

resourceName indicates the name of resource object that will be used to lock -during leader election cycles.

-
resourceNamespace [Required]
-string -
-

resourceName indicates the namespace of resource object that will be used to lock -during leader election cycles.

-
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md index 3972691620bb0..9d94c614de168 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md @@ -264,6 +264,109 @@ node only (e.g. the node ip).

- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration) + + +## `BootstrapToken` {#BootstrapToken} + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) + + +

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster
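To make the field semantics below concrete, here is a sketch of how a bootstrap token might be declared in a kubeadm `InitConfiguration`; the token value, description, and TTL are illustrative, not defaults:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
  - token: "abcdef.0123456789abcdef"   # BootstrapTokenString format: [a-z0-9]{6}.[a-z0-9]{16}
    description: "token for joining worker nodes"
    ttl: "24h"                         # mutually exclusive with expires
    usages:
      - signing
      - authentication
    groups:
      - system:bootstrappers:kubeadm:default-node-token
```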

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
token [Required]
+BootstrapTokenString +
+

token is used for establishing bidirectional trust between nodes and control-planes. +Used for joining nodes in the cluster.

+
description
+string +
+

description sets a human-friendly message explaining why this token exists and what it's used
+for, so other administrators can know its purpose.

+
ttl
+meta/v1.Duration +
+

ttl defines the time to live for this token. Defaults to 24h. +expires and ttl are mutually exclusive.

+
expires
+meta/v1.Time +
+

expires specifies the timestamp when this token expires. Defaults to being set +dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

+
usages
+[]string +
+

usages describes the ways in which this token can be used. By default it can be used
+for establishing bidirectional trust, but that can be changed here.

+
groups
+[]string +
+

groups specifies the extra groups that this token will authenticate as when/if
+used for authentication.

+
+ +## `BootstrapTokenString` {#BootstrapTokenString} + + +**Appears in:** + +- [BootstrapToken](#BootstrapToken) + + +

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
+for both validation of the practicality of the API server from a joining node's point
+of view and as an authentication method for the node in the bootstrap phase of
+"kubeadm join". This token is, and should be, short-lived.

+ + + + + + + + + + + + + + +
FieldDescription
- [Required]
+string +
+ No description provided.
- [Required]
+string +
+ No description provided.
+ ## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta3-ClusterConfiguration} @@ -1237,107 +1340,4 @@ first alpha-numerically.

- - - - -## `BootstrapToken` {#BootstrapToken} - - -**Appears in:** - -- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - - -

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

- - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
token [Required]
-BootstrapTokenString -
-

token is used for establishing bidirectional trust between nodes and control-planes. -Used for joining nodes in the cluster.

-
description
-string -
-

description sets a human-friendly message why this token exists and what it's used -for, so other administrators can know its purpose.

-
ttl
-meta/v1.Duration -
-

ttl defines the time to live for this token. Defaults to 24h. -expires and ttl are mutually exclusive.

-
expires
-meta/v1.Time -
-

expires specifies the timestamp when this token expires. Defaults to being set -dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

-
usages
-[]string -
-

usages describes the ways in which this token can be used. Can by default be used -for establishing bidirectional trust, but that can be changed here.

-
groups
-[]string -
-

groups specifies the extra groups that this token will authenticate as when/if -used for authentication

-
- -## `BootstrapTokenString` {#BootstrapTokenString} - - -**Appears in:** - -- [BootstrapToken](#BootstrapToken) - - -

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used -for both validation of the practically of the API server from a joining node's point -of view and as an authentication method for the node in the bootstrap phase of -"kubeadm join". This token is and should be short-lived.

- - - - - - - - - - - - - - -
FieldDescription
- [Required]
-string -
- No description provided.
- [Required]
-string -
- No description provided.
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md index f7349db30c606..1689232505839 100644 --- a/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md +++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md @@ -291,6 +291,111 @@ node only (e.g. the node ip).

- [ResetConfiguration](#kubeadm-k8s-io-v1beta4-ResetConfiguration) + + +## `BootstrapToken` {#BootstrapToken} + + +**Appears in:** + +- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) + +- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration) + + +

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

+ + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
token [Required]
+BootstrapTokenString +
+

token is used for establishing bidirectional trust between nodes and control-planes. +Used for joining nodes in the cluster.

+
description
+string +
+

description sets a human-friendly message explaining why this token exists and what it's used
+for, so other administrators can know its purpose.

+
ttl
+meta/v1.Duration +
+

ttl defines the time to live for this token. Defaults to 24h. +expires and ttl are mutually exclusive.

+
expires
+meta/v1.Time +
+

expires specifies the timestamp when this token expires. Defaults to being set +dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

+
usages
+[]string +
+

usages describes the ways in which this token can be used. By default it can be used
+for establishing bidirectional trust, but that can be changed here.

+
groups
+[]string +
+

groups specifies the extra groups that this token will authenticate as when/if
+used for authentication.

+
+ +## `BootstrapTokenString` {#BootstrapTokenString} + + +**Appears in:** + +- [BootstrapToken](#BootstrapToken) + + +

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
+for both validation of the practicality of the API server from a joining node's point
+of view and as an authentication method for the node in the bootstrap phase of
+"kubeadm join". This token is, and should be, short-lived.

+ + + + + + + + + + + + + + +
FieldDescription
- [Required]
+string +
+ No description provided.
- [Required]
+string +
+ No description provided.
+ ## `ClusterConfiguration` {#kubeadm-k8s-io-v1beta4-ClusterConfiguration} @@ -424,7 +529,7 @@ information.

bootstrapTokens
-[]invalid type +[]BootstrapToken

BootstrapTokens is respected at kubeadm init time and describes a set of Bootstrap Tokens to create. @@ -1322,107 +1427,4 @@ first alpha-numerically.

- - - - -## `BootstrapToken` {#BootstrapToken} - - -**Appears in:** - -- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration) - - -

BootstrapToken describes one bootstrap token, stored as a Secret in the cluster

- - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
token [Required]
-BootstrapTokenString -
-

token is used for establishing bidirectional trust between nodes and control-planes. -Used for joining nodes in the cluster.

-
description
-string -
-

description sets a human-friendly message why this token exists and what it's used -for, so other administrators can know its purpose.

-
ttl
-meta/v1.Duration -
-

ttl defines the time to live for this token. Defaults to 24h. -expires and ttl are mutually exclusive.

-
expires
-meta/v1.Time -
-

expires specifies the timestamp when this token expires. Defaults to being set -dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

-
usages
-[]string -
-

usages describes the ways in which this token can be used. Can by default be used -for establishing bidirectional trust, but that can be changed here.

-
groups
-[]string -
-

groups specifies the extra groups that this token will authenticate as when/if -used for authentication

-
- -## `BootstrapTokenString` {#BootstrapTokenString} - - -**Appears in:** - -- [BootstrapToken](#BootstrapToken) - - -

BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used -for both validation of the practically of the API server from a joining node's point -of view and as an authentication method for the node in the bootstrap phase of -"kubeadm join". This token is and should be short-lived.

- - - - - - - - - - - - - - -
FieldDescription
- [Required]
-string -
- No description provided.
- [Required]
-string -
- No description provided.
\ No newline at end of file + \ No newline at end of file diff --git a/content/en/docs/reference/config-api/kubeconfig.v1.md b/content/en/docs/reference/config-api/kubeconfig.v1.md index 42cf3bd7cc9c6..72a5c63358ce8 100644 --- a/content/en/docs/reference/config-api/kubeconfig.v1.md +++ b/content/en/docs/reference/config-api/kubeconfig.v1.md @@ -11,6 +11,83 @@ auto_generated: true - [Config](#Config) + + +## `Config` {#Config} + + + +

Config holds the information needed to build connect to remote kubernetes clusters as a given user

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
apiVersion
string
/v1
kind
string
Config
kind
+string +
+

Legacy field from pkg/api/types.go TypeMeta. +TODO(jlowdermilk): remove this after eliminating downstream dependencies.

+
apiVersion
+string +
+

Legacy field from pkg/api/types.go TypeMeta. +TODO(jlowdermilk): remove this after eliminating downstream dependencies.

+
preferences [Required]
+Preferences +
+

Preferences holds general information to be used for CLI interactions

+
clusters [Required]
+[]NamedCluster +
+

Clusters is a map of referenceable names to cluster configs

+
users [Required]
+[]NamedAuthInfo +
+

AuthInfos is a map of referenceable names to user configs

+
contexts [Required]
+[]NamedContext +
+

Contexts is a map of referenceable names to context configs

+
current-context [Required]
+string +
+

CurrentContext is the name of the context that you would like to use by default

+
extensions
+[]NamedExtension +
+

Extensions holds additional information. This is useful for extenders so that reads and writes don't clobber unknown fields

+
## `AuthInfo` {#AuthInfo} diff --git a/content/en/docs/reference/config-api/kubelet-config.v1.md b/content/en/docs/reference/config-api/kubelet-config.v1.md index cd7d676e072db..24ba05ca33e2a 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1.md @@ -11,7 +11,6 @@ auto_generated: true - [CredentialProviderConfig](#kubelet-config-k8s-io-v1-CredentialProviderConfig) - ## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1-CredentialProviderConfig} @@ -82,7 +81,7 @@ and URL path.

Each entry in matchImages is a pattern which can optionally contain a port and a path. Globs can be used in the domain, but not in the port or the path. Globs are supported as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. -Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match +Matching partial subdomains like 'app.k8s.io' is also supported. Each glob can only match a single subdomain segment, so *.io does not match *.k8s.io.

A match exists between an image and a matchImage when all of the below are true:

    diff --git a/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md b/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md index 6082c2f7ecfe1..99602ebceef6f 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1alpha1.md @@ -11,7 +11,6 @@ auto_generated: true - [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig) - ## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig} diff --git a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md index 877e3c2240468..a760d11d1cd8a 100644 --- a/content/en/docs/reference/config-api/kubelet-config.v1beta1.md +++ b/content/en/docs/reference/config-api/kubelet-config.v1beta1.md @@ -14,6 +14,279 @@ auto_generated: true - [SerializedNodeConfigSource](#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource) + + +## `FormatOptions` {#FormatOptions} + + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    FormatOptions contains options for the different logging formats.

    + + + + + + + + + + + +
    FieldDescription
    json [Required]
    +JSONOptions +
    +

    [Alpha] JSON contains options for logging format "json". +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
    + +## `JSONOptions` {#JSONOptions} + + +**Appears in:** + +- [FormatOptions](#FormatOptions) + + +

    JSONOptions contains options for logging format "json".

    + + + + + + + + + + + + + + +
    FieldDescription
    splitStream [Required]
    +bool +
    +

    [Alpha] SplitStream redirects error messages to stderr while +info messages go to stdout, with buffering. The default is to write +both to stdout, without buffering. Only available when +the LoggingAlphaOptions feature gate is enabled.

    +
    infoBufferSize [Required]
    +k8s.io/apimachinery/pkg/api/resource.QuantityValue +
    +

    [Alpha] InfoBufferSize sets the size of the info stream when +using split streams. The default is zero, which disables buffering. +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
    + +## `LogFormatFactory` {#LogFormatFactory} + + + +

    LogFormatFactory provides support for a certain additional, +non-default log format.

    + + + + +## `LoggingConfiguration` {#LoggingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

    LoggingConfiguration contains logging options.
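For orientation, a kubelet configuration might set these logging options roughly as follows; the values are illustrative, and the JSON options only take effect with the `LoggingAlphaOptions` feature gate enabled:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json            # default is "text"
  flushFrequency: 5s
  verbosity: 3
  options:
    json:
      splitStream: true       # [Alpha] errors to stderr, info to stdout
      infoBufferSize: "64Ki"  # [Alpha] buffer size for the info stream
```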

    + + + + + + + + + + + + + + + + + + + + + + + +
    FieldDescription
    format [Required]
    +string +
    +

Format flag specifies the structure of log messages.
+The default value of format is text.

    +
    flushFrequency [Required]
    +TimeOrMetaDuration +
    +

Maximum time between log flushes.
+If a string, parsed as a duration (e.g. "1s").
+If an int, the maximum number of nanoseconds (e.g. 1s = 1000000000).
+Ignored if the selected logging backend writes log messages without buffering.

    +
    verbosity [Required]
    +VerbosityLevel +
    +

    Verbosity is the threshold that determines which log messages are +logged. Default is zero which logs only the most important +messages. Higher values enable additional messages. Error messages +are always logged.

    +
    vmodule [Required]
    +VModuleConfiguration +
    +

    VModule overrides the verbosity threshold for individual files. +Only supported for "text" log format.

    +
    options [Required]
    +FormatOptions +
    +

    [Alpha] Options holds additional parameters that are specific +to the different logging formats. Only the options for the selected +format get used, but all of them get validated. +Only available when the LoggingAlphaOptions feature gate is enabled.

    +
    + +## `LoggingOptions` {#LoggingOptions} + + + +

    LoggingOptions can be used with ValidateAndApplyWithOptions to override +certain global defaults.

    + + + + + + + + + + + + + + +
    FieldDescription
    ErrorStream [Required]
    +io.Writer +
    +

    ErrorStream can be used to override the os.Stderr default.

    +
    InfoStream [Required]
    +io.Writer +
    +

    InfoStream can be used to override the os.Stdout default.

    +
    + +## `TimeOrMetaDuration` {#TimeOrMetaDuration} + + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    TimeOrMetaDuration is present only for backwards compatibility for the +flushFrequency field, and new fields should use metav1.Duration.

    + + + + + + + + + + + + + + +
    FieldDescription
    Duration [Required]
    +meta/v1.Duration +
    +

    Duration holds the duration

    +
    - [Required]
    +bool +
    +

SerializeAsString controls whether the value is serialized as a string or an integer.

    +
    + +## `TracingConfiguration` {#TracingConfiguration} + + +**Appears in:** + +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) + + +

    TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
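A sketch of a kubelet tracing stanza, assuming the `KubeletTracing` feature gate is available in the running version; the endpoint and sampling rate shown are examples only:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true
tracing:
  endpoint: localhost:4317      # OTLP gRPC collector; the connection is insecure
  samplingRatePerMillion: 100   # sample 100 spans per million
```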

    + + + + + + + + + + + + + + +
    FieldDescription
    endpoint
    +string +
    +

Endpoint of the collector this component will report traces to.
+The connection is insecure, and does not currently support TLS.
+The recommended value is unset, in which case the OTLP gRPC default, localhost:4317, is used.

    +
    samplingRatePerMillion
    +int32 +
    +

SamplingRatePerMillion is the number of samples to collect per million spans.
+The recommended value is unset. If unset, the sampler respects its parent span's sampling
+rate, but otherwise never samples.

    +
    + +## `VModuleConfiguration` {#VModuleConfiguration} + +(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`) + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +

    VModuleConfiguration is a collection of individual file names or patterns +and the corresponding verbosity threshold.

    + + + + +## `VerbosityLevel` {#VerbosityLevel} + +(Alias of `uint32`) + +**Appears in:** + +- [LoggingConfiguration](#LoggingConfiguration) + + + +

    VerbosityLevel represents a klog or logr verbosity threshold.

    + + + + ## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig} @@ -1180,366 +1453,100 @@ Default: 0.9

    registerWithTaints are an array of taints to add to a node object when the kubelet registers itself. This only takes effect when registerNode -is true and upon the initial registration of the node. -Default: nil

    - - -registerNode
    -bool - - -

    registerNode enables automatic registration with the apiserver. -Default: true

    - - -tracing
    -TracingConfiguration - - -

    Tracing specifies the versioned configuration for OpenTelemetry tracing clients. -See https://kep.k8s.io/2832 for more details. -Default: nil

    - - -localStorageCapacityIsolation
    -bool - - -

    LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true. -This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way -as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk -usage from the volume exceeds the limit. -This feature depends on the capability of detecting correct root file system disk usage. For certain systems, -such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be -disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir. -Default: true

    - - -containerRuntimeEndpoint [Required]
    -string - - -

    ContainerRuntimeEndpoint is the endpoint of container runtime. -Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. -Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'

    - - -imageServiceEndpoint
    -string - - -

    ImageServiceEndpoint is the endpoint of container image service. -Unix Domain Socket are supported on Linux, while npipe and tcp endpoints are supported on Windows. -Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'. -If not specified, the value in containerRuntimeEndpoint is used.

    - - - - - -## `SerializedNodeConfigSource` {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource} - - - -

    SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource. -This type is used internally by the Kubelet for tracking checkpointed dynamic configs. -It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.

    - - - - - - - - - - - - - - -
    FieldDescription
    apiVersion
    string
    kubelet.config.k8s.io/v1beta1
    kind
    string
    SerializedNodeConfigSource
    source
    -core/v1.NodeConfigSource -
    -

    source is the source that we are serializing.

    -
    - -## `CredentialProvider` {#kubelet-config-k8s-io-v1beta1-CredentialProvider} - - -**Appears in:** - -- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig) - - -

    CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only -invoked when an image being pulled matches the images handled by the plugin (see matchImages).

    - - - - - - - - - - - - - - - - - - - - - - - - - - -
    FieldDescription
    name [Required]
    -string -
    -

    name is the required name of the credential provider. It must match the name of the -provider executable as seen by the kubelet. The executable must be in the kubelet's -bin directory (set by the --image-credential-provider-bin-dir flag).

    -
    matchImages [Required]
    -[]string -
    -

    matchImages is a required list of strings used to match against images in order to -determine if this provider should be invoked. If one of the strings matches the -requested image from the kubelet, the plugin will be invoked and given a chance -to provide credentials. Images are expected to contain the registry domain -and URL path.

    -

    Each entry in matchImages is a pattern which can optionally contain a port and a path. -Globs can be used in the domain, but not in the port or the path. Globs are supported -as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. -Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match -a single subdomain segment, so *.io does not match *.k8s.io.

    -

    A match exists between an image and a matchImage when all of the below are true:

    -
      -
    • Both contain the same number of domain parts and each part matches.
    • -
    • The URL path of an imageMatch must be a prefix of the target image URL path.
    • -
    • If the imageMatch contains a port, then the port must match in the image as well.
    • -
    -

    Example values of matchImages:

    -
      -
    • 123456789.dkr.ecr.us-east-1.amazonaws.com
    • -
    • *.azurecr.io
    • -
    • gcr.io
    • -
    • *.*.registry.io
    • -
    • registry.io:8080/path
    • -
    -
    defaultCacheDuration [Required]
    -meta/v1.Duration -
    -

    defaultCacheDuration is the default duration the plugin will cache credentials in-memory -if a cache duration is not provided in the plugin response. This field is required.

    -
    apiVersion [Required]
    -string -
    -

    Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse -MUST use the same encoding version as the input. Current supported values are:

    -
      -
    • credentialprovider.kubelet.k8s.io/v1beta1
    • -
    -
    args
    -[]string -
    -

    Arguments to pass to the command when executing it.

    -
    env
    -[]ExecEnvVar -
    -

    Env defines additional environment variables to expose to the process. These -are unioned with the host's environment, as well as variables client-go uses -to pass argument to the plugin.

    -
    - -## `ExecEnvVar` {#kubelet-config-k8s-io-v1beta1-ExecEnvVar} - - -**Appears in:** - -- [CredentialProvider](#kubelet-config-k8s-io-v1beta1-CredentialProvider) - - -

    ExecEnvVar is used for setting environment variables when executing an exec-based -credential plugin.

    - - - - - - - - - - - - - - -
    FieldDescription
    name [Required]
    -string -
    - No description provided.
    value [Required]
    -string -
    - No description provided.
    - -## `KubeletAnonymousAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication} - - -**Appears in:** - -- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) - - - - - - - - - - - - -
    FieldDescription
    enabled
    -bool -
    -

    enabled allows anonymous requests to the kubelet server. -Requests that are not rejected by another authentication method are treated as -anonymous requests. -Anonymous requests have a username of system:anonymous, and a group name of -system:unauthenticated.

    -
    - -## `KubeletAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAuthentication} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - - - - - - - - + + - - - -
    FieldDescription
    x509
    -KubeletX509Authentication +is true and upon the initial registration of the node. +Default: nil

    +
    registerNode
    +bool
    -

    x509 contains settings related to x509 client certificate authentication.

    +

    registerNode enables automatic registration with the apiserver. +Default: true

    webhook
    -KubeletWebhookAuthentication +
    tracing
    +TracingConfiguration
    -

    webhook contains settings related to webhook bearer token authentication.

    +

    Tracing specifies the versioned configuration for OpenTelemetry tracing clients. +See https://kep.k8s.io/2832 for more details. +Default: nil

    anonymous
    -KubeletAnonymousAuthentication +
    localStorageCapacityIsolation
    +bool
    -

    anonymous contains settings related to anonymous authentication.

    +

    LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true. +This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way +as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk +usage from the volume exceeds the limit. +This feature depends on the capability of detecting correct root file system disk usage. For certain systems, +such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be +disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir. +Default: true

    - -## `KubeletAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorization} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - - - - - - - - -
    FieldDescription
    mode
    -KubeletAuthorizationMode +
    containerRuntimeEndpoint [Required]
    +string
    -

    mode is the authorization mode to apply to requests to the kubelet server. -Valid values are AlwaysAllow and Webhook. -Webhook mode uses the SubjectAccessReview API to determine authorization.

    +

    ContainerRuntimeEndpoint is the endpoint of container runtime. +Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows. +Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'

    webhook
    -KubeletWebhookAuthorization +
    imageServiceEndpoint
    +string
    -

    webhook contains settings related to Webhook authorization.

    +

    ImageServiceEndpoint is the endpoint of container image service. +Unix Domain Socket are supported on Linux, while npipe and tcp endpoints are supported on Windows. +Examples:'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'. +If not specified, the value in containerRuntimeEndpoint is used.

    -## `KubeletAuthorizationMode` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode} - -(Alias of `string`) - -**Appears in:** - -- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) - - - - - -## `KubeletWebhookAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication} +## `SerializedNodeConfigSource` {#kubelet-config-k8s-io-v1beta1-SerializedNodeConfigSource} -**Appears in:** - -- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) +

    SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource. +This type is used internally by the Kubelet for tracking checkpointed dynamic configs. +It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.

    + + + - - - -
    FieldDescription
    apiVersion
    string
    kubelet.config.k8s.io/v1beta1
    kind
    string
    SerializedNodeConfigSource
    enabled
    -bool -
    -

    enabled allows bearer token authentication backed by the -tokenreviews.authentication.k8s.io API.

    -
    cacheTTL
    -meta/v1.Duration +
    source
    +core/v1.NodeConfigSource
    -

    cacheTTL enables caching of authentication results

    +

    source is the source that we are serializing.

    -## `KubeletWebhookAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization} +## `CredentialProvider` {#kubelet-config-k8s-io-v1beta1-CredentialProvider} **Appears in:** -- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1beta1-CredentialProviderConfig) + +

    CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only +invoked when an image being pulled matches the images handled by the plugin (see matchImages).
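For illustration, a `CredentialProviderConfig` entry wiring up a hypothetical provider binary might look like this; the provider name, image pattern, and environment variable are assumptions, not defaults:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider   # must match the executable name in --image-credential-provider-bin-dir
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
    args:
      - get-credentials
    env:
      - name: AWS_PROFILE
        value: default
```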

    @@ -1547,133 +1554,122 @@ tokenreviews.authentication.k8s.io API.

    - - + + + - -
    cacheAuthorizedTTL
    -meta/v1.Duration +
    name [Required]
    +string
    -

    cacheAuthorizedTTL is the duration to cache 'authorized' responses from the -webhook authorizer.

    +

    name is the required name of the credential provider. It must match the name of the +provider executable as seen by the kubelet. The executable must be in the kubelet's +bin directory (set by the --image-credential-provider-bin-dir flag).

    cacheUnauthorizedTTL
    +
    matchImages [Required]
    +[]string +
    +

    matchImages is a required list of strings used to match against images in order to +determine if this provider should be invoked. If one of the strings matches the +requested image from the kubelet, the plugin will be invoked and given a chance +to provide credentials. Images are expected to contain the registry domain +and URL path.

    +

    Each entry in matchImages is a pattern which can optionally contain a port and a path. +Globs can be used in the domain, but not in the port or the path. Globs are supported +as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'. +Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match +a single subdomain segment, so *.io does not match *.k8s.io.

    +

    A match exists between an image and a matchImage when all of the below are true:

    +
      +
    • Both contain the same number of domain parts and each part matches.
    • +
    • The URL path of an imageMatch must be a prefix of the target image URL path.
    • +
    • If the imageMatch contains a port, then the port must match in the image as well.
    • +
    +

    Example values of matchImages:

    +
      +
    • 123456789.dkr.ecr.us-east-1.amazonaws.com
    • +
    • *.azurecr.io
    • +
    • gcr.io
    • +
    • *.*.registry.io
    • +
    • registry.io:8080/path
    • +
    +
    defaultCacheDuration [Required]
    meta/v1.Duration
    -

    cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from -the webhook authorizer.

    +

    defaultCacheDuration is the default duration the plugin will cache credentials in-memory +if a cache duration is not provided in the plugin response. This field is required.

    - -## `KubeletX509Authentication` {#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication} - - -**Appears in:** - -- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) - - - - - - - - - - -
    FieldDescription
    clientCAFile
    +
    apiVersion [Required]
    string
    -

    clientCAFile is the path to a PEM-encoded certificate bundle. If set, any request -presenting a client certificate signed by one of the authorities in the bundle -is authenticated with a username corresponding to the CommonName, -and groups corresponding to the Organization in the client certificate.

    +

    Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse MUST use the same encoding version as the input. Current supported values are:

    +
      +
    • credentialprovider.kubelet.k8s.io/v1beta1
    • +
    - -## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation} - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - -

    MemoryReservation specifies the memory reservation of different types for each NUMA node

    - - - - - - - - +

    Arguments to pass to the command when executing it.

    + - +

    Env defines additional environment variables to expose to the process. These are unioned with the host's environment, as well as variables client-go uses to pass arguments to the plugin.

    +
    FieldDescription
    numaNode [Required]
    -int32 +
    args
    +[]string
    - No description provided.
    limits [Required]
    -core/v1.ResourceList +
    env
    +[]ExecEnvVar
    - No description provided.
    -## `MemorySwapConfiguration` {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration} +## `ExecEnvVar` {#kubelet-config-k8s-io-v1beta1-ExecEnvVar} **Appears in:** -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +- [CredentialProvider](#kubelet-config-k8s-io-v1beta1-CredentialProvider) +

    ExecEnvVar is used for setting environment variables when executing an exec-based credential plugin.

    + - + + +
    FieldDescription
    swapBehavior
    +
    name [Required]
    string
    -

    swapBehavior configures swap memory available to container workloads. May be one of "", "LimitedSwap": workload combined memory and swap usage cannot exceed pod memory limit; "UnlimitedSwap": workloads can use unlimited swap, up to the allocatable limit.

    + No description provided.
    value [Required]
    +string
    + No description provided.
    -## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy} - -(Alias of `string`) - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - -

    ResourceChangeDetectionStrategy denotes a mode in which internal managers (secret, configmap) are discovering object changes.

    - - - - -## `ShutdownGracePeriodByPodPriority` {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority} +## `KubeletAnonymousAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAnonymousAuthentication} -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) +**Appears in:** +- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) -

    ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value

    @@ -1681,35 +1677,27 @@ managers (secret, configmap) are discovering object changes.

    - - - -
    priority [Required]
    -int32 -
    -

    priority is the priority value associated with the shutdown grace period

    -
    shutdownGracePeriodSeconds [Required]
    -int64 +
    enabled
    +bool
    -

    shutdownGracePeriodSeconds is the shutdown grace period in seconds

    +

    enabled allows anonymous requests to the kubelet server. Requests that are not rejected by another authentication method are treated as anonymous requests. Anonymous requests have a username of system:anonymous, and a group name of system:unauthenticated.

    - - - -## `FormatOptions` {#FormatOptions} +## `KubeletAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletAuthentication} **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) - +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    FormatOptions contains options for the different logging formats.

    @@ -1717,26 +1705,37 @@ managers (secret, configmap) are discovering object changes.

    - + + + + + +
    json [Required]
    -JSONOptions +
    x509
    +KubeletX509Authentication
    -

    [Alpha] JSON contains options for logging format "json". Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    x509 contains settings related to x509 client certificate authentication.

    +
    webhook
    +KubeletWebhookAuthentication +
    +

    webhook contains settings related to webhook bearer token authentication.

    +
    anonymous
    +KubeletAnonymousAuthentication +
    +

    anonymous contains settings related to anonymous authentication.

    -## `JSONOptions` {#JSONOptions} +## `KubeletAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorization} **Appears in:** -- [FormatOptions](#FormatOptions) - +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    JSONOptions contains options for logging format "json".

    @@ -1744,47 +1743,44 @@ Only available when the LoggingAlphaOptions feature gate is enabled.

    - -
    splitStream [Required]
    -bool +
    mode
    +KubeletAuthorizationMode
    -

    [Alpha] SplitStream redirects error messages to stderr while info messages go to stdout, with buffering. The default is to write both to stdout, without buffering. Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    mode is the authorization mode to apply to requests to the kubelet server. Valid values are AlwaysAllow and Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.

    infoBufferSize [Required]
    -k8s.io/apimachinery/pkg/api/resource.QuantityValue +
    webhook
    +KubeletWebhookAuthorization
    -

    [Alpha] InfoBufferSize sets the size of the info stream when using split streams. The default is zero, which disables buffering. Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    webhook contains settings related to Webhook authorization.

    -## `LogFormatFactory` {#LogFormatFactory} +## `KubeletAuthorizationMode` {#kubelet-config-k8s-io-v1beta1-KubeletAuthorizationMode} +(Alias of `string`) +**Appears in:** -

    LogFormatFactory provides support for a certain additional, non-default log format.

    +- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) -## `LoggingConfiguration` {#LoggingConfiguration} + +## `KubeletWebhookAuthentication` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthentication} **Appears in:** -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - +- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) -

    LoggingConfiguration contains logging options.

    @@ -1792,61 +1788,64 @@ non-default log format.

    - - - - - - +
    format [Required]
    -string -
    -

    Format Flag specifies the structure of log messages. The default value of format is text.

    -
    flushFrequency [Required]
    -TimeOrMetaDuration +
    enabled
    +bool
    -

    Maximum time between log flushes. If a string, parsed as a duration (i.e. "1s"). If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000). Ignored if the selected logging backend writes log messages without buffering.

    +

    enabled allows bearer token authentication backed by the tokenreviews.authentication.k8s.io API.

    verbosity [Required]
    -VerbosityLevel +
    cacheTTL
    +meta/v1.Duration
    -

    Verbosity is the threshold that determines which log messages are logged. Default is zero which logs only the most important messages. Higher values enable additional messages. Error messages are always logged.

    +

    cacheTTL enables caching of authentication results

    vmodule [Required]
    -VModuleConfiguration +
    + +## `KubeletWebhookAuthorization` {#kubelet-config-k8s-io-v1beta1-KubeletWebhookAuthorization} + + +**Appears in:** + +- [KubeletAuthorization](#kubelet-config-k8s-io-v1beta1-KubeletAuthorization) + + + + + + + + + -
    FieldDescription
    cacheAuthorizedTTL
    +meta/v1.Duration
    -

    VModule overrides the verbosity threshold for individual files. Only supported for "text" log format.

    +

    cacheAuthorizedTTL is the duration to cache 'authorized' responses from the webhook authorizer.

    options [Required]
    -FormatOptions +
    cacheUnauthorizedTTL
    +meta/v1.Duration
    -

    [Alpha] Options holds additional parameters that are specific to the different logging formats. Only the options for the selected format get used, but all of them get validated. Only available when the LoggingAlphaOptions feature gate is enabled.

    +

    cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from the webhook authorizer.

    -## `LoggingOptions` {#LoggingOptions} +## `KubeletX509Authentication` {#kubelet-config-k8s-io-v1beta1-KubeletX509Authentication} +**Appears in:** + +- [KubeletAuthentication](#kubelet-config-k8s-io-v1beta1-KubeletAuthentication) -

    LoggingOptions can be used with ValidateAndApplyWithOptions to override certain global defaults.

    @@ -1854,33 +1853,28 @@ certain global defaults.

    - - - -
    ErrorStream [Required]
    -io.Writer -
    -

    ErrorStream can be used to override the os.Stderr default.

    -
    InfoStream [Required]
    -io.Writer +
    clientCAFile
    +string
    -

    InfoStream can be used to override the os.Stdout default.

    +

    clientCAFile is the path to a PEM-encoded certificate bundle. If set, any request presenting a client certificate signed by one of the authorities in the bundle is authenticated with a username corresponding to the CommonName, and groups corresponding to the Organization in the client certificate.

    -## `TimeOrMetaDuration` {#TimeOrMetaDuration} +## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation} **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    TimeOrMetaDuration is present only for backwards compatibility for the flushFrequency field, and new fields should use metav1.Duration.

    +

    MemoryReservation specifies the memory reservation of different types for each NUMA node

    @@ -1888,24 +1882,22 @@ flushFrequency field, and new fields should use metav1.Duration.

    - + No description provided. - + No description provided.
    Duration [Required]
    -meta/v1.Duration +
    numaNode [Required]
    +int32
    -

    Duration holds the duration

    -
    - [Required]
    -bool +
    limits [Required]
    +core/v1.ResourceList
    -

    SerializeAsString controls whether the value is serialized as a string or an integer

    -
    -## `TracingConfiguration` {#TracingConfiguration} +## `MemorySwapConfiguration` {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration} **Appears in:** @@ -1913,60 +1905,69 @@ flushFrequency field, and new fields should use metav1.Duration.

    - [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

    - - - - -
    FieldDescription
    endpoint
    +
    swapBehavior
    string
    -

    Endpoint of the collector this component will report traces to. The connection is insecure, and does not currently support TLS. Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

    -
    samplingRatePerMillion
    -int32 -
    -

    SamplingRatePerMillion is the number of samples to collect per million spans. Recommended is unset. If unset, sampler respects its parent span's sampling rate, but otherwise never samples.

    +

    swapBehavior configures swap memory available to container workloads. May be one of "", "LimitedSwap": workload combined memory and swap usage cannot exceed pod memory limit; "UnlimitedSwap": workloads can use unlimited swap, up to the allocatable limit.

    -## `VModuleConfiguration` {#VModuleConfiguration} +## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy} -(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`) +(Alias of `string`) **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    VModuleConfiguration is a collection of individual file names or patterns and the corresponding verbosity threshold.

    +

    ResourceChangeDetectionStrategy denotes a mode in which internal managers (secret, configmap) are discovering object changes.

    -## `VerbosityLevel` {#VerbosityLevel} +## `ShutdownGracePeriodByPodPriority` {#kubelet-config-k8s-io-v1beta1-ShutdownGracePeriodByPodPriority} -(Alias of `uint32`) **Appears in:** -- [LoggingConfiguration](#LoggingConfiguration) - +- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) -

    VerbosityLevel represents a klog or logr verbosity threshold.

    +

    ShutdownGracePeriodByPodPriority specifies the shutdown grace period for Pods based on their associated priority class value

    + + + + + + + + + + + + +
    FieldDescription
    priority [Required]
    +int32 +
    +

    priority is the priority value associated with the shutdown grace period

    +
    shutdownGracePeriodSeconds [Required]
    +int64 +
    +

    shutdownGracePeriodSeconds is the shutdown grace period in seconds

    +
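The priority-based shutdown grace periods documented above are set in the kubelet configuration. A minimal sketch, assuming illustrative priority values (the field names follow the KubeletConfiguration schema; the numbers are placeholders):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriodByPodPriority:
  # Pods at or above priority 100000 get 10 seconds to terminate
  - priority: 100000
    shutdownGracePeriodSeconds: 10
  # All remaining Pods get 180 seconds
  - priority: 0
    shutdownGracePeriodSeconds: 180
```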
    + diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md index 9c8b754443e5a..579bbb7080570 100644 --- a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1.md @@ -12,7 +12,6 @@ auto_generated: true - [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest) - [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse) - ## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1-CredentialProviderRequest} diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md index c8a7bd682e60a..309ae2295fdc9 100644 --- a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1alpha1.md @@ -12,7 +12,6 @@ auto_generated: true - [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderRequest) - [CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderResponse) - ## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1alpha1-CredentialProviderRequest} diff --git a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md index 7384939b5f35b..352157d626c87 100644 --- a/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md +++ b/content/en/docs/reference/config-api/kubelet-credentialprovider.v1beta1.md @@ -12,7 +12,6 @@ auto_generated: true - [CredentialProviderRequest](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderRequest) - 
[CredentialProviderResponse](#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderResponse) - ## `CredentialProviderRequest` {#credentialprovider-kubelet-k8s-io-v1beta1-CredentialProviderRequest} @@ -110,7 +109,7 @@ stopping after the first successfully authenticated pull.

  • 123456789.dkr.ecr.us-east-1.amazonaws.com
  • *.azurecr.io
  • gcr.io
  • -
  • *.*registry.io
  • +
  • *.*.registry.io
  • registry.io:8080/path
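As a usage illustration, the example patterns above might appear in a kubelet CredentialProviderConfig such as the following sketch (the provider name is a hypothetical plugin binary expected on the kubelet's credential-provider binary path):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  # "example-provider" is an illustrative placeholder, not a real plugin.
  - name: example-provider
    matchImages:
      - "123456789.dkr.ecr.us-east-1.amazonaws.com"
      - "*.azurecr.io"
      - "registry.io:8080/path"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
```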
From 8d49270fede09357b038bbf6b72183c0f24ed504 Mon Sep 17 00:00:00 2001 From: Mohammed Affan Date: Mon, 2 Oct 2023 16:38:44 +0530 Subject: [PATCH 056/229] Add statement for explaining the example image name --- .../extend-kubernetes/configure-multiple-schedulers.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md index 4967ff8f27cc2..a24dad21cf30e 100644 --- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md +++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md @@ -52,11 +52,13 @@ Save the file as `Dockerfile`, build the image and push it to a registry. This e pushes the image to [Google Container Registry (GCR)](https://cloud.google.com/container-registry/). For more details, please read the GCR -[documentation](https://cloud.google.com/container-registry/docs/). +[documentation](https://cloud.google.com/container-registry/docs/). Alternatively +you can also use the [docker hub](https://hub.docker.com/search?q=). For more details +refer to the docker hub [documentation](https://docs.docker.com/docker-hub/repos/create/#create-a-repository). ```shell -docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . -gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 +docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 . 
# The image name and the repository +gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 # used in here is just an example ``` ## Define a Kubernetes Deployment for the scheduler From e8225026544d001fb16a1b73ca965de072a61507 Mon Sep 17 00:00:00 2001 From: steve-hardman <132999137+steve-hardman@users.noreply.github.com> Date: Mon, 2 Oct 2023 23:58:12 +0100 Subject: [PATCH 057/229] Update jq link --- content/en/blog/_posts/2020-09-03-warnings/index.md | 2 +- .../en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md | 2 +- content/en/docs/reference/kubectl/cheatsheet.md | 2 +- .../docs/tasks/administer-cluster/verify-signed-artifacts.md | 2 +- content/en/docs/tasks/tls/managing-tls-in-a-cluster.md | 4 ++-- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/content/en/blog/_posts/2020-09-03-warnings/index.md b/content/en/blog/_posts/2020-09-03-warnings/index.md index a5cfb9f710db7..5d31aedef2f41 100644 --- a/content/en/blog/_posts/2020-09-03-warnings/index.md +++ b/content/en/blog/_posts/2020-09-03-warnings/index.md @@ -63,7 +63,7 @@ This metric has labels for the API `group`, `version`, `resource`, and `subresou and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served. 
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json), -and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested +and [jq](https://jqlang.github.io/jq/) to determine which deprecated APIs have been requested from the current instance of the API server: ```sh diff --git a/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md b/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md index 2c60c12f3f079..1e1b32b265ce3 100644 --- a/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md +++ b/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md @@ -210,7 +210,7 @@ podip=$(cat /tmp/out | jq -r '.Endpoints[]|select(.Local == true)|select(.IPs.V6 ip6tables -t nat -A PREROUTING -d $xip/128 -j DNAT --to-destination $podip ``` -Assuming the JSON output above is stored in `/tmp/out` ([jq](https://stedolan.github.io/jq/) is an *awesome* program!). +Assuming the JSON output above is stored in `/tmp/out` ([jq](https://jqlang.github.io/jq/) is an *awesome* program!). 
As this is an example we make it really simple for ourselves by using diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index b933c6eaeecd8..fa7b9c3eb2912 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -213,7 +213,7 @@ kubectl get pods --field-selector=status.phase=Running kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' # List Names of Pods that belong to Particular RC -# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/ +# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://jqlang.github.io/jq/ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?} echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name}) diff --git a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md index 660b4e903bc96..45f18cec89160 100644 --- a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md +++ b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md @@ -15,7 +15,7 @@ You will need to have the following tools installed: - `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)) - `curl` (often provided by your operating system) -- `jq` ([download jq](https://stedolan.github.io/jq/download/)) +- `jq` ([download jq](https://jqlang.github.io/jq/download/)) ## Verifying binary signatures diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md index f8705587dab70..57a1cf510388d 100644 --- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md +++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md @@ -36,7 
+36,7 @@ You need the `cfssl` tool. You can download `cfssl` from Some steps in this page use the `jq` tool. If you don't have `jq`, you can install it via your operating system's software sources, or fetch it from -[https://stedolan.github.io/jq/](https://stedolan.github.io/jq/). +[https://jqlang.github.io/jq/](https://jqlang.github.io/jq/). @@ -267,7 +267,7 @@ kubectl get csr my-svc.my-namespace -o json | \ ``` {{< note >}} -This uses the command line tool [`jq`](https://stedolan.github.io/jq/) to populate the base64-encoded +This uses the command line tool [`jq`](https://jqlang.github.io/jq/) to populate the base64-encoded content in the `.status.certificate` field. If you do not have `jq`, you can also save the JSON output to a file, populate this field manually, and upload the resulting file. From 27e6da11d9e30a3dfe715fce06c29ec4e6537a1b Mon Sep 17 00:00:00 2001 From: abhiram royals <110195480+Royal-Dragon@users.noreply.github.com> Date: Wed, 4 Oct 2023 19:18:14 +0530 Subject: [PATCH 058/229] changed "search" to "search this site" #43291 a feature request to replace "search" with "search this site" label #43291 --- data/i18n/en/en.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/i18n/en/en.toml b/data/i18n/en/en.toml index 8ec60129fa5a6..9ba8d0e51d284 100644 --- a/data/i18n/en/en.toml +++ b/data/i18n/en/en.toml @@ -430,7 +430,7 @@ other = """🛇 This item links to a third party project or product that is other = """

Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details.

You should read the content guide before proposing a change that adds an extra third-party link.

""" [ui_search_placeholder] -other = "Search" +other = "Search this site" [version_check_mustbe] other = "Your Kubernetes server must be version " From 1e3b672e7211b9ad4141534c16f511e1e54089f5 Mon Sep 17 00:00:00 2001 From: "Kenneth J. Miller" Date: Thu, 28 Sep 2023 12:35:21 +0200 Subject: [PATCH 059/229] content: es: Fix incorrect letter case for data storage units Gib (Gibibit) was used where GiB (Gibibyte) is intended. --- content/es/docs/concepts/workloads/controllers/statefulset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/es/docs/concepts/workloads/controllers/statefulset.md b/content/es/docs/concepts/workloads/controllers/statefulset.md index 95e86a7a3f674..7ed24d2b7edaf 100644 --- a/content/es/docs/concepts/workloads/controllers/statefulset.md +++ b/content/es/docs/concepts/workloads/controllers/statefulset.md @@ -153,7 +153,7 @@ El valor de Cluster Domain se pondrá a `cluster.local` a menos que Kubernetes crea un [PersistentVolume](/docs/concepts/storage/persistent-volumes/) para cada VolumeClaimTemplate. En el ejemplo de nginx de arriba, cada Pod recibirá un único PersistentVolume -con una StorageClass igual a `my-storage-class` y 1 Gib de almacenamiento provisionado. Si no se indica ninguna StorageClass, +con una StorageClass igual a `my-storage-class` y 1 GiB de almacenamiento provisionado. Si no se indica ninguna StorageClass, entonces se usa la StorageClass por defecto. Cuando un Pod se (re)programa en un nodo, sus `volumeMounts` montan los PersistentVolumes asociados con sus PersistentVolume Claims. Nótese que los PersistentVolumes asociados con los From 894a2215c8e8b2fdeb55c259c14d8b6be7e209bb Mon Sep 17 00:00:00 2001 From: "Kenneth J. Miller" Date: Thu, 28 Sep 2023 12:35:21 +0200 Subject: [PATCH 060/229] content: id: Fix incorrect letter case for data storage units Gib (Gibibit) was used where GiB (Gibibyte) is intended. 
--- content/id/docs/concepts/workloads/controllers/statefulset.md | 3 +-- .../manage-resources/memory-constraint-namespace.md | 2 +- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/content/id/docs/concepts/workloads/controllers/statefulset.md b/content/id/docs/concepts/workloads/controllers/statefulset.md index a309e223a3693..5c091d3620939 100644 --- a/content/id/docs/concepts/workloads/controllers/statefulset.md +++ b/content/id/docs/concepts/workloads/controllers/statefulset.md @@ -154,7 +154,7 @@ Domain klaster akan diatur menjadi `cluster.local` kecuali Kubernetes membuat sebuah [PersistentVolume](/id/docs/concepts/storage/persistent-volumes/) untuk setiap VolumeClaimTemplate. Pada contoh nginx di atas, setiap Pod akan menerima sebuah PersistentVolume -dengan StorageClass `my-storage-class` dan penyimpanan senilai 1 Gib yang sudah di-_provisioning_. Jika tidak ada StorageClass +dengan StorageClass `my-storage-class` dan penyimpanan senilai 1 GiB yang sudah di-_provisioning_. Jika tidak ada StorageClass yang dispesifikasikan, maka StorageClass _default_ akan digunakan. Ketika sebuah Pod dilakukan _(re)schedule_ pada sebuah Node, `volumeMounts` akan me-_mount_ PersistentVolumes yang terkait dengan PersistentVolume Claim-nya. Perhatikan bahwa, PersistentVolume yang terkait dengan @@ -275,4 +275,3 @@ StatefulSet akan mulai membuat Pod dengan templat konfigurasi yang sudah di-_rev * Ikuti contoh yang ada pada [bagaimana cara melakukan deploy Cassandra dengan StatefulSets](/docs/tutorials/stateful-application/cassandra/). 
- diff --git a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md index 1aae3e38f009b..406363d0e2882 100644 --- a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md +++ b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md @@ -202,7 +202,7 @@ dari LimitRange. Pada tahap ini, Containermu mungkin saja berjalan ataupun mungkin juga tidak berjalan. Ingat bahwa prasyarat untuk tugas ini adalah Node harus memiliki setidaknya 1 GiB memori. Jika tiap Node hanya memiliki -1 GiB memori, maka tidak akan ada cukup memori untuk dialokasikan pada setiap Node untuk memenuhi permintaan 1 Gib memori. Jika ternyata kamu menggunakan Node dengan 2 GiB memori, maka kamu mungkin memiliki cukup ruang untuk memenuhi permintaan 1 GiB tersebut. +1 GiB memori, maka tidak akan ada cukup memori untuk dialokasikan pada setiap Node untuk memenuhi permintaan 1 GiB memori. Jika ternyata kamu menggunakan Node dengan 2 GiB memori, maka kamu mungkin memiliki cukup ruang untuk memenuhi permintaan 1 GiB tersebut. Menghapus Pod: From 2b61bd9cfeacc3852cb7ee174526847c6b9dcb3d Mon Sep 17 00:00:00 2001 From: Mohammed Affan Date: Thu, 5 Oct 2023 12:04:48 +0530 Subject: [PATCH 061/229] Remove misleading docs around eviction and image garbage collection --- .../scheduling-eviction/node-pressure-eviction.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md index 80367800153a6..381e291488fa1 100644 --- a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md +++ b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md @@ -105,13 +105,11 @@ does not support other configurations. 
Some kubelet garbage collection features are deprecated in favor of eviction: -| Existing Flag | New Flag | Rationale | -| ------------- | -------- | --------- | -| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection | -| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior | -| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context | -| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context | -| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context | +| Existing Flag | Rationale | +| ------------- | --------- | +| `--maximum-dead-containers` | deprecated once old logs are stored outside of container's context | +| `--maximum-dead-containers-per-container` | deprecated once old logs are stored outside of container's context | +| `--minimum-container-ttl-duration` | deprecated once old logs are stored outside of container's context | ### Eviction thresholds From 45644aa65c6410f698e5d288a81fc5b4999cc6cf Mon Sep 17 00:00:00 2001 From: Michael Date: Thu, 5 Oct 2023 20:56:08 +0800 Subject: [PATCH 062/229] [zh] Clean up kubeadm_init_phase_kubeconfig files --- .../kubeadm_init_phase_kubeconfig.md | 18 +-------------- .../kubeadm_init_phase_kubeconfig_admin.md | 20 ++--------------- .../kubeadm_init_phase_kubeconfig_all.md | 19 +++------------- ...nit_phase_kubeconfig_controller-manager.md | 20 ++--------------- .../kubeadm_init_phase_kubeconfig_kubelet.md | 22 +++---------------- ...kubeadm_init_phase_kubeconfig_scheduler.md | 20 ++--------------- 6 files changed, 13 insertions(+), 106 deletions(-) diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md 
b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md index 5f1cf4bef421d..d2ec9d5d351e1 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md @@ -1,29 +1,16 @@ - - -生成所有建立控制平面和管理员(admin)所需的 kubeconfig 文件 +生成所有建立控制平面和管理员(admin)所需的 kubeconfig 文件。 - ### 概要 - 此命令并非设计用来单独运行。请阅读可用子命令列表。 ``` @@ -33,7 +20,6 @@ kubeadm init phase kubeconfig [flags] - ### 选项 @@ -61,7 +47,6 @@ kubeadm init phase kubeconfig [flags] - ### 从父命令继承的选项
@@ -85,4 +70,3 @@ kubeadm init phase kubeconfig [flags]
- diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md index 1c2213f0ee72c..3137702401a56 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_admin.md @@ -1,29 +1,16 @@ - - -为管理员(admin)和 kubeadm 本身生成 kubeconfig 文件 +为管理员(admin)和 kubeadm 本身生成 kubeconfig 文件。 - ### 概要 - 为管理员和 kubeadm 本身生成 kubeconfig 文件,并将其保存到 admin.conf 文件中。 ``` @@ -33,7 +20,6 @@ kubeadm init phase kubeconfig admin [flags] - ### 选项 @@ -135,7 +121,7 @@ Don't apply any changes; just output what would be done. -

admin 操作的帮助命令

+

admin 操作的帮助命令。

@@ -179,7 +165,6 @@ Don't apply any changes; just output what would be done. - ### 继承于父命令的选项
@@ -203,4 +188,3 @@ Don't apply any changes; just output what would be done.
- diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md index 9607d7b20387c..70c0a5af88502 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_all.md @@ -1,29 +1,17 @@ - - -生成所有 kubeconfig 文件 +生成所有 kubeconfig 文件。 - ### 概要 -生成所有 kubeconfig 文件 +生成所有 kubeconfig 文件。 ``` kubeadm init phase kubeconfig all [flags] @@ -32,7 +20,6 @@ kubeadm init phase kubeconfig all [flags] - ### 选项 @@ -145,7 +132,7 @@ Don't apply any changes; just output what would be done. help for all -->

-all 操作的帮助命令 +all 操作的帮助命令。

diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md index 508255e1657c1..26e65a1cf6ec8 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_controller-manager.md @@ -1,29 +1,16 @@ - - -生成控制器管理器要使用的 kubeconfig 文件 +生成控制器管理器要使用的 kubeconfig 文件。 - ### 概要 - 生成控制器管理器要使用的 kubeconfig 文件,并保存到 controller-manager.conf 文件中。 ``` @@ -33,7 +20,6 @@ kubeadm init phase kubeconfig controller-manager [flags] - ### 选项
@@ -133,7 +119,7 @@ kubeadm init phase kubeconfig controller-manager [flags] -

controller-manager 操作的帮助命令

+

controller-manager 操作的帮助命令。

@@ -176,7 +162,6 @@ kubeadm init phase kubeconfig controller-manager [flags] - ### 继承于父命令的选项
@@ -200,4 +185,3 @@ kubeadm init phase kubeconfig controller-manager [flags]
- diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md index c0b5d06bc864c..3bb33f1368ac4 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_kubelet.md @@ -1,18 +1,7 @@ - - -为 kubelet 生成一个 kubeconfig 文件,*仅仅*用于集群引导目的 +为 kubelet 生成一个 kubeconfig 文件,**仅仅**用于集群引导目的。 - 生成 kubelet 要使用的 kubeconfig 文件,并将其保存到 kubelet.conf 文件。 - -请注意,该操作目的是*仅*应用于引导集群。在控制平面启动之后,应该从 CSR API 请求所有 kubelet 凭据。 +请注意,该操作目的是**仅**用于引导集群。在控制平面启动之后,应该从 CSR API 请求所有 kubelet 凭据。 ``` kubeadm init phase kubeconfig kubelet [flags] @@ -38,7 +25,6 @@ kubeadm init phase kubeconfig kubelet [flags] - ### 选项 @@ -138,7 +124,7 @@ kubeadm init phase kubeconfig kubelet [flags] -

kubelet 操作的帮助命令

+

kubelet 操作的帮助命令。

@@ -194,7 +180,6 @@ kubeadm init phase kubeconfig kubelet [flags] - ### 继承于父命令的选项
@@ -218,4 +203,3 @@ kubeadm init phase kubeconfig kubelet [flags]
- diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md index 510a7319564ed..356926bc59e59 100644 --- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md +++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig_scheduler.md @@ -1,29 +1,16 @@ - - -生成调度器使用的 kubeconfig 文件 +生成调度器使用的 kubeconfig 文件。 - ### 概要 - 生成调度器(scheduler)要使用的 kubeconfig 文件,并保存到 scheduler.conf 文件中。 ``` @@ -33,7 +20,6 @@ kubeadm init phase kubeconfig scheduler [flags] - ### 选项 @@ -135,7 +121,7 @@ Don't apply any changes; just output what would be done. -

scheduler 操作的帮助命令

+

scheduler 操作的帮助命令。

@@ -179,7 +165,6 @@ Don't apply any changes; just output what would be done. - ### 继承于父命令的选项
@@ -203,4 +188,3 @@ Don't apply any changes; just output what would be done.
- From 78bc401fbcf49c345431c9199d5492b4245a186b Mon Sep 17 00:00:00 2001 From: Wilson Wu Date: Tue, 3 Oct 2023 23:04:43 +0800 Subject: [PATCH 063/229] Init translate --- .../2023-08-15-pkgs-k8s-io-introduction.md | 324 ++++++++++++++++++ 1 file changed, 324 insertions(+) create mode 100644 content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md diff --git a/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md b/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md new file mode 100644 index 0000000000000..2dd837d0cf8d7 --- /dev/null +++ b/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md @@ -0,0 +1,324 @@ +--- +layout: blog +title: "pkgs.k8s.io:介绍 Kubernetes 社区自有的包仓库" +date: 2023-08-15T20:00:00+0000 +slug: pkgs-k8s-io-introduction +--- + + + +**作者**:Marko Mudrinić (Kubermatic) + +**译者**:Wilson Wu (DaoCloud) + + +我很高兴代表 Kubernetes SIG Release 介绍 Kubernetes +社区自有的 Debian 和 RPM 软件仓库:`pkgs.k8s.io`! +这些全新的仓库取代了我们自 Kubernetes v1.5 以来一直使用的托管在 +Google 的仓库(`apt.kubernetes.io` 和 `yum.kubernetes.io`)。 + + +这篇博文包含关于这些新的包仓库的信息、它对最终用户意味着什么以及如何迁移到新仓库。 + + +**ℹ️ 更新(2023 年 8 月 31 日):旧版托管在 Google 的仓库已被弃用,并将于 2023 年 9 月 13 日开始被冻结。** +查看[弃用公告](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)了解有关此更改的更多详细信息。 + + +## 关于新的包仓库,你需要了解哪些信息? 
{#what-you-need-to-know-about-the-new-package-repositories} + + +**(更新于 2023 年 8 月 31 日)** + + +- 这是一个**明确同意的更改**;你需要手动从托管在 Google 的仓库迁移到 + Kubernetes 社区自有的仓库。请参阅本公告后面的[如何迁移](#how-to-migrate), + 了解迁移信息和说明。 + +- 旧版托管在 Google 的仓库**自 2023 年 8 月 31 日起被弃用**, + 并将**于 2023 年 9 月 13 日左右被冻结**。 + 冻结将在计划于 2023 年 9 月发布补丁之后立即发生。 + 冻结旧仓库意味着我们在 2023 年 9 月 13 日这个时间点之后仅将 Kubernetes + 项目的包发布到社区自有的仓库。有关此更改的更多详细信息, + 请查看[弃用公告](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)。 + +- 旧仓库中的现有包将在可预见的未来一段时间内可用。 + 然而,Kubernetes 项目无法保证这会持续多久。 + 已弃用的旧仓库及其内容可能会在未来随时被删除,恕不另行通知。 + + +- 鉴于在 2023 年 9 月 13 日这个截止时间点之后不会向旧仓库发布任何新版本, + 如果你不在该截止时间点迁移至新的 Kubernetes 仓库, + 你将无法升级到该日期之后发布的任何补丁或次要版本。 + 也就是说,我们建议**尽快**迁移到新的 Kubernetes 仓库。 + +- 新的 Kubernetes 仓库中包含社区开始接管包构建以来仍在支持的 Kubernetes 版本的包。 + 这意味着 v1.24.0 之前的任何内容都只存在于托管在 Google 的仓库中。 + +- 每个 Kubernetes 次要版本都有一个专用的仓库。 + 当升级到不同的次要版本时,你必须记住,仓库详细信息也会发生变化。 + + +## 为什么我们要引入新的包仓库? {#why-are-we-introducing-new-package-repositories} + + +随着 Kubernetes 项目的不断发展,我们希望确保最终用户获得最佳体验。 +托管在 Google 的仓库多年来一直为我们提供良好的服务, +但我们开始面临一些问题,需要对发布包的方式进行重大变更。 +我们的另一个目标是对所有关键组件使用社区拥有的基础设施,其中包括仓库。 + + +将包发布到托管在 Google 的仓库是一个手动过程, +只能由名为 [Google 构建管理员](/zh-cn/releases/release-managers/#build-admins)的 Google 员工团队来完成。 +[Kubernetes 发布管理员团队](/zh-cn/releases/release-managers/#release-managers)是一个非常多元化的团队, +尤其是在我们工作的时区方面。考虑到这一限制,我们必须对每个版本进行非常仔细的规划, +确保我们有发布经理和 Google 构建管理员来执行发布。 + + +另一个问题是由于我们只有一个包仓库。因此,我们无法发布预发行版本 +(Alpha、Beta 和 RC)的包。这使得任何有兴趣测试的人都更难测试 Kubernetes 预发布版本。 +我们从测试这些版本的人员那里收到的反馈对于确保版本的最佳质量至关重要, +因此我们希望尽可能轻松地测试这些版本。最重要的是,只有一个仓库限制了我们对 +`cri-tools` 和 `kubernetes-cni` 等依赖进行发布, + + +尽管存在这些问题,我们仍非常感谢 Google 和 Google 构建管理员这些年来的参与、支持和帮助! + + +## 新的包仓库如何工作? 
{#how-the-new-package-repositories-work} + + +新的 Debian 和 RPM 仓库托管在 `pkgs.k8s.io`。 +目前,该域指向一个 CloudFront CDN,其后是包含仓库和包的 S3 存储桶。 +然而,我们计划在未来添加更多的镜像站点,让其他公司有可能帮助我们提供软件包服务。 + + +包通过 [OpenBuildService(OBS)平台](http://openbuildservice.org)构建和发布。 +经过长时间评估不同的解决方案后,我们决定使用 OpenBuildService 作为管理仓库和包的平台。 +首先,OpenBuildService 是一个开源平台,被大量开源项目和公司使用, +如 openSUSE、VideoLAN、Dell、Intel 等。OpenBuildService 具有许多功能, +使其非常灵活且易于与我们现有的发布工具集成。 +它还允许我们以与托管在 Google 的仓库类似的方式构建包,从而使迁移过程尽可能无缝。 + + +SUSE 赞助 Kubernetes 项目并且支持访问其引入的 OpenBuildService 环境 +([`build.opensuse.org`](http://build.opensuse.org)), +还提供将 OBS 与我们的发布流程集成的技术支持。 + + +我们使用 SUSE 的 OBS 实例来构建和发布包。构建新版本后, +我们的工具会自动将所需的制品和包设置推送到 `build.opensuse.org`。 +这将触发构建过程,为所有支持的架构(AMD64、ARM64、PPC64LE、S390X)构建包。 +最后,生成的包将自动推送到我们社区拥有的 S3 存储桶,以便所有用户都可以使用它们。 + + +我们想借此机会感谢 SUSE 允许我们使用 `build.opensuse.org` +以及他们的慷慨支持,使这种集成成为可能! + + +## 托管在 Google 的仓库和 Kubernetes 仓库之间有哪些显著差异? {#what-are-significant-differences-between-the-google-hosted-and-kubernetes-package-repositories} + + +你应该注意三个显著差异: + + +- 每个 Kubernetes 次要版本都有一个专用的仓库。例如, + 名为 `core:/stable:/v1.28` 的仓库仅托管稳定 Kubernetes v1.28 版本的包。 + 这意味着你可以从此仓库安装 v1.28.0,但无法安装 v1.27.0 或 v1.28 之外的任何其他次要版本。 + 升级到另一个次要版本后,你必须添加新的仓库并可以选择删除旧的仓库 + +- 每个 Kubernetes 仓库中可用的 `cri-tools` 和 `kubernetes-cni` 包版本有所不同 + - 这两个包是 `kubelet` 和 `kubeadm` 的依赖项 + - v1.24 到 v1.27 的 Kubernetes 仓库与托管在 Google 的仓库具有这些包的相同版本 + - v1.28 及更高版本的 Kubernetes 仓库将仅发布该 Kubernetes 次要版本 + - 就 v1.28 而言,Kubernetes v1.28 的仓库中仅提供 kubernetes-cni 1.2.0 和 cri-tools v1.28 + - 与 v1.29 类似,我们只计划发布 cri-tools v1.29 以及 Kubernetes v1.29 将使用的 kubernetes-cni 版本 + +- 包版本的修订部分(`1.28.0-00` 中的 `-00` 部分)现在由 OpenBuildService + 平台自动生成,并具有不同的格式。修订版本现在采用 `-x.y` 格式,例如 `1.28.0-1.1` + + +## 这是否会影响现有的托管在 Google 的仓库? {#does-this-in-any-way-affect-existing-google-hosted-repositories} + + +托管在 Google 的仓库以及发布到其中的所有包仍然可用,与之前一样。 +我们构建包并将其发布到托管在 Google 仓库的方式没有变化, +所有新引入的更改仅影响发布到社区自有仓库的包。 + + +然而,正如本文开头提到的,我们计划将来停止将包发布到托管在 Google 的仓库。 + + +## 如何迁移到 Kubernetes 社区自有的仓库? 
{#how-to-migrate} + + +### 使用 `apt`/`apt-get` 的 Debian、Ubuntu 一起其他操作系统 {#how-to-migrate-deb} + + +1. 替换 `apt` 仓库定义,以便 `apt` 指向新仓库而不是托管在 Google 的仓库。 + 确保将以下命令中的 Kubernetes 次要版本替换为你当前使用的次要版本: + + ```shell + echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list + ``` + + +2. 下载 Kubernetes 仓库的公共签名密钥。所有仓库都使用相同的签名密钥, + 因此你可以忽略 URL 中的版本: + + ```shell + curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg + ``` + + +3. 更新 `apt` 包索引: + + ```shell + sudo apt-get update + ``` + + +### 使用 `rpm`/`dnf` 的 CentOS、Fedora、RHEL 以及其他操作系统 {#how-to-migrate-rpm} + + +1. 替换 `yum` 仓库定义,使 `yum` 指向新仓库而不是托管在 Google 的仓库。 + 确保将以下命令中的 Kubernetes 次要版本替换为你当前使用的次要版本: + + ```shell + cat < +## 迁移到 Kubernetes 仓库后是否可以回滚到托管在 Google 的仓库? {#can-i-rollback-to-the-google-hosted-repository-after-migrating-to-the-kubernetes-repositories} + + +一般来说,可以。只需执行与迁移时相同的步骤,但使用托管在 Google 的仓库参数。 +你可以在[“安装 kubeadm”](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm)等文档中找到这些参数。 + + +## 为什么没有固定的域名/IP 列表?为什么我无法限制包下载? {#why-isn-t-there-a-stable-list-of-domains-ips-why-can-t-i-restrict-package-downloads} + + +我们对 `pkgs.k8s.io` 的计划是使其根据用户位置充当一组后端(包镜像)的重定向器。 +此更改的本质意味着下载包的用户可以随时重定向到任何镜像。 +鉴于架构和我们计划在不久的将来加入更多镜像,我们无法提供给你可以添加到允许列表中的 +IP 地址或域名列表。 + + +限制性控制机制(例如限制访问特定 IP/域名列表的中间人代理或网络策略)将随着此更改而中断。 +对于这些场景,我们鼓励你将包的发布版本与你可以严格控制的本地仓库建立镜像。 + + +## 如果我发现新的仓库有异常怎么办? 
{#what-should-i-do-if-i-detect-some-abnormality-with-the-new-repositories} + + +如果你在新的 Kubernetes 仓库中遇到任何问题, +请在 [`kubernetes/release` 仓库](https://github.com/kubernetes/release/issues/new/choose)中提交问题。 From 3729d4a75a97bad0dfa048aa5c655f24281335c6 Mon Sep 17 00:00:00 2001 From: Arhell Date: Fri, 6 Oct 2023 00:36:28 +0300 Subject: [PATCH 064/229] [ja] update the range of pod-deletion-cost --- content/ja/docs/concepts/workloads/controllers/replicaset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/concepts/workloads/controllers/replicaset.md b/content/ja/docs/concepts/workloads/controllers/replicaset.md index c3d5282fa5c15..1f73fe842f911 100644 --- a/content/ja/docs/concepts/workloads/controllers/replicaset.md +++ b/content/ja/docs/concepts/workloads/controllers/replicaset.md @@ -275,7 +275,7 @@ ReplicaSetは、ただ`.spec.replicas`フィールドを更新することによ [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost)アノテーションを使用すると、ReplicaSetをスケールダウンする際に、どのPodを最初に削除するかについて、ユーザーが優先順位を設定することができます。 -アノテーションはPodに設定する必要があり、範囲は[-2147483647, 2147483647]になります。同じReplicaSetに属する他のPodと比較して、Podを削除する際のコストを表しています。削除コストの低いPodは、削除コストの高いPodより優先的に削除されます。 +アノテーションはPodに設定する必要があり、範囲は[-2147483648, 2147483647]になります。同じReplicaSetに属する他のPodと比較して、Podを削除する際のコストを表しています。削除コストの低いPodは、削除コストの高いPodより優先的に削除されます。 このアノテーションを設定しないPodは暗黙的に0と設定され、負の値は許容されます。 無効な値はAPIサーバーによって拒否されます。 From f2cfc91486c1d88eefbe861393194b43d5e7e4b5 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Tue, 15 Aug 2023 19:54:37 +0000 Subject: [PATCH 065/229] Fix the case of Secrets wherever it refers to the Kubernetes object --- .../en/docs/concepts/configuration/secret.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index d546ec12e4964..a7c38d4d4a5d5 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ 
b/content/en/docs/concepts/configuration/secret.md @@ -6,8 +6,8 @@ content_type: concept feature: title: Secret and configuration management description: > - Deploy and update secrets and application configuration without rebuilding your image - and without exposing secrets in your stack configuration. + Deploy and update Secrets and application configuration without rebuilding your image + and without exposing Secrets in your stack configuration. weight: 30 --- @@ -68,7 +68,7 @@ help automate node registration. ### Use case: dotfiles in a secret volume You can make your data "hidden" by defining a key that begins with a dot. -This key represents a dotfile or "hidden" file. For example, when the following secret +This key represents a dotfile or "hidden" file. For example, when the following Secret is mounted into a volume, `secret-volume`, the volume will contain a single file, called `.secret-file`, and the `dotfile-test-container` will have this file present at the path `/etc/secret-volume/.secret-file`. @@ -135,8 +135,8 @@ Here are some of your options: [ServiceAccount](/docs/reference/access-authn-authz/authentication/#service-account-tokens) and its tokens to identify your client. - There are third-party tools that you can run, either within or outside your cluster, - that provide secrets management. For example, a service that Pods access over HTTPS, - that reveals a secret if the client correctly authenticates (for example, with a ServiceAccount + that provide Secrets management. For example, a service that Pods access over HTTPS, + that reveals a Secret if the client correctly authenticates (for example, with a ServiceAccount token). 
- For authentication, you can implement a custom signer for X.509 certificates, and use [CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/) @@ -505,7 +505,7 @@ data: A bootstrap token Secret has the following keys specified under `data`: - `token-id`: A random 6 character string as the token identifier. Required. -- `token-secret`: A random 16 character string as the actual token secret. Required. +- `token-secret`: A random 16 character string as the actual token Secret. Required. - `description`: A human-readable string that describes what the token is used for. Optional. - `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token @@ -568,9 +568,9 @@ precedence. #### Size limit {#restriction-data-size} -Individual secrets are limited to 1MiB in size. This is to discourage creation -of very large secrets that could exhaust the API server and kubelet memory. -However, creation of many smaller secrets could also exhaust memory. You can +Individual Secrets are limited to 1MiB in size. This is to discourage creation +of very large Secrets that could exhaust the API server and kubelet memory. +However, creation of many smaller Secrets could also exhaust memory. You can use a [resource quota](/docs/concepts/policy/resource-quotas/) to limit the number of Secrets (or other resources) in a namespace. @@ -708,17 +708,17 @@ LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names. ``` -### Container image pull secrets {#using-imagepullsecrets} +### Container image pull Secrets {#using-imagepullsecrets} If you want to fetch container images from a private repository, you need a way for the kubelet on each node to authenticate to that repository. 
You can configure -_image pull secrets_ to make this possible. These secrets are configured at the Pod +_image pull Secrets_ to make this possible. These Secrets are configured at the Pod level. #### Using imagePullSecrets -The `imagePullSecrets` field is a list of references to secrets in the same namespace. -You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry +The `imagePullSecrets` field is a list of references to Secrets in the same namespace. +You can use an `imagePullSecrets` to pass a Secret that contains a Docker (or other) image registry password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod. See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core) for more information about the `imagePullSecrets` field. @@ -787,7 +787,7 @@ Secrets it expects to interact with, other apps within the same namespace can render those assumptions invalid. A Secret is only sent to a node if a Pod on that node requires it. -For mounting secrets into Pods, the kubelet stores a copy of the data into a `tmpfs` +For mounting Secrets into Pods, the kubelet stores a copy of the data into a `tmpfs` so that the confidential data is not written to durable storage. Once the Pod that depends on the Secret is deleted, the kubelet deletes its local copy of the confidential data from the Secret. 
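The Secret manifests touched by these patches store values base64-encoded under `data`; as the comment added in `tls-auth-secret.yaml` notes, that encoding obscures values but provides no confidentiality. A minimal sketch of producing and checking such values with the standard `base64` tool (assuming GNU coreutils; the sample strings are taken from the example manifests above and are not real credentials):

```shell
# Encode a plaintext value the way a Secret's `data` field expects.
# Note: base64 obscures the value, it does NOT encrypt it.
printf 'admin' | base64
# -> YWRtaW4=

# Decode the `.secret-file` value from the dotfile-secret example
# (the payload is "value-2" followed by CRLF line endings).
printf 'dmFsdWUtMg0KDQo=' | base64 --decode
```

In practice `kubectl create secret generic --from-literal=...` performs this encoding for you, and the `stringData` field (as used in `basicauth-secret.yaml`) lets a manifest skip it entirely.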
From cc62cbfda3fb47bf5a0585e7db237a23cf868bd4 Mon Sep 17 00:00:00 2001 From: Shannon Kularathna Date: Tue, 15 Aug 2023 20:11:43 +0000 Subject: [PATCH 066/229] Move YAML snippets to examples directory and include with code shortcode --- .../en/docs/concepts/configuration/secret.md | 154 ++---------------- .../en/examples/secret/basicauth-secret.yaml | 8 + .../secret/bootstrap-token-secret-base64.yaml | 13 ++ .../bootstrap-token-secret-literal.yaml | 18 ++ .../en/examples/secret/dockercfg-secret.yaml | 8 + .../en/examples/secret/dotfile-secret.yaml | 27 +++ .../en/examples/secret/optional-secret.yaml | 17 ++ .../secret/serviceaccount-token-secret.yaml | 9 + .../en/examples/secret/ssh-auth-secret.yaml | 9 + .../en/examples/secret/tls-auth-secret.yaml | 28 ++++ 10 files changed, 148 insertions(+), 143 deletions(-) create mode 100644 content/en/examples/secret/basicauth-secret.yaml create mode 100644 content/en/examples/secret/bootstrap-token-secret-base64.yaml create mode 100644 content/en/examples/secret/bootstrap-token-secret-literal.yaml create mode 100644 content/en/examples/secret/dockercfg-secret.yaml create mode 100644 content/en/examples/secret/dotfile-secret.yaml create mode 100644 content/en/examples/secret/optional-secret.yaml create mode 100644 content/en/examples/secret/serviceaccount-token-secret.yaml create mode 100644 content/en/examples/secret/ssh-auth-secret.yaml create mode 100644 content/en/examples/secret/tls-auth-secret.yaml diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index a7c38d4d4a5d5..e523c38ee7347 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -24,7 +24,7 @@ Because Secrets can be created independently of the Pods that use them, there is less risk of the Secret (and its data) being exposed during the workflow of creating, viewing, and editing Pods. 
Kubernetes, and applications that run in your cluster, can also take additional precautions with Secrets, such as avoiding -writing secret data to nonvolatile storage. +writing sensitive data to nonvolatile storage. Secrets are similar to {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}} but are specifically intended to hold confidential data. @@ -78,35 +78,7 @@ Files beginning with dot characters are hidden from the output of `ls -l`; you must use `ls -la` to see them when listing directory contents. {{< /note >}} -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: dotfile-secret -data: - .secret-file: dmFsdWUtMg0KDQo= ---- -apiVersion: v1 -kind: Pod -metadata: - name: secret-dotfiles-pod -spec: - volumes: - - name: secret-volume - secret: - secretName: dotfile-secret - containers: - - name: dotfile-test-container - image: registry.k8s.io/busybox - command: - - ls - - "-l" - - "/etc/secret-volume" - volumeMounts: - - name: secret-volume - readOnly: true - mountPath: "/etc/secret-volume" -``` +{{% code language="yaml" file="secret/dotfile-secret.yaml" %}} ### Use case: Secret visible to one container in a Pod @@ -135,7 +107,7 @@ Here are some of your options: [ServiceAccount](/docs/reference/access-authn-authz/authentication/#service-account-tokens) and its tokens to identify your client. - There are third-party tools that you can run, either within or outside your cluster, - that provide Secrets management. For example, a service that Pods access over HTTPS, + that manage sensitive data. For example, a service that Pods access over HTTPS, that reveals a Secret if the client correctly authenticates (for example, with a ServiceAccount token). 
- For authentication, you can implement a custom signer for X.509 certificates, and use @@ -251,18 +223,7 @@ fills in some other fields such as the `kubernetes.io/service-account.uid` annot The following example configuration declares a ServiceAccount token Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-sa-sample - annotations: - kubernetes.io/service-account.name: "sa-name" -type: kubernetes.io/service-account-token -data: - # You can include additional key value pairs as you do with Opaque Secrets - extra: YmFyCg== -``` +{{% code language="yaml" file="secret/serviceaccount-token-secret.yaml" %}} After creating the Secret, wait for Kubernetes to populate the `token` key in the `data` field. @@ -290,16 +251,7 @@ you must use one of the following `type` values for that Secret: Below is an example for a `kubernetes.io/dockercfg` type of Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-dockercfg -type: kubernetes.io/dockercfg -data: - .dockercfg: | - "" -``` +{{% code language="yaml" file="secret/dockercfg-secret.yaml" %}} {{< note >}} If you do not want to perform the base64 encoding, you can choose to use the @@ -369,16 +321,7 @@ Secret manifest. The following manifest is an example of a basic authentication Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-basic-auth -type: kubernetes.io/basic-auth -stringData: - username: admin # required field for kubernetes.io/basic-auth - password: t0p-Secret # required field for kubernetes.io/basic-auth -``` +{{% code language="yaml" file="secret/basicauth-secret.yaml" %}} The basic authentication Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for basic authentication. @@ -397,17 +340,7 @@ as the SSH credential to use. 
The following manifest is an example of a Secret used for SSH public/private key authentication: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-ssh-auth -type: kubernetes.io/ssh-auth -data: - # the data is abbreviated in this example - ssh-privatekey: | - MIIEpQIBAAKCAQEAulqb/Y ... -``` +{{% code language="yaml" file="secret/ssh-auth-secret.yaml" %}} The SSH authentication Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for SSH authentication. @@ -440,21 +373,7 @@ the base64 encoded certificate and private key. For details, see The following YAML contains an example config for a TLS Secret: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-tls -type: kubernetes.io/tls -stringData: - # the data is abbreviated in this example - tls.crt: | - --------BEGIN CERTIFICATE----- - MIIC2DCCAcCgAwIBAgIBATANBgkqh ... - tls.key: | - -----BEGIN RSA PRIVATE KEY----- - MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... -``` +{{% code language="yaml" file="secret/tls-auth-secret.yaml" %}} The TLS Secret type is provided only for convenience. You can create an `Opaque` type for credentials used for TLS authentication. @@ -486,21 +405,7 @@ string of the token ID. 
As a Kubernetes manifest, a bootstrap token Secret might look like the following: -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: bootstrap-token-5emitj - namespace: kube-system -type: bootstrap.kubernetes.io/token -data: - auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= - expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= - token-id: NWVtaXRq - token-secret: a3E0Z2lodnN6emduMXAwcg== - usage-bootstrap-authentication: dHJ1ZQ== - usage-bootstrap-signing: dHJ1ZQ== -``` +{{% code language="yaml" file="secret/bootstrap-token-secret-base64.yaml" %}} A bootstrap token Secret has the following keys specified under `data`: @@ -518,26 +423,7 @@ A bootstrap token Secret has the following keys specified under `data`: You can alternatively provide the values in the `stringData` field of the Secret without base64 encoding them: -```yaml -apiVersion: v1 -kind: Secret -metadata: - # Note how the Secret is named - name: bootstrap-token-5emitj - # A bootstrap token Secret usually resides in the kube-system namespace - namespace: kube-system -type: bootstrap.kubernetes.io/token -stringData: - auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" - expiration: "2020-09-13T04:39:10Z" - # This token ID is used in the name - token-id: "5emitj" - token-secret: "kq4gihvszzgn1p0r" - # This token can be used for authentication - usage-bootstrap-authentication: "true" - # and it can be used for signing - usage-bootstrap-signing: "true" -``` +{{% code language="yaml" file="secret/bootstrap-token-secret-literal.yaml" %}} ## Working with Secrets @@ -613,25 +499,7 @@ When you reference a Secret in a Pod, you can mark the Secret as _optional_, such as in the following example. If an optional Secret doesn't exist, Kubernetes ignores it. 
-```yaml -apiVersion: v1 -kind: Pod -metadata: - name: mypod -spec: - containers: - - name: mypod - image: redis - volumeMounts: - - name: foo - mountPath: "/etc/foo" - readOnly: true - volumes: - - name: foo - secret: - secretName: mysecret - optional: true -``` +{{% code language="yaml" file="secret/optional-secret.yaml" %}} By default, Secrets are required. None of a Pod's containers will start until all non-optional Secrets are available. diff --git a/content/en/examples/secret/basicauth-secret.yaml b/content/en/examples/secret/basicauth-secret.yaml new file mode 100644 index 0000000000000..a854b267a01a5 --- /dev/null +++ b/content/en/examples/secret/basicauth-secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-basic-auth +type: kubernetes.io/basic-auth +stringData: + username: admin # required field for kubernetes.io/basic-auth + password: t0p-Secret # required field for kubernetes.io/basic-auth \ No newline at end of file diff --git a/content/en/examples/secret/bootstrap-token-secret-base64.yaml b/content/en/examples/secret/bootstrap-token-secret-base64.yaml new file mode 100644 index 0000000000000..98233758e2e7c --- /dev/null +++ b/content/en/examples/secret/bootstrap-token-secret-base64.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Secret +metadata: + name: bootstrap-token-5emitj + namespace: kube-system +type: bootstrap.kubernetes.io/token +data: + auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4= + expiration: MjAyMC0wOS0xM1QwNDozOToxMFo= + token-id: NWVtaXRq + token-secret: a3E0Z2lodnN6emduMXAwcg== + usage-bootstrap-authentication: dHJ1ZQ== + usage-bootstrap-signing: dHJ1ZQ== \ No newline at end of file diff --git a/content/en/examples/secret/bootstrap-token-secret-literal.yaml b/content/en/examples/secret/bootstrap-token-secret-literal.yaml new file mode 100644 index 0000000000000..6aec11ce870fc --- /dev/null +++ b/content/en/examples/secret/bootstrap-token-secret-literal.yaml @@ -0,0 
+1,18 @@ +apiVersion: v1 +kind: Secret +metadata: + # Note how the Secret is named + name: bootstrap-token-5emitj + # A bootstrap token Secret usually resides in the kube-system namespace + namespace: kube-system +type: bootstrap.kubernetes.io/token +stringData: + auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token" + expiration: "2020-09-13T04:39:10Z" + # This token ID is used in the name + token-id: "5emitj" + token-secret: "kq4gihvszzgn1p0r" + # This token can be used for authentication + usage-bootstrap-authentication: "true" + # and it can be used for signing + usage-bootstrap-signing: "true" \ No newline at end of file diff --git a/content/en/examples/secret/dockercfg-secret.yaml b/content/en/examples/secret/dockercfg-secret.yaml new file mode 100644 index 0000000000000..ccf73bc306f24 --- /dev/null +++ b/content/en/examples/secret/dockercfg-secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-dockercfg +type: kubernetes.io/dockercfg +data: + .dockercfg: | + eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo= \ No newline at end of file diff --git a/content/en/examples/secret/dotfile-secret.yaml b/content/en/examples/secret/dotfile-secret.yaml new file mode 100644 index 0000000000000..5c7900ad97479 --- /dev/null +++ b/content/en/examples/secret/dotfile-secret.yaml @@ -0,0 +1,27 @@ +apiVersion: v1 +kind: Secret +metadata: + name: dotfile-secret +data: + .secret-file: dmFsdWUtMg0KDQo= +--- +apiVersion: v1 +kind: Pod +metadata: + name: secret-dotfiles-pod +spec: + volumes: + - name: secret-volume + secret: + secretName: dotfile-secret + containers: + - name: dotfile-test-container + image: registry.k8s.io/busybox + command: + - ls + - "-l" + - "/etc/secret-volume" + volumeMounts: + - name: secret-volume + readOnly: true + mountPath: "/etc/secret-volume" \ No newline at end of file diff --git a/content/en/examples/secret/optional-secret.yaml b/content/en/examples/secret/optional-secret.yaml 
new file mode 100644 index 0000000000000..cc510b9078130 --- /dev/null +++ b/content/en/examples/secret/optional-secret.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Pod +metadata: + name: mypod +spec: + containers: + - name: mypod + image: redis + volumeMounts: + - name: foo + mountPath: "/etc/foo" + readOnly: true + volumes: + - name: foo + secret: + secretName: mysecret + optional: true \ No newline at end of file diff --git a/content/en/examples/secret/serviceaccount-token-secret.yaml b/content/en/examples/secret/serviceaccount-token-secret.yaml new file mode 100644 index 0000000000000..8ec8fb577d547 --- /dev/null +++ b/content/en/examples/secret/serviceaccount-token-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-sa-sample + annotations: + kubernetes.io/service-account.name: "sa-name" +type: kubernetes.io/service-account-token +data: + extra: YmFyCg== \ No newline at end of file diff --git a/content/en/examples/secret/ssh-auth-secret.yaml b/content/en/examples/secret/ssh-auth-secret.yaml new file mode 100644 index 0000000000000..9f79cbfb065fd --- /dev/null +++ b/content/en/examples/secret/ssh-auth-secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-ssh-auth +type: kubernetes.io/ssh-auth +data: + # the data is abbreviated in this example + ssh-privatekey: | + UG91cmluZzYlRW1vdGljb24lU2N1YmE= \ No newline at end of file diff --git a/content/en/examples/secret/tls-auth-secret.yaml b/content/en/examples/secret/tls-auth-secret.yaml new file mode 100644 index 0000000000000..1e14b8e00ac47 --- /dev/null +++ b/content/en/examples/secret/tls-auth-secret.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Secret +metadata: + name: secret-tls +type: kubernetes.io/tls +data: + # values are base64 encoded, which obscures them but does NOT provide + # any useful level of confidentiality + tls.crt: | + LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNVakNDQWJzQ0FnMytNQTBHQ1NxR1NJYjNE + 
UUVCQlFVQU1JR2JNUXN3Q1FZRFZRUUdFd0pLVURFT01Bd0cKQTFVRUNCTUZWRzlyZVc4eEVEQU9C + Z05WQkFjVEIwTm9kVzh0YTNVeEVUQVBCZ05WQkFvVENFWnlZVzVyTkVSRQpNUmd3RmdZRFZRUUxF + dzlYWldKRFpYSjBJRk4xY0hCdmNuUXhHREFXQmdOVkJBTVREMFp5WVc1ck5FUkVJRmRsCllpQkRR + VEVqTUNFR0NTcUdTSWIzRFFFSkFSWVVjM1Z3Y0c5eWRFQm1jbUZ1YXpSa1pDNWpiMjB3SGhjTk1U + TXcKTVRFeE1EUTFNVE01V2hjTk1UZ3dNVEV3TURRMU1UTTVXakJMTVFzd0NRWURWUVFHREFKS1VE + RVBNQTBHQTFVRQpDQXdHWEZSdmEzbHZNUkV3RHdZRFZRUUtEQWhHY21GdWF6UkVSREVZTUJZR0Ex + VUVBd3dQZDNkM0xtVjRZVzF3CmJHVXVZMjl0TUlHYU1BMEdDU3FHU0liM0RRRUJBUVVBQTRHSUFE + Q0JoQUo5WThFaUhmeHhNL25PbjJTbkkxWHgKRHdPdEJEVDFKRjBReTliMVlKanV2YjdjaTEwZjVN + Vm1UQllqMUZTVWZNOU1vejJDVVFZdW4yRFljV29IcFA4ZQpqSG1BUFVrNVd5cDJRN1ArMjh1bklI + QkphVGZlQ09PekZSUFY2MEdTWWUzNmFScG04L3dVVm16eGFLOGtCOWVaCmhPN3F1TjdtSWQxL2pW + cTNKODhDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUVVGQUFPQmdRQU1meTQzeE15OHh3QTUKVjF2T2NS + OEtyNWNaSXdtbFhCUU8xeFEzazlxSGtyNFlUY1JxTVQ5WjVKTm1rWHYxK2VSaGcwTi9WMW5NUTRZ + RgpnWXcxbnlESnBnOTduZUV4VzQyeXVlMFlHSDYyV1hYUUhyOVNVREgrRlowVnQvRGZsdklVTWRj + UUFEZjM4aU9zCjlQbG1kb3YrcE0vNCs5a1h5aDhSUEkzZXZ6OS9NQT09Ci0tLS0tRU5EIENFUlRJ + RklDQVRFLS0tLS0K + # In this example, the key data is not a real PEM-encoded private key + tls.key: | + RXhhbXBsZSBkYXRhIGZvciB0aGUgVExTIGNydCBmaWVsZA== \ No newline at end of file From 9e1201fb4a8666ce4ee6064f116ec274b88a576e Mon Sep 17 00:00:00 2001 From: Chun-Wei Chang Date: Thu, 5 Oct 2023 07:10:00 -0700 Subject: [PATCH 067/229] fix: link text in glossary cri --- .../en/docs/reference/glossary/container-runtime-interface.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/glossary/container-runtime-interface.md b/content/en/docs/reference/glossary/container-runtime-interface.md index e3ad8f5b092a0..c2dab628efb04 100644 --- a/content/en/docs/reference/glossary/container-runtime-interface.md +++ b/content/en/docs/reference/glossary/container-runtime-interface.md @@ -17,6 +17,6 @@ The main protocol for the communication between the {{< 
glossary_tooltip text="k The Kubernetes Container Runtime Interface (CRI) defines the main [gRPC](https://grpc.io) protocol for the communication between the -[cluster components](/docs/concepts/overview/components/#node-components) +[node components](/docs/concepts/overview/components/#node-components) {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. From c9cd9a269347af941267ef5da7d518b00f5f2f5b Mon Sep 17 00:00:00 2001 From: Arhell Date: Sat, 7 Oct 2023 01:27:04 +0300 Subject: [PATCH 068/229] [pt] update the range of pod-deletion-cost --- content/pt-br/docs/concepts/workloads/controllers/replicaset.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md index dcd22d1f77000..440187b4b9460 100644 --- a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md +++ b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md @@ -280,7 +280,7 @@ Se o Pod obedecer todos os items acima simultaneamente, a seleção é aleatóri Utilizando a anotação [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost), usuários podem definir uma preferência em relação à quais pods serão removidos primeiro caso o ReplicaSet precise escalonar para baixo. -A anotação deve ser definida no pod, com uma variação de [-2147483647, 2147483647]. Isso representa o custo de deletar um pod comparado com outros pods que pertencem à esse mesmo ReplicaSet. Pods com um custo de deleção menor são eleitos para deleção antes de pods com um custo maior. +A anotação deve ser definida no pod, com uma variação de [-2147483648, 2147483647]. Isso representa o custo de deletar um pod comparado com outros pods que pertencem à esse mesmo ReplicaSet. 
Pods com um custo de deleção menor são eleitos para deleção antes de pods com um custo maior. O valor implícito para essa anotação para pods que não a tem definida é 0; valores negativos são permitidos. Valores inválidos serão rejeitados pelo servidor API. From 318ff2e797d991d9551670f22554c22131215ac7 Mon Sep 17 00:00:00 2001 From: Michael Date: Fri, 6 Oct 2023 19:24:16 +0800 Subject: [PATCH 069/229] Clean up kubelet-tls-bootstrapping.md --- .../kubelet-tls-bootstrapping.md | 152 ++++++++++-------- 1 file changed, 88 insertions(+), 64 deletions(-) diff --git a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index c1b33647407c1..c4393b261e205 100644 --- a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md +++ b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -11,31 +11,35 @@ weight: 120 -In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need to communicate with Kubernetes control plane components, specifically kube-apiserver. -In order to ensure that communication is kept private, not interfered with, and ensure that each component of the cluster is talking to another trusted component, we strongly +In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need +to communicate with Kubernetes control plane components, specifically kube-apiserver. +In order to ensure that communication is kept private, not interfered with, and ensure that +each component of the cluster is talking to another trusted component, we strongly recommend using client TLS certificates on nodes. -The normal process of bootstrapping these components, especially worker nodes that need certificates so they can communicate safely with kube-apiserver, -can be a challenging process as it is often outside of the scope of Kubernetes and requires significant additional work. 
+The normal process of bootstrapping these components, especially worker nodes that need certificates +so they can communicate safely with kube-apiserver, can be a challenging process as it is often outside +of the scope of Kubernetes and requires significant additional work. This in turn, can make it challenging to initialize or scale a cluster. -In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request and signing API. The proposal can be -found [here](https://github.com/kubernetes/kubernetes/pull/20439). +In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request +and signing API. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439). This document describes the process of node initialization, how to set up TLS client certificate bootstrapping for kubelets, and how it works. -## Initialization Process +## Initialization process When a worker node starts up, the kubelet does the following: 1. Look for its `kubeconfig` file -2. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file -3. Attempt to communicate with the API server using the credentials. +1. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file +1. Attempt to communicate with the API server using the credentials. -Assuming that the kube-apiserver successfully validates the kubelet's credentials, it will treat the kubelet as a valid node, and begin to assign pods to it. +Assuming that the kube-apiserver successfully validates the kubelet's credentials, +it will treat the kubelet as a valid node, and begin to assign pods to it. Note that the above process depends upon: @@ -45,35 +49,36 @@ Note that the above process depends upon: All of the following are responsibilities of whoever sets up and manages the cluster: 1. 
Creating the CA key and certificate -2. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running -3. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet -4. Signing the kubelet certificate using the CA key -5. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running +1. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running +1. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet +1. Signing the kubelet certificate using the CA key +1. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running -The TLS Bootstrapping described in this document is intended to simplify, and partially or even completely automate, steps 3 onwards, as these are the most common when initializing or scaling +The TLS Bootstrapping described in this document is intended to simplify, and partially or even +completely automate, steps 3 onwards, as these are the most common when initializing or scaling a cluster. -### Bootstrap Initialization +### Bootstrap initialization In the bootstrap initialization process, the following occurs: 1. kubelet begins -2. kubelet sees that it does _not_ have a `kubeconfig` file -3. kubelet searches for and finds a `bootstrap-kubeconfig` file -4. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token" -5. kubelet connects to the API server, authenticates using the token -6. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR) -7. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet` -8. CSR is approved in one of two ways: +1. kubelet sees that it does _not_ have a `kubeconfig` file +1. 
kubelet searches for and finds a `bootstrap-kubeconfig` file +1. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token" +1. kubelet connects to the API server, authenticates using the token +1. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR) +1. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet` +1. CSR is approved in one of two ways: * If configured, kube-controller-manager automatically approves the CSR * If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl` -9. Certificate is created for the kubelet -10. Certificate is issued to the kubelet -11. kubelet retrieves the certificate -12. kubelet creates a proper `kubeconfig` with the key and signed certificate -13. kubelet begins normal operation -14. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry -15. The renewed certificate is approved and issued, either automatically or manually, depending on configuration. +1. Certificate is created for the kubelet +1. Certificate is issued to the kubelet +1. kubelet retrieves the certificate +1. kubelet creates a proper `kubeconfig` with the key and signed certificate +1. kubelet begins normal operation +1. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry +1. The renewed certificate is approved and issued, either automatically or manually, depending on configuration. The rest of this document describes the necessary steps to configure TLS Bootstrapping, and its limitations. @@ -90,13 +95,16 @@ In addition, you need your Kubernetes Certificate Authority (CA). ## Certificate Authority -As without bootstrapping, you will need a Certificate Authority (CA) key and certificate. As without bootstrapping, these will be used -to sign the kubelet certificate. 
As before, it is your responsibility to distribute them to control plane nodes. +As without bootstrapping, you will need a Certificate Authority (CA) key and certificate. +As without bootstrapping, these will be used to sign the kubelet certificate. As before, +it is your responsibility to distribute them to control plane nodes. -For the purposes of this document, we will assume these have been distributed to control plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key). +For the purposes of this document, we will assume these have been distributed to control +plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key). We will refer to these as "Kubernetes CA certificate and key". -All Kubernetes components that use these certificates - kubelet, kube-apiserver, kube-controller-manager - assume the key and certificate to be PEM-encoded. +All Kubernetes components that use these certificates - kubelet, kube-apiserver, +kube-controller-manager - assume the key and certificate to be PEM-encoded. ## kube-apiserver configuration @@ -116,24 +124,27 @@ containing the signing certificate, for example ### Initial bootstrap authentication -In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate, it must first authenticate to the server. -You can use any [authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet. +In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate, +it must first authenticate to the server. You can use any +[authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet. While any authentication strategy can be used for the kubelet's initial bootstrap credentials, the following two authenticators are recommended for ease of provisioning. 1. [Bootstrap Tokens](#bootstrap-tokens) -2. 
[Token authentication file](#token-authentication-file) +1. [Token authentication file](#token-authentication-file) -Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, and does not require any additional flags when starting kube-apiserver. +Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, +and does not require any additional flags when starting kube-apiserver. Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to: 1. create and retrieve CSRs -2. be automatically approved to request node client certificates, if automatic approval is enabled. +1. be automatically approved to request node client certificates, if automatic approval is enabled. -A kubelet authenticating using bootstrap tokens is authenticated as a user in the group `system:bootstrappers`, which is the standard method to use. +A kubelet authenticating using bootstrap tokens is authenticated as a user in the group +`system:bootstrappers`, which is the standard method to use. As this feature matures, you should ensure tokens are bound to a Role Based Access Control (RBAC) policy @@ -144,17 +155,20 @@ particular bootstrap group's access when you are done provisioning the nodes. #### Bootstrap tokens -Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/). These are tokens that are stored as secrets in the Kubernetes cluster, -and then issued to the individual kubelet. You can use a single token for an entire cluster, or issue one per worker node. +Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/). +These are tokens that are stored as secrets in the Kubernetes cluster, and then issued to the individual kubelet. +You can use a single token for an entire cluster, or issue one per worker node. The process is two-fold: 1. 
Create a Kubernetes secret with the token ID, secret and scope(s). -2. Issue the token to the kubelet +1. Issue the token to the kubelet From the kubelet's perspective, one token is like another and has no special meaning. -From the kube-apiserver's perspective, however, the bootstrap token is special. Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token, -and grants anyone authenticating with that token special bootstrap rights, notably treating them as a member of the `system:bootstrappers` group. This fulfills a basic requirement +From the kube-apiserver's perspective, however, the bootstrap token is special. +Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token, +and grants anyone authenticating with that token special bootstrap rights, notably treating +them as a member of the `system:bootstrappers` group. This fulfills a basic requirement for TLS bootstrapping. The details for creating the secret are available [here](/docs/reference/access-authn-authz/bootstrap-tokens/). @@ -198,7 +212,8 @@ certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` +group to the cluster role `system:node-bootstrapper`. ```yaml # enable bootstrapping nodes to create CSR @@ -237,9 +252,10 @@ In order for the controller-manager to sign certificates, it needs the following As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes. These will be used by the controller-manager to sign the kubelet certificates. 
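The bootstrap token issued to the kubelet, as described above, is an opaque pair of values. As a minimal sketch only (the `id.secret` shape — six lowercase alphanumeric characters, a dot, then sixteen more — is taken from the bootstrap tokens page, and this is an illustration rather than an official tool; kubeadm users would normally run `kubeadm token create` instead), such a token can be minted like this:

```shell
# Sketch: mint a bootstrap token in the documented id.secret shape.
# Illustrative only; not a replacement for kubeadm or a token controller.
token_id=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
token_secret=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)

# The token-id half also names the Secret (bootstrap-token-<token-id>).
echo "${token_id}.${token_secret}"
```

The `token-id` and `token-secret` values would then populate the `bootstrap-token-*` Secret in the `kube-system` namespace, as in the bootstrap token manifest shown earlier in this series.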
-Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet to kube-apiserver, it is important that the CA -provided to the controller-manager at this stage also be trusted by kube-apiserver for authentication. This is provided to kube-apiserver -with the flag `--client-ca-file=FILENAME` (for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section. +Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet +to kube-apiserver, it is important that the CA provided to the controller-manager at this stage also be +trusted by kube-apiserver for authentication. This is provided to kube-apiserver with the flag `--client-ca-file=FILENAME` +(for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section. To provide the Kubernetes CA key and certificate to kube-controller-manager, use the following flags: @@ -266,10 +282,14 @@ RBAC permissions to the correct group. There are two distinct sets of permissions: -* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet. It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`. -* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition), which it uses continuously to authenticate as part of the group `system:nodes`. +* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet. + It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`. +* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition), + which it uses continuously to authenticate as part of the group `system:nodes`. 
-To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`: +To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds +the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that +grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`: ```yaml # Approve all CSRs for the group "system:bootstrappers" @@ -287,7 +307,8 @@ roleRef: apiGroup: rbac.authorization.k8s.io ``` -To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that +To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds +the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`: ```yaml @@ -316,10 +337,10 @@ built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized requests. The controller also prunes expired certificates as part of garbage collection. - ## kubelet configuration -Finally, with the control plane nodes properly set up and all of the necessary authentication and authorization in place, we can configure the kubelet. +Finally, with the control plane nodes properly set up and all of the necessary +authentication and authorization in place, we can configure the kubelet. The kubelet requires the following configuration to bootstrap: @@ -385,7 +406,7 @@ referencing the generated key and obtained certificate is written to the path specified by `--kubeconfig`. 
The certificate and key file will be placed in the directory specified by `--cert-dir`. -### Client and Serving Certificates +### Client and serving certificates All of the above relate to kubelet _client_ certificates, specifically, the certificates a kubelet uses to authenticate to kube-apiserver. @@ -402,7 +423,7 @@ be used as serving certificates, or `server auth`. However, you _can_ enable its server certificate, at least partially, via certificate rotation. -### Certificate Rotation +### Certificate rotation Kubernetes v1.8 and higher kubelet implements features for enabling rotation of its client and/or serving certificates. Note, rotation of serving @@ -420,7 +441,7 @@ or pass the following command line argument to the kubelet (deprecated): Enabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving certificate after bootstrapping its client credentials **and** to rotate that -certificate. To enable this behavior, use the field `serverTLSBootstrap` of +certificate. To enable this behavior, use the field `serverTLSBootstrap` of the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/) or pass the following command line argument to the kubelet (deprecated): @@ -430,8 +451,8 @@ or pass the following command line argument to the kubelet (deprecated): {{< note >}} The CSR approving controllers implemented in core Kubernetes do not -approve node _serving_ certificates for [security -reasons](https://github.com/kubernetes/community/pull/1982). To use +approve node _serving_ certificates for +[security reasons](https://github.com/kubernetes/community/pull/1982). To use `RotateKubeletServerCertificate` operators need to run a custom approving controller, or manually approve the serving certificate requests. @@ -439,9 +460,9 @@ A deployment-specific approval process for kubelet serving certificates should t 1. 
are requested by nodes (ensure the `spec.username` field is of the form `system:node:` and `spec.groups` contains `system:nodes`) -2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, +1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, optionally contains `digital signature` and `key encipherment`, and contains no other usages) -3. only have IP and DNS subjectAltNames that belong to the requesting node, +1. only have IP and DNS subjectAltNames that belong to the requesting node, and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request in `spec.request` to verify `subjectAltNames`) @@ -457,8 +478,11 @@ Like the kubelet, these other components also require a method of authenticating You have several options for generating these credentials: * The old way: Create and distribute certificates the same way you did for kubelet before TLS bootstrapping -* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services, you can run kube-proxy and other node-specific services not as a standalone process, but rather as a daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service account with appropriate permissions to perform its activities. This may be the simplest way to configure such services. - +* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services, + you can run kube-proxy and other node-specific services not as a standalone process, but rather as a + daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service + account with appropriate permissions to perform its activities. This may be the simplest way to configure + such services. 
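The DaemonSet alternative described above can be sketched as a manifest like the following. This is an illustration of the idea, not the canonical kube-proxy deployment: the image tag, service account name, and command are assumptions for the example.

```yaml
# Illustrative sketch only: running kube-proxy as a DaemonSet with its own
# service account, instead of distributing client certificates to each node.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-proxy
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      labels:
        k8s-app: kube-proxy
    spec:
      # In-cluster credentials come from the service account token,
      # so no per-node TLS bootstrap is needed for this component.
      serviceAccountName: kube-proxy
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: registry.k8s.io/kube-proxy:v1.28.0   # tag is illustrative
        command: ["/usr/local/bin/kube-proxy"]
```

With appropriate RBAC bound to the `kube-proxy` service account, the pod authenticates to kube-apiserver in-cluster and no certificate needs to be provisioned on the node for it.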
## kubectl approval From 8b72e69169f8824774bc486556318da94d154cf6 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Thu, 5 Oct 2023 09:24:15 +0800 Subject: [PATCH 070/229] [zh] Resync cluster-administration concepts --- .../concepts/cluster-administration/_index.md | 12 +++++++ .../cluster-administration/networking.md | 31 +++++++++++++------ .../cluster-administration/system-traces.md | 17 +++++++--- 3 files changed, 45 insertions(+), 15 deletions(-) diff --git a/content/zh-cn/docs/concepts/cluster-administration/_index.md b/content/zh-cn/docs/concepts/cluster-administration/_index.md index f261dcc037f02..cec70f46da945 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/_index.md +++ b/content/zh-cn/docs/concepts/cluster-administration/_index.md @@ -5,6 +5,12 @@ content_type: concept description: > 关于创建和管理 Kubernetes 集群的底层细节。 no_list: true +card: + name: setup + weight: 60 + anchors: + - anchor: "#securing-a-cluster" + title: 保护集群 --- diff --git a/content/zh-cn/docs/concepts/cluster-administration/networking.md b/content/zh-cn/docs/concepts/cluster-administration/networking.md index 18f13acd5cfd9..a71411adf13c1 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/networking.md +++ b/content/zh-cn/docs/concepts/cluster-administration/networking.md @@ -57,29 +57,40 @@ Kubernetes 的宗旨就是在应用之间共享机器。 与其去解决这些问题,Kubernetes 选择了其他不同的方法。 要了解 Kubernetes 网络模型,请参阅[此处](/zh-cn/docs/concepts/services-networking/)。 + ## 如何实现 Kubernetes 的网络模型 {#how-to-implement-the-kubernetes-network-model} -网络模型由每个节点上的容器运行时实现。最常见的容器运行时使用 -[Container Network Interface](https://github.com/containernetworking/cni) (CNI) 插件来管理其网络和安全功能。 -许多不同的 CNI 插件来自于许多不同的供应商。其中一些仅提供添加和删除网络接口的基本功能, +网络模型由各节点上的容器运行时来实现。最常见的容器运行时使用 +[Container Network Interface](https://github.com/containernetworking/cni) (CNI) 插件来管理其网络和安全能力。 +来自不同供应商 CNI 插件有很多。其中一些仅提供添加和删除网络接口的基本功能, 而另一些则提供更复杂的解决方案,例如与其他容器编排系统集成、运行多个 CNI 插件、高级 IPAM 功能等。 + 
请参阅[此页面](/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy)了解 Kubernetes 支持的网络插件的非详尽列表。 ## {{% heading "whatsnext" %}} -网络模型的早期设计、运行原理以及未来的一些计划, -都在[联网设计文档](https://git.k8s.io/design-proposals-archive/network/networking.md)里有更详细的描述。 +网络模型的早期设计、运行原理都在[联网设计文档](https://git.k8s.io/design-proposals-archive/network/networking.md)里有详细描述。 +关于未来的计划,以及旨在改进 Kubernetes 联网能力的一些正在进行的工作,可以参考 SIG Network +的 [KEPs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network)。 diff --git a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md index 56a4373bf06e0..aefeb8e90a2ed 100644 --- a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md +++ b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md @@ -215,12 +215,19 @@ span will be sent to the exporter. -Kubernetes v{{< skew currentVersion >}} 中的 kubelet 从垃圾回收、Pod -同步例程以及每个 gRPC 方法中收集 span。CRI-O 和 containerd -这类关联的容器运行时可以将链路链接到其导出的 span,以提供更多上下文信息。 +Kubernetes v{{< skew currentVersion >}} 中的 kubelet 收集与垃圾回收、Pod +同步例程以及每个 gRPC 方法相关的 Span。 +kubelet 借助 gRPC 来传播跟踪上下文,以便 CRI-O 和 containerd +这类带有跟踪插桩的容器运行时可以在其导出的 Span 与 kubelet +所提供的跟踪上下文之间建立关联。所得到的跟踪数据会包含 kubelet +与容器运行时 Span 之间的父子链接关系,从而为调试节点问题提供有用的上下文信息。 {{< glossary_definition term_id="workload" length="short" >}} @@ -19,29 +26,24 @@ Whether your workload is a single component or several that work together, on Ku it inside a set of [_pods_](/docs/concepts/workloads/pods). In Kubernetes, a Pod represents a set of running {{< glossary_tooltip text="containers" term_id="container" >}} on your cluster. - -A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then -a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that -Pod is running means that all the Pods on that node fail. 
Kubernetes treats that level -of failure as final: you would need to create a new Pod even if the node later recovers. --> 在 Kubernetes 中,无论你的负载是由单个组件还是由多个一同工作的组件构成, 你都可以在一组 [**Pod**](/zh-cn/docs/concepts/workloads/pods) 中运行它。 在 Kubernetes 中,Pod 代表的是集群上处于运行状态的一组 -{{< glossary_tooltip text="容器" term_id="container" >}} 的集合。 +{{< glossary_tooltip text="容器" term_id="container" >}}的集合。 Kubernetes Pod 遵循[预定义的生命周期](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/)。 例如,当在你的集群中运行了某个 Pod,但是 Pod 所在的 {{< glossary_tooltip text="节点" term_id="node" >}} 出现致命错误时, 所有该节点上的 Pod 的状态都会变成失败。Kubernetes 将这类失败视为最终状态: -即使该节点后来恢复正常运行,你也需要创建新的 `Pod` 以恢复应用。 +即使该节点后来恢复正常运行,你也需要创建新的 Pod 以恢复应用。 -* [`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和 - [`ReplicaSet`](/zh-cn/docs/concepts/workloads/controllers/replicaset/) +* [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和 + [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/) (替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}})。 - `Deployment` 很适合用来管理你的集群上的无状态应用,`Deployment` 中的所有 - `Pod` 都是相互等价的,并且在需要的时候被替换。 + Deployment 很适合用来管理你的集群上的无状态应用,Deployment 中的所有 + Pod 都是相互等价的,并且在需要的时候被替换。 * [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/) 让你能够运行一个或者多个以某种方式跟踪应用状态的 Pod。 - 例如,如果你的负载会将数据作持久存储,你可以运行一个 `StatefulSet`,将每个 - `Pod` 与某个 [`PersistentVolume`](/zh-cn/docs/concepts/storage/persistent-volumes/) - 对应起来。你在 `StatefulSet` 中各个 `Pod` 内运行的代码可以将数据复制到同一 - `StatefulSet` 中的其它 `Pod` 中以提高整体的服务可靠性。 + 例如,如果你的负载会将数据作持久存储,你可以运行一个 StatefulSet,将每个 + Pod 与某个 [PersistentVolume](/zh-cn/docs/concepts/storage/persistent-volumes/) + 对应起来。你在 StatefulSet 中各个 Pod 内运行的代码可以将数据复制到同一 + StatefulSet 中的其它 Pod 中以提高整体的服务可靠性。 * [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/) - 定义提供节点本地支撑设施的 `Pod`。这些 Pod 可能对于你的集群的运维是 + 定义提供节点本地支撑设施的 Pod。这些 Pod 可能对于你的集群的运维是 非常重要的,例如作为网络链接的辅助工具或者作为网络 {{< glossary_tooltip text="插件" term_id="addons" >}} 
的一部分等等。每次你向集群中添加一个新节点时,如果该节点与某 `DaemonSet` - 的规约匹配,则控制平面会为该 `DaemonSet` 调度一个 `Pod` 到该新节点上运行。 + 的规约匹配,则控制平面会为该 DaemonSet 调度一个 Pod 到该新节点上运行。 * [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 和 [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)。 - 定义一些一直运行到结束并停止的任务。`Job` 用来执行一次性任务,而 - `CronJob` 用来执行的根据时间规划反复运行的任务。 + 定义一些一直运行到结束并停止的任务。 + 你可以使用 [Job](/zh-cn/docs/concepts/workloads/controllers/job/) + 来定义只需要执行一次并且执行后即视为完成的任务。你可以使用 + [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/) + 来根据某个排期表来多次运行同一个 Job。 在庞大的 Kubernetes 生态系统中,你还可以找到一些提供额外操作的第三方工作负载相关的资源。 通过使用[定制资源定义(CRD)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/), 你可以添加第三方工作负载资源,以完成原本不是 Kubernetes 核心功能的工作。 -例如,如果你希望运行一组 `Pod`,但要求**所有** Pod 都可用时才执行操作 +例如,如果你希望运行一组 Pod,但要求**所有** Pod 都可用时才执行操作 (比如针对某种高吞吐量的分布式任务),你可以基于定制资源实现一个能够满足这一需求的扩展, 并将其安装到集群中运行。 @@ -127,23 +135,23 @@ then you can implement or install an extension that does provide that feature. 除了阅读了解每类资源外,你还可以了解与这些资源相关的任务: -* [使用 `Deployment` 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) +* [使用 Deployment 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/) * 以[单实例](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)或者[多副本集合](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/) 的形式运行有状态的应用; -* [使用 `CronJob` 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) +* [使用 CronJob 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/) -要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置部分](/zh-cn/docs/concepts/configuration/)。 +要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置](/zh-cn/docs/concepts/configuration/)节。 一旦你的应用处于运行状态,你就可能想要以 -[`Service`](/zh-cn/docs/concepts/services-networking/service/) +[Service](/zh-cn/docs/concepts/services-networking/service/) 的形式使之可在互联网上访问;或者对于 Web 应用而言,使用 -[`Ingress`](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。 
+[Ingress](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。 diff --git a/content/zh-cn/docs/concepts/workloads/controllers/job.md b/content/zh-cn/docs/concepts/workloads/controllers/job.md index 482c2c737419a..06ec1cfd7660d 100644 --- a/content/zh-cn/docs/concepts/workloads/controllers/job.md +++ b/content/zh-cn/docs/concepts/workloads/controllers/job.md @@ -890,7 +890,7 @@ These are some requirements and semantics of the API: are ignored. When no rule matches the Pod failure, the default handling applies. - you may want to restrict a rule to a specific container by specifying its name - in`spec.podFailurePolicy.rules[*].containerName`. When not specified the rule + in`spec.podFailurePolicy.rules[*].onExitCodes.containerName`. When not specified the rule applies to all containers. When specified, it should match one the container or `initContainer` names in the Pod template. - you may specify the action taken when a Pod failure policy is matched by @@ -910,9 +910,9 @@ These are some requirements and semantics of the API: - 在 `spec.podFailurePolicy.rules` 中设定的 Pod 失效策略规则将按序评估。 一旦某个规则与 Pod 失效策略匹配,其余规则将被忽略。 当没有规则匹配 Pod 失效策略时,将会采用默认的处理方式。 -- 你可能希望在 `spec.podFailurePolicy.rules[*].containerName` - 中通过指定的名称将规则限制到特定容器。 - 如果不设置,规则将适用于所有容器。 +- 你可能希望在 `spec.podFailurePolicy.rules[*].onExitCodes.containerName` + 中通过指定的名称限制只能针对特定容器应用对应的规则。 + 如果不设置此属性,规则将适用于所有容器。 如果指定了容器名称,它应该匹配 Pod 模板中的一个普通容器或一个初始容器(Init Container)。 - 你可以在 `spec.podFailurePolicy.rules[*].action` 指定当 Pod 失效策略发生匹配时要采取的操作。 可能的值为: @@ -1155,17 +1155,13 @@ consume. 
## Job 模式 {#job-patterns} -Job 对象可以用来支持多个 Pod 的可靠的并发执行。 -Job 对象不是设计用来支持相互通信的并行进程的,后者一般在科学计算中应用较多。 -Job 的确能够支持对一组相互独立而又有所关联的**工作条目**的并行处理。 +Job 对象可以用来处理一组相互独立而又彼此关联的“工作条目”。 这类工作条目可能是要发送的电子邮件、要渲染的视频帧、要编解码的文件、NoSQL 数据库中要扫描的主键范围等等。 @@ -1182,25 +1178,35 @@ The tradeoffs are: 并行计算的模式有好多种,每种都有自己的强项和弱点。这里要权衡的因素有: +- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。 + 为每个工作条目创建一个 Job 的做法会给用户带来一些额外的负担,系统需要管理大量的 Job 对象。 + 用一个 Job 对象来完成所有工作条目的做法更适合处理大量工作条目的场景。 +- 创建数目与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。 + 当 Pod 个数与工作条目数目相等时,通常不需要在 Pod 中对现有代码和容器做较大改动; + 让每个 Pod 能够处理多个工作条目的做法更适合于工作条目数量较大的场合。 + -- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。 - 后者更适合处理大量工作条目的场景; - 前者会给用户带来一些额外的负担,而且需要系统管理大量的 Job 对象。 -- 创建与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。 - 前者通常不需要对现有代码和容器做较大改动; - 后者则更适合工作条目数量较大的场合,原因同上。 - 有几种技术都会用到工作队列。这意味着需要运行一个队列服务, 并修改现有程序或容器使之能够利用该工作队列。 与之比较,其他方案在修改现有容器化应用以适应需求方面可能更容易一些。 +- 当 Job 与某个[无头 Service](/zh-cn/docs/concepts/services-networking/service/#headless-services) + 之间存在关联时,你可以让 Job 中的 Pod 之间能够相互通信,从而协作完成计算。 下面是对这些权衡的汇总,第 2 到 4 列对应上面的权衡比较。 模式的名称对应了相关示例和更详细描述的链接。 @@ -1222,8 +1228,8 @@ The pattern names are also links to examples and more detailed description. 
| [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 有时 | | [Pod 数量可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | | [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | ✓ | | ✓ | -| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | | [带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/) | ✓ | 有时 | 有时 | +| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | | 模式 | `.spec.completions` | `.spec.parallelism` | | ----- |:-------------------:|:--------------------:| | [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | 任意值 | | [Pod 个数可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | 任意值 | | [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | W | | 任意值 | -| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 | | [带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/) | W | W | +| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 | -### 有序索引 {#ordinal-index} +### 序号索引 {#ordinal-index} 对于具有 N 个[副本](#replicas)的 StatefulSet,该 StatefulSet 中的每个 Pod 将被分配一个整数序号, -该序号在此 StatefulSet 上是唯一的。默认情况下,这些 Pod 将被从 0 到 N-1 的序号。 +该序号在此 StatefulSet 中是唯一的。默认情况下,这些 Pod 将被赋予从 0 到 N-1 的序号。 +StatefulSet 的控制器也会添加一个包含此索引的 Pod 标签:`apps.kubernetes.io/pod-index`。 +### Pod 索引标签 {#pod-index-label} + +{{< feature-state for_k8s_version="v1.28" state="beta" >}} + + +当 StatefulSet {{}}创建一个 Pod 时, +新的 Pod 会被打上 `apps.kubernetes.io/pod-index` 标签。标签的取值为 Pod 的序号索引。 +此标签使你能够将流量路由到特定索引值的 Pod、使用 Pod 索引标签来过滤日志或度量值等等。 +注意要使用这一特性需要启用特性门控 `PodIndexLabel`,而该门控默认是被启用的。 + 节点鉴权是一种特殊用途的鉴权模式,专门对 kubelet 发出的 API 请求进行授权。 - * services * endpoints @@ -57,8 +58,10 @@ Write operations: 写入操作: * 节点和节点状态(启用 `NodeRestriction` 准入插件以限制 kubelet 只能修改自己的节点) @@ -71,8 +74,11 @@ Auth-related operations: 身份认证与鉴权相关的操作: * 对于基于 TLS 
的启动引导过程时使用的 [certificationsigningrequests API](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/) @@ -80,25 +86,33 @@ Auth-related operations: * 为委派的身份验证/鉴权检查创建 TokenReview 和 SubjectAccessReview 的能力 在将来的版本中,节点鉴权器可能会添加或删除权限,以确保 kubelet 具有正确操作所需的最小权限集。 -为了获得节点鉴权器的授权,kubelet 必须使用一个凭证以表示它在 `system:nodes` +为了获得节点鉴权器的授权,kubelet 必须使用一个凭据以表示它在 `system:nodes` 组中,用户名为 `system:node:`。上述的组名和用户名格式要与 [kubelet TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) 过程中为每个 kubelet 创建的标识相匹配。 `` 的值**必须**与 kubelet 注册的节点名称精确匹配。默认情况下,节点名称是由 `hostname` 提供的主机名,或者通过 kubelet `--hostname-override` @@ -114,7 +128,10 @@ To enable the Node authorizer, start the apiserver with `--authorization-mode=No 要启用节点鉴权器,请使用 `--authorization-mode=Node` 启动 API 服务器。 要限制 kubelet 可以写入的 API 对象,请使用 `--enable-admission-plugins=...,NodeRestriction,...` 启动 API 服务器,从而启用 @@ -132,8 +149,9 @@ To limit the API objects kubelets are able to write, enable the [NodeRestriction ### 在 `system:nodes` 组之外的 kubelet {#kubelets-outside-the-system-nodes-group} `system:nodes` 组之外的 kubelet 不会被 `Node` 鉴权模式授权,并且需要继续通过当前授权它们的机制来授权。 @@ -151,7 +169,7 @@ because they do not have a username in the `system:node:...` format. These kubelets would not be authorized by the `Node` authorization mode, and would need to continue to be authorized via whatever mechanism currently authorizes them. 
--> -在一些部署中,kubelet 具有 `system:nodes` 组的凭证, +在一些部署中,kubelet 具有 `system:nodes` 组的凭据, 但是无法给出它们所关联的节点的标识,因为它们没有 `system:node:...` 格式的用户名。 这些 kubelet 不会被 `Node` 鉴权模式授权,并且需要继续通过当前授权它们的任何机制来授权。 @@ -161,65 +179,3 @@ since the default node identifier implementation would not consider that a node --> 因为默认的节点标识符实现不会把它当作节点身份标识,`NodeRestriction` 准入插件会忽略来自这些 kubelet 的请求。 - - -### 相对于以前使用 RBAC 的版本的更新 {#upgrades-from-previous-versions-using-rbac} - - -升级的 1.7 之前的使用 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/) -的集群将继续按原样运行,因为 `system:nodes` 组绑定已经存在。 - - -如果集群管理员希望开始使用 `Node` 鉴权器和 `NodeRestriction` 准入插件来限制节点对 -API 的访问,这一需求可以通过下列操作来完成且不会影响已部署的应用: - - -1. 启用 `Node` 鉴权模式 (`--authorization-mode=Node,RBAC`) 和 `NodeRestriction` 准入插件 -2. 确保所有 kubelet 的凭据符合组/用户名要求 -3. 审核 API 服务器日志以确保 `Node` 鉴权器不会拒绝来自 kubelet 的请求(日志中没有持续的 `NODE DENY` 消息) -4. 删除 `system:node` 集群角色绑定 - - -### RBAC 节点权限 {#rbac-node-permissions} - - -在 1.6 版本中,当使用 [RBAC 鉴权模式](/zh-cn/docs/reference/access-authn-authz/rbac/) -时,`system:nodes` 集群角色会被自动绑定到 `system:node` 组。 - - -在 1.7 版本中,不再推荐将 `system:nodes` 组自动绑定到 `system:node` -角色,因为节点鉴权器通过对 Secret 和 ConfigMap 访问的额外限制完成了相同的任务。 -如果同时启用了 `Node` 和 `RBAC` 鉴权模式,1.7 版本则不会创建 `system:nodes` -组到 `system:node` 角色的自动绑定。 - - -在 1.8 版本中,绑定将根本不会被创建。 - - -使用 RBAC 时,将继续创建 `system:node` 集群角色,以便与将其他用户或组绑定到该角色的部署方法兼容。 From dd7930f5d7fd8956fe0697a68620cbf2c2d45bf5 Mon Sep 17 00:00:00 2001 From: xin gu <418294249@qq.com> Date: Sat, 7 Oct 2023 13:12:46 +0800 Subject: [PATCH 073/229] sync configure-upgrade-etc kubeadm-certs verify-signed-artifacts --- .../administer-cluster/configure-upgrade-etcd.md | 11 +++++++++++ .../tasks/administer-cluster/kubeadm/kubeadm-certs.md | 5 +++-- .../administer-cluster/verify-signed-artifacts.md | 4 ++-- 3 files changed, 16 insertions(+), 4 deletions(-) diff --git a/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md index 9545872007f26..89d76cca43508 100644 --- 
a/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md +++ b/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md @@ -21,6 +21,17 @@ weight: 270 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +你需要有一个 Kubernetes 集群,并且必须配置 kubectl 命令行工具以与你的集群通信。 +建议在至少有两个不充当控制平面的节点上运行此任务。如果你还没有集群, +你可以使用 [minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/) 创建一个。 + 在集群创建过程中,kubeadm 对 `admin.conf` 中的证书进行签名时,将其配置为 `Subject: O = system:masters, CN = kubernetes-admin`。 [`system:masters`](/zh-cn/docs/reference/access-authn-authz/rbac/#user-facing-roles) -是一个例外的超级用户组,可以绕过鉴权层(例如 RBAC)。 +是一个例外的超级用户组,可以绕过鉴权层(例如 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/))。 强烈建议不要将 `admin.conf` 文件与任何人共享。 你需要安装以下工具: - `cosign`([安装指南](https://docs.sigstore.dev/cosign/installation/)) - `curl`(通常由你的操作系统提供) -- `jq`([下载 jq](https://stedlan.github.io/jq/download/)) +- `jq`([下载 jq](https://jqlang.github.io/jq/download/)) 关键的设计思想是在 Pod 的卷来源中允许使用 -[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1alpha1-core)。 +[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1-core)。 PersistentVolumeClaim 的标签、注解和整套字段集均被支持。 -创建这样一个 Pod 后, -临时卷控制器在 Pod 所属的命名空间中创建一个实际的 PersistentVolumeClaim 对象, -并确保删除 Pod 时,同步删除 PersistentVolumeClaim。 +创建这样一个 Pod 后,临时卷控制器在 Pod 所属的命名空间中创建一个实际的 +PersistentVolumeClaim 对象,并确保删除 Pod 时,同步删除 PersistentVolumeClaim。 如上设置将触发卷的绑定与/或制备,相应动作或者在 {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}} -使用即时卷绑定时立即执行, -或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。 +使用即时卷绑定时立即执行,或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。 对于通用的临时卷,建议采用后者,这样调度器就可以自由地为 Pod 选择合适的节点。 对于即时绑定,调度器则必须选出一个节点,使得在卷可用时,能立即访问该卷。 @@ -355,8 +353,8 @@ and in this case you need to ensure that volume clean up happens separately. 
拥有通用临时存储的 Pod 是提供临时存储 (ephemeral storage) 的 PersistentVolumeClaim 的所有者。 当 Pod 被删除时,Kubernetes 垃圾收集器会删除 PVC, 然后 PVC 通常会触发卷的删除,因为存储类的默认回收策略是删除卷。 -你可以使用带有 `retain` 回收策略的 StorageClass 创建准临时 (quasi-ephemeral) 本地存储: -该存储比 Pod 寿命长,在这种情况下,你需要确保单独进行卷清理。 +你可以使用带有 `retain` 回收策略的 StorageClass 创建准临时 (Quasi-Ephemeral) 本地存储: +该存储比 Pod 寿命长,所以在这种情况下,你需要确保单独进行卷清理。 自动创建的 PVC 采取确定性的命名机制:名称是 Pod 名称和卷名称的组合,中间由连字符(`-`)连接。 -在上面的示例中,PVC 将命名为 `my-app-scratch-volume` 。 +在上面的示例中,PVC 将被命名为 `my-app-scratch-volume` 。 这种确定性的命名机制使得与 PVC 交互变得更容易,因为一旦知道 Pod 名称和卷名,就不必搜索它。 -这种命名机制也引入了潜在的冲突, -不同的 Pod 之间(名为 “Pod-a” 的 Pod 挂载名为 "scratch" 的卷, -和名为 "pod" 的 Pod 挂载名为 “a-scratch” 的卷,这两者均会生成名为 -"pod-a-scratch" 的 PVC),或者在 Pod 和手工创建的 PVC 之间可能出现冲突。 +这种命名机制也引入了潜在的冲突,不同的 Pod 之间(名为 “Pod-a” 的 +Pod 挂载名为 "scratch" 的卷,和名为 "pod" 的 Pod 挂载名为 “a-scratch” 的卷, +这两者均会生成名为 "pod-a-scratch" 的 PVC),或者在 Pod 和手工创建的 +PVC 之间可能出现冲突。 -以下冲突会被检测到:如果 PVC 是为 Pod 创建的,那么它只用于临时卷。 +这类冲突会被检测到:如果 PVC 是为 Pod 创建的,那么它只用于临时卷。 此检测基于所有权关系。现有的 PVC 不会被覆盖或修改。 但这并不能解决冲突,因为如果没有正确的 PVC,Pod 就无法启动。 +{{< caution >}} -{{< caution >}} 当同一个命名空间中命名 Pod 和卷时,要小心,以防止发生此类冲突。 {{< /caution >}} @@ -461,7 +459,7 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont - 有关设计的更多信息,参阅 [Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md)。 -- 本特性下一步开发的更多信息,参阅 +- 关于本特性下一步开发的更多信息,参阅 [enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596)。 旧版本的 Kubernetes 仍支持这些“树内(In-Tree)”持久卷类型: @@ -943,13 +943,10 @@ Older versions of Kubernetes also supported the following in-tree PersistentVolu * [`cinder`](/zh-cn/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage) (v1.27 开始**不可用**) * `photonPersistentDisk` - Photon 控制器持久化盘。(从 v1.15 版本开始将**不可用**) -* [`scaleIO`](/zh-cn/docs/concepts/storage/volumes/#scaleio) - ScaleIO 卷(v1.21 之后**不可用**) -* [`flocker`](/zh-cn/docs/concepts/storage/volumes/#flocker) 
- Flocker 存储 - (v1.25 之后**不可用**) -* [`quobyte`](/zh-cn/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷 - (v1.25 之后**不可用**) -* [`storageos`](/zh-cn/docs/concepts/storage/volumes/#storageos) - StorageOS 卷 - (v1.25 之后**不可用**) +* `scaleIO` - ScaleIO 卷(v1.21 之后**不可用**) +* `flocker` - Flocker 存储 (v1.25 之后**不可用**) +* `quobyte` - Quobyte 卷 (v1.25 之后**不可用**) +* `storageos` - StorageOS 卷 (v1.25 之后**不可用**) ### 带有 Secret、DownwardAPI 和 ConfigMap 的配置示例 {#example-configuration-secret-downwardapi-configmap} -{{< codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" >}} +{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}} ### 带有非默认权限模式设置的 Secret 的配置示例 {#example-configuration-secrets-nondefault-permission-mode} -{{< codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" >}} +{{% code_sample file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}} StorageClass 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。 -当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数, -一旦创建了对象就不能再对其更新。 +当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数。 ### 删除策略 {#deletion-policy} -卷快照类具有 `deletionPolicy` 属性。用户可以配置当所绑定的 VolumeSnapshot -对象将被删除时,如何处理 VolumeSnapshotContent 对象。 +卷快照类具有 [deletionPolicy] 属性(/zh-cn/docs/concepts/storage/volume-snapshots/#delete)。 +用户可以配置当所绑定的 VolumeSnapshot 对象将被删除时,如何处理 VolumeSnapshotContent 对象。 卷快照类的这个策略可以是 `Retain` 或者 `Delete`。这个策略字段必须指定。 如果删除策略是 `Delete`,那么底层的存储快照会和 VolumeSnapshotContent 对象 From a6f2e7d26666335cd157fe6ec9f5163a632c64c2 Mon Sep 17 00:00:00 2001 From: windsonsea Date: Sun, 8 Oct 2023 14:09:27 +0800 Subject: [PATCH 075/229] [zh] Sync kubelet-tls-bootstrapping.md --- .../kubelet-tls-bootstrapping.md | 207 ++++++++++-------- 1 file changed, 120 insertions(+), 87 deletions(-) diff --git a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md index 9cfd2cf49dc86..5ad0658471c08 100644 --- 
a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md +++ b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md @@ -17,8 +17,10 @@ weight: 120 在一个 Kubernetes 集群中,工作节点上的组件(kubelet 和 kube-proxy)需要与 @@ -27,8 +29,9 @@ Kubernetes 控制平面组件通信,尤其是 kube-apiserver。 我们强烈建议使用节点上的客户端 TLS 证书。 启动引导这些组件的正常过程,尤其是需要证书来与 kube-apiserver 安全通信的工作节点, @@ -36,8 +39,8 @@ This in turn, can make it challenging to initialize or scale a cluster. 这也使得初始化或者扩缩一个集群的操作变得具有挑战性。 @@ -61,16 +64,17 @@ When a worker node starts up, the kubelet does the following: 1. 寻找自己的 `kubeconfig` 文件 -2. 检索 API 服务器的 URL 和凭据,通常是来自 `kubeconfig` 文件中的 +1. 检索 API 服务器的 URL 和凭据,通常是来自 `kubeconfig` 文件中的 TLS 密钥和已签名证书 -3. 尝试使用这些凭据来与 API 服务器通信 +1. 尝试使用这些凭据来与 API 服务器通信 负责部署和管理集群的人有以下责任: 1. 创建 CA 密钥和证书 -2. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上 -3. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、 +1. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上 +1. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、 CN 取值与众不同的密钥和证书 -4. 使用 CA 密钥对 kubelet 证书签名 -5. 将 kubelet 密钥和签名的证书发布到 kubelet 运行所在的特定节点上 +1. 使用 CA 密钥对 kubelet 证书签名 +1. 将 kubelet 密钥和签名的证书发布到 kubelet 运行所在的特定节点上 本文中描述的 TLS 启动引导过程有意简化甚至完全自动化上述过程, @@ -121,16 +126,16 @@ In the bootstrap initialization process, the following occurs: 1. kubelet 启动 2. kubelet 看到自己**没有**对应的 `kubeconfig` 文件 @@ -145,12 +150,12 @@ In the bootstrap initialization process, the following occurs: 来批复该 CSR 9. kubelet 所需要的证书被创建 10. 证书被发放给 kubelet 11. kubelet 取回该证书 @@ -190,8 +195,9 @@ In addition, you need your Kubernetes Certificate Authority (CA). ## 证书机构 {#certificate-authority} @@ -200,10 +206,12 @@ to sign the kubelet certificate. 
As before, it is your responsibility to distrib 如前所述,将证书机构密钥和证书发布到控制平面节点是你的责任。 就本文而言,我们假定这些数据被发布到控制平面节点上的 `/var/lib/kubernetes/ca.pem`(证书)和 `/var/lib/kubernetes/ca-key.pem`(密钥)文件中。 @@ -247,8 +255,9 @@ containing the signing certificate, for example ### 初始启动引导认证 {#initial-bootstrap-authentication} @@ -262,16 +271,17 @@ bootstrap credentials, the following two authenticators are recommended for ease of provisioning. 1. [Bootstrap Tokens](#bootstrap-tokens) -2. [Token authentication file](#token-authentication-file) +1. [Token authentication file](#token-authentication-file) --> 尽管所有身份认证策略都可以用来对 kubelet 的初始启动凭据来执行认证, -出于容易准备的因素,建议使用如下两个身份认证组件: +但出于容易准备的因素,建议使用如下两个身份认证组件: 1. [启动引导令牌(Bootstrap Token)](#bootstrap-tokens) 2. [令牌认证文件](#token-authentication-file) 启动引导令牌是一种对 kubelet 进行身份认证的方法,相对简单且容易管理, 且不需要在启动 kube-apiserver 时设置额外的标志。 @@ -280,15 +290,16 @@ Using bootstrap tokens is a simpler and more easily managed method to authentica Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to: 1. create and retrieve CSRs -2. be automatically approved to request node client certificates, if automatic approval is enabled. +1. be automatically approved to request node client certificates, if automatic approval is enabled. --> 无论选择哪种方法,这里的需求是 kubelet 能够被身份认证为某个具有如下权限的用户: 1. 创建和读取 CSR -2. 在启用了自动批复时,能够在请求节点客户端证书时得到自动批复 +1. 在启用了自动批复时,能够在请求节点客户端证书时得到自动批复 使用启动引导令牌执行身份认证的 kubelet 会被认证为 `system:bootstrappers` 组中的用户。这是使用启动引导令牌的一种标准方法。 @@ -301,38 +312,41 @@ requests related to certificate provisioning. With RBAC in place, scoping the tokens to a group allows for great flexibility. For example, you could disable a particular bootstrap group's access when you are done provisioning the nodes. 
--> -随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC) -策略上,从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)) +随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC)策略上, +从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)) 仅限于客户端申请提供证书。当 RBAC 被配置启用时,可以将令牌限制到某个组, 从而提高灵活性。例如,你可以在准备节点期间禁止某特定启动引导组的访问。 #### 启动引导令牌 {#bootstrap-tokens} -启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/) -详述。启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。 +启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)详述。 +启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。 你可以在整个集群中使用同一个令牌,也可以为每个节点发放单独的令牌。 这一过程有两个方面: 1. 基于令牌 ID、机密数据和范畴信息创建 Kubernetes Secret -2. 将令牌发放给 kubelet +1. 将令牌发放给 kubelet 从 kubelet 的角度,所有令牌看起来都很像,没有特别的含义。 @@ -407,7 +421,8 @@ certificate signing request (CSR) as well as retrieve it when done. Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and only these) permissions, `system:node-bootstrapper`. -To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`. +To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` +group to the cluster role `system:node-bootstrapper`. --> ### 授权 kubelet 创建 CSR {#authorize-kubelet-to-create-csr} @@ -419,6 +434,9 @@ To do this, you only need to create a `ClusterRoleBinding` that binds the `syste 为了实现这一点,你只需要创建 `ClusterRoleBinding`,将 `system:bootstrappers` 组绑定到集群角色 `system:node-bootstrapper`。 + ```yaml # 允许启动引导节点创建 CSR apiVersion: rbac.authorization.k8s.io/v1 @@ -443,7 +461,7 @@ the controller-manager is responsible for issuing actual signed certificates. 
--> ## kube-controller-manager 配置 {#kube-controller-manager-configuration} -API 服务器从 kubelet 收到证书请求并对这些请求执行身份认证, +尽管 API 服务器从 kubelet 收到证书请求并对这些请求执行身份认证, 但真正负责发放签名证书的是控制器管理器(controller-manager)。 由于这些被签名的证书反过来会被 kubelet 用来在 kube-apiserver 执行普通的 kubelet 身份认证,很重要的一点是为控制器管理器所提供的 CA 也被 kube-apiserver 信任用来执行身份认证。CA 密钥和证书是通过 kube-apiserver 的标志 -`--client-ca-file=FILENAME`(例如,`--client-ca-file=/var/lib/kubernetes/ca.pem`), -来设定的,正如 kube-apiserver 配置节所述。 +`--client-ca-file=FILENAME`(例如 `--client-ca-file=/var/lib/kubernetes/ca.pem`)来设定的, +正如 kube-apiserver 配置节所述。 要将 Kubernetes CA 密钥和证书提供给 kube-controller-manager,可使用以下标志: @@ -530,23 +549,30 @@ RBAC permissions to the correct group. 许可权限有两组: * `nodeclient`:如果节点在为节点创建新的证书,则该节点还没有证书。 - 该节点使用前文所列的令牌之一来执行身份认证,因此是组 `system:bootstrappers` 组的成员。 + 该节点使用前文所列的令牌之一来执行身份认证,因此是 `system:bootstrappers` 组的成员。 * `selfnodeclient`:如果节点在对证书执行续期操作,则该节点已经拥有一个证书。 节点持续使用现有的证书将自己认证为 `system:nodes` 组的成员。 要允许 kubelet 请求并接收新的证书,可以创建一个 `ClusterRoleBinding` 将启动引导节点所处的组 `system:bootstrappers` 绑定到为其赋予访问权限的 `ClusterRole` `system:certificates.k8s.io:certificatesigningrequests:nodeclient`: + ```yaml # 批复 "system:bootstrappers" 组的所有 CSR apiVersion: rbac.authorization.k8s.io/v1 @@ -564,13 +590,17 @@ roleRef: ``` 要允许 kubelet 对其客户端证书执行续期操作,可以创建一个 `ClusterRoleBinding` 将正常工作的节点所处的组 `system:nodes` 绑定到为其授予访问许可的 `ClusterRole` `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`: + ```yaml # 批复 "system:nodes" 组的 CSR 续约请求 apiVersion: rbac.authorization.k8s.io/v1 @@ -602,14 +632,14 @@ collection. 的一部分的 `csrapproving` 控制器是自动被启用的。 该控制器使用 [`SubjectAccessReview` API](/zh-cn/docs/reference/access-authn-authz/authorization/#checking-api-access) 来确定给定用户是否被授权请求 CSR,之后基于鉴权结果执行批复操作。 -为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSRs。 -该组件仅是忽略未被授权的请求。 -控制器也会作为垃圾收集的一部分清除已过期的证书。 +为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSR。 +该组件仅是忽略未被授权的请求。控制器也会作为垃圾收集的一部分清除已过期的证书。 ## kubelet 配置 {#kubelet-configuration} @@ -640,7 +670,7 @@ Its format is identical to a normal `kubeconfig` file. 
A sample file might look 启动引导 `kubeconfig` 文件应该放在一个 kubelet 可访问的路径下,例如 `/var/lib/kubelet/bootstrap-kubeconfig`。 -其格式与普通的 `kubeconfig` 文件完全相同。实例文件可能看起来像这样: +其格式与普通的 `kubeconfig` 文件完全相同。示例文件可能看起来像这样: ```yaml apiVersion: v1 @@ -721,12 +751,12 @@ directory specified by `--cert-dir`. 证书和密钥文件会被放到 `--cert-dir` 所指定的目录中。 -### 客户和服务证书 {#client-and-serving-certificates} +### 客户端和服务证书 {#client-and-serving-certificates} 前文所述的内容都与 kubelet **客户端**证书相关,尤其是 kubelet 用来向 kube-apiserver 认证自身身份的证书。 @@ -758,7 +788,7 @@ TLS 启动引导所提供的客户端证书默认被签名为仅用于 `client a 不过,你可以启用服务器证书,至少可以部分地通过证书轮换来实现这点。 @@ -818,9 +848,9 @@ A deployment-specific approval process for kubelet serving certificates should t 1. are requested by nodes (ensure the `spec.username` field is of the form `system:node:` and `spec.groups` contains `system:nodes`) -2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, +1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`, optionally contains `digital signature` and `key encipherment`, and contains no other usages) -3. only have IP and DNS subjectAltNames that belong to the requesting node, +1. only have IP and DNS subjectAltNames that belong to the requesting node, and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request in `spec.request` to verify `subjectAltNames`) --> @@ -828,9 +858,9 @@ A deployment-specific approval process for kubelet serving certificates should t 1. 由节点发出的请求(确保 `spec.username` 字段形式为 `system:node:` 且 `spec.groups` 包含 `system:nodes`) -2. 请求中包含服务证书用法(确保 `spec.usages` 中包含 `server auth`,可选地也可包含 +1. 请求中包含服务证书用法(确保 `spec.usages` 中包含 `server auth`,可选地也可包含 `digital signature` 和 `key encipherment`,且不包含其它用法) -3. 仅包含隶属于请求节点的 IP 和 DNS 的 `subjectAltNames`,没有 URI 和 Email +1. 
仅包含隶属于请求节点的 IP 和 DNS 的 `subjectAltNames`,没有 URI 和 Email 形式的 `subjectAltNames`(解析 `spec.request` 中的 x509 证书签名请求可以检查 `subjectAltNames`) {{< /note >}} @@ -857,7 +887,11 @@ You have several options for generating these credentials: * 较老的方式:和 kubelet 在 TLS 启动引导之前所做的一样,用类似的方式创建和分发证书。 * DaemonSet:由于 kubelet 自身被加载到所有节点之上,并且有足够能力来启动基本服务, @@ -874,7 +908,7 @@ manager. --> ## kubectl 批复 {#kubectl-approval} -CSR 可以在编译进控制器内部的批复工作流之外被批复。 +CSR 可以在编译进控制器管理器内部的批复工作流之外被批复。 + + +**作者**:Frederico Muñoz (SAS Institute) + +**译者**:[Michael Yao](https://github.com/windsonsea) (DaoCloud) + + +**这是 SIG Architecture 焦点访谈系列的首次采访,这一系列访谈将涵盖多个子项目。 +我们从 SIG Architecture:Conformance 子项目开始。** + +在本次 [SIG Architecture](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md) +访谈中,我们与 [Riaan Kleinhans](https://github.com/Riaankl) (ii-Team) 进行了对话,他是 +[Conformance 子项目](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1)的负责人。 + + +## 关于 SIG Architecture 和 Conformance 子项目 + +**Frederico (FSM)**:你好 Riaan,欢迎!首先,请介绍一下你自己,你的角色以及你是如何参与 Kubernetes 的。 + +**Riaan Kleinhans (RK)**:嗨!我叫 Riaan Kleinhans,我住在南非。 +我是新西兰 [ii-Team](ii.nz) 的项目经理。在我加入 ii 时,本来计划在 2020 年 4 月搬到新西兰, +然后新冠疫情爆发了。幸运的是,作为一个灵活和富有活力的团队,我们能够在各个不同的时区以远程方式协作。 + + +ii 团队负责管理 Kubernetes Conformance 测试的技术债务,并编写测试内容来消除这些技术债务。 +我担任项目经理的角色,成为监控、测试内容编写和社区之间的桥梁。通过这项工作,我有幸在最初的几个月里结识了 +[Dan Kohn](https://github.com/dankohn),他对我们的工作充满热情,给了我很大的启发。 + + +**FSM**:谢谢!所以,你参与 SIG Architecture 是因为合规性的工作? + +**RK**:SIG Architecture 负责管理 Kubernetes Conformance 子项目。 +最初,我大部分时间直接与 SIG Architecture 交流 Conformance 子项目。 +然而,随着我们开始按 SIG 来组织工作任务,我们开始直接与各个 SIG 进行协作。 +与拥有未被测试的 API 的这些 SIG 的协作帮助我们加快了工作进度。 + + +**FSM**:你如何描述 Conformance 子项目的主要目标和介入的领域? 
+ +**RM**: Kubernetes Conformance 子项目专注于通过开发和维护全面的合规性测试套件来确保兼容性并遵守 +Kubernetes 规范。其主要目标包括确保不同 Kubernetes 实现之间的兼容性,验证 API 规范的遵守情况, +通过鼓励合规性认证来支持生态体系,并促进 Kubernetes 社区内的合作。 +通过提供标准化的测试并促进一致的行为和功能, +Conformance 子项目为开发人员和用户提供了一个可靠且兼容的 Kubernetes 生态体系。 + + +## 关于 Conformance Test Suite 的更多内容 + +**FSM**:我认为,提供这些标准化测试的一部分工作在于 +[Conformance Test Suite](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)。 +你能解释一下它是什么以及其重要性吗? + +**RK**:Kubernetes Conformance Test Suite 检查 Kubernetes 发行版是否符合项目的规范, +确保在不同的实现之间的兼容性。它涵盖了诸如 API、联网、存储、调度和安全等各个特性。 +能够通过测试,则表示实现合理,便于推动构建一致且可移植的容器编排平台。 + + +**FSM**:是的,这些测试很重要,因为它们定义了所有 Kubernetes 集群必须支持的最小特性集合。 +你能描述一下决定将哪些特性包含在内的过程吗?在最小特性集的思路与其他 SIG 提案之间是否有所冲突? + +**RK**:SIG Architecture 针对经受合规性测试的每个端点的要求,都有明确的定义。 +API 端点只有正式发布且不是可选的特性,才会被(进一步)考虑是否合规。 +多年来,关于合规性配置文件已经进行了若干讨论, +探讨将被大多数终端用户广泛使用的可选端点(例如 RBAC)纳入特定配置文件中的可能性。 +然而,这一方面仍在不断改进中。 + + +不满足合规性标准的端点被列在 +[ineligible_endpoints.yaml](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/ineligible_endpoints.yaml) 中, +该文件放在 Kubernetes 代码仓库中,是被公开访问的。 +随着这些端点的状态或要求发生变化,此文件可能会被更新以添加或删除端点。 +不合格的端点也可以在 [APISnoop](https://apisnoop.cncf.io/) 上看到。 + +对于 SIG Architecture 来说,确保透明度并纳入社区意见以确定端点的合格或不合格状态是至关重要的。 + + +**FSM**:为新特性编写测试内容通常需要某种强制执行方式。 +你如何看待 Kubernetes 中这方面的演变?是否有人在努力改进这个流程, +使得必须具备测试成为头等要务,或许这从来都不是一个问题? + +**RK**:在 2018 年开始围绕 Kubernetes 合规性计划进行讨论时,只有大约 11% 的端点被测试所覆盖。 +那时,CNCF 的管理委员会提出一个要求,如果要提供资金覆盖缺失的合规性测试,Kubernetes 社区应采取一个策略, +即如果新特性没有包含稳定 API 的合规性测试,则不允许添加此特性。 + + +SIG Architecture 负责监督这一要求,[APISnoop](https://apisnoop.cncf.io/) +在此方面被证明是一个非常有价值的工具。通过自动化流程,APISnoop 在每个周末生成一个 PR, +以突出 Conformance 覆盖范围的变化。如果有端点在没有进行合规性测试的情况下进阶至正式发布, +将会被迅速识别发现。这种方法有助于防止积累新的技术债务。 + +此外,我们计划在不久的将来创建一个发布通知任务,作用是添加额外一层防护,以防止产生新的技术债务。 + + +**FSM**:我明白了,工具化和自动化在其中起着重要的作用。 +在你看来,就合规性而言,还有哪些领域需要做一些工作? +换句话说,目前标记为优先改进的领域有哪些? + +**RK**:在 1.27 版本中,我们已完成了 “100% 合规性测试” 的里程碑! 
+ + +当时,社区重新审视了所有被列为不合规的端点。这个列表是收集多年的社区意见后填充的。 +之前被认为不合规的几个端点已被挑选出来并迁移到一个新的专用列表中, +该列表中包含目前合规性测试开发的焦点。同样,可以在 apisnoop.cncf.io 上查阅此列表。 + + +为了确保在合规性项目中避免产生新的技术债务,我们计划建立一个发布通知任务作为额外的预防措施。 + +虽然 APISnoop 目前被托管在 CNCF 基础设施上,但此项目已慷慨地捐赠给了 Kubernetes 社区。 +因此,它将在 2023 年底之前转移到社区自治的基础设施上。 + + +**FSM**:这是个好消息!对于想要提供帮助的人们,你能否重点说明一下协作的价值所在? +参与贡献是否需要对 Kubernetes 有很扎实的知识,或否有办法让一些新人也能为此项目做出贡献? + +**RK**:参与合规性测试就像 "洗碗" 一样,它可能不太显眼,但仍然非常重要。 +这需要对 Kubernetes 有深入的理解,特别是在需要对端点进行测试的领域。 +这就是为什么与负责测试 API 端点的每个 SIG 进行协作会如此重要。 + + +我们的承诺是让所有人都能参与测试内容编写,作为这一承诺的一部分, +ii 团队目前正在开发一个 “点击即部署(click and deploy)” 的解决方案。 +此解决方案旨在使所有人都能在几分钟内快速创建一个在真实硬件上工作的环境。 +我们将在准备好后分享有关此项开发的更新。 + + +**FSM**:那会非常有帮助,谢谢。最后你还想与我们的读者分享些什么见解吗? + +**RK**:合规性测试是一个协作性的社区工作,涉及各个 SIG 之间的广泛合作。 +SIG Architecture 在推动倡议并提供指导方面起到了领头作用。然而, +工作的进展在很大程度上依赖于所有 SIG 在审查、增强和认可测试方面的支持。 + + +我要衷心感谢 ii 团队多年来对解决技术债务的坚定承诺。 +特别要感谢 [Hippie Hacker](https://github.com/hh) 的指导和对愿景的引领作用,这是非常宝贵的。 +此外,我还要特别表扬 Stephen Heywood 在最近几个版本中承担了大部分测试内容编写工作而做出的贡献, +还有 Zach Mandeville 对 APISnoop 也做了很好的贡献。 + + +**FSM**:非常感谢你参加本次访谈并分享你的深刻见解,我本人从中获益良多,我相信读者们也会同样受益。 From 2a51003aec5ceb3938a115c1d4b7e648436699c1 Mon Sep 17 00:00:00 2001 From: Edith Puclla <58795858+edithturn@users.noreply.github.com> Date: Sun, 8 Oct 2023 08:26:15 +0100 Subject: [PATCH 077/229] Update content/es/docs/concepts/storage/projected-volumes.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Si, de acuerdo con esto Rodolfo! 
Co-authored-by: Rodolfo Martínez Vega --- content/es/docs/concepts/storage/projected-volumes.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md index e496094a1d90c..11c8149df46c5 100644 --- a/content/es/docs/concepts/storage/projected-volumes.md +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -20,9 +20,9 @@ Un volumen `proyectado` asigna varias fuentes de volúmenes existentes al mismo Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen: -- [`secret`](/docs/concepts/storage/volumes/#secret) -- [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi) -- [`configMap`](/docs/concepts/storage/volumes/#configmap) +- [`secret`](/es/docs/concepts/storage/volumes/#secret) +- [`downwardAPI`](/es/docs/concepts/storage/volumes/#downwardapi) +- [`configMap`](/es/docs/concepts/storage/volumes/#configmap) - [`serviceAccountToken`](#serviceaccounttoken) Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles, From 30b7c6f19485483132adf774c3ed44cdbe844b73 Mon Sep 17 00:00:00 2001 From: Edith Puclla Date: Sun, 8 Oct 2023 09:09:00 +0100 Subject: [PATCH 078/229] Taking feedback in docs --- .../es/docs/concepts/storage/projected-volumes.md | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md index 11c8149df46c5..1540d5df97b54 100644 --- a/content/es/docs/concepts/storage/projected-volumes.md +++ b/content/es/docs/concepts/storage/projected-volumes.md @@ -48,12 +48,14 @@ en un Pod en una ruta especificada. Por ejemplo: {{% code_sample file="pods/storage/projected-service-account-token.yaml" %}} -El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado. 
Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/). +El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado. +Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/). El campo `audience` contiene la audiencia prevista del token. Un destinatario del token debe identificarse con un identificador especificado en la audiencia del token y, de lo contrario, debe rechazar el token. Este campo es opcional y de forma predeterminada es el identificador del servidor API. -The `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos). Un administrador +The `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos). +Un administrador también puede limitar su valor máximo especificando la opción `--service-account-max-token-expiration` para el servidor API. El campo `path` especifica una ruta relativa al punto de montaje del volumen proyectado. @@ -87,7 +89,11 @@ Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en ` ### Windows -En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows. Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM). 
Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando. Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual. Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`. +En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows. +Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM). +Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando. +Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual. +Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`. 
De forma predeterminada, los archivos proyectados tendrán la siguiente propiedad, como se muestra en un archivo de volumen proyectado de ejemplo: From c9e4030a0806d14ba4f9c4f55f07aca82de5d174 Mon Sep 17 00:00:00 2001 From: Arhell Date: Sun, 8 Oct 2023 11:12:06 +0300 Subject: [PATCH 079/229] [pl] Updated resources of README.md --- README-pl.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-pl.md b/README-pl.md index 7544de45835a6..62dc2d0ee22f3 100644 --- a/README-pl.md +++ b/README-pl.md @@ -43,7 +43,7 @@ make container-image make container-serve ``` -Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) i [Windows](https://docs.docker.com/docker-for-windows/#resources)). +Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOS](https://docs.docker.com/desktop/settings/mac/) i [Windows](https://docs.docker.com/desktop/settings/windows/)). Aby obejrzeć zawartość serwisu, otwórz w przeglądarce adres . Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce. From 6f4039bcef85945533f1ffc8209597962f038709 Mon Sep 17 00:00:00 2001 From: Sarthak Patel <76515568+Community-Programmer@users.noreply.github.com> Date: Sun, 8 Oct 2023 01:24:51 -0700 Subject: [PATCH 080/229] Add KCSA Certification to Training page (#43318) * Add KCSA certification to Training page Co-Authored-By: Tim Bannister * Change CSS for training Account for the addition of KCSA. 
Co-Authored-By: Tim Bannister --------- Co-authored-by: Tim Bannister --- content/en/training/_index.html | 39 +++-- static/css/training.css | 59 ++++++- .../images/training/kubernetes-cksa-white.svg | 147 ++++++++++++++++++ 3 files changed, 231 insertions(+), 14 deletions(-) create mode 100644 static/images/training/kubernetes-cksa-white.svg diff --git a/content/en/training/_index.html b/content/en/training/_index.html index 74880486a6bf2..28fe46cfc0c43 100644 --- a/content/en/training/_index.html +++ b/content/en/training/_index.html @@ -14,17 +14,22 @@

Build your cloud native career

Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and our training partners let you invest in your career, learn Kubernetes, and make your cloud native projects successful.

-
- -
-
- -
-
- -
-
- +
+
+ +
+
+ +
+
+ +
+
+ +
+
+ +
@@ -93,6 +98,16 @@

Go to Certification +
+ +
+ Kubernetes and Cloud Native Security Associate (KCSA) +
+

The KCSA is a pre-professional certification designed for candidates interested in advancing to the professional level through a demonstrated understanding of foundational knowledge and skills of security technologies in the cloud native ecosystem.

+

A certified KCSA will confirm an understanding of the baseline security configuration of Kubernetes clusters to meet compliance objectives.

+
+ Go to Certification +
Certified Kubernetes Application Developer (CKAD) @@ -106,6 +121,7 @@
Certified Kubernetes Administrator (CKA)
+

The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.

A certified Kubernetes administrator has demonstrated the ability to do basic installation as well as configuring and managing production-grade Kubernetes clusters.


@@ -115,6 +131,7 @@
Certified Kubernetes Security Specialist (CKS)
+

The Certified Kubernetes Security Specialist program provides assurance that the holder is comfortable and competent with a broad range of best practices. CKS certification covers skills for securing container-based applications and Kubernetes platforms during build, deployment and runtime.

Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.


diff --git a/static/css/training.css b/static/css/training.css index 12de761d72b5b..ddc2d1e29e43f 100644 --- a/static/css/training.css +++ b/static/css/training.css @@ -48,6 +48,14 @@ body.cid-training section.call-to-action .main-section .cta-text > * { max-width: min(1000px, 50vw); } +/*Image Container for further position of image in smaller devices*/ +body.cid-training .cta-img-container{ + display: flex; + flex-direction: row; + align-items: center; + gap: 40px; +} + body.cid-training section.call-to-action .main-section .cta-image { max-width: max(20vw,150px); flex-basis: auto; @@ -114,7 +122,7 @@ body.cid-training #get-certified .col-nav a.button { } -@media only screen and (max-width: 840px) { +@media only screen and (max-width: 945px) { body.cid-training section.call-to-action .main-section .cta-image > img { margin: 0; } @@ -143,6 +151,47 @@ body.cid-training #get-certified .col-nav a.button { margin: auto; padding-top: 20px; } + + body.cid-training .cta-img-container{ + position: relative; + display: grid; + grid-template-columns: repeat(2, 120px); + gap: 0; + } + + body.cid-training #logo-kcnf{ + grid-column: 1 / span 2; + margin: 0 auto; + } + + body.cid-training #logo-kcnf > img{ + position: absolute; + top: 0; + } + + .cta-image:nth-child(-n+2) { + grid-row: 1; + } + + + /*Change the display to Grid to prevent stretching of tiles*/ + body.cid-training .col-container { + display: grid; + grid-template-columns: auto auto; + + } + + body.cid-training .col-nav { + width: 100%; + height: 100%; + } + + body.cid-training .col-container .col-nav:last-child{ + justify-self: center; + width: 50%; + grid-column-start: span 2; + + } } @@ -151,8 +200,11 @@ body.cid-training #get-certified .col-nav a.button { @media only screen and (max-width: 480px) { body.cid-training .col { margin: 1% 0 1% 0%;} - body.cid-training .col-container { flex-direction: column; } + body.cid-training .col-container { display: flex; flex-direction: column; } + body.cid-training 
.cta-img-container{display: flex;flex-direction: column;} + body.cid-training .col-container .col-nav:last-child{width: 100%;} body.cid-training #logo-cks { order: initial; } + body.cid-training #logo-kcnf > img{position: relative; } } @media only screen and (max-width: 650px) { @@ -160,7 +212,8 @@ body.cid-training #get-certified .col-nav a.button { display: block; width: 100%; } - body.cid-training .col-container { flex-direction: column; } + body.cid-training .col-container {display: flex; flex-direction: column; } + body.cid-training .col-container .col-nav:last-child{width: 100%;} } body.cid-training .button { diff --git a/static/images/training/kubernetes-cksa-white.svg b/static/images/training/kubernetes-cksa-white.svg new file mode 100644 index 0000000000000..a5ad27635303b --- /dev/null +++ b/static/images/training/kubernetes-cksa-white.svg @@ -0,0 +1,147 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + From 4add5dba13ae6d7803b1d1be9cc97742ec977003 Mon Sep 17 00:00:00 2001 From: "guiyong.ou" Date: Sun, 8 Oct 2023 18:20:20 +0800 Subject: [PATCH 081/229] fix CustomResourceDefinition version is ```apiextensions.k8s.io/v1```,not ```apiextensions/v1``` Signed-off-by: guiyong.ou --- content/zh-cn/docs/reference/using-api/deprecation-guide.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh-cn/docs/reference/using-api/deprecation-guide.md b/content/zh-cn/docs/reference/using-api/deprecation-guide.md index 3ba469be12d38..518fc1990de2e 100644 --- a/content/zh-cn/docs/reference/using-api/deprecation-guide.md +++ b/content/zh-cn/docs/reference/using-api/deprecation-guide.md @@ -347,7 +347,7 @@ The **apiextensions.k8s.io/v1beta1** API version of CustomResourceDefinition is **apiextensions.k8s.io/v1beta1** API 版本的 CustomResourceDefinition 不在 v1.22 版本中继续提供。 -* 迁移清单和 API 客户端使用 **apiextensions/v1** API 版本,此 API 从 v1.16 版本开始可用; +* 迁移清单和 API 客户端使用 **apiextensions.k8s.io/v1** API 版本,此 API 从 v1.16 版本开始可用; * 所有的已保存的对象都可以通过新的 API 来访问; +在升级 
kubelet 之前先进行节点排空,这样可以确保 Pod 被重新准入并且容器被重新创建。 +这一步骤对于解决某些安全问题或其他关键错误是非常必要的。 +{{}} + -* 一定不要为 `kube-apiserver` 和 `kube-controller-manager` 指定 `--cloud-provider` 标志。 - 这将保证它们不会运行任何云服务专用循环逻辑,这将会由云管理控制器运行。未来这个标记将被废弃并去除。 -* `kubelet` 必须使用 `--cloud-provider=external` 运行。 - 这是为了保证让 kubelet 知道在执行任何任务前,它必须被云管理控制器初始化。 +* `kubelet`、`kube-apiserver` 和 `kube-controller-manager` 必须根据用户对外部 CCM 的使用进行设置。 + 如果用户有一个外部的 CCM(不是 Kubernetes 控制器管理器中的内部云控制器回路), + 那么必须添加 `--cloud-provider=external` 参数。否则,不应添加此参数。 -* 指定了 `--cloud-provider=external` 的 kubelet 将被添加一个 `node.cloudprovider.kubernetes.io/uninitialized` +* 指定了 `--cloud-provider=external` 的组件将被添加一个 `node.cloudprovider.kubernetes.io/uninitialized` 的污点,导致其在初始化过程中不可调度(`NoSchedule`)。 这将标记该节点在能够正常调度前,需要外部的控制器进行二次初始化。 请注意,如果云管理控制器不可用,集群中的新节点会一直处于不可调度的状态。 From da557702cce5c7a15c85b6afc7a7daeda1cd15b8 Mon Sep 17 00:00:00 2001 From: Arhell Date: Mon, 9 Oct 2023 00:21:17 +0300 Subject: [PATCH 084/229] [pt] Updated resources of README.md --- README-pt.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README-pt.md b/README-pt.md index 3de16340509e3..ae2f644ed869d 100644 --- a/README-pt.md +++ b/README-pt.md @@ -49,7 +49,7 @@ Para executar o build do website em um contêiner, execute o comando abaixo: make container-serve ``` -Caso ocorram erros, é provável que o contêiner que está executando o Hugo não tenha recursos suficientes. A solução é aumentar a quantidade de CPU e memória disponível para o Docker ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) e [Windows](https://docs.docker.com/docker-for-windows/#resources)). +Caso ocorram erros, é provável que o contêiner que está executando o Hugo não tenha recursos suficientes. A solução é aumentar a quantidade de CPU e memória disponível para o Docker ([MacOS](https://docs.docker.com/desktop/settings/mac/) e [Windows](https://docs.docker.com/desktop/settings/windows/)). Abra seu navegador em http://localhost:1313 para visualizar o website. 
Conforme você faz alterações nos arquivos fontes, o Hugo atualiza o website e força a atualização do navegador. From c8575a2bcdb99d16d0e68543a6060cd1374b021d Mon Sep 17 00:00:00 2001 From: YAMADA Kazuaki Date: Mon, 9 Oct 2023 14:34:37 +0900 Subject: [PATCH 085/229] fix: incorrect translation of 'successfully created' --- content/ja/docs/concepts/workloads/pods/pod-lifecycle.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md index 89d521098c809..7cb7b1f81ca21 100644 --- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md @@ -92,7 +92,7 @@ Podの`spec`には、Always、OnFailure、またはNeverのいずれかの値を PodにはPodStatusがあります。それにはPodが成功したかどうかの情報を持つ[PodCondition](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podcondition-v1-core)の配列が含まれています。kubeletは、下記のPodConditionを管理します: * `PodScheduled`: PodがNodeにスケジュールされました。 -* `PodHasNetwork`: (アルファ版機能; [明示的に有効](#pod-has-network)にしなければならない) Podサンドボックスが正常に成功され、ネットワークの設定が完了しました。 +* `PodHasNetwork`: (アルファ版機能; [明示的に有効](#pod-has-network)にしなければならない) Podサンドボックスが正常に作成され、ネットワークの設定が完了しました。 * `ContainersReady`: Pod内のすべてのコンテナが準備できた状態です。 * `Initialized`: すべての[Initコンテナ](/ja/docs/concepts/workloads/pods/init-containers)が正常に終了しました。 * `Ready`: Podはリクエストを処理でき、一致するすべてのサービスの負荷分散プールに追加されます。 From 45e2c068a42ff7b6dda9a49dc6476c5d78c98d2f Mon Sep 17 00:00:00 2001 From: Michael Date: Mon, 9 Oct 2023 14:08:28 +0800 Subject: [PATCH 086/229] [zh] Sync /command-line-tools-reference/kube-scheduler.md --- .../kube-scheduler.md | 144 +++++++----------- 1 file changed, 55 insertions(+), 89 deletions(-) diff --git a/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md index 6b1f3cc58eeb8..518d5e9fa9a6c 100644 --- 
a/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md +++ b/content/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler.md @@ -10,17 +10,6 @@ weight: 30 auto_generated: true --> - - ## {{% heading "synopsis" %}} 监听 --secure-port 端口的 IP 地址。 集群的其余部分以及 CLI/ Web 客户端必须可以访问关联的接口。 如果为空,将使用所有接口(0.0.0.0 表示使用所有 IPv4 接口,“::” 表示使用所有 IPv6 接口)。 -如果为空或未指定地址 (0.0.0.0 或 ::),所有接口将被使用。 +如果为空或未指定地址 (0.0.0.0 或 ::),所有接口和 IP 地址簇将被使用。 @@ -274,10 +262,9 @@ A set of key=value pairs that describe feature gates for alpha/experimental feat APIListChunking=true|false (BETA - default=true)
APIPriorityAndFairness=true|false (BETA - default=true)
APIResponseCompression=true|false (BETA - default=true)
-APISelfSubjectReview=true|false (BETA - default=true)
APIServerIdentity=true|false (BETA - default=true)
APIServerTracing=true|false (BETA - default=true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - default=false)
+AdmissionWebhookMatchConditions=true|false (BETA - default=true)
AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
@@ -286,32 +273,32 @@ AppArmor=true|false (BETA - default=true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
CPUManagerPolicyBetaOptions=true|false (BETA - default=true)
CPUManagerPolicyOptions=true|false (BETA - default=true)
+CRDValidationRatcheting=true|false (ALPHA - default=false)
CSIMigrationPortworx=true|false (BETA - default=false)
-CSIMigrationRBD=true|false (ALPHA - default=false)
CSINodeExpandSecret=true|false (BETA - default=true)
CSIVolumeHealth=true|false (ALPHA - default=false)
CloudControllerManagerWebhook=true|false (ALPHA - default=false)
CloudDualStackNodeIPs=true|false (ALPHA - default=false)
ClusterTrustBundle=true|false (ALPHA - default=false)
ComponentSLIs=true|false (BETA - default=true)
+ConsistentListFromCache=true|false (ALPHA - default=false)
ContainerCheckpoint=true|false (ALPHA - default=false)
ContextualLogging=true|false (ALPHA - default=false)
+CronJobsScheduledAnnotation=true|false (BETA - default=true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - default=false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - default=false)
CustomResourceValidationExpressions=true|false (BETA - default=true)
+DevicePluginCDIDevices=true|false (ALPHA - default=false)
DisableCloudProviders=true|false (ALPHA - default=false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - default=false)
DynamicResourceAllocation=true|false (ALPHA - default=false)
ElasticIndexedJob=true|false (BETA - default=true)
EventedPLEG=true|false (BETA - default=false)
-ExpandedDNSConfig=true|false (BETA - default=true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - default=false)
GracefulNodeShutdown=true|false (BETA - default=true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - default=true)
HPAContainerMetrics=true|false (BETA - default=true)
HPAScaleToZero=true|false (ALPHA - default=false)
HonorPVReclaimPolicy=true|false (ALPHA - default=false)
-IPTablesOwnershipCleanup=true|false (BETA - default=true)
InPlacePodVerticalScaling=true|false (ALPHA - default=false)
InTreePluginAWSUnregister=true|false (ALPHA - default=false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - default=false)
@@ -319,18 +306,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - default=false)
InTreePluginGCEUnregister=true|false (ALPHA - default=false)
InTreePluginOpenStackUnregister=true|false (ALPHA - default=false)
InTreePluginPortworxUnregister=true|false (ALPHA - default=false)
-InTreePluginRBDUnregister=true|false (ALPHA - default=false)
InTreePluginvSphereUnregister=true|false (ALPHA - default=false)
+JobBackoffLimitPerIndex=true|false (ALPHA - default=false)
JobPodFailurePolicy=true|false (BETA - default=true)
+JobPodReplacementPolicy=true|false (ALPHA - default=false)
JobReadyPods=true|false (BETA - default=true)
KMSv2=true|false (BETA - default=true)
+KMSv2KDF=true|false (BETA - default=false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - default=false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - default=false)
KubeletInUserNamespace=true|false (ALPHA - default=false)
-KubeletPodResources=true|false (BETA - default=true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - default=false)
KubeletPodResourcesGet=true|false (ALPHA - default=false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - default=true)
KubeletTracing=true|false (BETA - default=true)
-LegacyServiceAccountTokenTracking=true|false (BETA - default=true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - default=false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - default=false)
LogarithmicScaleDown=true|false (BETA - default=true)
LoggingAlphaOptions=true|false (ALPHA - default=false)
@@ -340,35 +329,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - default=false)
MemoryManager=true|false (BETA - default=true)
MemoryQoS=true|false (ALPHA - default=false)
MinDomainsInPodTopologySpread=true|false (BETA - default=true)
-MinimizeIPTablesRestore=true|false (BETA - default=true)
MultiCIDRRangeAllocator=true|false (ALPHA - default=false)
MultiCIDRServiceAllocator=true|false (ALPHA - default=false)
-NetworkPolicyStatus=true|false (ALPHA - default=false)
NewVolumeManagerReconstruction=true|false (BETA - default=true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - default=true)
NodeLogQuery=true|false (ALPHA - default=false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - default=true)
-NodeSwap=true|false (ALPHA - default=false)
+NodeSwap=true|false (BETA - default=false)
OpenAPIEnums=true|false (BETA - default=true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - default=true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - default=false)
PodAndContainerStatsFromCRI=true|false (ALPHA - default=false)
PodDeletionCost=true|false (BETA - default=true)
PodDisruptionConditions=true|false (BETA - default=true)
-PodHasNetworkCondition=true|false (ALPHA - default=false)
+PodHostIPs=true|false (ALPHA - default=false)
+PodIndexLabel=true|false (BETA - default=true)
+PodReadyToStartContainersCondition=true|false (ALPHA - default=false)
PodSchedulingReadiness=true|false (BETA - default=true)
-ProbeTerminationGracePeriod=true|false (BETA - default=true)
ProcMountType=true|false (ALPHA - default=false)
-ProxyTerminatingEndpoints=true|false (BETA - default=true)
QOSReserved=true|false (ALPHA - default=false)
ReadWriteOncePod=true|false (BETA - default=true)
RecoverVolumeExpansionFailure=true|false (ALPHA - default=false)
RemainingItemCount=true|false (BETA - default=true)
-RetroactiveDefaultStorageClass=true|false (BETA - default=true)
RotateKubeletServerCertificate=true|false (BETA - default=true)
SELinuxMountReadWriteOncePod=true|false (BETA - default=true)
+SchedulerQueueingHints=true|false (BETA - default=true)
SecurityContextDeny=true|false (ALPHA - default=false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - default=false)
+ServiceNodePortStaticSubrange=true|false (BETA - default=true)
+SidecarContainers=true|false (ALPHA - default=false)
SizeMemoryBackedVolumes=true|false (BETA - default=true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - default=false)
StableLoadBalancerNodeSet=true|false (BETA - default=true)
StatefulSetAutoDeletePVC=true|false (BETA - default=true)
StatefulSetStartOrdinal=true|false (BETA - default=true)
@@ -376,10 +365,11 @@ StorageVersionAPI=true|false (ALPHA - default=false)
StorageVersionHash=true|false (BETA - default=true)
TopologyAwareHints=true|false (BETA - default=true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - default=false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - default=false)
-TopologyManagerPolicyOptions=true|false (ALPHA - default=false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - default=false)
-ValidatingAdmissionPolicy=true|false (ALPHA - default=false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - default=true)
+TopologyManagerPolicyOptions=true|false (BETA - default=true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - default=false)
+UserNamespacesSupport=true|false (ALPHA - default=false)
+ValidatingAdmissionPolicy=true|false (BETA - default=false)
VolumeCapacityPriority=true|false (ALPHA - default=false)
WatchList=true|false (ALPHA - default=false)
WinDSR=true|false (ALPHA - default=false)
@@ -390,10 +380,9 @@ WindowsHostNetwork=true|false (ALPHA - default=true) APIListChunking=true|false (BETA - 默认值为 true)
APIPriorityAndFairness=true|false (BETA - 默认值为 true)
APIResponseCompression=true|false (BETA - 默认值为 true)
-APISelfSubjectReview=true|false (BETA - 默认值为 true)
APIServerIdentity=true|false (BETA - 默认值为 true)
APIServerTracing=true|false (BETA - 默认值为 true)
-AdmissionWebhookMatchConditions=true|false (ALPHA - 默认值为 false)
+AdmissionWebhookMatchConditions=true|false (BETA - 默认值为 true)
AggregatedDiscoveryEndpoint=true|false (BETA - 默认值为 true)
AllAlpha=true|false (ALPHA - 默认值为 false)
AllBeta=true|false (BETA - 默认值为 false)
@@ -402,32 +391,32 @@ AppArmor=true|false (BETA - 默认值为 true)
CPUManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
CPUManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
CPUManagerPolicyOptions=true|false (BETA - 默认值为 true)
+CRDValidationRatcheting=true|false (ALPHA - 默认值为 false)
CSIMigrationPortworx=true|false (BETA - 默认值为 false)
-CSIMigrationRBD=true|false (ALPHA - 默认值为 false)
CSINodeExpandSecret=true|false (BETA - 默认值为 true)
CSIVolumeHealth=true|false (ALPHA - 默认值为 false)
CloudControllerManagerWebhook=true|false (ALPHA - 默认值为 false)
CloudDualStackNodeIPs=true|false (ALPHA - 默认值为 false)
ClusterTrustBundle=true|false (ALPHA - 默认值为 false)
ComponentSLIs=true|false (BETA - 默认值为 true)
+ConsistentListFromCache=true|false (ALPHA - 默认值为 false)
ContainerCheckpoint=true|false (ALPHA - 默认值为 false)
ContextualLogging=true|false (ALPHA - 默认值为 false)
+CronJobsScheduledAnnotation=true|false (BETA - 默认值为 true)
CrossNamespaceVolumeDataSource=true|false (ALPHA - 默认值为 false)
CustomCPUCFSQuotaPeriod=true|false (ALPHA - 默认值为 false)
CustomResourceValidationExpressions=true|false (BETA - 默认值为 true)
+DevicePluginCDIDevices=true|false (ALPHA - 默认值为 false)
DisableCloudProviders=true|false (ALPHA - 默认值为 false)
DisableKubeletCloudCredentialProviders=true|false (ALPHA - 默认值为 false)
DynamicResourceAllocation=true|false (ALPHA - 默认值为 false)
ElasticIndexedJob=true|false (BETA - 默认值为 true)
EventedPLEG=true|false (BETA - 默认值为 false)
-ExpandedDNSConfig=true|false (BETA - 默认值为 true)
-ExperimentalHostUserNamespaceDefaulting=true|false (BETA - 默认值为 false)
GracefulNodeShutdown=true|false (BETA - 默认值为 true)
GracefulNodeShutdownBasedOnPodPriority=true|false (BETA - 默认值为 true)
HPAContainerMetrics=true|false (BETA - 默认值为 true)
HPAScaleToZero=true|false (ALPHA - 默认值为 false)
HonorPVReclaimPolicy=true|false (ALPHA - 默认值为 false)
-IPTablesOwnershipCleanup=true|false (BETA - 默认值为 true)
InPlacePodVerticalScaling=true|false (ALPHA - 默认值为 false)
InTreePluginAWSUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginAzureDiskUnregister=true|false (ALPHA - 默认值为 false)
@@ -435,18 +424,20 @@ InTreePluginAzureFileUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginGCEUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginOpenStackUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginPortworxUnregister=true|false (ALPHA - 默认值为 false)
-InTreePluginRBDUnregister=true|false (ALPHA - 默认值为 false)
InTreePluginvSphereUnregister=true|false (ALPHA - 默认值为 false)
+JobBackoffLimitPerIndex=true|false (ALPHA - 默认值为 false)
JobPodFailurePolicy=true|false (BETA - 默认值为 true)
+JobPodReplacementPolicy=true|false (ALPHA - 默认值为 false)
JobReadyPods=true|false (BETA - 默认值为 true)
KMSv2=true|false (BETA - 默认值为 true)
+KMSv2KDF=true|false (BETA - 默认值为 false)
+KubeProxyDrainingTerminatingNodes=true|false (ALPHA - 默认值为 false)
+KubeletCgroupDriverFromCRI=true|false (ALPHA - 默认值为 false)
KubeletInUserNamespace=true|false (ALPHA - 默认值为 false)
-KubeletPodResources=true|false (BETA - 默认值为 true)
KubeletPodResourcesDynamicResources=true|false (ALPHA - 默认值为 false)
KubeletPodResourcesGet=true|false (ALPHA - 默认值为 false)
-KubeletPodResourcesGetAllocatable=true|false (BETA - 默认值为 true)
KubeletTracing=true|false (BETA - 默认值为 true)
-LegacyServiceAccountTokenTracking=true|false (BETA - 默认值为 true)
+LegacyServiceAccountTokenCleanUp=true|false (ALPHA - 默认值为 false)
LocalStorageCapacityIsolationFSQuotaMonitoring=true|false (ALPHA - 默认值为 false)
LogarithmicScaleDown=true|false (BETA - 默认值为 true)
LoggingAlphaOptions=true|false (ALPHA - 默认值为 false)
@@ -456,35 +447,35 @@ MaxUnavailableStatefulSet=true|false (ALPHA - 默认值为 false)
MemoryManager=true|false (BETA - 默认值为 true)
MemoryQoS=true|false (ALPHA - 默认值为 false)
MinDomainsInPodTopologySpread=true|false (BETA - 默认值为 true)
-MinimizeIPTablesRestore=true|false (BETA - 默认值为 true)
MultiCIDRRangeAllocator=true|false (ALPHA - 默认值为 false)
MultiCIDRServiceAllocator=true|false (ALPHA - 默认值为 false)
-NetworkPolicyStatus=true|false (ALPHA - 默认值为 false)
NewVolumeManagerReconstruction=true|false (BETA - 默认值为 true)
NodeInclusionPolicyInPodTopologySpread=true|false (BETA - 默认值为 true)
NodeLogQuery=true|false (ALPHA - 默认值为 false)
-NodeOutOfServiceVolumeDetach=true|false (BETA - 默认值为 true)
-NodeSwap=true|false (ALPHA - 默认值为 false)
+NodeSwap=true|false (BETA - 默认值为 false)
OpenAPIEnums=true|false (BETA - 默认值为 true)
PDBUnhealthyPodEvictionPolicy=true|false (BETA - 默认值为 true)
+PersistentVolumeLastPhaseTransitionTime=true|false (ALPHA - 默认值为 false)
PodAndContainerStatsFromCRI=true|false (ALPHA - 默认值为 false)
PodDeletionCost=true|false (BETA - 默认值为 true)
PodDisruptionConditions=true|false (BETA - 默认值为 true)
-PodHasNetworkCondition=true|false (ALPHA - 默认值为 false)
+PodHostIPs=true|false (ALPHA - 默认值为 false)
+PodIndexLabel=true|false (BETA - 默认值为 true)
+PodReadyToStartContainersCondition=true|false (ALPHA - 默认值为 false)
PodSchedulingReadiness=true|false (BETA - 默认值为 true)
-ProbeTerminationGracePeriod=true|false (BETA - 默认值为 true)
ProcMountType=true|false (ALPHA - 默认值为 false)
-ProxyTerminatingEndpoints=true|false (BETA - 默认值为 true)
QOSReserved=true|false (ALPHA - 默认值为 false)
ReadWriteOncePod=true|false (BETA - 默认值为 true)
RecoverVolumeExpansionFailure=true|false (ALPHA - 默认值为 false)
RemainingItemCount=true|false (BETA - 默认值为 true)
-RetroactiveDefaultStorageClass=true|false (BETA - 默认值为 true)
RotateKubeletServerCertificate=true|false (BETA - 默认值为 true)
SELinuxMountReadWriteOncePod=true|false (BETA - 默认值为 true)
+SchedulerQueueingHints=true|false (BETA - 默认值为 true)
SecurityContextDeny=true|false (ALPHA - 默认值为 false)
-ServiceNodePortStaticSubrange=true|false (ALPHA - 默认值为 false)
+ServiceNodePortStaticSubrange=true|false (BETA - 默认值为 true)
+SidecarContainers=true|false (ALPHA - 默认值为 false)
SizeMemoryBackedVolumes=true|false (BETA - 默认值为 true)
+SkipReadOnlyValidationGCE=true|false (ALPHA - 默认值为 false)
StableLoadBalancerNodeSet=true|false (BETA - 默认值为 true)
StatefulSetAutoDeletePVC=true|false (BETA - 默认值为 true)
StatefulSetStartOrdinal=true|false (BETA - 默认值为 true)
@@ -492,10 +483,11 @@ StorageVersionAPI=true|false (ALPHA - 默认值为 false)
StorageVersionHash=true|false (BETA - 默认值为 true)
TopologyAwareHints=true|false (BETA - 默认值为 true)
TopologyManagerPolicyAlphaOptions=true|false (ALPHA - 默认值为 false)
-TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 false)
-TopologyManagerPolicyOptions=true|false (ALPHA - 默认值为 false)
-UserNamespacesStatelessPodsSupport=true|false (ALPHA - 默认值为 false)
-ValidatingAdmissionPolicy=true|false (ALPHA - 默认值为 false)
+TopologyManagerPolicyBetaOptions=true|false (BETA - 默认值为 true)
+TopologyManagerPolicyOptions=true|false (BETA - 默认值为 true)
+UnknownVersionInteroperabilityProxy=true|false (ALPHA - 默认值为 false)
+UserNamespacesSupport=true|false (ALPHA - 默认值为 false)
+ValidatingAdmissionPolicy=true|false (BETA - 默认值为 false)
VolumeCapacityPriority=true|false (ALPHA - 默认值为 false)
WatchList=true|false (ALPHA - 默认值为 false)
WinDSR=true|false (ALPHA - 默认值为 false)
@@ -669,33 +661,6 @@ The duration the clients should wait between attempting acquisition and renewal - ---lock-object-name string     默认值:"kube-scheduler" - - - - -已弃用: 定义锁对象的名称。将被删除以便使用 --leader-elect-resource-name。 -如果 --config 指定了一个配置文件,那么这个参数将被忽略。 - - - - ---lock-object-namespace string     默认值:"kube-system" - - - - -已弃用: 定义锁对象的命名空间。将被删除以便使用 leader-elect-resource-namespace。 -如果 --config 指定了一个配置文件,那么这个参数将被忽略。 - - - - --log-flush-frequency duration     默认值:5s @@ -994,9 +959,10 @@ number for the log level verbosity -打印版本信息并退出。 +--version, --version=raw 打印版本信息并推出; +--version=vX.Y.Z... 设置报告的版本。 From 0d8d697861cb9da6debd19d0e158be74ce6a1697 Mon Sep 17 00:00:00 2001 From: YAMADA Kazuaki Date: Mon, 9 Oct 2023 15:09:39 +0900 Subject: [PATCH 087/229] fix: anbiguous translation of diagnostic failed --- content/ja/docs/concepts/workloads/pods/pod-lifecycle.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md index 89d521098c809..b6bc7598f67b5 100644 --- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md +++ b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md @@ -206,7 +206,7 @@ probeを使ってコンテナをチェックする4つの異なる方法があ : コンテナの診断が失敗しました。 `Unknown` -: コンテナの診断が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。 +: コンテナの診断自体が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。 ### Probeの種類 {#types-of-probe} From d6817aeab52490df9ec2e2fbc85208f775b66eb3 Mon Sep 17 00:00:00 2001 From: Maciej Filocha Date: Sun, 8 Oct 2023 15:05:05 +0200 Subject: [PATCH 088/229] Update main Polish index page --- content/pl/_index.html | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/content/pl/_index.html b/content/pl/_index.html index 1fa0f72c4cd08..06312f7197954 100644 --- a/content/pl/_index.html +++ b/content/pl/_index.html @@ -4,9 +4,10 @@ cid: home sitemap: priority: 1.0 - --- +{{< site-searchbar >}} + {{< blocks/section id="oceanNodes" >}} {{% 
blocks/feature image="flower" %}} [Kubernetes]({{< relref "/docs/concepts/overview/" >}}), znany też jako K8s, to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach. @@ -58,5 +59,3 @@

The Challenges of Migrating 150+ Microservices to Kubernetes

{{< /blocks/section >}} - -{{< blocks/kubernetes-features >}} From 926770351c839258ef021432401973c9cce65cff Mon Sep 17 00:00:00 2001 From: Raul Mahiques <18713435+rmahique@users.noreply.github.com> Date: Mon, 9 Oct 2023 09:05:47 +0200 Subject: [PATCH 089/229] Added instructions for SUSE-based distributions (#42913) * Update install-kubectl-linux.md Added instructions for SUSE based distributions * Update change-package-repository.md Added a section for openSUSE and SLES distributions * Update content/en/docs/tasks/tools/install-kubectl-linux.md Co-authored-by: Michael * Update content/en/docs/tasks/tools/install-kubectl-linux.md Co-authored-by: Michael * Update content/en/docs/tasks/tools/install-kubectl-linux.md Co-authored-by: Michael --------- Co-authored-by: Michael --- .../kubeadm/change-package-repository.md | 26 ++++++++++++++++ .../docs/tasks/tools/install-kubectl-linux.md | 30 +++++++++++++++++++ 2 files changed, 56 insertions(+) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md index d39f2a4891e6e..db633d15ee253 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md @@ -66,6 +66,32 @@ exclude=kubelet kubeadm kubectl **You're using the Kubernetes package repositories and this guide applies to you.** Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories. 
+{{% /tab %}} + +{{% tab name="openSUSE or SLES" %}} + +Print the contents of the file that defines the Kubernetes `zypper` repository: + +```shell +# On your system, this configuration file could have a different name +cat /etc/zypp/repos.d/kubernetes.repo +``` + +If you see a `baseurl` similar to the `baseurl` in the output below: + +``` +[kubernetes] +name=Kubernetes +baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/ +enabled=1 +gpgcheck=1 +gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key +exclude=kubelet kubeadm kubectl +``` + +**You're using the Kubernetes package repositories and this guide applies to you.** +Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories. + {{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index 684f904b14bda..dafb88fa025a6 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -192,6 +192,36 @@ To upgrade kubectl to another minor release, you'll need to bump the version in sudo yum install -y kubectl ``` +{{% /tab %}} + +{{% tab name="SUSE-based distributions" %}} + +1. Add the Kubernetes `zypper` repository. If you want to use Kubernetes version + different than {{< param "version" >}}, replace {{< param "version" >}} with + the desired minor version in the command below. + + ```bash + # This overwrites any existing configuration in /etc/zypp/repos.d/kubernetes.repo + cat <<EOF | sudo tee /etc/zypp/repos.d/kubernetes.repo + [kubernetes] + name=Kubernetes + baseurl=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/ + enabled=1 + gpgcheck=1 + gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key + EOF + + {{< note >}} + To upgrade kubectl to another minor release, you'll need to bump the version in `/etc/zypp/repos.d/kubernetes.repo` + before running `zypper update`. 
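The check this patch describes — print the repo file and inspect its `baseurl` — can be scripted. A minimal sketch; the file below is a local stand-in, not the real `/etc/zypp/repos.d/kubernetes.repo`, and the version in the URL is only an example:

```shell
# Create a stand-in repo file and detect whether its baseurl points at the
# community-owned package repositories (pkgs.k8s.io).
cat <<'EOF' > kubernetes-repo-check.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
EOF
if grep -q 'pkgs.k8s.io' kubernetes-repo-check.repo; then
  echo "community-owned Kubernetes package repositories in use"
else
  echo "legacy repository detected - consider migrating"
fi
```

Pointing the same `grep` at the real file under `/etc/zypp/repos.d/` answers the question the guide asks.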
This procedure is described in more detail in
+   [Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/).
+   {{< /note >}}
+
+1. Install kubectl using `zypper`:
+
+   ```bash
+   sudo zypper install -y kubectl
+   ```
+
 {{% /tab %}}

 {{< /tabs >}}

From aabb993812d84497822fef09dd23166eb9d7ac26 Mon Sep 17 00:00:00 2001
From: Shubham
Date: Mon, 9 Oct 2023 15:39:01 +0530
Subject: [PATCH 090/229] Fix hyperlinks in Ingress concept (#43110)

* Fixed the Hyperlinks under the Ingress docs.

* Modified the hyperlink for Ingress spec.
---
 content/en/docs/concepts/services-networking/ingress.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md
index 836373b3b8898..cad103a7d05d0 100644
--- a/content/en/docs/concepts/services-networking/ingress.md
+++ b/content/en/docs/concepts/services-networking/ingress.md
@@ -84,7 +84,7 @@ is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/b
 Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers)
 support different annotations. Review the documentation for your choice of
 Ingress controller to learn which annotations are supported.
-The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
+The [Ingress spec](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec)
 has all the information needed to configure a load balancer or proxy server.
 Most importantly, it contains a list of rules matched against all incoming requests.
 The Ingress resource only supports rules for directing HTTP(S) traffic.
@@ -94,8 +94,8 @@ should be defined.
 
 There are some ingress controllers that work without the definition of a
 default `IngressClass`.
For example, the Ingress-NGINX controller can be
-configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
-`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
+configured with a [flag](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class)
+`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) though, to specify the
 default `IngressClass` as shown [below](#default-ingress-class).
 
 ### Ingress rules

From 0c63fb814b55ecac2828943d7665847d21cea387 Mon Sep 17 00:00:00 2001
From: Meenu Yadav <116630390+MeenuyD@users.noreply.github.com>
Date: Mon, 9 Oct 2023 15:47:27 +0530
Subject: [PATCH 091/229] fix:Mismatch of footer colour at Case-Study pages
 (#43021)

---
 assets/scss/_case-studies.scss | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/assets/scss/_case-studies.scss b/assets/scss/_case-studies.scss
index 4f44864127525..5c907d1c08809 100644
--- a/assets/scss/_case-studies.scss
+++ b/assets/scss/_case-studies.scss
@@ -1,7 +1,7 @@
 // SASS for Case Studies pages go here:
 
 hr {
-  background-color: #999999;
+  background-color: #303030;
   margin-top: 0;
 }

From 4e0c6cbf9b9dc52415203ee08aab62fbce86ddb3 Mon Sep 17 00:00:00 2001
From: alok0277 <120774363+alok0277@users.noreply.github.com>
Date: Mon, 9 Oct 2023 15:53:07 +0530
Subject: [PATCH 092/229] Fix banner image scaling on Case Studies pages
 (#43056)

* image flow is synchronized with the responsive design.
* Update single.html

* Update quote.html

* fixed banner image
---
 layouts/case-studies/single.html           | 2 +-
 layouts/shortcodes/case-studies/quote.html | 2 +-
 static/css/new-case-studies.css            | 3 ++-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/layouts/case-studies/single.html b/layouts/case-studies/single.html
index ad0335685acfd..b5bf43b7b8ba7 100644
--- a/layouts/case-studies/single.html
+++ b/layouts/case-studies/single.html
@@ -4,7 +4,7 @@
 {{ else }}
 {{ if .Params.new_case_study_styles }}
-