From 006788d13b4356ef76ccebcac9de89af99fd80a1 Mon Sep 17 00:00:00 2001
From: utkarsh-singh1
Date: Mon, 27 Mar 2023 16:48:42 +0530
Subject: [PATCH 001/229] Updated kubectl cheatsheet documentation
Signed-off-by: utkarsh-singh1
---
content/en/docs/reference/kubectl/cheatsheet.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index 54a0e439547be..9033fb2b22099 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -42,7 +42,7 @@ echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc
### FISH
-Require kubectl version 1.23 or more.
+Requires kubectl version 1.23 or above.
```bash
echo 'kubectl completion fish | source' >> ~/.config/fish/config.fish # add autocomplete permanently to your fish shell
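Editor's note: before relying on the fish completion added above, it may help to confirm the version requirement; a minimal sketch (output formats vary across kubectl releases):

```bash
# Confirm the client is 1.23 or newer before enabling fish completion.
kubectl version --client --output=yaml | grep gitVersion
# Reload fish so ~/.config/fish/config.fish takes effect in the current session.
exec fish
```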
From 024aa7aa0c4bb3240b925363f1b4ac0545221fd5 Mon Sep 17 00:00:00 2001
From: Milas Bowman
Date: Tue, 25 Apr 2023 08:42:48 -0400
Subject: [PATCH 002/229] editorconfig: preserve final newline in YAML
I'm not sure why this is defaulted to `false` for all file types,
so this just enables it for YAML for now.
Signed-off-by: Milas Bowman
---
.editorconfig | 3 +++
1 file changed, 3 insertions(+)
diff --git a/.editorconfig b/.editorconfig
index bc1dfe40c8faa..1a235f9c90853 100644
--- a/.editorconfig
+++ b/.editorconfig
@@ -16,5 +16,8 @@ indent_size = 2
indent_style = space
indent_size = 4
+[*.{yaml}]
+insert_final_newline = true
+
[Makefile]
indent_style = tab
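Editor's note: the new rule only takes effect as editors rewrite files; a hedged one-liner to list YAML files whose last byte is not yet a newline (a sketch, assumes GNU/BSD `tail`; paths with embedded newlines are not handled):

```bash
# Print any *.yaml file that does not end with a final newline.
find . -name '*.yaml' | while read -r f; do
  [ -n "$(tail -c1 "$f")" ] && echo "missing final newline: $f"
done
```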
From d9ef7b3849a85533258d8d2c878cba26854e8f87 Mon Sep 17 00:00:00 2001
From: iyear
Date: Tue, 25 Apr 2023 20:52:57 +0800
Subject: [PATCH 003/229] kubectl/jsonpath: add example of escaping termination
character
Signed-off-by: iyear
---
content/en/docs/reference/kubectl/jsonpath.md | 33 +++++++++++--------
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/content/en/docs/reference/kubectl/jsonpath.md b/content/en/docs/reference/kubectl/jsonpath.md
index f1aa5f8ec4946..7477430efffb3 100644
--- a/content/en/docs/reference/kubectl/jsonpath.md
+++ b/content/en/docs/reference/kubectl/jsonpath.md
@@ -34,7 +34,12 @@ Given the JSON input:
"items":[
{
"kind":"None",
- "metadata":{"name":"127.0.0.1"},
+ "metadata":{
+ "name":"127.0.0.1",
+ "labels":{
+ "kubernetes.io/hostname":"127.0.0.1"
+ }
+ },
"status":{
"capacity":{"cpu":"4"},
"addresses":[{"type": "LegacyHostIP", "address":"127.0.0.1"}]
@@ -65,18 +70,19 @@ Given the JSON input:
}
```
-Function | Description | Example | Result
---------------------|---------------------------|-----------------------------------------------------------------|------------------
-`text` | the plain text | `kind is {.kind}` | `kind is List`
-`@` | the current object | `{@}` | the same as input
-`.` or `[]` | child operator | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List`
-`..` | recursive descent | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e`
-`*` | wildcard. Get all objects | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]`
-`[start:end:step]` | subscript operator | `{.users[0].name}` | `myself`
-`[,]` | union operator | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]`
-`?()` | filter | `{.users[?(@.name=="e2e")].user.password}` | `secret`
-`range`, `end` | iterate list | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]`
-`''` | quote interpreted string | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2`
+Function | Description | Example | Result
+--------------------|------------------------------|-----------------------------------------------------------------|------------------
+`text` | the plain text | `kind is {.kind}` | `kind is List`
+`@` | the current object | `{@}` | the same as input
+`.` or `[]` | child operator | `{.kind}`, `{['kind']}` or `{['name\.type']}` | `List`
+`..` | recursive descent | `{..name}` | `127.0.0.1 127.0.0.2 myself e2e`
+`*` | wildcard. Get all objects | `{.items[*].metadata.name}` | `[127.0.0.1 127.0.0.2]`
+`[start:end:step]` | subscript operator | `{.users[0].name}` | `myself`
+`[,]` | union operator | `{.items[*]['metadata.name', 'status.capacity']}` | `127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]`
+`?()` | filter | `{.users[?(@.name=="e2e")].user.password}` | `secret`
+`range`, `end` | iterate list | `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}` | `[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]]`
+`''` | quote interpreted string | `{range .items[*]}{.metadata.name}{'\t'}{end}` | `127.0.0.1 127.0.0.2`
+`\` | escape termination character | `{.items[0].metadata.labels.kubernetes\.io/hostname}` | `127.0.0.1`
Examples using `kubectl` and JSONPath expressions:
@@ -87,6 +93,7 @@ kubectl get pods -o=jsonpath='{.items[0]}'
kubectl get pods -o=jsonpath='{.items[0].metadata.name}'
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.capacity']}"
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
+kubectl get pods -o=jsonpath='{.items[0].metadata.labels.kubernetes\.io/hostname}'
```
{{< note >}}
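Editor's note: combining the new table row with the kubectl examples, the escape can be exercised directly; the expected output comes from the sample JSON at the top of the page, not from a real cluster:

```bash
# The dots inside the label key are escaped so they are not read as child operators.
kubectl get nodes -o=jsonpath='{.items[0].metadata.labels.kubernetes\.io/hostname}'
# With the sample input above, this prints: 127.0.0.1
```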
From 9afdd652c3b6ba1779a4cd31c906863f6c5432a8 Mon Sep 17 00:00:00 2001
From: NitishKumar06
Date: Wed, 10 May 2023 10:28:17 +0530
Subject: [PATCH 004/229] Added secret keyword to kubectl-commands.html file
Signed-off-by: NitishKumar06
---
static/docs/reference/generated/kubectl/kubectl-commands.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/static/docs/reference/generated/kubectl/kubectl-commands.html b/static/docs/reference/generated/kubectl/kubectl-commands.html
index 4fff80b171a7d..474fcb6879daf 100644
--- a/static/docs/reference/generated/kubectl/kubectl-commands.html
+++ b/static/docs/reference/generated/kubectl/kubectl-commands.html
@@ -1735,7 +1735,7 @@
secret tls
Create a TLS secret from the given public/private key pair.
The public/private key pair must exist beforehand. The public key certificate must be .PEM encoded and match the given private key.
Usage
-$ kubectl create tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]
+$ kubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]
Flags
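Editor's note: a minimal end-to-end invocation of the corrected command might look like this (a sketch; the file names and CN are illustrative):

```bash
# Generate a throwaway self-signed pair, then build the secret from it.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt -subj "/CN=example.local"
# --dry-run=client prints the Secret manifest without creating it.
kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key \
  --dry-run=client -o yaml
```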
From 25bb30bac1fade2c63f16e0362bfe94700e7af52 Mon Sep 17 00:00:00 2001
From: hatfieldbrian <81722870+hatfieldbrian@users.noreply.github.com>
Date: Sat, 13 May 2023 04:23:57 -0700
Subject: [PATCH 005/229] Update api-concepts.md
Correct collection definition: a list of instances of a resource _type_
---
content/en/docs/reference/using-api/api-concepts.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/using-api/api-concepts.md b/content/en/docs/reference/using-api/api-concepts.md
index 6e6b0fe9ae563..5fb289977ed93 100644
--- a/content/en/docs/reference/using-api/api-concepts.md
+++ b/content/en/docs/reference/using-api/api-concepts.md
@@ -34,7 +34,7 @@ API concepts:
* A *resource type* is the name used in the URL (`pods`, `namespaces`, `services`)
* All resource types have a concrete representation (their object schema) which is called a *kind*
-* A list of instances of a resource is known as a *collection*
+* A list of instances of a resource type is known as a *collection*
* A single instance of a resource type is called a *resource*, and also usually represents an *object*
* For some resource types, the API includes one or more *sub-resources*, which are represented as URI paths below the resource
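Editor's note: the corrected terminology maps directly onto API URLs; a hedged illustration (`my-pod` is a hypothetical name):

```bash
# A collection: every Pod in a namespace (the response kind is PodList).
kubectl get --raw /api/v1/namespaces/default/pods
# A single resource: one instance of the resource type.
kubectl get --raw /api/v1/namespaces/default/pods/my-pod
```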
From 948dec77a865a5d98becd57bfd342325fdadf874 Mon Sep 17 00:00:00 2001
From: utkarsh-singh1
Date: Mon, 5 Jun 2023 15:41:17 +0530
Subject: [PATCH 006/229] Updated outdated links in prerequisites-ref-docs.md
Signed-off-by: utkarsh-singh1
---
.../contribute/generate-ref-docs/prerequisites-ref-docs.md | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
index c7199208130be..ff87f63aa1271 100644
--- a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
+++ b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
@@ -5,18 +5,17 @@
- You need to have these tools installed:
- - [Python](https://www.python.org/downloads/) v3.7.x
+ - [Python](https://www.python.org/downloads/) v3.7.x+
- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
- - [Golang](https://golang.org/doc/install) version 1.13+
+ - [Golang](https://go.dev/dl/) version 1.13+
- [Pip](https://pypi.org/project/pip/) used to install PyYAML
- [PyYAML](https://pyyaml.org/) v5.1.2
- [make](https://www.gnu.org/software/make/)
- [gcc compiler/linker](https://gcc.gnu.org/)
- - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference)
+ - [Docker](https://docs.docker.com/engine/installation/) (Required only for the `kubectl` command reference; note that Kubernetes is [moving on from dockershim](https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/))
- Your `PATH` environment variable must include the required build tools, such as the `Go` binary and `python`.
- You need to know how to create a pull request to a GitHub repository.
This involves creating your own fork of the repository. For more
information, see [Work from a local clone](/docs/contribute/new-content/open-a-pr/#fork-the-repo).
-
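Editor's note: a quick way to confirm the tools listed in this patch are present, as a sketch (version flags are the usual ones; adjust for your platform):

```bash
python3 --version    # want 3.7.x or later
git --version
go version           # want 1.13 or later
pip show PyYAML      # want 5.1.2
make --version
gcc --version
docker --version     # only needed for the kubectl command reference
```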
From d873f03e78f93c81b847f1d389141a7138479297 Mon Sep 17 00:00:00 2001
From: shubham82
Date: Mon, 19 Jun 2023 15:38:54 +0530
Subject: [PATCH 007/229] Add -subj Command Option.
---
.../access-authn-authz/certificate-signing-requests.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md
index a6513853a7991..8dd6337e6c235 100644
--- a/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md
+++ b/content/en/docs/reference/access-authn-authz/certificate-signing-requests.md
@@ -488,7 +488,7 @@ O is the group that this user will belong to. You can refer to
```shell
openssl genrsa -out myuser.key 2048
-openssl req -new -key myuser.key -out myuser.csr
+openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser"
```
### Create a CertificateSigningRequest {#create-certificatessigningrequest}
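Editor's note: the `-subj` flag keeps this step non-interactive; a sketch of the full sequence consistent with the surrounding page (the base64 output is what goes into the CertificateSigningRequest's `request` field):

```bash
openssl genrsa -out myuser.key 2048
# /CN=myuser sets the username without prompting for subject fields.
openssl req -new -key myuser.key -out myuser.csr -subj "/CN=myuser"
# Encode the CSR for embedding in a CertificateSigningRequest object.
base64 < myuser.csr | tr -d "\n"
```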
From 512b71177c95f631cc8d9b376ab8f0c56c76e1da Mon Sep 17 00:00:00 2001
From: Morgan Rowse
Date: Mon, 19 Jun 2023 15:16:49 +0200
Subject: [PATCH 008/229] Update list-all-running-container-images.md
also include initContainers
---
.../list-all-running-container-images.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md
index 0ae940296241b..5a5f41008e913 100644
--- a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md
+++ b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md
@@ -23,7 +23,7 @@ of Containers for each.
- Fetch all Pods in all namespaces using `kubectl get pods --all-namespaces`
- Format the output to include only the list of Container image names
- using `-o jsonpath={.items[*].spec.containers[*].image}`. This will recursively parse out the
+ using `-o jsonpath={.items[*].spec['initContainers', 'containers'][*].image}`. This will recursively parse out the
`image` field from the returned json.
- See the [jsonpath reference](/docs/reference/kubectl/jsonpath/)
for further information on how to use jsonpath.
@@ -33,7 +33,7 @@ of Containers for each.
- Use `uniq` to aggregate image counts
```shell
-kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
+kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
@@ -42,7 +42,7 @@ The jsonpath is interpreted as follows:
- `.items[*]`: for each returned value
- `.spec`: get the spec
-- `.containers[*]`: for each container
+- `['initContainers', 'containers'][*]`: for each container
- `.image`: get the image
{{< note >}}
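Editor's note: the same union expression also works when scoped to one namespace; a hedged variant of the pipeline above:

```bash
kubectl get pods -n kube-system \
  -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" |
tr -s '[[:space:]]' '\n' |
sort |
uniq -c
```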
From 810a7cc2c0111f0e26db2f448df57f1ec20a4cf8 Mon Sep 17 00:00:00 2001
From: Kevin Grigorenko
Date: Mon, 26 Jun 2023 13:41:50 -0500
Subject: [PATCH 009/229] Clarify IBM Java and IBM Semeru Runtimes cgroupsV2
support
Signed-off-by: Kevin Grigorenko
---
content/en/blog/_posts/2022-08-31-cgroupv2-ga.md | 4 ++--
content/en/docs/concepts/architecture/cgroups.md | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md b/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md
index d4345195746b8..4071d4458160d 100644
--- a/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md
+++ b/content/en/blog/_posts/2022-08-31-cgroupv2-ga.md
@@ -118,8 +118,8 @@ Scenarios in which you might need to update to cgroup v2 include the following:
DaemonSet for monitoring pods and containers, update it to v0.43.0 or later.
* If you deploy Java applications, prefer to use versions which fully support cgroup v2:
* [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later
- * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later
- * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later
+ * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later
+ * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later
## Learn more
diff --git a/content/en/docs/concepts/architecture/cgroups.md b/content/en/docs/concepts/architecture/cgroups.md
index b0a98af6604b0..b96d89e0d6dd4 100644
--- a/content/en/docs/concepts/architecture/cgroups.md
+++ b/content/en/docs/concepts/architecture/cgroups.md
@@ -104,8 +104,8 @@ updated to newer versions that support cgroup v2. For example:
DaemonSet for monitoring pods and containers, update it to v0.43.0 or later.
* If you deploy Java applications, prefer to use versions which fully support cgroup v2:
* [OpenJDK / HotSpot](https://bugs.openjdk.org/browse/JDK-8230305): jdk8u372, 11.0.16, 15 and later
- * [IBM Semeru Runtimes](https://www.eclipse.org/openj9/docs/version0.33/#control-groups-v2-support): jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later
- * [IBM Java](https://www.ibm.com/docs/en/sdk-java-technology/8?topic=new-service-refresh-7#whatsnew_sr7__fp15): 8.0.7.15 and later
+ * [IBM Semeru Runtimes](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.382.0, 11.0.20.0, 17.0.8.0, and later
+ * [IBM Java](https://www.ibm.com/support/pages/apar/IJ46681): 8.0.8.6 and later
* If you are using the [uber-go/automaxprocs](https://github.com/uber-go/automaxprocs) package, make sure
the version you use is v1.5.1 or higher.
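Editor's note: before worrying about the JVM versions above, you can check which cgroup version a node actually runs; the same check appears in the cgroups concept page:

```bash
stat -fc %T /sys/fs/cgroup/
# cgroup2fs -> cgroup v2
# tmpfs     -> cgroup v1
```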
From cfb6309c56eeb0ea8d56c0eb6d153cff2c043de7 Mon Sep 17 00:00:00 2001
From: Simon Engmann
Date: Mon, 17 Jul 2023 12:37:02 +0200
Subject: [PATCH 010/229] Fix example errors for CrossNamespacePodAffinity
Remove references to CrossNamespaceAffinity
The scope CrossNamespaceAffinity does not exist. Attempting to feed the example
YAML to `kubectl create` results in the following error:
> The ResourceQuota "disable-cross-namespace-affinity" is invalid:
> * spec.scopeSelector.matchExpressions.scopeName: Invalid value:
> "CrossNamespaceAffinity": unsupported scope
Add missing operator for CrossNamespacePodAffinity
Trying to create the example ResourceQuotas without an operator results in the
following error from `kubectl create`:
> The ResourceQuota "disable-cross-namespace-affinity" is invalid:
> * spec.scopeSelector.matchExpressions.operator: Invalid value: "": must be
> 'Exist' when scope is any of ResourceQuotaScopeTerminating,
> ResourceQuotaScopeNotTerminating, ResourceQuotaScopeBestEffort,
> ResourceQuotaScopeNotBestEffort or
> ResourceQuotaScopeCrossNamespacePodAffinity
> * spec.scopeSelector.matchExpressions.operator: Invalid value: "": not a valid
> selector operator
The error message itself has another bug, as the operator is Exist*s*, not
Exist.
Signed-off-by: Simon Engmann
---
content/en/docs/concepts/policy/resource-quotas.md | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md
index c67880458a355..2186c071d1602 100644
--- a/content/en/docs/concepts/policy/resource-quotas.md
+++ b/content/en/docs/concepts/policy/resource-quotas.md
@@ -465,7 +465,7 @@ from getting scheduled in a failure domain.
Using this scope operators can prevent certain namespaces (`foo-ns` in the example below)
from having pods that use cross-namespace pod affinity by creating a resource quota object in
-that namespace with `CrossNamespaceAffinity` scope and hard limit of 0:
+that namespace with `CrossNamespacePodAffinity` scope and hard limit of 0:
```yaml
apiVersion: v1
@@ -478,11 +478,12 @@ spec:
pods: "0"
scopeSelector:
matchExpressions:
- - scopeName: CrossNamespaceAffinity
+ - scopeName: CrossNamespacePodAffinity
+ operator: Exists
```
If operators want to disallow using `namespaces` and `namespaceSelector` by default, and
-only allow it for specific namespaces, they could configure `CrossNamespaceAffinity`
+only allow it for specific namespaces, they could configure `CrossNamespacePodAffinity`
as a limited resource by setting the kube-apiserver flag --admission-control-config-file
to the path of the following configuration file:
@@ -497,12 +498,13 @@ plugins:
limitedResources:
- resource: pods
matchScopes:
- - scopeName: CrossNamespaceAffinity
+ - scopeName: CrossNamespacePodAffinity
+ operator: Exists
```
With the above configuration, pods can use `namespaces` and `namespaceSelector` in pod affinity only
if the namespace where they are created have a resource quota object with
-`CrossNamespaceAffinity` scope and a hard limit greater than or equal to the number of pods using those fields.
+`CrossNamespacePodAffinity` scope and a hard limit greater than or equal to the number of pods using those fields.
## Requests compared to Limits {#requests-vs-limits}
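Editor's note: putting the two fixes together, the corrected object can be verified directly against the API server (a sketch using a heredoc; `foo-ns` is the example namespace from the page):

```bash
kubectl create -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: disable-cross-namespace-affinity
  namespace: foo-ns
spec:
  hard:
    pods: "0"
  scopeSelector:
    matchExpressions:
    - scopeName: CrossNamespacePodAffinity
      operator: Exists
EOF
```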
From 8face3d0085c8dd9a5eb02c644464d6be9fb3a88 Mon Sep 17 00:00:00 2001
From: Alex Serbul <22218473+AlexanderSerbul@users.noreply.github.com>
Date: Wed, 19 Jul 2023 13:16:49 +0300
Subject: [PATCH 011/229] Update connect-applications-service.md
We are talking about pods here, not about services yet.
---
.../en/docs/tutorials/services/connect-applications-service.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md
index 7425fa119bc67..24245584fafb8 100644
--- a/content/en/docs/tutorials/services/connect-applications-service.md
+++ b/content/en/docs/tutorials/services/connect-applications-service.md
@@ -59,7 +59,7 @@ to make queries against both IPs. Note that the containers are *not* using port
the node, nor are there any special NAT rules to route traffic to the pod. This means
you can run multiple nginx pods on the same node all using the same `containerPort`,
and access them from any other pod or node in your cluster using the assigned IP
-address for the Service. If you want to arrange for a specific port on the host
+address for the pod. If you want to arrange for a specific port on the host
Node to be forwarded to backing Pods, you can - but the networking model should
mean that you do not need to do so.
From 9ad322dabc4f9f915ff1fcbc99a84a179bcca6ac Mon Sep 17 00:00:00 2001
From: Alex Serbul <22218473+AlexanderSerbul@users.noreply.github.com>
Date: Wed, 19 Jul 2023 13:50:34 +0300
Subject: [PATCH 012/229] Update connect-applications-service.md
Semantic fix
---
.../en/docs/tutorials/services/connect-applications-service.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md
index 24245584fafb8..c1aaf87411c71 100644
--- a/content/en/docs/tutorials/services/connect-applications-service.md
+++ b/content/en/docs/tutorials/services/connect-applications-service.md
@@ -189,7 +189,7 @@ Note there's no mention of your Service. This is because you created the replica
before the Service. Another disadvantage of doing this is that the scheduler might
put both Pods on the same machine, which will take your entire Service down if
it dies. We can do this the right way by killing the 2 Pods and waiting for the
-Deployment to recreate them. This time around the Service exists *before* the
+Deployment to recreate them. This time the Service exists *before* the
replicas. This will give you scheduler-level Service spreading of your Pods
(provided all your nodes have equal capacity), as well as the right environment
variables:
From b1a61e5916ad381725d1328d3ae7737dbe40b991 Mon Sep 17 00:00:00 2001
From: Lino Ngando <12659036+lngando@users.noreply.github.com>
Date: Thu, 20 Jul 2023 10:37:06 +0200
Subject: [PATCH 013/229] Update connect-applications-service.md
It is the ReplicaSet that recreates dead pods, not the Deployment
---
.../en/docs/tutorials/services/connect-applications-service.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md
index 7425fa119bc67..bd14faf47f7a7 100644
--- a/content/en/docs/tutorials/services/connect-applications-service.md
+++ b/content/en/docs/tutorials/services/connect-applications-service.md
@@ -71,7 +71,7 @@ if you're curious.
So we have pods running nginx in a flat, cluster wide, address space. In theory,
you could talk to these pods directly, but what happens when a node dies? The pods
-die with it, and the Deployment will create new ones, with different IPs. This is
+die with it, and the ReplicaSet inside the Deployment will create new ones, with different IPs. This is
the problem a Service solves.
A Kubernetes Service is an abstraction which defines a logical set of Pods running
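Editor's note: the behaviour described can be observed directly; a hedged sketch against the tutorial's `run: my-nginx` Deployment:

```bash
kubectl get pods -l run=my-nginx -o wide   # note the current Pod IPs
kubectl delete pods -l run=my-nginx        # the ReplicaSet replaces them
kubectl get pods -l run=my-nginx -o wide   # new Pods, new IPs
```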
From d263d8c938c9e679d879b4fc10fed04f2e2fce0d Mon Sep 17 00:00:00 2001
From: Marcelo Giles
Date: Fri, 21 Jul 2023 00:20:34 -0700
Subject: [PATCH 014/229] Add note for Linux distros that don't include glibc
Reorder sentence
---
.../tools/kubeadm/install-kubeadm.md | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 6fbe04549d777..b93e23d80b63e 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -29,6 +29,14 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too
* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
* For example, `sudo swapoff -a` will disable swapping temporarily. To make this change persistent across reboots, make sure swap is disabled in config files like `/etc/fstab`, `systemd.swap`, depending how it was configured on your system.
+{{< note >}}
+The `kubeadm` installation is done via binaries that use dynamic linking and assumes that your target system provides `glibc`.
+This is a reasonable assumption on many Linux distributions (including Debian, Ubuntu, Fedora, CentOS, etc.)
+but it is not always the case with custom and lightweight distributions which don't include `glibc` by default, such as Alpine Linux.
+The expectation is that the distribution either includes `glibc` or a [compatibility layer](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)
+that provides the expected symbols.
+{{< /note >}}
+
## Verify the MAC address and product_uuid are unique for every node {#verify-mac-address}
@@ -259,6 +267,10 @@ sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
+{{< note >}}
+Please refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions that do not include `glibc` by default.
+{{< /note >}}
+
Install `kubectl` by following the instructions on [Install Tools page](/docs/tasks/tools/#kubectl).
Enable and start `kubelet`:
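Editor's note: one hedged way to check whether a target system provides `glibc` before installing the dynamically linked binaries:

```bash
# glibc-based systems print something like "ldd (GNU libc) 2.35".
# musl-based distros such as Alpine print "musl libc" instead and need
# a compatibility layer for the kubeadm binaries.
ldd --version | head -n 1
```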
From 59eff873d88bbc9f64fad9a2f14c53d144f52c91 Mon Sep 17 00:00:00 2001
From: Marcelo Giles
Date: Fri, 21 Jul 2023 22:51:38 -0700
Subject: [PATCH 015/229] Add note after restore cmd to specify that data-dir
will be (re)created
---
.../configure-upgrade-etcd.md | 22 ++++++++++++-------
1 file changed, 14 insertions(+), 8 deletions(-)
diff --git a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
index 3b4f90770bf84..f0b0786c73a25 100644
--- a/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
+++ b/content/en/docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -271,16 +271,16 @@ that is not currently used by an etcd process. Taking the snapshot will
not affect the performance of the member.
Below is an example for taking a snapshot of the keyspace served by
-`$ENDPOINT` to the file `snapshotdb`:
+`$ENDPOINT` to the file `snapshot.db`:
```shell
-ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
+ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
```
Verify the snapshot:
```shell
-ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
+ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db
```
```console
@@ -339,19 +339,25 @@ employed to recover the data of a failed cluster.
Before starting the restore operation, a snapshot file must be present. It can
either be a snapshot file from a previous backup operation, or from a remaining
[data directory](https://etcd.io/docs/current/op-guide/configuration/#--data-dir).
+
Here is an example:
```shell
-ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshotdb
+ETCDCTL_API=3 etcdctl --endpoints 10.2.0.9:2379 snapshot restore snapshot.db
```
-Another example for restoring using etcdctl options:
+
+Another example for restoring using `etcdctl` options:
+
```shell
-ETCDCTL_API=3 etcdctl snapshot restore --data-dir snapshotdb
+ETCDCTL_API=3 etcdctl --data-dir <data-dir-location> snapshot restore snapshot.db
```
-Yet another example would be to first export the environment variable
+where `<data-dir-location>` is a directory that will be created during the restore process.
+
+Yet another example would be to first export the `ETCDCTL_API` environment variable:
+
```shell
export ETCDCTL_API=3
-etcdctl snapshot restore --data-dir snapshotdb
+etcdctl --data-dir <data-dir-location> snapshot restore snapshot.db
```
For more information and examples on restoring a cluster from a snapshot file, see
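Editor's note: read together, the snapshot and restore steps line up as follows (a sketch; `/var/lib/etcd-restored` is an illustrative directory that must not exist beforehand):

```bash
export ETCDCTL_API=3
etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
etcdctl --write-out=table snapshot status snapshot.db
# The restore (re)creates the target data directory.
etcdctl --data-dir /var/lib/etcd-restored snapshot restore snapshot.db
```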
From 448c2a53b6d1dccc3cc04346a99afeea3913c42b Mon Sep 17 00:00:00 2001
From: Kundan Kumar
Date: Fri, 21 Jul 2023 22:45:08 +0530
Subject: [PATCH 016/229] what's next for network plugin
incorporated review comments
---
.../compute-storage-net/network-plugins.md | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index 5c6cfa7fc5842..a58f41abde860 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -172,3 +172,9 @@ metadata:
## {{% heading "whatsnext" %}}
+* Learn about [Network Policies](/docs/concepts/services-networking/network-policies/) using network
+ plugins
+* Learn about [Cluster Networking](/docs/concepts/cluster-administration/networking/)
+ with network plugins
+* Learn about [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)
+
From f2374598b5c65871aa312193e383d734033d6b0b Mon Sep 17 00:00:00 2001
From: Andrey Goran
Date: Tue, 25 Jul 2023 12:32:16 +0400
Subject: [PATCH 017/229] ID localization: replaced {{< codenew ... >}} with
{{% codenew ... %}} in all files
---
.../concepts/cluster-administration/logging.md | 10 +++++-----
.../manage-deployment.md | 2 +-
.../working-with-objects/kubernetes-objects.md | 2 +-
.../concepts/policy/pod-security-policy.md | 6 +++---
.../scheduling-eviction/assign-pod-node.md | 6 +++---
...tries-to-pod-etc-hosts-with-host-aliases.md | 2 +-
.../connect-applications-service.md | 8 ++++----
.../services-networking/dns-pod-service.md | 2 +-
.../concepts/services-networking/dual-stack.md | 6 +++---
.../concepts/services-networking/ingress.md | 2 +-
.../workloads/controllers/daemonset.md | 2 +-
.../workloads/controllers/deployment.md | 2 +-
.../controllers/garbage-collection.md | 2 +-
.../docs/concepts/workloads/controllers/job.md | 2 +-
.../workloads/controllers/replicaset.md | 6 +++---
.../controllers/replicationcontroller.md | 2 +-
.../pods/pod-topology-spread-constraints.md | 6 +++---
.../dns-debugging-resolution.md | 2 +-
.../memory-constraint-namespace.md | 10 +++++-----
.../assign-memory-resource.md | 6 +++---
.../assign-pods-nodes-using-node-affinity.md | 4 ++--
...figure-liveness-readiness-startup-probes.md | 6 +++---
.../configure-persistent-volume-storage.md | 6 +++---
.../configure-pod-configmap.md | 18 +++++++++---------
.../configure-service-account.md | 2 +-
.../configure-volume-storage.md | 2 +-
.../pull-image-private-registry.md | 2 +-
.../quality-service-pod.md | 8 ++++----
.../security-context.md | 8 ++++----
.../share-process-namespace.md | 2 +-
.../debug-application-introspection.md | 2 +-
.../get-shell-running-container.md | 2 +-
.../define-command-argument-container.md | 2 +-
.../define-environment-variable-container.md | 2 +-
.../distribute-credentials-secure.md | 10 +++++-----
.../job/automated-tasks-with-cron-jobs.md | 2 +-
.../declarative-config.md | 10 +++++-----
.../horizontal-pod-autoscale-walkthrough.md | 4 ++--
.../run-stateless-application-deployment.md | 6 +++---
content/id/docs/tutorials/hello-minikube.md | 4 ++--
.../stateful-application/basic-stateful-set.md | 4 ++--
.../expose-external-ip-address.md | 2 +-
42 files changed, 97 insertions(+), 97 deletions(-)
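Editor's note: a change of this shape is typically scripted; one hedged sketch of how such a bulk shortcode replacement might be produced (not necessarily how this patch was made):

```bash
# Rewrite {{< codenew ... >}} to {{% codenew ... %}} across the ID localization.
grep -rl '{{< codenew ' content/id/ |
  xargs sed -i 's/{{< codenew \(.*\) >}}/{{% codenew \1 %}}/g'
```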
diff --git a/content/id/docs/concepts/cluster-administration/logging.md b/content/id/docs/concepts/cluster-administration/logging.md
index e00745d1d979f..33266635c333d 100644
--- a/content/id/docs/concepts/cluster-administration/logging.md
+++ b/content/id/docs/concepts/cluster-administration/logging.md
@@ -21,7 +21,7 @@ The cluster-level logging architecture described below assumes
In this section, you can see an example of basic logging in Kubernetes that emits data to standard output. The following demonstration uses a [pod specification](/examples/debug/counter-pod.yaml) with a container that writes some text to standard output every second.
-{{< codenew file="debug/counter-pod.yaml" >}}
+{{% codenew file="debug/counter-pod.yaml" %}}
To run this pod, use the following command:
@@ -126,13 +126,13 @@ Using this approach you can separate the log streams from the different parts
For example, suppose a pod runs a single container, and the container writes to two different log files, using two different formats. Here is the configuration file for the Pod:
-{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod.yaml" %}}
It would be messy to emit log entries of different formats in the same log stream, even if you could redirect both to the container's `stdout`. Instead, you can use two sidecar containers. Each sidecar container can read a particular log file from a shared volume and then redirect the logs to its own `stdout`.
Here is the configuration file for a pod that has two sidecar containers:
-{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" %}}
When you run this pod, you can access each log stream separately by running the following commands:
@@ -175,7 +175,7 @@ Using a logging agent in a sidecar container can result in significant resource
For example, you can use [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/),
which uses fluentd as a logging agent. Here are two configuration files that you can use to implement this approach. The first file contains a [ConfigMap](/id/docs/tasks/configure-pod-container/configure-pod-configmap/) to configure fluentd.
-{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
+{{% codenew file="admin/logging/fluentd-sidecar-config.yaml" %}}
{{< note >}}
Configuring fluentd is beyond the scope of this article. For more information about configuring fluentd, see the [official fluentd documentation](http://docs.fluentd.org/).
@@ -183,7 +183,7 @@ Configuring fluentd is beyond the scope of this article. For more information
The second file describes a pod that has a sidecar container running fluentd. The pod mounts a volume that fluentd uses to pick up its configuration data.
-{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
+{{% codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" %}}
After some time, you will find the log messages in the Stackdriver interface.
diff --git a/content/id/docs/concepts/cluster-administration/manage-deployment.md b/content/id/docs/concepts/cluster-administration/manage-deployment.md
index 4bdc7f790e6dc..12525e39e1754 100644
--- a/content/id/docs/concepts/cluster-administration/manage-deployment.md
+++ b/content/id/docs/concepts/cluster-administration/manage-deployment.md
@@ -17,7 +17,7 @@ You have deployed your application and exposed it through a
Many applications require multiple resources, such as a Deployment and a Service. Managing multiple resources can be simplified by grouping them in the same file (separated by `---` in YAML). For example:
-{{< codenew file="application/nginx-app.yaml" >}}
+{{% codenew file="application/nginx-app.yaml" %}}
Multiple resources can be created as if they were a single resource:
diff --git a/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md
index aa702827b9ad4..5195243acb70a 100644
--- a/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md
+++ b/content/id/docs/concepts/overview/working-with-objects/kubernetes-objects.md
@@ -64,7 +64,7 @@ will convert the information you provide into JSON format when making
Here is an example `.yaml` file that shows the required fields and object spec for a Deployment:
-{{< codenew file="application/deployment.yaml" >}}
+{{% codenew file="application/deployment.yaml" %}}
One way to create a Deployment using a `.yaml` file
like the one described above is to use the command
diff --git a/content/id/docs/concepts/policy/pod-security-policy.md b/content/id/docs/concepts/policy/pod-security-policy.md
index 3646246150e83..d89e6ca7398f3 100644
--- a/content/id/docs/concepts/policy/pod-security-policy.md
+++ b/content/id/docs/concepts/policy/pod-security-policy.md
@@ -146,7 +146,7 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
Define the example PodSecurityPolicy object in a file. This is a policy that prevents the creation of privileged Pods.
-{{< codenew file="policy/example-psp.yaml" >}}
+{{% codenew file="policy/example-psp.yaml" %}}
And create the PodSecurityPolicy with `kubectl`:
@@ -297,11 +297,11 @@ podsecuritypolicy "example" deleted
Here is the least restrictive policy you can create, equivalent to not using the Pod Security Policy admission controller:
-{{< codenew file="policy/privileged-psp.yaml" >}}
+{{% codenew file="policy/privileged-psp.yaml" %}}
Here is an example of a restrictive policy that requires users to run as an unprivileged user, blocks possible escalations to root, and requires the use of several security mechanisms.
-{{< codenew file="policy/restricted-psp.yaml" >}}
+{{% codenew file="policy/restricted-psp.yaml" %}}
## Policy Reference
diff --git a/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md
index 4f0f838db5204..6139b0b2d00f0 100644
--- a/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/id/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -52,7 +52,7 @@ spec:
Then add a `nodeSelector` like the following:
-{{< codenew file="pods/pod-nginx.yaml" >}}
+{{% codenew file="pods/pod-nginx.yaml" %}}
When you run the command `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`, the pod will be scheduled onto a node that has the specified label. You can verify that the nodeSelector was applied by running `kubectl get pods -o wide` and looking at the "NODE" the Pod was assigned to.
@@ -110,7 +110,7 @@ Node affinity is specified as the `nodeAffinity` field of the `affinity` field
Here is an example of a pod that uses node affinity:
-{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
+{{% codenew file="pods/pod-with-node-affinity.yaml" %}}
This node affinity rule says the pod can only be placed on a node with a label whose key is `kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition, among nodes that meet those criteria, nodes with a label whose key is `another-node-label-key` and whose value is `another-node-label-value` should be preferred.
@@ -151,7 +151,7 @@ Inter-pod affinity is specified as the `podAffinity` field of the `affinity`
#### Example of a pod that uses pod affinity:
-{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}
+{{% codenew file="pods/pod-with-pod-affinity.yaml" %}}
The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this example, the `podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution`
while the `podAntiAffinity` is `preferredDuringSchedulingIgnoredDuringExecution`. The pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone as at least one already-running pod that has a label with key "security" and value "S1". (More precisely, the pod can run on node N if node N has a label with key `failure-domain.beta.kubernetes.io/zone` and some value V such that there is at least one node in the cluster with key `failure-domain.beta.kubernetes.io/zone` and value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity rule says that the pod prefers not to be scheduled onto a node if that node is already running a pod with a label with key "security" and value "S2". (If the `topologyKey` is `failure-domain.beta.kubernetes.io/zone` then it means the pod cannot be scheduled onto a node if that node is in the same zone as a pod with a label with key "security" and value "S2".) See the [design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
diff --git a/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md b/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md
index 26a2473f460c9..bd0f5339c8013 100644
--- a/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md
+++ b/content/id/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases.md
@@ -68,7 +68,7 @@ In addition to the default boilerplate, we can add entries to the file
`bar.remote` resolving to `10.1.2.3`, we can do so by adding
HostAliases to the Pod under the `.spec.hostAliases` field:
-{{< codenew file="service/networking/hostaliases-pod.yaml" >}}
+{{% codenew file="service/networking/hostaliases-pod.yaml" %}}
This Pod can then be started with the following command:
diff --git a/content/id/docs/concepts/services-networking/connect-applications-service.md b/content/id/docs/concepts/services-networking/connect-applications-service.md
index 545f5a76e129a..b4fee74d27dae 100644
--- a/content/id/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/id/docs/concepts/services-networking/connect-applications-service.md
@@ -25,7 +25,7 @@ This guide uses a simple nginx server to demonstrate the concept
We did this in several earlier examples, but let's do it once more and focus on the networking perspective. Create an nginx Pod, and note that it has a container port specification:
-{{< codenew file="service/networking/run-my-nginx.yaml" >}}
+{{% codenew file="service/networking/run-my-nginx.yaml" %}}
This makes the application accessible from any node in your cluster. Check the nodes the Pods are running on:
```shell
@@ -66,7 +66,7 @@ service/my-nginx exposed
The command above is equivalent to `kubectl apply -f` with the following yaml:
-{{< codenew file="service/networking/nginx-svc.yaml" >}}
+{{% codenew file="service/networking/nginx-svc.yaml" %}}
This specification creates a Service that targets TCP port 80 on any Pod with the `run: my-nginx` label and exposes it on a Service port (`targetPort` is the port the container accepts traffic on, `port` is the service port, which can be any port other Pods use to access the Service).
@@ -253,7 +253,7 @@ nginxsecret Opaque 2 1m
Now modify the nginx replicas to start an https server using the certificate in the secret, and the Service to expose both ports (80 and 443):
-{{< codenew file="service/networking/nginx-secure-app.yaml" >}}
+{{% codenew file="service/networking/nginx-secure-app.yaml" %}}
Here are some important notes about the nginx-secure-app manifest:
@@ -281,7 +281,7 @@ node $ curl -k https://10.244.3.5
Note that we used the `-k` parameter with curl; this is because we don't know anything about the Pods running nginx at certificate generation time, so we have to tell curl to ignore the CName mismatch. By creating a Service, we link the CName used in the certificate with the DNS name used by the Pods. Let's test this from a Pod (the same secret is reused for simplicity; the Pod only needs nginx.crt to access the Service)
-{{< codenew file="service/networking/curlpod.yaml" >}}
+{{% codenew file="service/networking/curlpod.yaml" %}}
```shell
kubectl apply -f ./curlpod.yaml
diff --git a/content/id/docs/concepts/services-networking/dns-pod-service.md b/content/id/docs/concepts/services-networking/dns-pod-service.md
index efdba8d7a13be..a7dc17f96f4cd 100644
--- a/content/id/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/id/docs/concepts/services-networking/dns-pod-service.md
@@ -225,7 +225,7 @@ in the `dnsConfig` field:
Below is an example of a Pod with custom DNS settings:
-{{< codenew file="service/networking/custom-dns.yaml" >}}
+{{% codenew file="service/networking/custom-dns.yaml" %}}
When the Pod above is created, the `test` Container
gets the following contents in its `/etc/resolv.conf` file:
diff --git a/content/id/docs/concepts/services-networking/dual-stack.md b/content/id/docs/concepts/services-networking/dual-stack.md
index 52e892f4f4704..6faed791617ff 100644
--- a/content/id/docs/concepts/services-networking/dual-stack.md
+++ b/content/id/docs/concepts/services-networking/dual-stack.md
@@ -96,19 +96,19 @@ Kubernetes will allocate an IP address (also known as
the "cluster IP") from the first configured `service-cluster-ip-range`
for this Service.
-{{< codenew file="service/networking/dual-stack-default-svc.yaml" >}}
+{{% codenew file="service/networking/dual-stack-default-svc.yaml" %}}
The following Service specification includes the `ipFamily` field, so Kubernetes
will allocate an IPv6 address (also known as a "cluster IP")
from the `service-cluster-ip-range` configured for this Service.
-{{< codenew file="service/networking/dual-stack-ipv6-svc.yaml" >}}
+{{% codenew file="service/networking/dual-stack-ipv6-svc.yaml" %}}
For comparison, the following Service specification will be allocated an
IPv4 address (also known as a "cluster IP") from the `service-cluster-ip-range`
configured for this Service.
-{{< codenew file="service/networking/dual-stack-ipv4-svc.yaml" >}}
+{{% codenew file="service/networking/dual-stack-ipv4-svc.yaml" %}}
### Type LoadBalancer
diff --git a/content/id/docs/concepts/services-networking/ingress.md b/content/id/docs/concepts/services-networking/ingress.md
index 84db01b37e827..8e129ff223cf7 100644
--- a/content/id/docs/concepts/services-networking/ingress.md
+++ b/content/id/docs/concepts/services-networking/ingress.md
@@ -132,7 +132,7 @@ will be routed to the default backend.
There are Kubernetes concepts that allow you to expose a Service; see [other alternatives](#alternatif-lain).
You can also create an Ingress specification with a default backend and no rules.
-{{< codenew file="service/networking/ingress.yaml" >}}
+{{% codenew file="service/networking/ingress.yaml" %}}
If you use `kubectl apply -f` you should see:
diff --git a/content/id/docs/concepts/workloads/controllers/daemonset.md b/content/id/docs/concepts/workloads/controllers/daemonset.md
index ea21a7b268fc7..4edafc7f444af 100644
--- a/content/id/docs/concepts/workloads/controllers/daemonset.md
+++ b/content/id/docs/concepts/workloads/controllers/daemonset.md
@@ -37,7 +37,7 @@ You can define a DaemonSet in a YAML file. For example, the file
`daemonset.yaml` below defines a DaemonSet that runs the Docker image
fluentd-elasticsearch:
-{{< codenew file="controllers/daemonset.yaml" >}}
+{{% codenew file="controllers/daemonset.yaml" %}}
* Create the DaemonSet based on the YAML file:
```
diff --git a/content/id/docs/concepts/workloads/controllers/deployment.md b/content/id/docs/concepts/workloads/controllers/deployment.md
index 18f1542418e33..f6a3244174fe0 100644
--- a/content/id/docs/concepts/workloads/controllers/deployment.md
+++ b/content/id/docs/concepts/workloads/controllers/deployment.md
@@ -41,7 +41,7 @@ The following are typical use cases for Deployments:
The following is an example of a Deployment. It creates a ReplicaSet to bring up three `nginx` Pods:
-{{< codenew file="controllers/nginx-deployment.yaml" >}}
+{{% codenew file="controllers/nginx-deployment.yaml" %}}
In this example:
diff --git a/content/id/docs/concepts/workloads/controllers/garbage-collection.md b/content/id/docs/concepts/workloads/controllers/garbage-collection.md
index 5eb00cf987caa..121d148b2f20b 100644
--- a/content/id/docs/concepts/workloads/controllers/garbage-collection.md
+++ b/content/id/docs/concepts/workloads/controllers/garbage-collection.md
@@ -22,7 +22,7 @@ You can also specify the relationship between owners and dependents by
Here is the file for a ReplicaSet that has three Pods:
-{{< codenew file="controllers/replicaset.yaml" >}}
+{{% codenew file="controllers/replicaset.yaml" %}}
If you create the ReplicaSet and then look at the Pod metadata, you will see the OwnerReferences field:
diff --git a/content/id/docs/concepts/workloads/controllers/job.md b/content/id/docs/concepts/workloads/controllers/job.md
index 4a7cce3f2a4a3..03a58ea21223b 100644
--- a/content/id/docs/concepts/workloads/controllers/job.md
+++ b/content/id/docs/concepts/workloads/controllers/job.md
@@ -33,7 +33,7 @@ Here is an example Job config. It computes π to
2000 places and prints it out. The Job takes
around 10 seconds to complete.
-{{< codenew file="controllers/job.yaml" >}}
+{{% codenew file="controllers/job.yaml" %}}
You can run the example with the following command:
diff --git a/content/id/docs/concepts/workloads/controllers/replicaset.md b/content/id/docs/concepts/workloads/controllers/replicaset.md
index 57b1124208a91..e43ccc57c0ac6 100644
--- a/content/id/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/id/docs/concepts/workloads/controllers/replicaset.md
@@ -29,7 +29,7 @@ This means you may never need to manipulate ReplicaSet objects
## Example
-{{< codenew file="controllers/frontend.yaml" >}}
+{{% codenew file="controllers/frontend.yaml" %}}
Saving this manifest as `frontend.yaml` and submitting it to a Kubernetes cluster will create the defined ReplicaSet along with the Pods it manages.
@@ -131,7 +131,7 @@ While you can create bare Pods without problems, it is strongly recommended to
Taking the previous frontend ReplicaSet example, and the Pods specified in the following manifest:
-{{< codenew file="pods/pod-rs.yaml" >}}
+{{% codenew file="pods/pod-rs.yaml" %}}
Because those Pods do not have a Controller (or any other object) as their owner reference matching the selector of the frontend ReplicaSet, they will immediately be acquired by it.
@@ -257,7 +257,7 @@ The number of Pods in a ReplicaSet can be adjusted by changing the value of the `.spe
A ReplicaSet can also be scaled using a [Horizontal Pod Autoscaler (HPA)](/docs/tasks/run-application/horizontal-pod-autoscale/). Here is an example HPA targeting the ReplicaSet created in the previous example.
-{{< codenew file="controllers/hpa-rs.yaml" >}}
+{{% codenew file="controllers/hpa-rs.yaml" %}}
Saving this manifest as `hpa-rs.yaml` and submitting it to a Kubernetes cluster will create the HPA, which will autoscale the number of Pods in the defined ReplicaSet depending on the CPU usage of the replicated Pods.
diff --git a/content/id/docs/concepts/workloads/controllers/replicationcontroller.md b/content/id/docs/concepts/workloads/controllers/replicationcontroller.md
index 48ec718a6df67..f53cac7f290c9 100644
--- a/content/id/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/id/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -36,7 +36,7 @@ A simple example is to create one ReplicationController object to
This example ReplicationController config runs three copies of the nginx web server.
-{{< codenew file="controllers/replication.yaml" >}}
+{{% codenew file="controllers/replication.yaml" %}}
Run the example above by downloading the example file and then running this command:
diff --git a/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
index 6f27244ef6dea..188080abb5802 100644
--- a/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
+++ b/content/id/docs/concepts/workloads/pods/pod-topology-spread-constraints.md
@@ -114,7 +114,7 @@ node2 and node3 (`P` represents a Pod):
If we want the incoming Pod to be evenly spread with the existing Pods across all zones,
the spec can be given as follows:
-{{< codenew file="pods/topology-spread-constraints/one-constraint.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/one-constraint.yaml" %}}
`topologyKey: zone` means the even distribution is only applied to Nodes that have the label pair
"zone: <any value>". `whenUnsatisfiable: DoNotSchedule` tells the scheduler to let
@@ -161,7 +161,7 @@ This builds upon the previous example. Suppose you have a cluster with
You can use two TopologySpreadConstraints to control the spread of Pods across zones and Nodes:
-{{< codenew file="pods/topology-spread-constraints/two-constraints.yaml" >}}
+{{% codenew file="pods/topology-spread-constraints/two-constraints.yaml" %}}
In this example, to satisfy the first constraint, the new Pod can only be placed in "zoneB",
while for the second constraint, the new Pod can only be placed on "node4". The result of
@@ -224,7 +224,7 @@ matching that value will be skipped.
a yaml file like the one below, so "mypod" will be placed in "zoneB" instead of "zoneC".
Similarly, `spec.nodeSelector` is also respected.
- {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}}
+ {{% codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" %}}
### Cluster-level default constraints
diff --git a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 67b451db52b34..fdc31a1147e9c 100644
--- a/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/id/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -21,7 +21,7 @@ kube-dns.
### Create a simple Pod to use as a test environment
-{{< codenew file="admin/dns/dnsutils.yaml" >}}
+{{% codenew file="admin/dns/dnsutils.yaml" %}}
Use the following manifest to create a Pod:
diff --git a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
index 1aae3e38f009b..3e24947edb5e3 100644
--- a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
+++ b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
@@ -40,7 +40,7 @@ kubectl create namespace constraints-mem-example
Here is the configuration file for a LimitRange:
-{{< codenew file="admin/resource/memory-constraints.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints.yaml" %}}
Create the LimitRange:
@@ -85,7 +85,7 @@ Here is the configuration file for a Pod that has one Container. The Container manifest
specifies a memory request of 600 MiB and a memory limit of 800 MiB. These values satisfy
the minimum and maximum memory constraints imposed by the LimitRange.
-{{< codenew file="admin/resource/memory-constraints-pod.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod.yaml" %}}
Create the Pod:
@@ -127,7 +127,7 @@ kubectl delete pod constraints-mem-demo --namespace=constraints-mem-example
Here is the configuration file for a Pod that has one Container. The Container specifies
a memory request of 800 MiB and a memory limit of 1.5 GiB.
-{{< codenew file="admin/resource/memory-constraints-pod-2.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod-2.yaml" %}}
Attempt to create the Pod:
@@ -148,7 +148,7 @@ pods "constraints-mem-demo-2" is forbidden: maximum memory usage per Container i
Here is the configuration file for a Pod that has one Container. The Container specifies
a memory request of 100 MiB and a memory limit of 800 MiB.
-{{< codenew file="admin/resource/memory-constraints-pod-3.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod-3.yaml" %}}
Attempt to create the Pod:
@@ -171,7 +171,7 @@ pods "constraints-mem-demo-3" is forbidden: minimum memory usage per Container i
Here is the configuration file for a Pod that has one Container. The Container does not specify
a memory request or a memory limit.
-{{< codenew file="admin/resource/memory-constraints-pod-4.yaml" >}}
+{{% codenew file="admin/resource/memory-constraints-pod-4.yaml" %}}
Mencoba membuat Pod:
diff --git a/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md
index bb092e86a58c6..4a50fb84159c2 100644
--- a/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md
+++ b/content/id/docs/tasks/configure-pod-container/assign-memory-resource.md
@@ -69,7 +69,7 @@ Dalam latihan ini, kamu akan membuat Pod yang memiliki satu Container. Container
sebesar 100 MiB dan batasan memori sebesar 200 MiB. Berikut berkas konfigurasi
untuk Pod:
-{{< codenew file="pods/resource/memory-request-limit.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit.yaml" %}}
Bagian `args` dalam berkas konfigurasi memberikan argumen untuk Container pada saat dimulai.
Argumen`"--vm-bytes", "150M"` memberi tahu Container agar mencoba mengalokasikan memori sebesar 150 MiB.
@@ -139,7 +139,7 @@ Dalam latihan ini, kamu membuat Pod yang mencoba mengalokasikan lebih banyak mem
Berikut adalah berkas konfigurasi untuk Pod yang memiliki satu Container dengan berkas
permintaan memori sebesar 50 MiB dan batasan memori sebesar 100 MiB:
-{{< codenew file="pods/resource/memory-request-limit-2.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit-2.yaml" %}}
Dalam bagian `args` dari berkas konfigurasi, kamu dapat melihat bahwa Container tersebut
akan mencoba mengalokasikan memori sebesar 250 MiB, yang jauh di atas batas yaitu 100 MiB.
@@ -250,7 +250,7 @@ kapasitas dari Node mana pun dalam klaster kamu. Berikut adalah berkas konfigura
Container dengan permintaan memori 1000 GiB, yang kemungkinan besar melebihi kapasitas
dari setiap Node dalam klaster kamu.
-{{< codenew file="pods/resource/memory-request-limit-3.yaml" >}}
+{{% codenew file="pods/resource/memory-request-limit-3.yaml" %}}
Buatlah Pod:
diff --git a/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
index a60d862fd26c2..3d4a5d079b8d1 100644
--- a/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
+++ b/content/id/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md
@@ -64,7 +64,7 @@ Afinitas Node di dalam klaster Kubernetes.
Konfigurasi ini menunjukkan sebuah Pod yang memiliki afinitas node `requiredDuringSchedulingIgnoredDuringExecution`, `disktype: ssd`.
Dengan kata lain, Pod hanya akan dijadwalkan hanya pada Node yang memiliki label `disktype=ssd`.
-{{< codenew file="pods/pod-nginx-required-affinity.yaml" >}}
+{{% codenew file="pods/pod-nginx-required-affinity.yaml" %}}
1. Terapkan konfigurasi berikut untuk membuat sebuah Pod yang akan dijadwalkan pada Node yang kamu pilih:
@@ -90,7 +90,7 @@ Dengan kata lain, Pod hanya akan dijadwalkan hanya pada Node yang memiliki label
Konfigurasi ini memberikan deskripsi sebuah Pod yang memiliki afinitas Node `preferredDuringSchedulingIgnoredDuringExecution`,`disktype: ssd`.
Artinya Pod akan diutamakan dijalankan pada Node yang memiliki label `disktype=ssd`.
-{{< codenew file="pods/pod-nginx-preferred-affinity.yaml" >}}
+{{% codenew file="pods/pod-nginx-preferred-affinity.yaml" %}}
1. Terapkan konfigurasi berikut untuk membuat sebuah Pod yang akan dijadwalkan pada Node yang kamu pilih:
diff --git a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
index d56f59a09a646..e03a9d97a331a 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -46,7 +46,7 @@ Kubernetes menyediakan _probe liveness_ untuk mendeteksi dan memperbaiki situasi
Pada latihan ini, kamu akan membuat Pod yang menjalankan Container dari image
`registry.k8s.io/busybox`. Berikut ini adalah berkas konfigurasi untuk Pod tersebut:
-{{< codenew file="pods/probe/exec-liveness.yaml" >}}
+{{% codenew file="pods/probe/exec-liveness.yaml" %}}
Pada berkas konfigurasi di atas, kamu dapat melihat bahwa Pod memiliki satu `Container`.
_Field_ `periodSeconds` menentukan bahwa kubelet harus melakukan _probe liveness_ setiap 5 detik.
@@ -128,7 +128,7 @@ liveness-exec 1/1 Running 1 1m
Jenis kedua dari _probe liveness_ menggunakan sebuah permintaan GET HTTP. Berikut ini
berkas konfigurasi untuk Pod yang menjalankan Container dari image `registry.k8s.io/liveness`.
-{{< codenew file="pods/probe/http-liveness.yaml" >}}
+{{% codenew file="pods/probe/http-liveness.yaml" %}}
Pada berkas konfigurasi tersebut, kamu dapat melihat Pod memiliki sebuah Container.
_Field_ `periodSeconds` menentukan bahwa kubelet harus mengerjakan _probe liveness_ setiap 3 detik.
@@ -190,7 +190,7 @@ kubelet akan mencoba untuk membuka soket pada Container kamu dengan porta terten
Jika koneksi dapat terbentuk dengan sukses, maka Container dianggap dalam kondisi sehat.
Namun jika tidak berhasil terbentuk, maka Container dianggap gagal.
-{{< codenew file="pods/probe/tcp-liveness-readiness.yaml" >}}
+{{% codenew file="pods/probe/tcp-liveness-readiness.yaml" %}}
Seperti yang terlihat, konfigurasi untuk pemeriksaan TCP cukup mirip dengan
pemeriksaan HTTP. Contoh ini menggunakan _probe readiness_ dan _liveness_.
diff --git a/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
index 858342880e7e1..79db4e848a754 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md
@@ -93,7 +93,7 @@ untuk mengatur
Berikut berkas konfigurasi untuk hostPath PersistentVolume:
-{{< codenew file="pods/storage/pv-volume.yaml" >}}
+{{% codenew file="pods/storage/pv-volume.yaml" %}}
Berkas konfigurasi tersebut menentukan bahwa volume berada di `/mnt/data` pada
klaster Node. Konfigurasi tersebut juga menentukan ukuran dari 10 gibibytes dan
@@ -129,7 +129,7 @@ setidaknya untuk satu Node.
Berikut berkas konfigurasi untuk PersistentVolumeClaim:
-{{< codenew file="pods/storage/pv-claim.yaml" >}}
+{{% codenew file="pods/storage/pv-claim.yaml" %}}
Membuat sebuah PersistentVolumeClaim:
@@ -169,7 +169,7 @@ Langkah selanjutnya adalah membuat sebuah Pod yang akan menggunakan PersistentVo
Berikut berkas konfigurasi untuk Pod:
-{{< codenew file="pods/storage/pv-pod.yaml" >}}
+{{% codenew file="pods/storage/pv-pod.yaml" %}}
Perhatikan bahwa berkas konfigurasi Pod menentukan sebuah PersistentVolumeClaim, tetapi
tidak menentukan PeristentVolume. Dari sudut pandang Pod, _claim_ adalah volume.
diff --git a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
index bfdad56610635..201aef5fedcf9 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -467,7 +467,7 @@ configmap/special-config-2-c92b5mmcf2 created
2. Memberikan nilai `special.how` yang sudah terdapat pada ConfigMap pada variabel _environment_ `SPECIAL_LEVEL_KEY` di spesifikasi Pod.
- {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}}
+ {{% codenew file="pods/pod-single-configmap-env-variable.yaml" %}}
Buat Pod:
@@ -481,7 +481,7 @@ configmap/special-config-2-c92b5mmcf2 created
* Seperti pada contoh sebelumnya, buat ConfigMap terlebih dahulu.
- {{< codenew file="configmap/configmaps.yaml" >}}
+ {{% codenew file="configmap/configmaps.yaml" %}}
Buat ConfigMap:
@@ -491,7 +491,7 @@ configmap/special-config-2-c92b5mmcf2 created
* Tentukan variabel _environment_ pada spesifikasi Pod.
- {{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}}
+ {{% codenew file="pods/pod-multiple-configmap-env-variable.yaml" %}}
Buat Pod:
@@ -509,7 +509,7 @@ Fungsi ini tersedia pada Kubernetes v1.6 dan selanjutnya.
* Buat ConfigMap yang berisi beberapa pasangan kunci-nilai.
- {{< codenew file="configmap/configmap-multikeys.yaml" >}}
+ {{% codenew file="configmap/configmap-multikeys.yaml" %}}
Buat ConfigMap:
@@ -519,7 +519,7 @@ Fungsi ini tersedia pada Kubernetes v1.6 dan selanjutnya.
* Gunakan `envFrom` untuk menentukan seluruh data pada ConfigMap sebagai variabel _environment_ kontainer. Kunci dari ConfigMap akan menjadi nama variabel _environment_ di dalam Pod.
- {{< codenew file="pods/pod-configmap-envFrom.yaml" >}}
+ {{% codenew file="pods/pod-configmap-envFrom.yaml" %}}
Buat Pod:
@@ -536,7 +536,7 @@ Kamu dapat menggunakan variabel _environment_ yang ditentukan ConfigMap pada bag
Sebagai contoh, spesifikasi Pod berikut
-{{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}}
+{{% codenew file="pods/pod-configmap-env-var-valueFrom.yaml" %}}
dibuat dengan menjalankan
@@ -556,7 +556,7 @@ Seperti yang sudah dijelaskan pada [Membuat ConfigMap dari berkas](#membuat-conf
Contoh pada bagian ini merujuk pada ConfigMap bernama `special-config`, Seperti berikut.
-{{< codenew file="configmap/configmap-multikeys.yaml" >}}
+{{% codenew file="configmap/configmap-multikeys.yaml" %}}
Buat ConfigMap:
@@ -570,7 +570,7 @@ Tambahkan nama ConfigMap di bawah bagian `volumes` pada spesifikasi Pod.
Hal ini akan menambahkan data ConfigMap pada direktori yang ditentukan oleh `volumeMounts.mountPath` (pada kasus ini, `/etc/config`).
Bagian `command` berisi daftar berkas pada direktori dengan nama-nama yang sesuai dengan kunci-kunci pada ConfigMap.
-{{< codenew file="pods/pod-configmap-volume.yaml" >}}
+{{% codenew file="pods/pod-configmap-volume.yaml" %}}
Buat Pod:
@@ -594,7 +594,7 @@ Jika ada beberapa berkas pada direktori `/etc/config/`, berkas-berkas tersebut a
Gunakan kolom `path` untuk menentukan jalur berkas yang diinginkan untuk butir tertentu pada ConfigMap (butir ConfigMap tertentu).
Pada kasus ini, butir `SPECIAL_LEVEL` akan akan dipasangkan sebagai `config-volume` pada `/etc/config/keys`.
-{{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}}
+{{% codenew file="pods/pod-configmap-volume-specific-key.yaml" %}}
Buat Pod:
diff --git a/content/id/docs/tasks/configure-pod-container/configure-service-account.md b/content/id/docs/tasks/configure-pod-container/configure-service-account.md
index e53812d65a8b3..f469b257d85cd 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-service-account.md
@@ -282,7 +282,7 @@ Kubelet juga dapat memproyeksikan _token_ ServiceAccount ke Pod. Kamu dapat mene
Perilaku ini diatur pada PodSpec menggunakan tipe ProjectedVolume yaitu [ServiceAccountToken](/id/docs/concepts/storage/volumes/#projected). Untuk memungkinkan Pod dengan _token_ dengan pengguna bertipe _"vault"_ dan durasi validitas selama dua jam, kamu harus mengubah bagian ini pada PodSpec:
-{{< codenew file="pods/pod-projected-svc-token.yaml" >}}
+{{% codenew file="pods/pod-projected-svc-token.yaml" %}}
Buat Pod:
diff --git a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md
index 02d664d530457..e6b6f365a45c0 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-volume-storage.md
@@ -25,7 +25,7 @@ _Filesystem_ dari sebuah Container hanya hidup selama Container itu juga hidup.
Pada latihan ini, kamu membuat sebuah Pod yang menjalankan sebuah Container. Pod ini memiliki sebuah Volume dengan tipe [emptyDir](/id/docs/concepts/storage/volumes/#emptydir)
yang tetap bertahan, meski Container berakhir dan dimulai ulang. Berikut berkas konfigurasi untuk Pod:
-{{< codenew file="pods/storage/redis.yaml" >}}
+{{% codenew file="pods/storage/redis.yaml" %}}
1. Membuat Pod:
diff --git a/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md
index 50aad8de9a15a..3fe2ce8407c3b 100644
--- a/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/id/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -176,7 +176,7 @@ Kamu telah berhasil menetapkan kredensial Docker kamu sebagai sebuah Secret yang
Berikut ini adalah berkas konfigurasi untuk Pod yang memerlukan akses ke kredensial Docker kamu pada `regcred`:
-{{< codenew file="pods/private-reg-pod.yaml" >}}
+{{% codenew file="pods/private-reg-pod.yaml" %}}
Unduh berkas diatas:
diff --git a/content/id/docs/tasks/configure-pod-container/quality-service-pod.md b/content/id/docs/tasks/configure-pod-container/quality-service-pod.md
index c5337c8854a75..5ced04b84f1fb 100644
--- a/content/id/docs/tasks/configure-pod-container/quality-service-pod.md
+++ b/content/id/docs/tasks/configure-pod-container/quality-service-pod.md
@@ -41,7 +41,7 @@ Agar sebuah Pod memiliki kelas QoS Guaranteed:
Berikut adalah berkas konfigurasi untuk sebuah Pod dengan satu Container. Container tersebut memiliki sebuah batasan memori dan sebuah
permintaan memori, keduanya sama dengan 200MiB. Container itu juga mempunyai batasan CPU dan permintaan CPU yang sama sebesar 700 milliCPU:
-{{< codenew file="pods/qos/qos-pod.yaml" >}}
+{{% codenew file="pods/qos/qos-pod.yaml" %}}
Buatlah Pod:
@@ -100,7 +100,7 @@ Sebuah Pod akan mendapatkan kelas QoS Burstable apabila:
Berikut adalah berkas konfigurasi untuk Pod dengan satu Container. Container yang dimaksud memiliki batasan memori sebesar 200MiB
dan permintaan memori sebesar 100MiB.
-{{< codenew file="pods/qos/qos-pod-2.yaml" >}}
+{{% codenew file="pods/qos/qos-pod-2.yaml" %}}
Buatlah Pod:
@@ -147,7 +147,7 @@ Agar Pod mendapatkan kelas QoS BestEffort, Container dalam pod tidak boleh
memiliki batasan atau permintaan memori atau CPU.
Berikut adalah berkas konfigurasi untuk Pod dengan satu Container. Container yang dimaksud tidak memiliki batasan atau permintaan memori atau CPU apapun.
-{{< codenew file="pods/qos/qos-pod-3.yaml" >}}
+{{% codenew file="pods/qos/qos-pod-3.yaml" %}}
Buatlah Pod:
@@ -183,7 +183,7 @@ kubectl delete pod qos-demo-3 --namespace=qos-example
Berikut adalah konfigurasi berkas untuk Pod yang memiliki dua Container. Satu Container menentukan permintaan memori sebesar 200MiB. Container yang lain tidak menentukan permintaan atau batasan apapun.
-{{< codenew file="pods/qos/qos-pod-4.yaml" >}}
+{{% codenew file="pods/qos/qos-pod-4.yaml" %}}
Perhatikan bahwa Pod ini memenuhi kriteria untuk kelas QoS Burstable. Maksudnya, Container tersebut tidak memenuhi
kriteria untuk kelas QoS Guaranteed, dan satu dari Container tersebut memiliki permintaan memori.
diff --git a/content/id/docs/tasks/configure-pod-container/security-context.md b/content/id/docs/tasks/configure-pod-container/security-context.md
index d190468399cf1..a8bd1bfdf9620 100644
--- a/content/id/docs/tasks/configure-pod-container/security-context.md
+++ b/content/id/docs/tasks/configure-pod-container/security-context.md
@@ -50,7 +50,7 @@ dalam spesifikasi Pod. Bagian `securityContext` adalah sebuah objek
Aturan keamanan yang kamu tetapkan untuk Pod akan berlaku untuk semua Container dalam Pod tersebut.
Berikut sebuah berkas konfigurasi untuk Pod yang memiliki volume `securityContext` dan `emptyDir`:
-{{< codenew file="pods/security/security-context.yaml" >}}
+{{% codenew file="pods/security/security-context.yaml" %}}
Dalam berkas konfigurasi ini, bagian `runAsUser` menentukan bahwa dalam setiap Container pada
Pod, semua proses dijalankan oleh ID pengguna 1000. Bagian `runAsGroup` menentukan grup utama dengan ID 3000 untuk
@@ -191,7 +191,7 @@ ada aturan yang tumpang tindih. Aturan pada Container mempengaruhi volume pada P
Berikut berkas konfigurasi untuk Pod yang hanya memiliki satu Container. Keduanya, baik Pod
dan Container memiliki bagian `securityContext` sebagai berikut:
-{{< codenew file="pods/security/security-context-2.yaml" >}}
+{{% codenew file="pods/security/security-context-2.yaml" %}}
Buatlah Pod tersebut:
@@ -244,7 +244,7 @@ bagian `capabilities` pada `securityContext` di manifes Container-nya.
Pertama-tama, mari melihat apa yang terjadi ketika kamu tidak menyertakan bagian `capabilities`.
Berikut ini adalah berkas konfigurasi yang tidak menambah atau mengurangi kemampuan apa pun dari Container:
-{{< codenew file="pods/security/security-context-3.yaml" >}}
+{{% codenew file="pods/security/security-context-3.yaml" %}}
Buatlah Pod tersebut:
@@ -306,7 +306,7 @@ Container ini memiliki kapabilitas tambahan yang sudah ditentukan.
Berikut ini adalah berkas konfigurasi untuk Pod yang hanya menjalankan satu Container. Konfigurasi
ini menambahkan kapabilitas `CAP_NET_ADMIN` dan `CAP_SYS_TIME`:
-{{< codenew file="pods/security/security-context-4.yaml" >}}
+{{% codenew file="pods/security/security-context-4.yaml" %}}
Buatlah Pod tersebut:
diff --git a/content/id/docs/tasks/configure-pod-container/share-process-namespace.md b/content/id/docs/tasks/configure-pod-container/share-process-namespace.md
index 9b32d74b3cdf6..c764bd8df3eaa 100644
--- a/content/id/docs/tasks/configure-pod-container/share-process-namespace.md
+++ b/content/id/docs/tasks/configure-pod-container/share-process-namespace.md
@@ -34,7 +34,7 @@ proses pemecahan masalah (_troubleshoot_) image kontainer yang tidak memiliki ut
Pembagian _namespace_ proses (_Process Namespace Sharing_) diaktifkan menggunakan _field_ `shareProcessNamespace`
`v1.PodSpec`. Sebagai contoh:
-{{< codenew file="pods/share-process-namespace.yaml" >}}
+{{% codenew file="pods/share-process-namespace.yaml" %}}
1. Buatlah sebuah Pod `nginx` di dalam klaster kamu:
diff --git a/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md b/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md
index 746c46f045a09..a2c7b2f318610 100644
--- a/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md
+++ b/content/id/docs/tasks/debug-application-cluster/debug-application-introspection.md
@@ -18,7 +18,7 @@ Pod kamu. Namun ada sejumlah cara untuk mendapatkan lebih banyak informasi tenta
Dalam contoh ini, kamu menggunakan Deployment untuk membuat dua buah Pod, yang hampir sama dengan contoh sebelumnya.
-{{< codenew file="application/nginx-with-request.yaml" >}}
+{{% codenew file="application/nginx-with-request.yaml" %}}
Buat Deployment dengan menjalankan perintah ini:
diff --git a/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md b/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md
index e15a8a4df6532..2c9e5fe38e781 100644
--- a/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md
+++ b/content/id/docs/tasks/debug-application-cluster/get-shell-running-container.md
@@ -26,7 +26,7 @@ mendapatkan _shell_ untuk masuk ke dalam Container yang sedang berjalan.
Dalam latihan ini, kamu perlu membuat Pod yang hanya memiliki satu Container saja. Container
tersebut menjalankan _image_ nginx. Berikut ini adalah berkas konfigurasi untuk Pod tersebut:
-{{< codenew file="application/shell-demo.yaml" >}}
+{{% codenew file="application/shell-demo.yaml" %}}
Buatlah Pod tersebut:
diff --git a/content/id/docs/tasks/inject-data-application/define-command-argument-container.md b/content/id/docs/tasks/inject-data-application/define-command-argument-container.md
index 9f2cd7a7aefc8..f2d248232e004 100644
--- a/content/id/docs/tasks/inject-data-application/define-command-argument-container.md
+++ b/content/id/docs/tasks/inject-data-application/define-command-argument-container.md
@@ -44,7 +44,7 @@ Merujuk pada [catatan](#catatan) di bawah.
Pada latihan ini, kamu akan membuat sebuah Pod baru yang menjalankan sebuah Container. Berkas konfigurasi
untuk Pod mendefinisikan sebuah perintah dan dua argumen:
-{{< codenew file="pods/commands.yaml" >}}
+{{% codenew file="pods/commands.yaml" %}}
1. Buat sebuah Pod dengan berkas konfigurasi YAML:
diff --git a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md
index 0f35ef27f7188..584866d4c4d12 100644
--- a/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md
+++ b/content/id/docs/tasks/inject-data-application/define-environment-variable-container.md
@@ -30,7 +30,7 @@ Dalam latihan ini, kamu membuat sebuah Pod yang menjalankan satu buah Container.
Berkas konfigurasi untuk Pod tersebut mendefinisikan sebuah variabel lingkungan dengan nama `DEMO_GREETING` yang bernilai `"Hello from the environment"`.
Berikut berkas konfigurasi untuk Pod tersebut:
-{{< codenew file="pods/inject/envars.yaml" >}}
+{{% codenew file="pods/inject/envars.yaml" %}}
1. Buatlah sebuah Pod berdasarkan berkas konfigurasi YAML tersebut:
diff --git a/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md b/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md
index c08db9484f6cc..5d4c3633fac96 100644
--- a/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md
+++ b/content/id/docs/tasks/inject-data-application/distribute-credentials-secure.md
@@ -37,7 +37,7 @@ Gunakan alat yang telah dipercayai oleh OS kamu untuk menghindari risiko dari pe
Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Secret yang akan menampung nama pengguna dan kata sandi kamu:
-{{< codenew file="pods/inject/secret.yaml" >}}
+{{% codenew file="pods/inject/secret.yaml" %}}
1. Membuat Secret
@@ -95,7 +95,7 @@ Tentu saja ini lebih mudah. Pendekatan yang mendetil setiap langkah di atas bert
Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod:
-{{< codenew file="pods/inject/secret-pod.yaml" >}}
+{{% codenew file="pods/inject/secret-pod.yaml" %}}
1. Membuat Pod:
@@ -157,7 +157,7 @@ Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod:
* Tentukan nilai `backend-username` yang didefinisikan di Secret ke variabel lingkungan `SECRET_USERNAME` di dalam spesifikasi Pod.
- {{< codenew file="pods/inject/pod-single-secret-env-variable.yaml" >}}
+ {{% codenew file="pods/inject/pod-single-secret-env-variable.yaml" %}}
* Membuat Pod:
@@ -187,7 +187,7 @@ Berikut ini adalah berkas konfigurasi yang dapat kamu gunakan untuk membuat Pod:
* Definisikan variabel lingkungan di dalam spesifikasi Pod.
- {{< codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" >}}
+ {{% codenew file="pods/inject/pod-multiple-secret-env-variable.yaml" %}}
* Membuat Pod:
@@ -221,7 +221,7 @@ Fitur ini tersedia mulai dari Kubernetes v1.6 dan yang lebih baru.
* Gunakan envFrom untuk mendefinisikan semua data Secret sebagai variabel lingkungan Container. _Key_ dari Secret akan mennjadi nama variabel lingkungan di dalam Pod.
- {{< codenew file="pods/inject/pod-secret-envFrom.yaml" >}}
+ {{% codenew file="pods/inject/pod-secret-envFrom.yaml" %}}
* Membuat Pod:
diff --git a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md
index 349a283d7a01a..6bf0f53532aa8 100644
--- a/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md
+++ b/content/id/docs/tasks/job/automated-tasks-with-cron-jobs.md
@@ -34,7 +34,7 @@ Untuk informasi lanjut mengenai keterbatasan, lihat [CronJob](/id/docs/concepts/
CronJob membutuhkan sebuah berkas konfigurasi.
Ini adalah contoh dari berkas konfigurasi CronJob `.spec` yang akan mencetak waktu sekarang dan pesan "hello" setiap menit:
-{{< codenew file="application/job/cronjob.yaml" >}}
+{{% codenew file="application/job/cronjob.yaml" %}}
Jalankan contoh CronJob menggunakan perintah berikut:
diff --git a/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md b/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md
index 88eeaf38d3079..073937e189409 100644
--- a/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md
+++ b/content/id/docs/tasks/manage-kubernetes-objects/declarative-config.md
@@ -52,7 +52,7 @@ Tambahkan parameter `-R` untuk memproses seluruh direktori secara rekursif.
Berikut sebuah contoh *file* konfigurasi objek:
-{{< codenew file="application/simple_deployment.yaml" >}}
+{{% codenew file="application/simple_deployment.yaml" %}}
Jalankan perintah `kubectl diff` untuk menampilkan objek yang akan dibuat:
@@ -135,7 +135,7 @@ Tambahkan argumen `-R` untuk memproses seluruh direktori secara rekursif.
Berikut sebuah contoh *file* konfigurasi:
-{{< codenew file="application/simple_deployment.yaml" >}}
+{{% codenew file="application/simple_deployment.yaml" %}}
Buat objek dengan perintah `kubectl apply`::
@@ -248,7 +248,7 @@ spec:
Perbarui *file* konfigurasi `simple_deployment.yaml`, ubah *image* dari `nginx:1.7.9` ke `nginx:1.11.9`, dan hapus *field* `minReadySeconds`:
-{{< codenew file="application/update_deployment.yaml" >}}
+{{% codenew file="application/update_deployment.yaml" %}}
Terapkan perubahan yang telah dibuat di *file* konfigurasi:
@@ -379,7 +379,7 @@ Perintah `kubectl apply` menulis konten dari berkas konfigurasi ke anotasi `kube
Agar lebih jelas, simak contoh berikut. Misalkan, berikut adalah *file* konfigurasi untuk sebuah objek Deployment:
-{{< codenew file="application/update_deployment.yaml" >}}
+{{% codenew file="application/update_deployment.yaml" %}}
Juga, misalkan, berikut adalah konfigurasi *live* dari objek Deployment yang sama:
@@ -627,7 +627,7 @@ TODO(pwittrock): *Uncomment* ini untuk versi 1.6
Berikut adalah sebuah *file* konfigurasi untuk sebuah Deployment. Berkas berikut tidak menspesifikasikan `strategy`:
-{{< codenew file="application/simple_deployment.yaml" >}}
+{{% codenew file="application/simple_deployment.yaml" %}}
Buat objek dengan perintah `kubectl apply`:
diff --git a/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 7a23efa6ff3c4..1c16b087b79db 100644
--- a/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/id/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -57,7 +57,7 @@ Bagian ini mendefinisikan laman index.php yang melakukan beberapa komputasi inte
Pertama, kita akan memulai Deployment yang menjalankan _image_ dan mengeksposnya sebagai Service
menggunakan konfigurasi berikut:
-{{< codenew file="application/php-apache.yaml" >}}
+{{% codenew file="application/php-apache.yaml" %}}
Jalankan perintah berikut:
@@ -434,7 +434,7 @@ Semua metrik di HorizontalPodAutoscaler dan metrik API ditentukan menggunakan no
Daripada menggunakan perintah `kubectl autoscale` untuk membuat HorizontalPodAutoscaler secara imperatif, kita dapat menggunakan berkas berikut untuk membuatnya secara deklaratif:
-{{< codenew file="application/hpa/php-apache.yaml" >}}
+{{% codenew file="application/hpa/php-apache.yaml" %}}
Kita akan membuat _autoscaler_ dengan menjalankan perintah berikut:
diff --git a/content/id/docs/tasks/run-application/run-stateless-application-deployment.md b/content/id/docs/tasks/run-application/run-stateless-application-deployment.md
index 74e76c827be57..3e96eb1fda1e6 100644
--- a/content/id/docs/tasks/run-application/run-stateless-application-deployment.md
+++ b/content/id/docs/tasks/run-application/run-stateless-application-deployment.md
@@ -38,7 +38,7 @@ Kamu dapat menjalankan aplikasi dengan membuat sebuah objek Deployment Kubernete
dapat mendeskripsikan sebuah Deployment di dalam berkas YAML. Sebagai contohnya, berkas
YAML berikut mendeskripsikan sebuah Deployment yang menjalankan _image_ Docker nginx:1.14.2:
-{{< codenew file="application/deployment.yaml" >}}
+{{% codenew file="application/deployment.yaml" %}}
1. Buatlah sebuah Deployment berdasarkan berkas YAML:
@@ -100,7 +100,7 @@ YAML berikut mendeskripsikan sebuah Deployment yang menjalankan _image_ Docker n
Kamu dapat mengubah Deployment dengan cara mengaplikasikan berkas YAML yang baru.
Berkas YAML ini memberikan spesifikasi Deployment untuk menggunakan Nginx versi 1.16.1.
-{{< codenew file="application/deployment-update.yaml" >}}
+{{% codenew file="application/deployment-update.yaml" %}}
1. Terapkan berkas YAML yang baru:
@@ -116,7 +116,7 @@ Kamu dapat meningkatkan jumlah Pod di dalam Deployment dengan menerapkan
berkas YAML baru. Berkas YAML ini akan meningkatkan jumlah replika menjadi 4,
yang nantinya memberikan spesifikasi agar Deployment memiliki 4 buah Pod.
-{{< codenew file="application/deployment-scale.yaml" >}}
+{{% codenew file="application/deployment-scale.yaml" %}}
1. Terapkan berkas YAML:
diff --git a/content/id/docs/tutorials/hello-minikube.md b/content/id/docs/tutorials/hello-minikube.md
index 6790dbf47fba5..d2e4a5de76677 100644
--- a/content/id/docs/tutorials/hello-minikube.md
+++ b/content/id/docs/tutorials/hello-minikube.md
@@ -38,9 +38,9 @@ Kamupun bisa mengikuti tutorial ini kalau sudah instalasi minikube di lokal. Sil
Tutorial ini menyediakan image Kontainer yang dibuat melalui barisan kode berikut:
-{{< codenew language="js" file="minikube/server.js" >}}
+{{% codenew language="js" file="minikube/server.js" %}}
-{{< codenew language="conf" file="minikube/Dockerfile" >}}
+{{% codenew language="conf" file="minikube/Dockerfile" %}}
Untuk info lebih lanjut tentang perintah `docker build`, baca [dokumentasi Docker](https://docs.docker.com/engine/reference/commandline/build/).
diff --git a/content/id/docs/tutorials/stateful-application/basic-stateful-set.md b/content/id/docs/tutorials/stateful-application/basic-stateful-set.md
index b664a3bb8abf1..7ce5437d61bfc 100644
--- a/content/id/docs/tutorials/stateful-application/basic-stateful-set.md
+++ b/content/id/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -59,7 +59,7 @@ Contoh ini menciptakan sebuah
[Service _headless_](/id/docs/concepts/services-networking/service/#service-headless),
`nginx`, untuk mempublikasikan alamat IP Pod di dalam StatefulSet, `web`.
-{{< codenew file="application/web/web.yaml" >}}
+{{% codenew file="application/web/web.yaml" %}}
Unduh contoh di atas, dan simpan ke dalam berkas dengan nama `web.yaml`.
@@ -1075,7 +1075,7 @@ menjalankan atau mengakhiri semua Pod secara bersamaan (paralel), dan tidak menu
suatu Pod menjadi Running dan Ready atau benar-benar berakhir sebelum menjalankan atau
mengakhiri Pod yang lain.
-{{< codenew file="application/web/web-parallel.yaml" >}}
+{{% codenew file="application/web/web-parallel.yaml" %}}
Unduh contoh di atas, dan simpan ke sebuah berkas dengan nama `web-parallel.yaml`.
diff --git a/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md b/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md
index df297f4c634b7..2152c8e0e3621 100644
--- a/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md
+++ b/content/id/docs/tutorials/stateless-application/expose-external-ip-address.md
@@ -42,7 +42,7 @@ yang mengekspos alamat IP eksternal.
1. Jalankan sebuah aplikasi Hello World pada klaster kamu:
-{{< codenew file="service/load-balancer-example.yaml" >}}
+{{% codenew file="service/load-balancer-example.yaml" %}}
```shell
kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
From 67b931ea1482821dabfb7938362ee531beeea307 Mon Sep 17 00:00:00 2001
From: Haripriya
Date: Fri, 28 Jul 2023 18:36:05 +0530
Subject: [PATCH 018/229] created issue-wrangler.md
---
.../contribute/participate/issue-wrangler.md | 40 +++++++++++++++++++
1 file changed, 40 insertions(+)
create mode 100644 content/en/docs/contribute/participate/issue-wrangler.md
diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
new file mode 100644
index 0000000000000..7788bd012418c
--- /dev/null
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -0,0 +1,40 @@
+---
+title: Issue Wranglers
+content_type: concept
+weight: 20
+---
+
+
+
+There are many issues that need triage, and in order to reduce our reliance on formal approvers or reviewers, we have introduced a new role to wrangle issues every week. The main responsibility of this role is to bridge the gap between organization contributors and reviewers.
+This section covers the duties of an Issue Wrangler.
+
+
+
+## Duties
+
+Each day in a week-long shift as Issue Wrangler:
+
+- Making sure the issue is worded and titled correctly to provide contributors with adequate information.
+- Identifying whether the issue falls under the support category and assigning a "triage/accepted" status.
+- Ensuring the issue is tagged with the appropriate sig/area/kind labels.
+- Keeping an eye on stale & rotten issues within the kubernetes/website repository.
+- Optionally, helping to maintain the [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1).
+
+### Requirements
+
+- Must be an active member of the Kubernetes organization.
+- A minimum of 15 quality contributions to Kubernetes (some of which should be directed towards kubernetes/website).
+- Performing the role in an informal capacity already.
+
+### What is its place in the contributor hierarchy?
+
+- In between a contributor and a reviewer.
+- For someone who assumes the role and demonstrates ability, the next step is to shadow a PR Wrangler and review PRs informally.
+
+### Process Implementation
+
+- Identify people who are already triaging issues and put them on a roster.
+- Pilot a shadow program and gauge interest.
+- The mantle may be passed on if there is interest.
+- Keep repeating.
From 5cc83a83634b875e31e34bfbae6f58cd2222a211 Mon Sep 17 00:00:00 2001
From: Haripriya
Date: Sat, 29 Jul 2023 22:31:56 +0530
Subject: [PATCH 019/229] updated as per suggestions
---
content/en/docs/contribute/participate/issue-wrangler.md | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
index 7788bd012418c..65e3514890fc8 100644
--- a/content/en/docs/contribute/participate/issue-wrangler.md
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -6,8 +6,7 @@ weight: 20
-There are many issues that need triage, and in order to reduce our reliance on formal approvers or reviewers, we have introduced a new role to wrangle issues every week. The main responsibility of this role is to bridge the gap between organization contributors and reviewers.
-This section covers the duties of an Issue Wrangler.
+In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wrangler), formal approvers, and reviewers, members of SIG Docs take week-long shifts [triaging and categorizing issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for the repository.
@@ -15,7 +14,7 @@ This section covers the duties of a PR wrangler.
Each day in a week-long shift as Issue Wrangler:
-- Making sure the issue is worded and titled correctly to provide contributors with adequate information.
+- Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata.
- Identifying whether the issue falls under the support category and assigning a "triage/accepted" status.
- Ensuring the issue is tagged with the appropriate sig/area/kind labels.
- Keeping an eye on stale & rotten issues within the kubernetes/website repository.
From dcee41b6e50a8ad014f95332c1c64dfcd8a532d8 Mon Sep 17 00:00:00 2001
From: Haripriya
Date: Sun, 30 Jul 2023 00:54:45 +0530
Subject: [PATCH 020/229] added prow commands and when to close issues
---
.../contribute/participate/issue-wrangler.md | 35 ++++++++++++++-----
1 file changed, 27 insertions(+), 8 deletions(-)
diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
index 65e3514890fc8..099d6701185a8 100644
--- a/content/en/docs/contribute/participate/issue-wrangler.md
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -26,14 +26,33 @@ Each day in a week-long shift as Issue Wrangler:
- A minimum of 15 quality contributions to Kubernetes (some of which should be directed towards kubernetes/website).
- Performing the role in an informal capacity already.
-### What is its place in the contributor hierarchy?
+### Helpful Prow commands for wranglers
-- In between a contributor and a reviewer.
-- For someone who assumes the role and demonstrates ability, the next step is to shadow a PR Wrangler and review PRs informally.
+```
+# reopen an issue
+/reopen
-### Process Implementation
+# transfer issues that don't fit in k/website to another repository
+/transfer[-issue]
-- Identify people who are already triaging issues and put them on a roster.
-- Pilot a shadow program and gauge interest.
-- The mantle may be passed on if there is interest.
-- Keep repeating.
+# change the state of rotten issues
+/remove-lifecycle rotten
+```
+
+### When to close issues
+
+For an open source project to succeed, good issue management is crucial. It is equally important to close issues in order to maintain the repository and communicate clearly with contributors and users.
+
+Close issues when:
+
+- A similar issue has been reported more than once. In that case, direct the user to the original issue.
+- It is very difficult to understand and address the issue presented by the author with the information provided.
+ However, encourage the user to provide more details or reopen the issue if they can reproduce it later.
+- The same functionality has been implemented elsewhere. Close the issue and direct the user to the appropriate place.
+- Feature requests that are not currently planned or aligned with the project's goals.
+- The assignee has not responded to comments or feedback in more than two weeks.
+  In that case, the issue can be reassigned to someone who is highly motivated to contribute.
+- In cases where an issue appears to be spam and is clearly unrelated to the project.
+- If the issue is related to an external limitation or dependency and is beyond the control of the project.
+
+To close an issue, leave a `/close` comment on the issue.
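
Putting the commands above together, a single triage comment from a wrangler might look like the sketch below; the specific `sig`, `kind`, and `triage` labels are illustrative and should match the issue at hand:

```
/triage accepted
/sig docs
/kind bug
/remove-lifecycle stale
```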
From a62c27cbdcfaee81bdf7c1a4a7c7ed455adbd5b2 Mon Sep 17 00:00:00 2001
From: Dipankar Das
Date: Tue, 1 Aug 2023 14:40:52 +0530
Subject: [PATCH 021/229] added note for secret to be in same namespace as
workloads
The secret is used to authenticate with the private container registry;
it should be in the same namespace that contains the workloads.
Signed-off-by: Dipankar Das
---
.../configure-pod-container/pull-image-private-registry.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
index e871d9bb810b6..549fd8f6802c8 100644
--- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -211,7 +211,9 @@ kubectl get pod private-reg
```
{{< note >}}
-In case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events:
+Please ensure that the Pod or Deployment, and so on, created within a particular namespace contains the necessary secret in that same namespace.
+
+Also, in case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events:
```shell
kubectl describe pod private-reg
```
@@ -242,4 +244,4 @@ Events:
* Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry).
* Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
* See [kubectl create secret docker-registry](/docs/reference/generated/kubectl/kubectl-commands/#-em-secret-docker-registry-em-).
-* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod
+* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod
\ No newline at end of file
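As a sketch of what that note means in practice, the Secret must live in the namespace of the workload that references it; here `my-app` and the credential values are placeholders:

```shell
# Create the image pull Secret in the same namespace as the Pod that uses it.
kubectl create secret docker-registry regcred \
  --namespace=my-app \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-password>

# Confirm the Secret exists where the Pod will look it up.
kubectl get secret regcred --namespace=my-app
```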
From aa6e4639349a45309846ccf77af44a48dda4cb7e Mon Sep 17 00:00:00 2001
From: Dipankar Das <65275144+dipankardas011@users.noreply.github.com>
Date: Wed, 2 Aug 2023 12:44:18 +0530
Subject: [PATCH 022/229] Update
content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
Co-authored-by: Tim Bannister
---
.../configure-pod-container/pull-image-private-registry.md | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
index 549fd8f6802c8..312cdb6248536 100644
--- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -211,7 +211,10 @@ kubectl get pod private-reg
```
{{< note >}}
-Please ensure that the Pod or Deployment, and so on, created within a particular namespace contains the necessary secret in that same namespace.
+To use image pull secrets for a Pod (or a Deployment, or other object that
+has a pod template that you are using), you need to make sure that the appropriate
+Secret does exist in the right namespace. The namespace to use is the same
+namespace where you defined the Pod.
Also, in case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events:
```shell
From 25e953f521e65edb2fcab3c9ab783c56cb615058 Mon Sep 17 00:00:00 2001
From: utkarsh-singh1
Date: Sat, 12 Aug 2023 23:21:10 +0530
Subject: [PATCH 023/229] Updated /docs/reference/kubectl/cheatsheet.md
Signed-off-by: utkarsh-singh1
---
content/en/docs/reference/kubectl/cheatsheet.md | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index df49986712c5f..3cb201634670a 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -39,15 +39,6 @@ complete -o default -F __start_kubectl k
source <(kubectl completion zsh) # set up autocomplete in zsh into the current shell
echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc # add autocomplete permanently to your zsh shell
```
-
-### FISH
-
-Require kubectl version 1.23 or above.
-
-```bash
-echo 'kubectl completion fish | source' >> ~/.config/fish/config.fish # add autocomplete permanently to your fish shell
-```
-
### A note on `--all-namespaces`
Appending `--all-namespaces` happens frequently enough that you should be aware of the shorthand for `--all-namespaces`:
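For reference, the shorthand in question is `-A`; a quick illustration (the `get pods` invocation is just an example command):

```shell
kubectl get pods --all-namespaces  # long form
kubectl get pods -A                # equivalent shorthand
```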
From 7ed368d0be2f5b31bb38655e19f7899db167f3d1 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Mon, 7 Aug 2023 10:05:47 +0800
Subject: [PATCH 024/229] Clean up /releases
---
content/en/releases/_index.md | 13 ++-
content/en/releases/download.md | 6 +-
content/en/releases/notes.md | 8 +-
content/en/releases/release-managers.md | 5 +-
content/en/releases/version-skew-policy.md | 114 ++++++++++++++-------
5 files changed, 100 insertions(+), 46 deletions(-)
diff --git a/content/en/releases/_index.md b/content/en/releases/_index.md
index 5b62f95d82add..092729a2f477f 100644
--- a/content/en/releases/_index.md
+++ b/content/en/releases/_index.md
@@ -4,13 +4,17 @@ title: Releases
type: docs
---
-
-The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support.
+The Kubernetes project maintains release branches for the most recent three minor releases
+({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}).
+Kubernetes 1.19 and newer receive
+[approximately 1 year of patch support](/releases/patch-releases/#support-period).
+Kubernetes 1.18 and older received approximately 9 months of patch support.
Kubernetes versions are expressed as **x.y.z**,
-where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.
+where **x** is the major version, **y** is the minor version, and **z** is the patch version,
+following [Semantic Versioning](https://semver.org/) terminology.
More information in the [version skew policy](/releases/version-skew-policy/) document.
@@ -22,6 +26,7 @@ More information in the [version skew policy](/releases/version-skew-policy/) do
## Upcoming Release
-Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}}) for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release!
+Check out the [schedule](https://github.com/kubernetes/sig-release/tree/master/releases/release-{{< skew nextMinorVersion >}})
+for the upcoming **{{< skew nextMinorVersion >}}** Kubernetes release!
## Helpful Resources
diff --git a/content/en/releases/download.md b/content/en/releases/download.md
index 0cee6e3556afb..c728ec015f9a8 100644
--- a/content/en/releases/download.md
+++ b/content/en/releases/download.md
@@ -43,6 +43,7 @@ You can fetch that list using:
```shell
curl -Ls "https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release" | grep "SPDXID: SPDXRef-Package-registry.k8s.io" | grep -v sha256 | cut -d- -f3- | sed 's/-/\//' | sed 's/-v1/:v1/'
```
+
For Kubernetes v{{< skew currentVersion >}}, the only kind of code artifact that
you can verify integrity for is a container image, using the experimental
signing support.
@@ -50,11 +51,10 @@ signing support.
To manually verify signed container images of Kubernetes core components, refer to
[Verify Signed Container Images](/docs/tasks/administer-cluster/verify-signed-artifacts).
-
-
## Binaries
-Find links to download Kubernetes components (and their checksums) in the [CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files.
+Find links to download Kubernetes components (and their checksums) in the
+[CHANGELOG](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) files.
Alternately, use [downloadkubernetes.com](https://www.downloadkubernetes.com/) to filter by version and architecture.
diff --git a/content/en/releases/notes.md b/content/en/releases/notes.md
index 1bb60c810627c..bcda7d0a04437 100644
--- a/content/en/releases/notes.md
+++ b/content/en/releases/notes.md
@@ -8,6 +8,10 @@ sitemap:
priority: 0.5
---
-Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG) that matches your Kubernetes version. View the changelog for {{< skew currentVersionAddMinor 0 >}} on [GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-{{< skew currentVersionAddMinor 0 >}}.md).
+Release notes can be found by reading the [Changelog](https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG)
+that matches your Kubernetes version. View the changelog for {{< skew currentVersionAddMinor 0 >}} on
+[GitHub](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-{{< skew currentVersionAddMinor 0 >}}.md).
-Alternately, release notes can be searched and filtered online at: [relnotes.k8s.io](https://relnotes.k8s.io). View filtered release notes for {{< skew currentVersionAddMinor 0 >}} on [relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions={{< skew currentVersionAddMinor 0 >}}.0).
+Alternatively, release notes can be searched and filtered online at [relnotes.k8s.io](https://relnotes.k8s.io).
+View filtered release notes for {{< skew currentVersionAddMinor 0 >}} on
+[relnotes.k8s.io](https://relnotes.k8s.io/?releaseVersions={{< skew currentVersionAddMinor 0 >}}.0).
diff --git a/content/en/releases/release-managers.md b/content/en/releases/release-managers.md
index 34fab5552f9b5..cbfd77b66e597 100644
--- a/content/en/releases/release-managers.md
+++ b/content/en/releases/release-managers.md
@@ -31,7 +31,10 @@ The responsibilities of each role are described below.
### Security Embargo Policy
-Some information about releases is subject to embargo and we have defined policy about how those embargoes are set. Please refer to the [Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy) for more information.
+Some information about releases is subject to embargo and we have defined policy about
+how those embargoes are set. Please refer to the
+[Security Embargo Policy](https://github.com/kubernetes/committee-security-response/blob/main/private-distributors-list.md#embargo-policy)
+for more information.
## Handbooks
diff --git a/content/en/releases/version-skew-policy.md b/content/en/releases/version-skew-policy.md
index 7a9f1c753a1b2..7031402e5c356 100644
--- a/content/en/releases/version-skew-policy.md
+++ b/content/en/releases/version-skew-policy.md
@@ -20,13 +20,19 @@ Specific cluster deployment tools may place additional restrictions on version s
## Supported versions
-Kubernetes versions are expressed as **x.y.z**, where **x** is the major version, **y** is the minor version, and **z** is the patch version, following [Semantic Versioning](https://semver.org/) terminology.
-For more information, see [Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning).
+Kubernetes versions are expressed as **x.y.z**, where **x** is the major version,
+**y** is the minor version, and **z** is the patch version, following
+[Semantic Versioning](https://semver.org/) terminology. For more information, see
+[Kubernetes Release Versioning](https://git.k8s.io/sig-release/release-engineering/versioning.md#kubernetes-release-versioning).
-The Kubernetes project maintains release branches for the most recent three minor releases ({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}). Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period). Kubernetes 1.18 and older received approximately 9 months of patch support.
+The Kubernetes project maintains release branches for the most recent three minor releases
+({{< skew latestVersion >}}, {{< skew prevMinorVersion >}}, {{< skew oldestMinorVersion >}}).
+Kubernetes 1.19 and newer receive [approximately 1 year of patch support](/releases/patch-releases/#support-period).
+Kubernetes 1.18 and older received approximately 9 months of patch support.
-Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.
-Patch releases are cut from those branches at a [regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required.
+Applicable fixes, including security fixes, may be backported to those three release branches,
+depending on severity and feasibility. Patch releases are cut from those branches at a
+[regular cadence](/releases/patch-releases/#cadence), plus additional urgent releases, when required.
The [Release Managers](/releases/release-managers/) group owns this decision.
@@ -36,7 +42,8 @@ For more information, see the Kubernetes [patch releases](/releases/patch-releas
### kube-apiserver
-In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/), the newest and oldest `kube-apiserver` instances must be within one minor version.
+In [highly-available (HA) clusters](/docs/setup/production-environment/tools/kubeadm/high-availability/),
+the newest and oldest `kube-apiserver` instances must be within one minor version.
Example:
@@ -51,7 +58,8 @@ Example:
Example:
* `kube-apiserver` is at **{{< skew currentVersion >}}**
-* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}**
+* `kubelet` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**,
+ **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}**
{{< note >}}
If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kubelet` versions.
@@ -60,18 +68,24 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
+* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**,
+ and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that
+ would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
### kube-proxy
* `kube-proxy` must not be newer than `kube-apiserver`.
-* `kube-proxy` may be up to three minor versions older than `kube-apiserver` (`kube-proxy` < 1.25 may only be up to two minor versions older than `kube-apiserver`).
-* `kube-proxy` may be up to three minor versions older or newer than the `kubelet` instance it runs alongside (`kube-proxy` < 1.25 may only be up to two minor versions older or newer than the `kubelet` instance it runs alongside).
+* `kube-proxy` may be up to three minor versions older than `kube-apiserver`
+ (`kube-proxy` < 1.25 may only be up to two minor versions older than `kube-apiserver`).
+* `kube-proxy` may be up to three minor versions older or newer than the `kubelet` instance
+ it runs alongside (`kube-proxy` < 1.25 may only be up to two minor versions older or newer
+ than the `kubelet` instance it runs alongside).
Example:
* `kube-apiserver` is at **{{< skew currentVersion >}}**
-* `kube-proxy` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}**
+* `kube-proxy` is supported at **{{< skew currentVersion >}}**, **{{< skew currentVersionAddMinor -1 >}}**,
+ **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}**
{{< note >}}
If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the allowed `kube-proxy` versions.
@@ -80,26 +94,36 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kube-proxy` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
+* `kube-proxy` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**,
+ and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would
+ be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
### kube-controller-manager, kube-scheduler, and cloud-controller-manager
-`kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the `kube-apiserver` instances they communicate with. They are expected to match the `kube-apiserver` minor version, but may be up to one minor version older (to allow live upgrades).
+`kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` must not be newer than the
+`kube-apiserver` instances they communicate with. They are expected to match the `kube-apiserver` minor version,
+but may be up to one minor version older (to allow live upgrades).
Example:
* `kube-apiserver` is at **{{< skew currentVersion >}}**
-* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
+* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported
+ at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
{{< note >}}
-If version skew exists between `kube-apiserver` instances in an HA cluster, and these components can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer), this narrows the allowed versions of these components.
+If version skew exists between `kube-apiserver` instances in an HA cluster, and these components
+can communicate with any `kube-apiserver` instance in the cluster (for example, via a load balancer),
+this narrows the allowed versions of these components.
{{< /note >}}
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer that can route to any `kube-apiserver` instance
-* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
+* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer
+ that can route to any `kube-apiserver` instance
+* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at
+ **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported
+ because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
### kubectl
@@ -108,7 +132,8 @@ Example:
Example:
* `kube-apiserver` is at **{{< skew currentVersion >}}**
-* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**, and **{{< skew currentVersionAddMinor -1 >}}**
+* `kubectl` is supported at **{{< skew currentVersionAddMinor 1 >}}**, **{{< skew currentVersion >}}**,
+ and **{{< skew currentVersionAddMinor -1 >}}**
{{< note >}}
If version skew exists between `kube-apiserver` instances in an HA cluster, this narrows the supported `kubectl` versions.
@@ -117,21 +142,24 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components)
+* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
+ (other versions would be more than one minor version skewed from one of the `kube-apiserver` components)
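+
+To check this skew in practice, you can compare the client and server versions
+that `kubectl` reports (a quick sketch; assumes a working kubeconfig):
+
+```bash
+# Prints the kubectl client version and the kube-apiserver version;
+# their minor versions should differ by no more than one.
+kubectl version
+```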
## Supported component upgrade order
-The supported version skew between components has implications on the order in which components must be upgraded.
-This section describes the order in which components must be upgraded to transition an existing cluster from version **{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**.
+The supported version skew between components has implications on the order
+in which components must be upgraded. This section describes the order in
+which components must be upgraded to transition an existing cluster from version
+**{{< skew currentVersionAddMinor -1 >}}** to version **{{< skew currentVersion >}}**.
Optionally, when preparing to upgrade, the Kubernetes project recommends that
you do the following to benefit from as many regression and bug fixes as
-possible during your upgrade:
+possible during your upgrade:
-* Ensure that components are on the most recent patch version of your current
- minor version.
-* Upgrade components to the most recent patch version of the target minor
- version.
+* Ensure that components are on the most recent patch version of your current
+ minor version.
+* Upgrade components to the most recent patch version of the target minor
+ version.
For example, if you're running version {{< skew currentVersionAddMinor -1 >}},
ensure that you're on the most recent patch version. Then, upgrade to the most
@@ -142,12 +170,19 @@ recent patch version of {{< skew currentVersion >}}.
Pre-requisites:
* In a single-instance cluster, the existing `kube-apiserver` instance is **{{< skew currentVersionAddMinor -1 >}}**
-* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance)
-* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}** (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version)
-* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}** (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version)
+* In an HA cluster, all `kube-apiserver` instances are at **{{< skew currentVersionAddMinor -1 >}}** or
+ **{{< skew currentVersion >}}** (this ensures maximum skew of 1 minor version between the oldest and newest `kube-apiserver` instance)
+* The `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` instances that
+ communicate with this server are at version **{{< skew currentVersionAddMinor -1 >}}**
+ (this ensures they are not newer than the existing API server version, and are within 1 minor version of the new API server version)
+* `kubelet` instances on all nodes are at version **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}**
+ (this ensures they are not newer than the existing API server version, and are within 2 minor versions of the new API server version)
* Registered admission webhooks are able to handle the data the new `kube-apiserver` instance will send them:
- * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include any new versions of REST resources added in **{{< skew currentVersion >}}** (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+)
- * The webhooks are able to handle any new versions of REST resources that will be sent to them, and any new fields added to existing versions in **{{< skew currentVersion >}}**
+ * `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` objects are updated to include
+ any new versions of REST resources added in **{{< skew currentVersion >}}**
+    (or use the [`matchPolicy: Equivalent` option](/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-matchpolicy) available in v1.15+; a configuration sketch follows this list)
+ * The webhooks are able to handle any new versions of REST resources that will be sent to them,
+ and any new fields added to existing versions in **{{< skew currentVersion >}}**
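+
+A minimal sketch of where `matchPolicy: Equivalent` sits in a webhook
+configuration (all names here are hypothetical; this is illustrative, not a
+complete webhook setup):
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+  name: example-webhook              # hypothetical name
+webhooks:
+- name: example.webhook.example.com  # hypothetical name
+  rules:
+  - apiGroups: ["apps"]
+    apiVersions: ["v1"]
+    operations: ["CREATE", "UPDATE"]
+    resources: ["deployments"]
+  # Requests made via equivalent API versions are converted to apps/v1
+  # before being sent to this webhook.
+  matchPolicy: Equivalent
+  clientConfig:
+    service:
+      name: example-webhook-svc      # hypothetical Service backing the webhook
+      namespace: default
+  admissionReviewVersions: ["v1"]
+  sideEffects: None
+EOF
+```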
Upgrade `kube-apiserver` to **{{< skew currentVersion >}}**
@@ -161,7 +196,9 @@ require `kube-apiserver` to not skip minor versions when upgrading, even in sing
Pre-requisites:
-* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components)
+* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}**
+ (in HA clusters in which these control plane components can communicate with any `kube-apiserver`
+ instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components)
Upgrade `kube-controller-manager`, `kube-scheduler`, and
`cloud-controller-manager` to **{{< skew currentVersion >}}**. There is no
@@ -175,7 +212,8 @@ Pre-requisites:
* The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew currentVersion >}}**
-Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**)
+Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at
+**{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**)
{{< note >}}
Before performing a minor version `kubelet` upgrade, [drain](/docs/tasks/administer-cluster/safely-drain-node/) pods from that node.
@@ -183,7 +221,8 @@ In-place minor version `kubelet` upgrades are not supported.
{{< /note >}}
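+
+A sketch of that flow, assuming a node named `node-1` (hypothetical) and a
+package-managed kubelet; the exact upgrade step depends on how you installed it:
+
+```bash
+# Safely evict workloads before touching the kubelet.
+kubectl drain node-1 --ignore-daemonsets
+
+# ...upgrade and restart the kubelet on the node here...
+
+# Let the node receive new Pods again.
+kubectl uncordon node-1
+```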
{{< warning >}}
-Running a cluster with `kubelet` instances that are persistently three minor versions behind `kube-apiserver` means they must be upgraded before the control plane can be upgraded.
+Running a cluster with `kubelet` instances that are persistently three minor versions behind
+`kube-apiserver` means they must be upgraded before the control plane can be upgraded.
{{< /warning >}}
### kube-proxy
@@ -192,8 +231,11 @@ Pre-requisites:
* The `kube-apiserver` instances `kube-proxy` communicates with are at **{{< skew currentVersion >}}**
-Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**)
+Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}**
+(or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**,
+or **{{< skew currentVersionAddMinor -3 >}}**)
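+
+Where `kube-proxy` runs as a DaemonSet named `kube-proxy` in `kube-system`
+(as in kubeadm-built clusters), one sketch of the upgrade, with `$TARGET_VERSION`
+standing in for the version you are moving to:
+
+```bash
+# Roll the kube-proxy DaemonSet to the target image version.
+kubectl -n kube-system set image daemonset/kube-proxy \
+  kube-proxy=registry.k8s.io/kube-proxy:${TARGET_VERSION}
+```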
{{< warning >}}
-Running a cluster with `kube-proxy` instances that are persistently three minor versions behind `kube-apiserver` means they must be upgraded before the control plane can be upgraded.
+Running a cluster with `kube-proxy` instances that are persistently three minor versions behind
+`kube-apiserver` means they must be upgraded before the control plane can be upgraded.
{{< /warning >}}
From 68de69d5e38c9fed19d1b093c11c4ab2c8a8fc13 Mon Sep 17 00:00:00 2001
From: Haripriya
Date: Fri, 18 Aug 2023 23:29:35 +0530
Subject: [PATCH 025/229] added commits as per suggestions
---
.../contribute/participate/issue-wrangler.md | 37 ++++++++++++++-----
1 file changed, 28 insertions(+), 9 deletions(-)
diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
index 099d6701185a8..0100de6ada76d 100644
--- a/content/en/docs/contribute/participate/issue-wrangler.md
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -6,7 +6,7 @@ weight: 20
-In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wrangler), formal approvers, and reviewers, members of SIG Docs take week long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository.
+In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wranglers), formal approvers, and reviewers, members of SIG Docs take week-long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository.
@@ -14,19 +14,19 @@ In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/
Each day in a week-long shift as Issue Wrangler:
-- Triage and tag incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata.
+- Triage and tag incoming issues daily. See [Triage and categorize issues](https://github.com/kubernetes/website/blob/main/content/en/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata.
- Identifying whether the issue falls under the support category and assigning a "triage/accepted" status.
-- Assuring the issue is tagged with the appropriate sig/area/kind labels.
+- Ensuring the issue is tagged with the appropriate sig/area/kind labels.
- Keeping an eye on stale & rotten issues within the kubernetes/website repository.
-- [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1) maintenance would be nice
+- Maintaining the [Issues board](https://github.com/orgs/kubernetes/projects/72/views/1) is helpful but optional.
### Requirements
- Must be an active member of the Kubernetes organization.
-- A minimum of 15 quality contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website).
+- A minimum of 15 [non-trivial](https://www.kubernetes.dev/docs/guide/pull-requests/#trivial-edits) contributions to Kubernetes (of which a certain amount should be directed towards kubernetes/website).
- Performing the role in an informal capacity already
-### Helpful Prow commands for wranglers
+### Helpful [Prow commands](https://prow.k8s.io/command-help) for wranglers
```
# reopen an issue
@@ -37,6 +37,27 @@ Each day in a week-long shift as Issue Wrangler:
# change the state of rotten issues
/remove-lifecycle rotten
+
+# change the state of stale issues
+/remove-lifecycle stale
+
+# assign sig to an issue
+/sig
+
+# add specific area
+/area
+
+# for beginner friendly issues
+/good-first-issue
+
+# issues that need help
+/help
+
+# tagging issue as support specific
+/kind support
+
+# to accept triaging for an issue
+/triage accepted
```
### When to close Issues
@@ -45,13 +66,11 @@ For an open source project to succeed, good issue management is crucial. But it
Close issues when:
-- A similar issue has been reported more than once. It is also advisable to direct the users to the original issue.
+- A similar issue is reported more than once. In that case, first tag it with `/triage duplicate`, link it to the main issue, and then close it. It is also advisable to direct users to the original issue.
- It is very difficult to understand and address the issue presented by the author with the information provided.
However, encourage the user to provide more details or reopen the issue if they can reproduce it later.
- Having implemented the same functionality elsewhere. One can close this issue and direct user to the appropriate place.
- Feature requests that are not currently planned or aligned with the project's goals.
-- If the assignee has not responded to comments or feedback in more than two weeks
- The issue can be assigned to someone who is highly motivated to contribute.
- In cases where an issue appears to be spam and is clearly unrelated.
- If the issue is related to an external limitation or dependency and is beyond the control of the project.
From 1cf3b648f643acfa1575d7837b788c02e4d95cca Mon Sep 17 00:00:00 2001
From: Haripriya
Date: Fri, 1 Sep 2023 00:51:23 +0530
Subject: [PATCH 026/229] Update issue-wrangler.md
---
content/en/docs/contribute/participate/issue-wrangler.md | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/content/en/docs/contribute/participate/issue-wrangler.md b/content/en/docs/contribute/participate/issue-wrangler.md
index 0100de6ada76d..1150c838da352 100644
--- a/content/en/docs/contribute/participate/issue-wrangler.md
+++ b/content/en/docs/contribute/participate/issue-wrangler.md
@@ -6,7 +6,7 @@ weight: 20
-In order to reduce the burden on the [PR Wrangler](/docs/contribute/participate/pr-wranglers), formal approvers, and reviewers, members of SIG Docs take week-long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository.
+Alongside the [PR Wrangler](/docs/contribute/participate/pr-wranglers), formal approvers, and reviewers, members of SIG Docs take week-long shifts [triaging and categorising issues](/docs/contribute/review/for-approvers.md/#triage-and-categorize-issues) for the repository.
@@ -58,6 +58,9 @@ Each day in a week-long shift as Issue Wrangler:
# to accept triaging for an issue
/triage accepted
+
+# closing an issue we won't be working on and haven't fixed yet
+/close not-planned
```
### When to close Issues
From 70f29136d26b672e755048290c9586552ef785f2 Mon Sep 17 00:00:00 2001
From: Lucifergene
Date: Wed, 6 Sep 2023 22:16:39 +0530
Subject: [PATCH 027/229] Added Tabs to mention RollingUpdate Deployment
Strategy YAMLs
---
.../workloads/controllers/deployment.md | 99 +++++++++++++++++++
1 file changed, 99 insertions(+)
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index 17b8b9f221b25..9b1e5f065bd06 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -1197,6 +1197,105 @@ rolling update starts, such that the total number of old and new Pods does not e
Pods. Once old Pods have been killed, the new ReplicaSet can be scaled up further, ensuring that the
total number of Pods running at any time during the update is at most 130% of desired Pods.
+Here are some Rolling Update Deployment examples that use `maxUnavailable` and `maxSurge`:
+
+{{< tabs name="tab_with_md" >}}
+{{% tab name="Max Unavailable" %}}
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+  labels:
+    app: nginx
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 1
+```
+
+{{% /tab %}}
+{{% tab name="Max Surge" %}}
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+  labels:
+    app: nginx
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxSurge: 1
+```
+
+{{% /tab %}}
+{{% tab name="Hybrid" %}}
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+  labels:
+    app: nginx
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.14.2
+        ports:
+        - containerPort: 80
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxSurge: 1
+      maxUnavailable: 1
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
### Progress Deadline Seconds
`.spec.progressDeadlineSeconds` is an optional field that specifies the number of seconds you want
From 285b6fa176ee4ae1683cb9136ede11779a3a278c Mon Sep 17 00:00:00 2001
From: pegasas <616672335@qq.com>
Date: Fri, 18 Aug 2023 22:01:50 +0800
Subject: [PATCH 028/229] Document snag with stringData and server-side apply
---
content/en/docs/concepts/configuration/secret.md | 8 ++++++++
.../configmap-secret/managing-secret-using-config-file.md | 8 ++++++++
2 files changed, 16 insertions(+)
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 8d26c463f0bf8..071e0ed8361cc 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -387,6 +387,10 @@ stringData:
password: t0p-Secret # required field for kubernetes.io/basic-auth
```
+{{< note >}}
+The `stringData` field for a Secret does not work well with server-side apply.
+{{< /note >}}
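+
+For context, "server-side apply" here means applying the manifest with the
+`--server-side` flag (the file name below is hypothetical):
+
+```bash
+kubectl apply --server-side -f secret.yaml
+```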
+
The basic authentication Secret type is provided only for convenience.
You can create an `Opaque` type for credentials used for basic authentication.
However, using the defined and public Secret type (`kubernetes.io/basic-auth`) helps other
@@ -545,6 +549,10 @@ stringData:
usage-bootstrap-signing: "true"
```
+{{< note >}}
+The `stringData` field for a Secret does not work well with server-side apply.
+{{< /note >}}
+
## Working with Secrets
### Creating a Secret
diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md
index 7245624bf8349..17696ae16e6c4 100644
--- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md
+++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md
@@ -109,6 +109,10 @@ stringData:
password:
```
+{{< note >}}
+The `stringData` field for a Secret does not work well with server-side apply.
+{{< /note >}}
+
When you retrieve the Secret data, the command returns the encoded values,
and not the plaintext values you provided in `stringData`.
@@ -152,6 +156,10 @@ stringData:
username: administrator
```
+{{< note >}}
+The `stringData` field for a Secret does not work well with server-side apply.
+{{< /note >}}
+
The `Secret` object is created as follows:
```yaml
From d96bb79c5a8fe65cc7b28955f8bc70850487dc83 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sat, 9 Sep 2023 03:06:59 +0300
Subject: [PATCH 029/229] [id] Update configure-pod-configmap.md
---
.../tasks/configure-pod-container/configure-pod-configmap.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
index bfdad56610635..4449862b772bd 100644
--- a/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/id/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -545,6 +545,9 @@ kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valu
```
menghasilkan keluaran pada kontainer `test-container` seperti berikut:
+```shell
+kubectl logs dapi-test-pod
+```
```shell
very charm
From 21652e34e5cbfb8f81a988916ea113934a7c0df1 Mon Sep 17 00:00:00 2001
From: Edith Puclla
Date: Tue, 12 Sep 2023 16:00:19 -0500
Subject: [PATCH 030/229] [es] Add concepts/storage/projected-volumes.md
---
.../concepts/storage/projected-volumes.md | 115 ++++++++++++++++++
...rojected-secret-downwardapi-configmap.yaml | 35 ++++++
...ed-secrets-nondefault-permission-mode.yaml | 27 ++++
.../projected-service-account-token.yaml | 21 ++++
4 files changed, 198 insertions(+)
create mode 100644 content/es/docs/concepts/storage/projected-volumes.md
create mode 100644 content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml
create mode 100644 content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml
create mode 100644 content/es/examples/pods/storage/projected-service-account-token.yaml
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
new file mode 100644
index 0000000000000..b59d1afd465ba
--- /dev/null
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -0,0 +1,115 @@
+---
+reviewers:
+ - ramrodo
+ - raelga
+ - electrocucaracha
+title: Volúmenes proyectados
+content_type: concept
+weight: 21 # just after persistent volumes
+---
+
+
+
+Este documento describe los _projected volumes_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/).
+
+
+
+## Introducción
+
+Un volumen `projected` asigna varias fuentes de volúmenes existentes al mismo directorio.
+
+Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen:
+
+- [`secret`](/docs/concepts/storage/volumes/#secret)
+- [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
+- [`configMap`](/docs/concepts/storage/volumes/#configmap)
+- [`serviceAccountToken`](#serviceaccounttoken)
+
+Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles,
+vea [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) documento de diseño.
+
+### Configuración de ejemplo con un secreto, una downwardAPI, y una configMap {#example-configuration-secret-downwardapi-configmap}
+
+{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}}
+
+### Configuración de ejemplo: secretos con un modo de permiso no predeterminado establecido {#example-configuration-secrets-nondefault-permission-mode}
+
+{{% code_sample file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}}
+
+Cada fuente de volumen proyectada aparece en la especificación bajo `sources`. Los parámetros son casi los mismos con dos excepciones:
+
+- Para los secretos, el campo `secretName` se ha cambiado a `name` para que sea coherente con el nombre de ConfigMap.
+- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. However, Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual.
+
+## volúmenes proyectados de serviceAccountToken {#serviceaccounttoken}
+
+Puede inyectar el token para la [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) actual
+en un Pod en una ruta especificada. Por ejemplo:
+
+{{% code_sample file="pods/storage/projected-service-account-token.yaml" %}}
+
+El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado. Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/).
+
+El campo `audience` contiene la audiencia prevista del
+token. Un destinatario del token debe identificarse con un identificador especificado en la audiencia del token y, de lo contrario, debe rechazar el token. Este campo es opcional y de forma predeterminada es el identificador del servidor API.
+
+El campo `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos). Un administrador
+también puede limitar su valor máximo especificando la opción `--service-account-max-token-expiration`
+para el servidor API. El campo `path` especifica una ruta relativa al punto de montaje del volumen proyectado.
+
+{{< note >}}
+Un contenedor que utiliza una fuente de volumen proyectada como montaje de volumen [`subPath`](/docs/concepts/storage/volumes/#using-subpath)
+no recibirá actualizaciones para esas fuentes de volumen.
+{{< /note >}}
+
+## Interacciones SecurityContext
+
+La [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos.
+
+### Linux
+
+En los pods de Linux que tienen un volumen proyectado y `RunAsUser` configurado en el Pod
+[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context),
+los archivos proyectados tienen el conjunto de propiedad correcto, incluida la propiedad del usuario del contenedor.
+
+Cuando todos los contenedores en un pod tienen el mismo `runAsUser` configurado en su
+[`PodSecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
+or container
+[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1),
+entonces el kubelet garantiza que el contenido del volumen `serviceAccountToken` sea propiedad de ese usuario y que el archivo token tenga su modo de permiso establecido en `0600`.
+
+{{< note >}}
+{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
+agregado a un pod después de su creación _not_ cambia los permisos de volumen que se establecieron cuando se creó el pod.
+
+Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en `0600` porque todos los demás contenedores en el Pod tienen el mismo `runAsUser`, los contenedores efímeros deben usar el mismo `runAsUser` para poder leer el token.
+{{< /note >}}
+
+### Windows
+
+En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows. Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM). Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando. Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual. Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`.
+
+De forma predeterminada, los archivos proyectados tendrán la siguiente propiedad, como se muestra en un archivo de volumen proyectado de ejemplo:
+
+```powershell
+PS C:\> Get-Acl C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt | Format-List
+
+Path : Microsoft.PowerShell.Core\FileSystem::C:\var\run\secrets\kubernetes.io\serviceaccount\..2021_08_31_22_22_18.318230061\ca.crt
+Owner : BUILTIN\Administrators
+Group : NT AUTHORITY\SYSTEM
+Access : NT AUTHORITY\SYSTEM Allow FullControl
+ BUILTIN\Administrators Allow FullControl
+ BUILTIN\Users Allow ReadAndExecute, Synchronize
+Audit :
+Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)
+```
+
+Esto implica que todos los usuarios administradores como `ContainerAdministrator` tendrán acceso de lectura, escritura y ejecución, mientras que los usuarios que no sean administradores tendrán acceso de lectura y ejecución.
+
+{{< note >}}
+En general, se desaconseja otorgar acceso al contenedor al host, ya que puede abrir la puerta a posibles vulnerabilidades de seguridad.
+
+Creating a Windows Pod with `RunAsUser` in it's `SecurityContext` will result in
+the Pod being stuck at `ContainerCreating` forever. So it is advised to not use
+the Linux only `RunAsUser` option with Windows Pods.
+{{< /note >}}
diff --git a/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml b/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml
new file mode 100644
index 0000000000000..453dc08c0c7d9
--- /dev/null
+++ b/content/es/examples/pods/storage/projected-secret-downwardapi-configmap.yaml
@@ -0,0 +1,35 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: volume-test
+spec:
+  containers:
+  - name: container-test
+    image: busybox:1.28
+    volumeMounts:
+    - name: all-in-one
+      mountPath: "/projected-volume"
+      readOnly: true
+  volumes:
+  - name: all-in-one
+    projected:
+      sources:
+      - secret:
+          name: mysecret
+          items:
+            - key: username
+              path: my-group/my-username
+      - downwardAPI:
+          items:
+            - path: "labels"
+              fieldRef:
+                fieldPath: metadata.labels
+            - path: "cpu_limit"
+              resourceFieldRef:
+                containerName: container-test
+                resource: limits.cpu
+      - configMap:
+          name: myconfigmap
+          items:
+            - key: config
+              path: my-group/my-config
diff --git a/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml b/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml
new file mode 100644
index 0000000000000..b921fd93c5833
--- /dev/null
+++ b/content/es/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml
@@ -0,0 +1,27 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: volume-test
+spec:
+  containers:
+  - name: container-test
+    image: busybox:1.28
+    volumeMounts:
+    - name: all-in-one
+      mountPath: "/projected-volume"
+      readOnly: true
+  volumes:
+  - name: all-in-one
+    projected:
+      sources:
+      - secret:
+          name: mysecret
+          items:
+            - key: username
+              path: my-group/my-username
+      - secret:
+          name: mysecret2
+          items:
+            - key: password
+              path: my-group/my-password
+              mode: 511
diff --git a/content/es/examples/pods/storage/projected-service-account-token.yaml b/content/es/examples/pods/storage/projected-service-account-token.yaml
new file mode 100644
index 0000000000000..cc307659a78ef
--- /dev/null
+++ b/content/es/examples/pods/storage/projected-service-account-token.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-token-test
+spec:
+  containers:
+  - name: container-test
+    image: busybox:1.28
+    volumeMounts:
+    - name: token-vol
+      mountPath: "/service-account"
+      readOnly: true
+  serviceAccountName: default
+  volumes:
+  - name: token-vol
+    projected:
+      sources:
+      - serviceAccountToken:
+          audience: api
+          expirationSeconds: 3600
+          path: token
From fe9e053b727d8e35d27af4806ac151123414ff97 Mon Sep 17 00:00:00 2001
From: Edith Puclla
Date: Tue, 12 Sep 2023 16:24:25 -0500
Subject: [PATCH 031/229] [es] Modify concepts/storage/projected-volumes.md
---
.../concepts/storage/projected-volumes.md | 20 +++++++++----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
index b59d1afd465ba..ac1a6935f37a2 100644
--- a/content/es/docs/concepts/storage/projected-volumes.md
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -10,13 +10,13 @@ weight: 21 # just after persistent volumes
-Este documento describe los _projected volumes_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/).
+Este documento describe los _volúmenes proyectados_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/).
## Introducción
-Un volumen `projected` asigna varias fuentes de volúmenes existentes al mismo directorio.
+Un volumen `proyectado` asigna varias fuentes de volúmenes existentes al mismo directorio.
Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen:
@@ -26,7 +26,7 @@ Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen:
- [`serviceAccountToken`](#serviceaccounttoken)
Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles,
-vea [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md) documento de diseño.
+vea el documento de diseño [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md).
### Configuración de ejemplo con un secreto, una downwardAPI, y una configMap {#example-configuration-secret-downwardapi-configmap}
@@ -39,9 +39,9 @@ vea [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-
Cada fuente de volumen proyectada aparece en la especificación bajo `sources`. Los parámetros son casi los mismos con dos excepciones:
- Para los secretos, el campo `secretName` se ha cambiado a `name` para que sea coherente con el nombre de ConfigMap.
-- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. However, Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual.
+- El `defaultMode` solo se puede especificar en el nivel proyectado y no para cada fuente de volumen. Sin embargo, como se ilustra arriba, puede configurar explícitamente el `mode` para cada proyección individual.
-## volúmenes proyectados de serviceAccountToken {#serviceaccounttoken}
+## Volúmenes proyectados de serviceAccountToken {#serviceaccounttoken}
Puede inyectar el token para la [service account](/docs/reference/access-authn-authz/authentication/#service-account-tokens) actual
en un Pod en una ruta especificada. Por ejemplo:
@@ -64,7 +64,7 @@ no recibirá actualizaciones para esas fuentes de volumen.
## Interacciones SecurityContext
-La [proposal](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos.
+La [propuesta](https://git.k8s.io/enhancements/keps/sig-storage/2451-service-account-token-volumes#proposal) para el manejo de permisos de archivos en la mejora del volumen de cuentas de servicio proyectadas introdujo los archivos proyectados que tienen los permisos de propietario correctos establecidos.
### Linux
@@ -74,13 +74,13 @@ los archivos proyectados tienen el conjunto de propiedad correcto, incluida la p
Cuando todos los contenedores en un pod tienen el mismo `runAsUser` configurado en su
[`PodSecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
-or container
+o el contenedor
[`SecurityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1),
entonces el kubelet garantiza que el contenido del volumen `serviceAccountToken` sea propiedad de ese usuario y que el archivo token tenga su modo de permiso establecido en `0600`.
{{< note >}}
{{< glossary_tooltip text="Ephemeral containers" term_id="ephemeral-container" >}}
-agregado a un pod después de su creación _not_ cambia los permisos de volumen que se establecieron cuando se creó el pod.
+agregado a un pod después de su creación _no_ cambia los permisos de volumen que se establecieron cuando se creó el pod.
Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en `0600` porque todos los demás contenedores en el Pod tienen el mismo `runAsUser`, los contenedores efímeros deben usar el mismo `runAsUser` para poder leer el token.
{{< /note >}}
@@ -109,7 +109,5 @@ Esto implica que todos los usuarios administradores como `ContainerAdministrator
{{< note >}}
En general, se desaconseja otorgar acceso al contenedor al host, ya que puede abrir la puerta a posibles vulnerabilidades de seguridad.
-Creating a Windows Pod with `RunAsUser` in it's `SecurityContext` will result in
-the Pod being stuck at `ContainerCreating` forever. So it is advised to not use
-the Linux only `RunAsUser` option with Windows Pods.
+Crear un Pod de Windows con `RunAsUser` en su `SecurityContext` dará como resultado que el Pod quede atascado en `ContainerCreating` para siempre. Por lo tanto, se recomienda no utilizar la opción `RunAsUser` exclusiva de Linux con Windows Pods.
{{< /note >}}
From 30aac26f6bbdc04df3eb52bfb5e31c724ffc8713 Mon Sep 17 00:00:00 2001
From: Ayush Gupta
Date: Tue, 19 Sep 2023 14:30:23 +0530
Subject: [PATCH 032/229] Feature addition in section why do you need
Kubernetes
---
content/en/docs/concepts/overview/_index.md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/content/en/docs/concepts/overview/_index.md b/content/en/docs/concepts/overview/_index.md
index 200b3e2ea337e..12c150c6cafa8 100644
--- a/content/en/docs/concepts/overview/_index.md
+++ b/content/en/docs/concepts/overview/_index.md
@@ -129,6 +129,14 @@ Kubernetes provides you with:
Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens,
and SSH keys. You can deploy and update secrets and application configuration without
rebuilding your container images, and without exposing secrets in your stack configuration.
+* **Batch execution**
+  In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
+* **Horizontal scaling**
+  Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage (see the example after this list).
+* **IPv4/IPv6 dual-stack**
+  Allocation of IPv4 and IPv6 addresses to Pods and Services.
+* **Designed for extensibility**
+  Add features to your Kubernetes cluster without changing upstream source code.
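+
+As a sketch of manual and automatic horizontal scaling (the Deployment name
+`my-app` is hypothetical):
+
+```bash
+# Scale a Deployment to five replicas with a single command.
+kubectl scale deployment/my-app --replicas=5
+
+# Or let Kubernetes scale between 2 and 10 replicas based on CPU usage.
+kubectl autoscale deployment/my-app --min=2 --max=10 --cpu-percent=80
+```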
## What Kubernetes is not
From eb2a2807d6058e3043d5d0cb96820ea288767fb3 Mon Sep 17 00:00:00 2001
From: satyampsoni
Date: Wed, 20 Sep 2023 01:06:31 +0530
Subject: [PATCH 033/229] subscribe to kubeweekly enhanced #42961
---
layouts/index.html | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/layouts/index.html b/layouts/index.html
index 1e14ba92281ea..142743eea3a2e 100644
--- a/layouts/index.html
+++ b/layouts/index.html
@@ -18,13 +18,12 @@
/* Add your own MailChimp form style overrides in your site stylesheet or in this style block.
We recommend moving this block and the preceding CSS link to the HEAD of your HTML file. */
-
@@ -1079,4 +1105,3 @@ If set, write the default configuration values to this file and exit.
-
From c9c9a3349fd2f8083a18bb27e7f05af46d38695e Mon Sep 17 00:00:00 2001
From: Edith Puclla <58795858+edithturn@users.noreply.github.com>
Date: Wed, 20 Sep 2023 14:56:30 -0500
Subject: [PATCH 035/229] Update
content/es/docs/concepts/storage/projected-volumes.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Co-authored-by: Rodolfo Martínez Vega
---
content/es/docs/concepts/storage/projected-volumes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
index ac1a6935f37a2..478beb232a1bc 100644
--- a/content/es/docs/concepts/storage/projected-volumes.md
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -1,7 +1,7 @@
---
reviewers:
- ramrodo
- - raelga
+ - krol3
- electrocucaracha
title: Volúmenes proyectados
content_type: concept
From d7cc2d810a113c5b5170286cb8ae961e7ad08b67 Mon Sep 17 00:00:00 2001
From: Edith Puclla <58795858+edithturn@users.noreply.github.com>
Date: Wed, 20 Sep 2023 14:57:41 -0500
Subject: [PATCH 036/229] Update
content/es/docs/concepts/storage/projected-volumes.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Co-authored-by: Rodolfo Martínez Vega
---
content/es/docs/concepts/storage/projected-volumes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
index 478beb232a1bc..d7014c3a8251b 100644
--- a/content/es/docs/concepts/storage/projected-volumes.md
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -10,7 +10,7 @@ weight: 21 # just after persistent volumes
-Este documento describe los _volúmenes proyectados_ en Kubernetes. Necesita estar familiarizado con [volumes](/docs/concepts/storage/volumes/).
+Este documento describe los _volúmenes proyectados_ en Kubernetes. Necesita estar familiarizado con [volúmenes](/es/docs/concepts/storage/volumes/).
From adf78b96f0a1e0b9577d5578483d011ca600079b Mon Sep 17 00:00:00 2001
From: Edith Puclla <58795858+edithturn@users.noreply.github.com>
Date: Wed, 20 Sep 2023 15:01:50 -0500
Subject: [PATCH 037/229] Update
content/es/docs/concepts/storage/projected-volumes.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Gracias por esto!
Co-authored-by: Rodolfo Martínez Vega
---
content/es/docs/concepts/storage/projected-volumes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
index d7014c3a8251b..e496094a1d90c 100644
--- a/content/es/docs/concepts/storage/projected-volumes.md
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -28,7 +28,7 @@ Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen:
Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles,
vea el documento de diseño [all-in-one volume](https://git.k8s.io/design-proposals-archive/node/all-in-one-volume.md).
-### Configuración de ejemplo con un secreto, una downwardAPI, y una configMap {#example-configuration-secret-downwardapi-configmap}
+### Configuración de ejemplo con un secreto, una downwardAPI y una configMap {#example-configuration-secret-downwardapi-configmap}
{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}}
From cab22c412d92b79914bb3709c3a115276dbf2502 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Wed, 20 Sep 2023 18:18:47 +0100
Subject: [PATCH 038/229] Revise tutorial introduction
---
.../stateful-application/basic-stateful-set.md | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
index eefaaa18e0019..2d9354e61e58a 100644
--- a/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
+++ b/content/en/docs/tutorials/stateful-application/basic-stateful-set.md
@@ -27,14 +27,25 @@ following Kubernetes concepts:
* [Headless Services](/docs/concepts/services-networking/service/#headless-services)
* [PersistentVolumes](/docs/concepts/storage/persistent-volumes/)
* [PersistentVolume Provisioning](https://github.com/kubernetes/examples/tree/master/staging/persistent-volume-provisioning/)
-* [StatefulSets](/docs/concepts/workloads/controllers/statefulset/)
* The [kubectl](/docs/reference/kubectl/kubectl/) command line tool
+{{% include "task-tutorial-prereqs.md" %}}
+You should configure `kubectl` to use a context that uses the `default`
+namespace.
+If you are using an existing cluster, make sure that it's OK to use that
+cluster's default namespace to practice. Ideally, practice in a cluster
+that doesn't run any real workloads.
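+
+For example, you can point the current context at the `default` namespace
+(a quick sketch; adjust it to however your contexts are set up):
+
+```bash
+# Make kubectl use the default namespace in the current context.
+kubectl config set-context --current --namespace=default
+```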
+
+It's also useful to read the concept page about [StatefulSets](/docs/concepts/workloads/controllers/statefulset/).
+
{{< note >}}
This tutorial assumes that your cluster is configured to dynamically provision
-PersistentVolumes. If your cluster is not configured to do so, you
+PersistentVolumes. You'll also need to have a [default StorageClass](/docs/concepts/storage/storage-classes/#default-storageclass).
+If your cluster is not configured to provision storage dynamically, you
will have to manually provision two 1 GiB volumes prior to starting this
-tutorial.
+tutorial and
+set up your cluster so that those PersistentVolumes map to the
+PersistentVolumeClaim templates that the StatefulSet defines.
{{< /note >}}
## {{% heading "objectives" %}}
From 648e2ba33385209dd777cb66bdf02893840a41cb Mon Sep 17 00:00:00 2001
From: Richa Banker
Date: Mon, 24 Jul 2023 16:44:55 -0700
Subject: [PATCH 039/229] Add an entry in glossary for GVR
Co-authored-by: Tim Bannister
---
.../glossary/group-version-resource.md | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
create mode 100644 content/en/docs/reference/glossary/group-version-resource.md
diff --git a/content/en/docs/reference/glossary/group-version-resource.md b/content/en/docs/reference/glossary/group-version-resource.md
new file mode 100644
index 0000000000000..cdd208fd5ed8e
--- /dev/null
+++ b/content/en/docs/reference/glossary/group-version-resource.md
@@ -0,0 +1,18 @@
+---
+title: Group Version Resource
+id: gvr
+date: 2023-07-24
+short_description: >
+ The API group, API version and name of a Kubernetes API.
+
+aka: ["GVR"]
+tags:
+- architecture
+---
+A means of representing a unique Kubernetes API resource.
+
+
+
+Group Version Resources (GVRs) specify the API group, API version, and resource
+(the name for the object kind as it appears in the URI) associated with accessing
+a particular object in Kubernetes.
+GVRs let you define and distinguish different Kubernetes objects, and specify a way
+of accessing objects that is stable even as APIs change.
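+
+As an illustrative sketch, the GVR maps directly onto API URLs; for Deployments
+the group is `apps`, the version `v1`, and the resource `deployments`:
+
+```bash
+# List Deployments by requesting the API path built from the GVR.
+kubectl get --raw /apis/apps/v1/deployments | head
+```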
\ No newline at end of file
From 7dcb1c4cb5f4576863847840e122105414a6e181 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sat, 23 Sep 2023 02:32:37 +0300
Subject: [PATCH 040/229] [ja] remove "O=system:masters" from
"kube-apiserver-etcd-client".md
---
content/ja/docs/setup/best-practices/certificates.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/setup/best-practices/certificates.md b/content/ja/docs/setup/best-practices/certificates.md
index a782b52fee1e9..a499631875a05 100644
--- a/content/ja/docs/setup/best-practices/certificates.md
+++ b/content/ja/docs/setup/best-practices/certificates.md
@@ -67,7 +67,7 @@ CAの秘密鍵をクラスターにコピーしたくない場合、自身で全
| kube-etcd | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` |
| kube-etcd-peer | etcd-ca | | server, client | ``, ``, `localhost`, `127.0.0.1` |
| kube-etcd-healthcheck-client | etcd-ca | | client | |
-| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
+| kube-apiserver-etcd-client | etcd-ca | | client | |
| kube-apiserver | kubernetes-ca | | server | ``, ``, ``, `[1]` |
| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
| front-proxy-client | kubernetes-front-proxy-ca | | client | |
From da06ff06c8ff8b87b3315d524b906e71455ccc65 Mon Sep 17 00:00:00 2001
From: MeenuyD
Date: Sun, 24 Sep 2023 20:54:02 +0530
Subject: [PATCH 041/229] Fix: secret missing in docker-registry secret command
in kubectl-command docs
---
static/docs/reference/generated/kubectl/kubectl-commands.html | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/static/docs/reference/generated/kubectl/kubectl-commands.html b/static/docs/reference/generated/kubectl/kubectl-commands.html
index 4fff80b171a7d..cd4388a1f1171 100644
--- a/static/docs/reference/generated/kubectl/kubectl-commands.html
+++ b/static/docs/reference/generated/kubectl/kubectl-commands.html
@@ -1503,7 +1503,8 @@
secret docker-registry
nodes to pull images on your behalf, they must have the credentials. You can provide this information
by creating a dockercfg secret and attaching it to your service account.
From b4412b123655b91c75c790d1fd9950955e2ce548 Mon Sep 17 00:00:00 2001
From: niranjandarshann
Date: Mon, 25 Sep 2023 10:52:03 +0530
Subject: [PATCH 042/229] Added glossary of CSI
---
content/en/docs/concepts/storage/ephemeral-volumes.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md
index 77844874348d7..f92f544768fd0 100644
--- a/content/en/docs/concepts/storage/ephemeral-volumes.md
+++ b/content/en/docs/concepts/storage/ephemeral-volumes.md
@@ -47,8 +47,7 @@ different purposes:
[secret](/docs/concepts/storage/volumes/#secret): inject different
kinds of Kubernetes data into a Pod
- [CSI ephemeral volumes](#csi-ephemeral-volumes):
- similar to the previous volume kinds, but provided by special
- [CSI drivers](https://github.com/container-storage-interface/spec/blob/master/spec.md)
+ similar to the previous volume kinds, but provided by special {{< glossary_tooltip text="CSI" term_id="csi" >}} drivers
which specifically [support this feature](https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html)
- [generic ephemeral volumes](#generic-ephemeral-volumes), which
can be provided by all storage drivers that also support persistent volumes
From 27473c3381f5bc56f4ccca7e2dcd20d2cb63ac33 Mon Sep 17 00:00:00 2001
From: Mohammed Affan
Date: Thu, 24 Aug 2023 15:52:36 +0530
Subject: [PATCH 043/229] Add eviction thresholds parameters
Update content/en/docs/tasks/administer-cluster/kubelet-config-file.md
Co-authored-by: Qiming Teng
---
.../administer-cluster/kubelet-config-file.md | 23 ++++++++++++++-----
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md
index 506dd2723e00b..815f9f70c3aec 100644
--- a/content/en/docs/tasks/administer-cluster/kubelet-config-file.md
+++ b/content/en/docs/tasks/administer-cluster/kubelet-config-file.md
@@ -35,14 +35,22 @@ address: "192.168.0.8"
port: 20250
serializeImagePulls: false
evictionHard:
-  memory.available: "200Mi"
+  memory.available: "100Mi"
+  nodefs.available: "10%"
+  nodefs.inodesFree: "5%"
+  imagefs.available: "15%"
```
-In the example, the kubelet is configured to serve on IP address 192.168.0.8 and port 20250, pull images in parallel,
-and evict Pods when available memory drops below 200Mi. Since only one of the four evictionHard thresholds is configured,
-other evictionHard thresholds are reset to 0 from their built-in defaults.
-All other kubelet configuration values are left at their built-in defaults, unless overridden
-by flags. Command line flags which target the same value as a config file will override that value.
+In this example, the kubelet is configured with the following settings:
+
+1. `address`: The kubelet will serve on IP address `192.168.0.8`.
+2. `port`: The kubelet will serve on port `20250`.
+3. `serializeImagePulls`: Image pulls will be done in parallel.
+4. `evictionHard`: The kubelet will evict Pods under one of the following conditions:
+   - When the node's available memory drops below 100MiB.
+   - When the node's main filesystem's available space is less than 10%.
+   - When the image filesystem's available space is less than 15%.
+   - When more than 95% of the node's main filesystem's inodes are in use.
{{< note >}}
In the example, by changing the default value of only one parameter for
@@ -51,6 +59,9 @@ will be set to zero. In order to provide custom values, you should provide all
the threshold values respectively.
{{< /note >}}
+The `imagefs` is an optional filesystem that container runtimes use to store container
+images and container writable layers.
+
## Start a kubelet process configured via the config file
{{< note >}}
From f2635821d773ef56f433c6938a8642ec74471a7b Mon Sep 17 00:00:00 2001
From: kujiraitakahiro
Date: Mon, 25 Sep 2023 20:53:03 +0900
Subject: [PATCH 044/229] Update content/ja/docs/reference/glossary/kubeadm.md
add "ja" link page
Update content/ja/docs/reference/glossary/addons.md
add "ja" link page
Update content/ja/docs/reference/glossary/addons.md
add "ja" link page
Create kubeadm.md
Update addons.md
Create addons.md
Co-Authored-By: atoato88
---
content/ja/docs/reference/glossary/addons.md | 16 ++++++++++++++++
content/ja/docs/reference/glossary/kubeadm.md | 18 ++++++++++++++++++
2 files changed, 34 insertions(+)
create mode 100644 content/ja/docs/reference/glossary/addons.md
create mode 100644 content/ja/docs/reference/glossary/kubeadm.md
diff --git a/content/ja/docs/reference/glossary/addons.md b/content/ja/docs/reference/glossary/addons.md
new file mode 100644
index 0000000000000..199aa23e6b1b1
--- /dev/null
+++ b/content/ja/docs/reference/glossary/addons.md
@@ -0,0 +1,16 @@
+---
+title: Add-ons
+id: addons
+date: 2019-12-15
+full_link: /ja/docs/concepts/cluster-administration/addons/
+short_description: >
+ Kubernetesの機能を拡張するリソース。
+
+aka:
+tags:
+- tool
+---
+ Kubernetesの機能を拡張するリソース。
+
+
+[Installing addons](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。
diff --git a/content/ja/docs/reference/glossary/kubeadm.md b/content/ja/docs/reference/glossary/kubeadm.md
new file mode 100644
index 0000000000000..f84150c9790e2
--- /dev/null
+++ b/content/ja/docs/reference/glossary/kubeadm.md
@@ -0,0 +1,18 @@
+---
+title: Kubeadm
+id: kubeadm
+date: 2018-04-12
+full_link: /ja/docs/reference/setup-tools/kubeadm/
+short_description: >
+ Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。
+
+aka:
+tags:
+- tool
+- operation
+---
+ Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。
+
+
+
+kubeadmを使用して、コントロールプレーンと{{< glossary_tooltip text="ワーカーノード" term_id="node" >}}コンポーネントの両方をインストールできます。
From 0f1a7a1b7b155a34bedebc64a0cd38244a06674d Mon Sep 17 00:00:00 2001
From: Sascha Grunert
Date: Tue, 15 Aug 2023 12:11:14 +0200
Subject: [PATCH 045/229] Fix `config.json` interpretation
As outlined in https://github.com/kubernetes/kubernetes/issues/119941,
the implementation is more specific than a regular glob match. Updating
the docs to reflect that.
Signed-off-by: Sascha Grunert
---
content/en/docs/concepts/containers/images.md | 42 ++++++++-----------
1 file changed, 17 insertions(+), 25 deletions(-)
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index b01b2fd112eef..b4b837ae32fc6 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -265,38 +265,26 @@ See [Configure a kubelet image credential provider](/docs/tasks/administer-clust
The interpretation of `config.json` varies between the original Docker
implementation and the Kubernetes interpretation. In Docker, the `auths` keys
can only specify root URLs, whereas Kubernetes allows glob URLs as well as
-prefix-matched paths. This means that a `config.json` like this is valid:
+prefix-matched paths. The only limitation is that glob patterns (`*`) have to
+include the dot (`.`) for each subdomain. The number of matched subdomains has
+to equal the number of glob patterns (`*.`), for example:
+
+- `*.kubernetes.io` will *not* match `kubernetes.io`, but `abc.kubernetes.io`
+- `*.*.kubernetes.io` will *not* match `abc.kubernetes.io`, but `abc.def.kubernetes.io`
+- `prefix.*.io` will match `prefix.kubernetes.io`
+- `*-good.kubernetes.io` will match `prefix-good.kubernetes.io`
+
+This means that a `config.json` like this is valid:
```json
{
"auths": {
- "*my-registry.io/images": {
- "auth": "…"
- }
+ "my-registry.io/images": { "auth": "…" },
+ "*.my-registry.io/images": { "auth": "…" }
}
}
```
-The root URL (`*my-registry.io`) is matched by using the following syntax:
-
-```
-pattern:
- { term }
-
-term:
- '*' matches any sequence of non-Separator characters
- '?' matches any single non-Separator character
- '[' [ '^' ] { character-range } ']'
- character class (must be non-empty)
- c matches character c (c != '*', '?', '\\', '[')
- '\\' c matches character c
-
-character-range:
- c matches character c (c != '\\', '-', ']')
- '\\' c matches character c
- lo '-' hi matches character c for lo <= c <= hi
-```
-
Image pull operations would now pass the credentials to the CRI container
runtime for every valid pattern. For example, the following container image names
would match successfully:
@@ -305,10 +293,14 @@ would match successfully:
- `my-registry.io/images/my-image`
- `my-registry.io/images/another-image`
- `sub.my-registry.io/images/my-image`
+
+But not:
+
- `a.sub.my-registry.io/images/my-image`
+- `a.b.sub.my-registry.io/images/my-image`
The kubelet performs image pulls sequentially for every found credential. This
-means, that multiple entries in `config.json` are possible, too:
+means that multiple entries in `config.json` for different paths are possible, too:
```json
{
From 0ce4025b7071d6308a249a66d22225562f9a1300 Mon Sep 17 00:00:00 2001
From: Michael
Date: Thu, 28 Sep 2023 22:41:29 +0800
Subject: [PATCH 046/229] Revert SVG images and fix typos in memory qos cgroup
v2 blog
---
.../container-memory-high-best-effort.svg | 480 +++++-
.../container-memory-high-limit.svg | 1296 ++++++++++++---
.../container-memory-high-no-limits.svg | 1152 ++++++++++---
.../container-memory-high.svg | 1445 ++++++++++++++---
.../2023-05-05-memory-qos-cgroups-v2/index.md | 27 +-
5 files changed, 3615 insertions(+), 785 deletions(-)
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg
index cf9283885855e..e35b2f39509bb 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-best-effort.svg
@@ -1,87 +1,395 @@
-
+
-
\ No newline at end of file
+
+
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg
index 3a545f20dd85f..a2ba00c58fd4e 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-limit.svg
@@ -1,226 +1,1072 @@
-
+
-
\ No newline at end of file
+
+
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg
index 845f5d0d07bb2..57b207b80a0be 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high-no-limits.svg
@@ -1,203 +1,951 @@
-
+
-
\ No newline at end of file
+
+
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg
index 02357ef901582..4ba0b15957a28 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/container-memory-high.svg
@@ -1,252 +1,1195 @@
-
+
-
\ No newline at end of file
+
+
diff --git a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md
index 26b1d626fd171..ff72afe083322 100644
--- a/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md
+++ b/content/en/blog/_posts/2023-05-05-memory-qos-cgroups-v2/index.md
@@ -128,18 +128,14 @@ enforces the limit to prevent the container from using more than the configured
resource limit. If a process in a container tries to consume more than the
specified limit, the kernel terminates one or more processes with an Out of Memory (OOM) error.
-```formula
-memory.max = pod.spec.containers[i].resources.limits[memory]
-```
+{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-max.svg" title="memory.max maps to limits.memory" alt="memory.max maps to limits.memory" >}}
`memory.min` is mapped to `requests.memory`, which results in reservation of memory resources
that should never be reclaimed by the kernel. This is how Memory QoS ensures the availability of
memory for Kubernetes pods. If there's no unprotected reclaimable memory available, the OOM
killer is invoked to make more memory available.
-```formula
-memory.min = pod.spec.containers[i].resources.requests[memory]
-```
+{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-min.svg" title="memory.min maps to requests.memory" alt="memory.min maps to requests.memory" >}}
For memory protection, in addition to the original way of limiting memory usage, Memory QoS
throttles a workload approaching its memory limit, ensuring that the system is not overwhelmed
@@ -149,10 +145,7 @@ the KubeletConfiguration when you enable MemoryQoS feature. It is set to 0.9 by
`requests.memory` and `limits.memory` as in the formula below, and rounding down the
value to the nearest page size:
-```formula
-memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor *
-{(pod.spec.containers[i].resources.limits[memory] or NodeAllocatableMemory) - pod.spec.containers[i].resources.requests[memory]}
-```
+{{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high.svg" title="memory.high formula" alt="memory.high formula" >}}
{{< note >}}
If a container has no memory limit specified, node allocatable memory is substituted for `limits.memory`.
@@ -256,26 +249,18 @@ as per QOS classes:
* When requests.memory and limits.memory are set, the formula is used as-is:
- ```formula
- memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor *
- {(pod.spec.containers[i].resources.limits[memory]) - pod.spec.containers[i].resources.requests[memory]}
- ```
+ {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-limit.svg" title="memory.high when requests and limits are set" alt="memory.high when requests and limits are set" >}}
* When requests.memory is set and limits.memory is not set, node allocatable
memory is substituted for limits.memory in the formula:
- ```formula
- memory.high = pod.spec.containers[i].resources.requests[memory] + MemoryThrottlingFactor *
- {(NodeAllocatableMemory) - pod.spec.containers[i].resources.requests[memory]}
- ```
+ {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-no-limits.svg" title="memory.high when requests and limits are not set" alt="memory.high when requests and limits are not set" >}}
1. **BestEffort** Pods, by their QoS definition, do not require any memory or CPU limits or requests.
For this case, Kubernetes sets requests.memory = 0 and substitutes node
allocatable memory for limits.memory in the formula:
- ```formula
- memory.high = MemoryThrottlingFactor * NodeAllocatableMemory
- ```
+ {{< figure src="/blog/2023/05/05/qos-memory-resources/container-memory-high-best-effort.svg" title="memory.high for BestEffort Pod" alt="memory.high for BestEffort Pod" >}}
**Summary**: Only Pods in Burstable and BestEffort QoS classes will set `memory.high`.
Guaranteed QoS pods do not set `memory.high` as their memory is guaranteed.
From 053b689f63298dfa96cd17e1acb90ce1aaee7b2c Mon Sep 17 00:00:00 2001
From: Akihito INOH
Date: Fri, 29 Sep 2023 06:32:43 +0900
Subject: [PATCH 047/229] Update ja glossary about secret
This commit updates the glossary entry about Secret in the ja content.
The current content is a bit old, which is why I created this update.
---
content/ja/docs/reference/glossary/secret.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/content/ja/docs/reference/glossary/secret.md b/content/ja/docs/reference/glossary/secret.md
index 3324279bd65f9..8f7196f0c6d9c 100644
--- a/content/ja/docs/reference/glossary/secret.md
+++ b/content/ja/docs/reference/glossary/secret.md
@@ -15,4 +15,6 @@ tags:
-機密情報の取り扱い方法を細かく制御することができ、保存時には[暗号化](/ja/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するなど、誤って公開してしまうリスクを減らすことができます。{{< glossary_tooltip text="Pod" term_id="pod" >}}は、ボリュームマウントされたファイルとして、またはPodのイメージをPullするkubeletによって、Secretを参照します。Secretは機密情報を扱うのに最適で、機密でない情報には[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)が適しています。
+Secretは、機密情報の使用方法をより管理しやすくし、偶発的な漏洩のリスクを減らすことができます。Secretの値はbase64文字列としてエンコードされ、デフォルトでは暗号化されずに保存されますが、[保存時に暗号化](/docs/tasks/administer-cluster/encrypt-data/#ensure-all-secrets-are-encrypted)するように設定することもできます。
+
+{{< glossary_tooltip text="Pod" term_id="pod" >}}は、ボリュームマウントや環境変数など、さまざまな方法でSecretを参照できます。Secretは機密データ用に設計されており、[ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/)は非機密データ用に設計されています。
\ No newline at end of file
From f8f5a5c7a67dee2eab29e52c63a6453cb93ee8d7 Mon Sep 17 00:00:00 2001
From: Michael
Date: Fri, 29 Sep 2023 11:34:06 +0800
Subject: [PATCH 048/229] [zh] Sync /concepts/configuration/secret.md
---
.../docs/concepts/configuration/secret.md | 337 ++++++++++--------
1 file changed, 185 insertions(+), 152 deletions(-)
diff --git a/content/zh-cn/docs/concepts/configuration/secret.md b/content/zh-cn/docs/concepts/configuration/secret.md
index bcace1d3f74ac..a93cd9f2b9d0b 100644
--- a/content/zh-cn/docs/concepts/configuration/secret.md
+++ b/content/zh-cn/docs/concepts/configuration/secret.md
@@ -330,8 +330,8 @@ Kubernetes 并不对类型的名称作任何限制。不过,如果你要使用
则你必须满足为该类型所定义的所有要求。
如果你要定义一种公开使用的 Secret 类型,请遵守 Secret 类型的约定和结构,
@@ -339,19 +339,19 @@ by a `/`. For example: `cloud-hosting.example.net/cloud-api-credentials`.
例如:`cloud-hosting.example.net/cloud-api-credentials`。
### Opaque Secret
-当 Secret 配置文件中未作显式设定时,默认的 Secret 类型是 `Opaque`。
-当你使用 `kubectl` 来创建一个 Secret 时,你会使用 `generic`
-子命令来标明要创建的是一个 `Opaque` 类型 Secret。
-例如,下面的命令会创建一个空的 `Opaque` 类型 Secret 对象:
+当你未在 Secret 清单中显式指定类型时,默认的 Secret 类型是 `Opaque`。
+当你使用 `kubectl` 来创建一个 Secret 时,你必须使用 `generic`
+子命令来标明要创建的是一个 `Opaque` 类型的 Secret。
+例如,下面的命令会创建一个空的 `Opaque` 类型的 Secret:
```shell
kubectl create secret generic empty-secret
@@ -361,7 +361,7 @@ kubectl get secret empty-secret
-输出类似于
+输出类似于:
```
NAME TYPE DATA AGE
@@ -376,89 +376,87 @@ In this case, `0` means you have created an empty Secret.
在这个例子中,`0` 意味着你刚刚创建了一个空的 Secret。
-### 服务账号令牌 Secret {#service-account-token-secrets}
-
-类型为 `kubernetes.io/service-account-token` 的 Secret
-用来存放标识某{{< glossary_tooltip text="服务账号" term_id="service-account" >}}的令牌凭据。
+{{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}}. This
+is a legacy mechanism that provides long-lived ServiceAccount credentials to
+Pods.
+-->
+### ServiceAccount 令牌 Secret {#service-account-token-secrets}
+
+类型为 `kubernetes.io/service-account-token` 的 Secret 用来存放标识某
+{{< glossary_tooltip text="ServiceAccount" term_id="service-account" >}} 的令牌凭据。
+这是为 Pod 提供长期有效 ServiceAccount 凭据的传统机制。
+
+
+在 Kubernetes v1.22 及更高版本中,推荐的方法是通过使用
+[`TokenRequest`](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) API
+来获取短期自动轮换的 ServiceAccount 令牌。你可以使用以下方法获取这些短期令牌:
+
+
+- 直接调用 `TokenRequest` API,或者使用像 `kubectl` 这样的 API 客户端。
+ 例如,你可以使用
+  [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-) 命令(见列表后面的示例)。
+- 在 Pod 清单中请求使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)挂载的令牌。
+ Kubernetes 会创建令牌并将其挂载到 Pod 中。
+ 当挂载令牌的 Pod 被删除时,此令牌会自动失效。
+ 更多细节参阅[启动使用服务账号令牌投射的 Pod](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#launch-a-pod-using-service-account-token-projection)。
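+
+例如(这里的 ServiceAccount 名称 `my-serviceaccount` 仅为示例,假设它已存在),
+下面的命令会为其签发一个短期令牌:
+
+```shell
+kubectl create token my-serviceaccount
+```
+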
{{< note >}}
-Kubernetes 在 v1.22 版本之前都会自动创建用来访问 Kubernetes API 的凭据。
-这一老的机制是基于创建可被挂载到运行中 Pod 内的令牌 Secret 来实现的。
-在最近的版本中,包括 Kubernetes v{{< skew currentVersion >}} 中,API 凭据是直接通过
-[TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
-API 来获得的,这一凭据会使用[投射卷](/zh-cn/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume)挂载到
-Pod 中。使用这种方式获得的令牌有确定的生命期,并且在挂载它们的 Pod 被删除时自动作废。
-
-
-你仍然可以[手动创建](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token)
-服务账号令牌。例如,当你需要一个永远都不过期的令牌时。
-不过,仍然建议使用 [TokenRequest](/zh-cn/docs/reference/kubernetes-api/authentication-resources/token-request-v1/)
-子资源来获得访问 API 服务器的令牌。
-你可以使用 [`kubectl create token`](/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-)
-命令调用 `TokenRequest` API 获得令牌。
-{{< /note >}}
-
-
只有在你无法使用 `TokenRequest` API 来获取令牌,
并且你能够接受因为将永不过期的令牌凭据写入到可读取的 API 对象而带来的安全风险时,
-才应该创建服务账号令牌 Secret 对象。
+才应该创建 ServiceAccount 令牌 Secret。
+更多细节参阅[为 ServiceAccount 手动创建长期有效的 API 令牌](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token)。
+{{< /note >}}
使用这种 Secret 类型时,你需要确保对象的注解 `kubernetes.io/service-account-name`
-被设置为某个已有的服务账号名称。
-如果你同时负责 ServiceAccount 和 Secret 对象的创建,应该先创建 ServiceAccount 对象。
+被设置为某个已有的 ServiceAccount 名称。
+如果你同时创建 ServiceAccount 和 Secret 对象,应该先创建 ServiceAccount 对象。
-当 Secret 对象被创建之后,某个 Kubernetes{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写
-Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解以及 `data` 字段中的
-`token` 键值,使之包含实际的令牌内容。
+当 Secret 对象被创建之后,某个 Kubernetes
+{{< glossary_tooltip text="控制器" term_id="controller" >}}会填写
+Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解和
+`data` 字段中的 `token` 键值(该键包含一个身份认证令牌)。
-下面的配置实例声明了一个服务账号令牌 Secret:
+下面的配置实例声明了一个 ServiceAccount 令牌 Secret:
-参考 [ServiceAccount](/zh-cn/docs/tasks/configure-pod-container/configure-service-account/)
-文档了解服务账号的工作原理。你也可以查看
+参考 [ServiceAccount](/zh-cn/docs/concepts/security/service-accounts/)
+文档了解 ServiceAccount 的工作原理。你也可以查看
[`Pod`](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core)
资源中的 `automountServiceAccountToken` 和 `serviceAccountName` 字段文档,
-进一步了解从 Pod 中引用服务账号凭据。
+进一步了解从 Pod 中引用 ServiceAccount 凭据。
-`kubernetes.io/dockercfg` 是一种保留类型,用来存放 `~/.dockercfg` 文件的序列化形式。
-该文件是配置 Docker 命令行的一种老旧形式。使用此 Secret 类型时,你需要确保
-Secret 的 `data` 字段中包含名为 `.dockercfg` 的主键,其对应键值是用 base64
-编码的某 `~/.dockercfg` 文件的内容。
+- `kubernetes.io/dockercfg`:存放 `~/.dockercfg` 文件的序列化形式,它是配置 Docker
+ 命令行的一种老旧形式。Secret 的 `data` 字段包含名为 `.dockercfg` 的主键,
+ 其值是用 base64 编码的某 `~/.dockercfg` 文件的内容。
+- `kubernetes.io/dockerconfigjson`:存放 JSON 数据的序列化形式,
+ 该 JSON 也遵从 `~/.docker/config.json` 文件的格式规则,而后者是 `~/.dockercfg`
+ 的新版本格式。使用此 Secret 类型时,Secret 对象的 `data` 字段必须包含
+ `.dockerconfigjson` 键,其键值为 base64 编码的字符串包含 `~/.docker/config.json`
+ 文件的内容。
-类型 `kubernetes.io/dockerconfigjson` 被设计用来保存 JSON 数据的序列化形式,
-该 JSON 也遵从 `~/.docker/config.json` 文件的格式规则,而后者是 `~/.dockercfg`
-的新版本格式。使用此 Secret 类型时,Secret 对象的 `data` 字段必须包含
-`.dockerconfigjson` 键,其键值为 base64 编码的字符串包含 `~/.docker/config.json`
-文件的内容。
-
下面是一个 `kubernetes.io/dockercfg` 类型 Secret 的示例:
```yaml
@@ -570,20 +560,20 @@ If you do not want to perform the base64 encoding, you can choose to use the
{{< /note >}}
-当你使用清单文件来创建这两类 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键,
+当你使用清单文件通过 Docker 配置来创建 Secret 时,API 服务器会检查 `data` 字段中是否存在所期望的主键,
并且验证其中所提供的键值是否是合法的 JSON 数据。
不过,API 服务器不会检查 JSON 数据本身是否是一个合法的 Docker 配置文件内容。
-当你没有 Docker 配置文件,或者你想使用 `kubectl` 创建一个 Secret
-来访问容器仓库时,你可以这样做:
+当你没有 Docker 配置文件时,你还可以使用 `kubectl`
+创建一个 Secret 来访问容器仓库:
```shell
kubectl create secret docker-registry secret-tiger-docker \
@@ -594,22 +584,24 @@ kubectl create secret docker-registry secret-tiger-docker \
```
-上面的命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。
-如果你对 `.data.dockerconfigjson` 内容进行转储并执行 base64 解码:
+此命令创建一个类型为 `kubernetes.io/dockerconfigjson` 的 Secret。
+
+从这个新的 Secret 中获取 `.data.dockerconfigjson` 字段并执行数据解码:
```shell
kubectl get secret secret-tiger-docker -o jsonpath='{.data.*}' | base64 -d
```
-那么输出等价于这个 JSON 文档(这也是一个有效的 Docker 配置文件):
+输出等价于以下 JSON 文档(这也是一个有效的 Docker 配置文件):
```json
{
@@ -657,16 +649,29 @@ Secret must contain one of the following two keys:
- `password`: 用于身份认证的密码或令牌。
以上两个键的键值都是 base64 编码的字符串。
-当然你也可以在创建 Secret 时使用 `stringData` 字段来提供明文形式的内容。
+当然你也可以在 Secret 清单中使用 `stringData` 字段来提供明文形式的内容。
+
以下清单是基本身份验证 Secret 的示例:
+
```yaml
apiVersion: v1
kind: Secret
@@ -693,7 +698,7 @@ The Kubernetes API verifies that the required keys are set for a Secret of this
API 服务器会检查 Secret 配置中是否提供了所需要的主键。
```yaml
apiVersion: v1
kind: Secret
@@ -724,18 +742,18 @@ data:
```
-提供 SSH 身份认证类型的 Secret 仅仅是出于用户方便性考虑。
-你也可以使用 `Opaque` 类型来保存用于 SSH 身份认证的凭据。
+提供 SSH 身份认证类型的 Secret 仅仅是出于方便性考虑。
+你可以使用 `Opaque` 类型来保存用于 SSH 身份认证的凭据。
不过,使用预定义的、公开的 Secret 类型(`kubernetes.io/ssh-auth`)
有助于其他人理解你的 Secret 的用途,也可以就其中包含的主键名形成约定。
-API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。
+Kubernetes API 会验证这种类型的 Secret 中是否设定了所需的主键。
{{< caution >}}
### TLS Secret
-Kubernetes 提供一种内置的 `kubernetes.io/tls` Secret 类型,用来存放 TLS
-场合通常要使用的证书及其相关密钥。
+`kubernetes.io/tls` Secret 类型用来存放 TLS 场合通常要使用的证书及其相关密钥。
+
TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-networking/ingress/)
资源配置传输过程中的数据加密,不过也可以用于其他资源或者直接在负载中使用。
当使用此类型的 Secret 时,Secret 配置中的 `data` (或 `stringData`)字段必须包含
@@ -779,6 +799,23 @@ TLS Secret 的一种典型用法是为 [Ingress](/zh-cn/docs/concepts/services-n
下面的 YAML 包含一个 TLS Secret 的配置示例:
+
```yaml
apiVersion: v1
kind: Secret
@@ -796,21 +833,20 @@ stringData:
```
-提供 TLS 类型的 Secret 仅仅是出于用户方便性考虑。
-你也可以使用 `Opaque` 类型来保存用于 TLS 服务器与/或客户端的凭据。
-不过,使用内置的 Secret 类型的有助于对凭据格式进行归一化处理,并且
-API 服务器确实会检查 Secret 配置中是否提供了所需要的主键。
+提供 TLS 类型的 Secret 仅仅是出于方便性考虑。
+你可以创建 `Opaque` 类型的 Secret 来保存用于 TLS 身份认证的凭据。
+不过,使用已定义和公开的 Secret 类型有助于确保你自己项目中的 Secret 格式的一致性。
+API 服务器会验证这种类型的 Secret 是否设定了所需的主键。
-当使用 `kubectl` 来创建 TLS Secret 时,你可以像下面的例子一样使用 `tls`
-子命令:
+要使用 `kubectl` 创建 TLS Secret,你可以使用 `tls` 子命令:
```shell
kubectl create secret tls my-tls-secret \
@@ -828,15 +864,13 @@ and must match the given private key for `--key`.
### 启动引导令牌 Secret {#bootstrap-token-secrets}
-通过将 Secret 的 `type` 设置为 `bootstrap.kubernetes.io/token`
-可以创建启动引导令牌类型的 Secret。这种类型的 Secret 被设计用来支持节点的启动引导过程。
+`bootstrap.kubernetes.io/token` Secret 类型针对的是节点启动引导过程所用的令牌。
其中包含用来为周知的 ConfigMap 签名的令牌。
-上面的 YAML 文件可能看起来令人费解,因为其中的数值均为 base64 编码的字符串。
-实际上,你完全可以使用下面的 YAML 来创建一个一模一样的 Secret:
+你也可以在 Secret 的 `stringData` 字段中提供值,而无需对其进行 base64 编码:
```yaml
apiVersion: v1
From 15133c0c6459321d5f5e1fdf9e29098ce7da84e4 Mon Sep 17 00:00:00 2001
From: kujiraitakahiro
Date: Sat, 30 Sep 2023 13:41:51 +0900
Subject: [PATCH 049/229] Update content/ja/docs/reference/glossary/kubeadm.md
Co-authored-by: inukai <82919057+t-inu@users.noreply.github.com>
---
content/ja/docs/reference/glossary/kubeadm.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/reference/glossary/kubeadm.md b/content/ja/docs/reference/glossary/kubeadm.md
index f84150c9790e2..eecc3db14591f 100644
--- a/content/ja/docs/reference/glossary/kubeadm.md
+++ b/content/ja/docs/reference/glossary/kubeadm.md
@@ -4,7 +4,7 @@ id: kubeadm
date: 2018-04-12
full_link: /ja/docs/reference/setup-tools/kubeadm/
short_description: >
- Kubernetesを迅速にインストールし、安全なクラスタをセットアップするためのツール。
+ Kubernetesを迅速にインストールし、安全なクラスターをセットアップするためのツール。
aka:
tags:
From b2468531f82c361831ade5e61268c08cdb1d26c7 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sun, 1 Oct 2023 00:43:43 +0300
Subject: [PATCH 050/229] [ja] Updated link referring to Portworx CSI Drivers
---
content/ja/docs/concepts/storage/volumes.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/concepts/storage/volumes.md b/content/ja/docs/concepts/storage/volumes.md
index c4445d49c49f7..df385712b7a7b 100644
--- a/content/ja/docs/concepts/storage/volumes.md
+++ b/content/ja/docs/concepts/storage/volumes.md
@@ -899,7 +899,7 @@ spec:
Portworxの`CSIMigration`機能が追加されましたが、Kubernetes 1.23ではAlpha状態であるため、デフォルトで無効になっています。
すべてのプラグイン操作を既存のツリー内プラグインから`pxd.portworx.com`Container Storage Interface(CSI)ドライバーにリダイレクトします。
-[Portworx CSIドライバー](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/csi/)をクラスターにインストールする必要があります。
+[Portworx CSIドライバー](https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/csi)をクラスターにインストールする必要があります。
この機能を有効にするには、kube-controller-managerとkubeletで`CSIMigrationPortworx=true`を設定します。
## subPathの使用 {#using-subpath}
From 18d86031f4bf9fa5ef7be38e1101b010df81c4a6 Mon Sep 17 00:00:00 2001
From: kujiraitakahiro
Date: Sun, 1 Oct 2023 16:42:52 +0900
Subject: [PATCH 051/229] Update content/ja/docs/reference/glossary/addons.md
Co-authored-by: inukai <82919057+t-inu@users.noreply.github.com>
---
content/ja/docs/reference/glossary/addons.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/reference/glossary/addons.md b/content/ja/docs/reference/glossary/addons.md
index 199aa23e6b1b1..ee781366e0231 100644
--- a/content/ja/docs/reference/glossary/addons.md
+++ b/content/ja/docs/reference/glossary/addons.md
@@ -13,4 +13,4 @@ tags:
Kubernetesの機能を拡張するリソース。
-[Installing addons](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。
+[アドオンのインストール](/ja/docs/concepts/cluster-administration/addons/)では、クラスターのアドオン使用について詳しく説明し、いくつかの人気のあるアドオンを列挙します。
From 12e4c487dda0aad0bc2a90b3718d044e7d0829d4 Mon Sep 17 00:00:00 2001
From: Gauravpadam <1032201077@tcetmumbai.in>
Date: Tue, 26 Sep 2023 23:11:30 +0530
Subject: [PATCH 052/229] Defined i18nDir in hugo.toml
add back symbolic links
resetting branch defaults
add back symlinks
Added current symbolic links to gitignore
The changes are limited to windows now
Add back symbolic links
.gitignore modified
reverse .gitignore
Add .env to .gitignore
configurations for i18nDir
Modify README.md
Re-structure directory
Restore commit
defined i18nDir to ./data/i18n
Restore readme.md
---
hugo.toml | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hugo.toml b/hugo.toml
index ef2cb6a4aca1d..58c4a1aa0ff1e 100644
--- a/hugo.toml
+++ b/hugo.toml
@@ -301,6 +301,7 @@ languageName = "English"
# Weight used for sorting.
weight = 1
languagedirection = "ltr"
+i18nDir = "./data/i18n"
[languages.zh-cn]
title = "Kubernetes"
@@ -503,4 +504,4 @@ languagedirection = "ltr"
[languages.uk.params]
time_format_blog = "02.01.2006"
# A list of language codes to look for untranslated content, ordered from left to right.
-language_alternatives = ["en"]
+language_alternatives = ["en"]
\ No newline at end of file
From 6296f46c8792d4551a33cefd39557250aa75015e Mon Sep 17 00:00:00 2001
From: Gauravpadam <1032201077@tcetmumbai.in>
Date: Sun, 1 Oct 2023 11:28:06 +0530
Subject: [PATCH 053/229] Added github repo link to feedback
Limited changes to en.toml
Multiline string
changed html to markdown
---
data/i18n/en/en.toml | 2 +-
layouts/partials/feedback.html | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/data/i18n/en/en.toml b/data/i18n/en/en.toml
index 8ec60129fa5a6..f5ca7d43f422c 100644
--- a/data/i18n/en/en.toml
+++ b/data/i18n/en/en.toml
@@ -265,7 +265,7 @@ other = "Workload"
other = "suggest an improvement"
[layouts_docs_partials_feedback_issue]
-other = "Open an issue in the GitHub repo if you want to "
+other = """Open an issue in the [GitHub Repository](https://www.github.com/kubernetes/website/) if you want to """
[layouts_docs_partials_feedback_or]
other = "or"
diff --git a/layouts/partials/feedback.html b/layouts/partials/feedback.html
index f062cd847240c..1be98c1915de9 100644
--- a/layouts/partials/feedback.html
+++ b/layouts/partials/feedback.html
@@ -9,7 +9,7 @@
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
+
+| Field | Description |
+| ----- | ----------- |
+| `endpoint` (string) | Endpoint of the collector this component will report traces to. The connection is insecure, and does not currently support TLS. Recommended is unset, and endpoint is the otlp grpc default, localhost:4317. |
+| `samplingRatePerMillion` (int32) | SamplingRatePerMillion is the number of samples to collect per million spans. Recommended is unset. If unset, sampler respects its parent span's sampling rate, but otherwise never samples. |
+
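+
+For illustration, a minimal tracing configuration file might look like the
+following sketch (the values shown are examples, not recommendations):
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1alpha1
+kind: TracingConfiguration
+endpoint: localhost:4317       # the otlp grpc default mentioned above
+samplingRatePerMillion: 100    # leave unset to inherit the parent span's rate
+```
+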
## `AdmissionConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration}
@@ -360,45 +401,4 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
-
-| Field | Description |
-| ----- | ----------- |
-| `endpoint` (string) | Endpoint of the collector this component will report traces to. The connection is insecure, and does not currently support TLS. Recommended is unset, and endpoint is the otlp grpc default, localhost:4317. |
-| `samplingRatePerMillion` (int32) | SamplingRatePerMillion is the number of samples to collect per million spans. Recommended is unset. If unset, sampler respects its parent span's sampling rate, but otherwise never samples. |
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/apiserver-config.v1beta1.md b/content/en/docs/reference/config-api/apiserver-config.v1beta1.md
index 6acb3540cd06f..06dfaab72291e 100644
--- a/content/en/docs/reference/config-api/apiserver-config.v1beta1.md
+++ b/content/en/docs/reference/config-api/apiserver-config.v1beta1.md
@@ -14,6 +14,49 @@ auto_generated: true
- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)
+
+
+## `TracingConfiguration` {#TracingConfiguration}
+
+
+**Appears in:**
+
+- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+
+- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)
+
+- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)
+
+
+
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
+
+| Field | Description |
+| ----- | ----------- |
+| `endpoint` (string) | Endpoint of the collector this component will report traces to. The connection is insecure, and does not currently support TLS. Recommended is unset, and endpoint is the otlp grpc default, localhost:4317. |
+| `samplingRatePerMillion` (int32) | SamplingRatePerMillion is the number of samples to collect per million spans. Recommended is unset. If unset, sampler respects its parent span's sampling rate, but otherwise never samples. |
+
## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1beta1-EgressSelectorConfiguration}
@@ -291,47 +334,4 @@ This does not use a unix:// prefix. (Eg: /etc/srv/kubernetes/konnectivity-server
-
-
-
-
-## `TracingConfiguration` {#TracingConfiguration}
-
-
-**Appears in:**
-
-- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
-
-- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)
-
-- [TracingConfiguration](#apiserver-k8s-io-v1beta1-TracingConfiguration)
-
-
-
TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
-
-| Field | Description |
-| ----- | ----------- |
-| `endpoint` (string) | Endpoint of the collector this component will report traces to. The connection is insecure, and does not currently support TLS. Recommended is unset, and endpoint is the otlp grpc default, localhost:4317. |
-| `samplingRatePerMillion` (int32) | SamplingRatePerMillion is the number of samples to collect per million spans. Recommended is unset. If unset, sampler respects its parent span's sampling rate, but otherwise never samples. |
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/apiserver-encryption.v1.md b/content/en/docs/reference/config-api/apiserver-encryption.v1.md
index 148dc374e8cad..49e7695dc5062 100644
--- a/content/en/docs/reference/config-api/apiserver-encryption.v1.md
+++ b/content/en/docs/reference/config-api/apiserver-encryption.v1.md
@@ -12,7 +12,6 @@ auto_generated: true
- [EncryptionConfiguration](#apiserver-config-k8s-io-v1-EncryptionConfiguration)
-
## `EncryptionConfiguration` {#apiserver-config-k8s-io-v1-EncryptionConfiguration}
@@ -20,7 +19,7 @@ auto_generated: true
EncryptionConfiguration stores the complete configuration for encryption providers.
It also allows the use of wildcards to specify the resources that should be encrypted.
-Use '*.<group>' to encrypt all resources within a group or '*.*' to encrypt all resources.
+Use '*.<group>' to encrypt all resources within a group or '*.*' to encrypt all resources.
'*.' can be used to encrypt all resources in the core group. '*.*' will encrypt all
resources, even custom resources that are added after API server start.
Use of wildcards that overlap within the same resource list or across multiple
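
For illustration only, an encryption configuration using such wildcards might
look like the following sketch (the key name and secret value are placeholders):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - '*.apps'   # every resource in the "apps" group
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED-32-BYTE-KEY>
      - identity: {}
```
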
diff --git a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md
index 2189c4910d277..60a5bcbedf9d2 100644
--- a/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md
+++ b/content/en/docs/reference/config-api/apiserver-eventratelimit.v1alpha1.md
@@ -11,7 +11,6 @@ auto_generated: true
- [Configuration](#eventratelimit-admission-k8s-io-v1alpha1-Configuration)
-
## `Configuration` {#eventratelimit-admission-k8s-io-v1alpha1-Configuration}
diff --git a/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md b/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md
index b806f3b6c6075..9520d2ce53768 100644
--- a/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md
+++ b/content/en/docs/reference/config-api/apiserver-webhookadmission.v1.md
@@ -12,7 +12,6 @@ auto_generated: true
- [WebhookAdmission](#apiserver-config-k8s-io-v1-WebhookAdmission)
-
## `WebhookAdmission` {#apiserver-config-k8s-io-v1-WebhookAdmission}
diff --git a/content/en/docs/reference/config-api/client-authentication.v1.md b/content/en/docs/reference/config-api/client-authentication.v1.md
index 53e602d0f22a2..33150093d9488 100644
--- a/content/en/docs/reference/config-api/client-authentication.v1.md
+++ b/content/en/docs/reference/config-api/client-authentication.v1.md
@@ -11,7 +11,6 @@ auto_generated: true
- [ExecCredential](#client-authentication-k8s-io-v1-ExecCredential)
-
## `ExecCredential` {#client-authentication-k8s-io-v1-ExecCredential}
diff --git a/content/en/docs/reference/config-api/client-authentication.v1beta1.md b/content/en/docs/reference/config-api/client-authentication.v1beta1.md
index d9e55d0ee2beb..95f65e4bbd597 100644
--- a/content/en/docs/reference/config-api/client-authentication.v1beta1.md
+++ b/content/en/docs/reference/config-api/client-authentication.v1beta1.md
@@ -11,7 +11,6 @@ auto_generated: true
- [ExecCredential](#client-authentication-k8s-io-v1beta1-ExecCredential)
-
## `ExecCredential` {#client-authentication-k8s-io-v1beta1-ExecCredential}
diff --git a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md
index f6eaa915a8b41..e3ffcf0b73e2b 100644
--- a/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md
+++ b/content/en/docs/reference/config-api/imagepolicy.v1alpha1.md
@@ -11,7 +11,6 @@ auto_generated: true
- [ImageReview](#imagepolicy-k8s-io-v1alpha1-ImageReview)
-
## `ImageReview` {#imagepolicy-k8s-io-v1alpha1-ImageReview}
diff --git a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md
index 348c557807eed..d63e35f68a973 100644
--- a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md
+++ b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md
@@ -9,301 +9,366 @@ auto_generated: true
## Resource Types
-- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration)
- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration)
+- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
+
-## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration}
+## `NodeControllerConfiguration` {#NodeControllerConfiguration}
+**Appears in:**
-
KubeControllerManagerConfiguration contains elements describing the kube-controller-manager.
ServiceControllerConfiguration contains elements describing ServiceController.
+
+
+
+
Field
Description
+
+
+
+
ConcurrentServiceSyncs[Required]
+int32
-
KubeCloudSharedConfiguration holds configuration for shared related features
-both in cloud controller manager and kube-controller manager.
+
concurrentServiceSyncs is the number of services that are
+allowed to sync concurrently. Larger number = more responsive service
+management, but more CPU (and network) load.
EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController
-related features.
+
externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external".
+It is currently used by the in repo cloud providers to handle node and volume control in the KCM.
PodGCControllerConfiguration holds configuration for PodGCController
-related features.
+
nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer
+periods will result in fewer calls to cloud provider, but may delay addition
+of new nodes to cluster.
WebhookConfiguration contains configuration related to
+cloud-controller-manager hosted webhooks
+
+
+
+
Field
Description
+
+
+
+
Webhooks[Required]
+[]string
-
ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for
-ValidatingAdmissionPolicyStatusController related features.
+
Webhooks is the list of webhooks to enable or disable
+'*' means "all enabled by default webhooks"
+'foo' means "enable 'foo'"
+'-foo' means "disable 'foo'"
+first item for a particular name wins
Reconciler runs a periodic loop to reconcile the desired state of the with
-the actual state of the world by triggering attach detach operations.
-This flag enables or disables reconcile. Is false by default, and thus enabled.
+
LeaderName is the name of the leader election resource that protects the migration
+E.g. 1-20-KCM-to-1-21-CCM
CSRSigningConfiguration holds information about a particular CSR signer
+
ControllerLeaderConfiguration provides the configuration for a migrating leader lock.
@@ -311,34 +376,37 @@ wait between successive executions. Is set to 5 sec by default.
-
CertFile[Required]
+
name[Required] string
-
certFile is the filename containing a PEM-encoded
-X509 CA certificate used to issue certificates
+
Name is the name of the controller being migrated
+E.g. service-controller, route-controller, cloud-node-controller, etc
-
KeyFile[Required]
+
component[Required] string
-
keyFile is the filename containing a PEM-encoded
-RSA or ECDSA private key used to issue certificates
+
Component is the name of the component in which the controller should be running.
+E.g. kube-controller-manager, cloud-controller-manager, etc
+Or '*' meaning the controller can be run under any component that participates in the migration
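+
+For illustration, a leader migration configuration using these fields might look
+like the following sketch (the controller names are examples only):
+
+```yaml
+apiVersion: controllermanager.config.k8s.io/v1alpha1
+kind: LeaderMigrationConfiguration
+leaderName: 1-20-KCM-to-1-21-CCM
+controllerLeaders:
+  - name: route-controller
+    component: '*'   # may run under any component participating in the migration
+  - name: service-controller
+    component: '*'
+```
+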
clusterSigningDuration is the max length of duration signed certificates will be given.
-Individual CSRs may request shorter certs by setting spec.expirationSeconds.
+
Controllers is the list of controllers to enable or disable
+'*' means "all enabled by default controllers"
+'foo' means "enable 'foo'"
+'-foo' means "disable 'foo'"
+first item for a particular name wins
concurrentCronJobSyncs is the number of job objects that are
-allowed to sync concurrently. Larger number = more responsive jobs,
-but more CPU (and network) load.
+
DebuggingConfiguration holds configuration for Debugging related features.
+
+
+
LeaderMigrationEnabled[Required]
+bool
+
+
+
LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.
concurrentDaemonSetSyncs is the number of daemonset objects that are
-allowed to sync concurrently. Larger number = more responsive daemonset,
-but more CPU (and network) load.
+
Generic holds configuration for a generic controller-manager
concurrentDeploymentSyncs is the number of deployment objects that are
-allowed to sync concurrently. Larger number = more responsive deployments,
-but more CPU (and network) load.
+
KubeCloudSharedConfiguration holds configuration for shared related features
+both in cloud controller manager and kube-controller manager.
concurrentEndpointSyncs is the number of endpoint syncing operations
-that will be done concurrently. Larger number = faster endpoint updating,
-but more CPU (and network) load.
+
AttachDetachControllerConfiguration holds configuration for
+AttachDetachController related features.
EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period.
-Processing of pod changes will be delayed by this duration to join them with potential
-upcoming updates and reduce the overall number of endpoints updates.
+
CSRSigningControllerConfiguration holds configuration for
+CSRSigningController related features.
concurrentServiceEndpointSyncs is the number of service endpoint syncing
-operations that will be done concurrently. Larger number = faster
-endpoint slice updating, but more CPU (and network) load.
+
DaemonSetControllerConfiguration holds configuration for DaemonSetController
+related features.
maxEndpointsPerSlice is the maximum number of endpoints that will be
-added to an EndpointSlice. More endpoints per slice will result in fewer
-and larger endpoint slices, but larger resources.
+
DeploymentControllerConfiguration holds configuration for
+DeploymentController related features.
EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period.
-Processing of pod changes will be delayed by this duration to join them with potential
-upcoming updates and reduce the overall number of endpoints updates.
+
StatefulSetControllerConfiguration holds configuration for
+StatefulSetController related features.
mirroringConcurrentServiceEndpointSyncs is the number of service endpoint
-syncing operations that will be done concurrently. Larger number = faster
-endpoint slice updating, but more CPU (and network) load.
+
DeprecatedControllerConfiguration holds configuration for some deprecated
+features.
mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice
-updates. All updates triggered by EndpointSlice changes will be delayed
-by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the
-same Endpoints resource change in that period, they will be batched to a
-single EndpointSlice update. Default 0 value means that each Endpoints
-update triggers an EndpointSlice update.
+
EndpointSliceControllerConfiguration holds configuration for
+EndpointSliceController related features.
ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations
-that will be done concurrently. Larger number = faster ephemeral volume updating,
-but more CPU (and network) load.
+
EndpointSliceMirroringControllerConfiguration holds configuration for
+EndpointSliceMirroringController related features.
enables the generic garbage collector. MUST be synced with the
-corresponding flag of the kube-apiserver. WARNING: the generic garbage
-collector is an alpha feature.
+
EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController
+related features.
ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently.
-Larger number = more responsive HPA processing, but more CPU (and network) load.
+
LegacySATokenCleanerConfiguration holds configuration for LegacySATokenCleaner related features.
HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which autoscaler will look
-backwards and not scale down below any recommendation it made during that period.
+
NodeLifecycleControllerConfiguration holds configuration for
+NodeLifecycleController related features.
HorizontalPodAutoscalerInitialReadinessDelay is period after pod start during which readiness
-changes are treated as readiness being set for the first time. The only effect of this is that
-HPA will disregard CPU samples from unready pods that had last readiness change during that
-period.
+
ReplicationControllerConfiguration holds configuration for
+ReplicationController related features.
concurrentJobSyncs is the number of job objects that are
-allowed to sync concurrently. Larger number = more responsive jobs,
-but more CPU (and network) load.
+
ResourceQuotaControllerConfiguration holds configuration for
+ResourceQuotaController related features.
CleanUpPeriod is the period of time since the last usage of an
-auto-generated service account token before it can be deleted.
+
ValidatingAdmissionPolicyStatusControllerConfiguration holds configuration for
+ValidatingAdmissionPolicyStatusController related features.
-## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration}
+## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration}
**Appears in:**
@@ -881,7 +747,7 @@ auto-generated service account token before it can be deleted.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
NamespaceControllerConfiguration contains elements describing NamespaceController.
+
AttachDetachControllerConfiguration contains elements describing AttachDetachController.
@@ -889,34 +755,35 @@ auto-generated service account token before it can be deleted.
-
namespaceSyncPeriod is the period for syncing namespace life-cycle
-updates.
+
Reconciler runs a periodic loop to reconcile the desired state with
+the actual state of the world by triggering attach/detach operations.
+This flag enables or disables the reconciler. It is false by default, and thus enabled.
NodeIPAMControllerConfiguration contains elements describing NodeIpamController.
+
CSRSigningConfiguration holds information about a particular CSR signer
@@ -924,45 +791,26 @@ allowed to sync concurrently.
-
ServiceCIDR[Required]
+
CertFile[Required] string
-
serviceCIDR is CIDR Range for Services in cluster.
+
certFile is the filename containing a PEM-encoded
+X509 CA certificate used to issue certificates
-
SecondaryServiceCIDR[Required]
+
KeyFile[Required] string
-
secondaryServiceCIDR is CIDR Range for Services in cluster. This is used in dual stack clusters. SecondaryServiceCIDR must be of different IP family than ServiceCIDR
-
-
-
NodeCIDRMaskSize[Required]
-int32
-
-
-
NodeCIDRMaskSize is the mask size for node cidr in cluster.
-
-
-
NodeCIDRMaskSizeIPv4[Required]
-int32
-
-
-
NodeCIDRMaskSizeIPv4 is the mask size for node cidr in dual-stack cluster.
-
-
-
NodeCIDRMaskSizeIPv6[Required]
-int32
-
-
-
NodeCIDRMaskSizeIPv6 is the mask size for node cidr in dual-stack cluster.
+
keyFile is the filename containing a PEM-encoded
+RSA or ECDSA private key used to issue certificates
nodeMonitorGracePeriod is the amount of time which we allow a running node to be
-unresponsive before marking it unhealthy. Must be N times more than kubelet's
-nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet
-to post node status.
+
kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet
Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least
-unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady
+
clusterSigningDuration is the max length of duration signed certificates will be given.
+Individual CSRs may request shorter certs by setting spec.expirationSeconds.
-## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration}
+## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration}
**Appears in:**
@@ -1043,8 +889,7 @@ unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
PersistentVolumeBinderControllerConfiguration contains elements describing
-PersistentVolumeBinderController.
+
CronJobControllerConfiguration contains elements describing CronJob2Controller.
volumeConfiguration holds configuration for volume related features.
-
-
-
VolumeHostCIDRDenylist[Required]
-[]string
-
-
-
DEPRECATED: VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the
-controller from plugins.
-
-
-
VolumeHostAllowLocalLoopback[Required]
-bool
+
ConcurrentCronJobSyncs[Required]
+int32
-
DEPRECATED: VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc)
-should be allowed from plugins.
+
concurrentCronJobSyncs is the number of CronJob objects that are
+allowed to sync concurrently. Larger number = more responsive jobs,
+but more CPU (and network) load.
PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.
+
DaemonSetControllerConfiguration contains elements describing DaemonSetController.
@@ -1102,69 +925,19 @@ should be allowed from plugins.
-
MaximumRetry[Required]
-int32
-
-
-
maximumRetry is number of retries the PV recycler will execute on failure to recycle
-PV.
-
-
-
MinimumTimeoutNFS[Required]
-int32
-
-
-
minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler
-pod.
-
-
-
PodTemplateFilePathNFS[Required]
-string
-
-
-
podTemplateFilePathNFS is the file path to a pod definition used as a template for
-NFS persistent volume recycling
-
-
-
IncrementTimeoutNFS[Required]
-int32
-
-
-
incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds
-for an NFS scrubber pod.
-
-
-
PodTemplateFilePathHostPath[Required]
-string
-
-
-
podTemplateFilePathHostPath is the file path to a pod definition used as a template for
-HostPath persistent volume recycling. This is for development and testing only and
-will not work in a multi-node cluster.
-
-
-
MinimumTimeoutHostPath[Required]
-int32
-
-
-
minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath
-Recycler pod. This is for development and testing only and will not work in a multi-node
-cluster.
-
-
-
IncrementTimeoutHostPath[Required]
+
ConcurrentDaemonSetSyncs[Required] int32
-
incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds
-for a HostPath scrubber pod. This is for development and testing only and will not work
-in a multi-node cluster.
+
concurrentDaemonSetSyncs is the number of daemonset objects that are
+allowed to sync concurrently. Larger number = more responsive daemonset,
+but more CPU (and network) load.
-## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration}
+## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration}
**Appears in:**
@@ -1172,7 +945,7 @@ in a multi-node cluster.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
PodGCControllerConfiguration contains elements describing PodGCController.
+
DeploymentControllerConfiguration contains elements describing DeploymentController.
@@ -1180,19 +953,19 @@ in a multi-node cluster.
-
TerminatedPodGCThreshold[Required]
+
ConcurrentDeploymentSyncs[Required] int32
-
terminatedPodGCThreshold is the number of terminated pods that can exist
-before the terminated pod garbage collector starts deleting terminated pods.
-If <= 0, the terminated pod garbage collector is disabled.
+
concurrentDeploymentSyncs is the number of deployment objects that are
+allowed to sync concurrently. Larger number = more responsive deployments,
+but more CPU (and network) load.
-## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration}
+## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration}
**Appears in:**
@@ -1200,27 +973,12 @@ If <= 0, the terminated pod garbage collector is disabled.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.
+
DeprecatedControllerConfiguration contains elements to be deprecated.
-
-
Field
Description
-
-
-
-
ConcurrentRSSyncs[Required]
-int32
-
-
-
concurrentRSSyncs is the number of replica sets that are allowed to sync
-concurrently. Larger number = more responsive replica management, but more
-CPU (and network) load.
ReplicationControllerConfiguration contains elements describing ReplicationController.
+
EndpointControllerConfiguration contains elements describing EndpointController.
@@ -1236,19 +994,28 @@ CPU (and network) load.
-
ConcurrentRCSyncs[Required]
+
ConcurrentEndpointSyncs[Required] int32
-
concurrentRCSyncs is the number of replication controllers that are
-allowed to sync concurrently. Larger number = more responsive replica
-management, but more CPU (and network) load.
+
concurrentEndpointSyncs is the number of endpoint syncing operations
+that will be done concurrently. Larger number = faster endpoint updating,
+but more CPU (and network) load.
EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period.
+Processing of pod changes will be delayed by this duration to join them with potential
+upcoming updates and reduce the overall number of endpoints updates.
-## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration}
+## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration}
**Appears in:**
@@ -1256,7 +1023,8 @@ management, but more CPU (and network) load.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.
+
EndpointSliceControllerConfiguration contains elements describing
+EndpointSliceController.
@@ -1264,27 +1032,37 @@ management, but more CPU (and network) load.
-
resourceQuotaSyncPeriod is the period for syncing quota usage status
-in the system.
+
concurrentServiceEndpointSyncs is the number of service endpoint syncing
+operations that will be done concurrently. Larger number = faster
+endpoint slice updating, but more CPU (and network) load.
-
ConcurrentResourceQuotaSyncs[Required]
+
MaxEndpointsPerSlice[Required] int32
-
concurrentResourceQuotaSyncs is the number of resource quotas that are
-allowed to sync concurrently. Larger number = more responsive quota
-management, but more CPU (and network) load.
+
maxEndpointsPerSlice is the maximum number of endpoints that will be
+added to an EndpointSlice. More endpoints per slice will result in fewer
+and larger endpoint slices, but larger resources.
EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period.
+Processing of pod changes will be delayed by this duration to join them with potential
+upcoming updates and reduce the overall number of endpoints updates.
-## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration}
+## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration}
**Appears in:**
@@ -1292,7 +1070,8 @@ management, but more CPU (and network) load.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
SAControllerConfiguration contains elements describing ServiceAccountController.
+
EndpointSliceMirroringControllerConfiguration contains elements describing
+EndpointSliceMirroringController.
@@ -1300,34 +1079,39 @@ management, but more CPU (and network) load.
-
serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key
-used to sign service account tokens.
+
mirroringConcurrentServiceEndpointSyncs is the number of service endpoint
+syncing operations that will be done concurrently. Larger number = faster
+endpoint slice updating, but more CPU (and network) load.
-
ConcurrentSATokenSyncs[Required]
+
MirroringMaxEndpointsPerSubset[Required] int32
-
concurrentSATokenSyncs is the number of service account token syncing operations
-that will be done concurrently.
+
mirroringMaxEndpointsPerSubset is the maximum number of endpoints that
+will be mirrored to an EndpointSlice for an EndpointSubset.
rootCAFile is the root certificate authority will be included in service
-account's token secret. This must be a valid PEM-encoded CA bundle.
+
mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice
+updates. All updates triggered by EndpointSlice changes will be delayed
+by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the
+same Endpoints resource change in that period, they will be batched to a
+single EndpointSlice update. Default 0 value means that each Endpoints
+update triggers an EndpointSlice update.
-## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration}
+## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration}
**Appears in:**
@@ -1335,7 +1119,7 @@ account's token secret. This must be a valid PEM-encoded CA bundle.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-
StatefulSetControllerConfiguration contains elements describing StatefulSetController.
+
EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.
@@ -1343,19 +1127,19 @@ account's token secret. This must be a valid PEM-encoded CA bundle.
-
ConcurrentStatefulSetSyncs[Required]
+
ConcurrentEphemeralVolumeSyncs[Required] int32
-
concurrentStatefulSetSyncs is the number of statefulset objects that are
-allowed to sync concurrently. Larger number = more responsive statefulsets,
+
ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations
+that will be done concurrently. Larger number = faster ephemeral volume updating,
but more CPU (and network) load.
-## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration}
+## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration}
**Appears in:**
@@ -1363,7 +1147,7 @@ but more CPU (and network) load.
- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration)
-TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.
+GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.
@@ -1371,26 +1155,42 @@ but more CPU (and network) load.
-ConcurrentTTLSyncs[Required]
+EnableGarbageCollector[Required] bool

+enableGarbageCollector enables the generic garbage collector. MUST be synced with the
+corresponding flag of the kube-apiserver. WARNING: the generic garbage
+collector is an alpha feature.

+ConcurrentGCSyncs[Required] int32

-concurrentTTLSyncs is the number of TTL-after-finished collector workers that are
-allowed to sync concurrently.
+concurrentGCSyncs is the number of garbage collector workers that are
+allowed to sync concurrently.
-ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.
+GroupResource describes a group resource.
@@ -1398,32 +1198,32 @@ allowed to sync concurrently.
-ConcurrentPolicySyncs[Required]
-int32
+Group[Required] string

-ConcurrentPolicySyncs is the number of policy objects that are
-allowed to sync concurrently. Larger number = quicker type checking,
-but more CPU (and network) load.
-The default value is 5.
+group is the group portion of the GroupResource.

+Resource[Required] string

+resource is the resource portion of the GroupResource.
-VolumeConfiguration contains all enumerated flags meant to configure all volume
-plugins. From this config, the controller-manager binary will create many instances of
-volume.VolumeConfig, each containing only the configuration needed for that plugin which
-are then passed to the appropriate plugin. The ControllerManager binary is the only part
-of the code which knows what plugins are supported and which flags correspond to each plugin.
+HPAControllerConfiguration contains elements describing HPAController.
@@ -1431,54 +1231,82 @@ of the code which knows what plugins are supported and which flags correspond to
-enableHostPathProvisioning enables HostPath PV provisioning when running without a
-cloud provider. This allows testing and development of provisioning features. HostPath
-provisioning is not supported in any way, won't work in a multi-node cluster, and
-should not be used for anything other than testing or development.
+ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently.
+Larger number = more responsive HPA processing, but more CPU (and network) load.

-enableDynamicProvisioning enables the provisioning of volumes when running within an environment
-that supports dynamic provisioning. Defaults to true.
+HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of
+pods in horizontal pod autoscaler.

-volumePluginDir is the full path of the directory in which the flex
-volume plugin should search for additional third party volume plugins
+HorizontalPodAutoscalerDownscaleStabilizationWindow is a period for which the autoscaler will look
+backwards and not scale down below any recommendation it made during that period.

+HorizontalPodAutoscalerInitialReadinessDelay is the period after pod start during which readiness
+changes are treated as readiness being set for the first time. The only effect of this is that
+the HPA will disregard CPU samples from unready pods that had their last readiness change during that
+period.
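In flag form, these HPA settings would look roughly as follows; a minimal sketch assuming the kube-controller-manager flag spellings below (check `kube-controller-manager --help` before relying on them):

```shell
# Assumed flag names mapping onto the HPAControllerConfiguration fields above.
kube-controller-manager \
  --concurrent-horizontal-pod-autoscaler-syncs=5 \
  --horizontal-pod-autoscaler-sync-period=15s \
  --horizontal-pod-autoscaler-downscale-stabilization=5m \
  --horizontal-pod-autoscaler-initial-readiness-delay=30s
```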
-NodeControllerConfiguration contains elements describing NodeController.
+JobControllerConfiguration contains elements describing JobController.
@@ -1486,28 +1314,54 @@ volume plugin should search for additional third party volume plugins
-ConcurrentNodeSyncs[Required]
+ConcurrentJobSyncs[Required] int32

-ConcurrentNodeSyncs is the number of workers
-concurrently synchronizing nodes
+concurrentJobSyncs is the number of job objects that are
+allowed to sync concurrently. Larger number = more responsive jobs,
+but more CPU (and network) load.

+namespaceSyncPeriod is the period for syncing namespace life-cycle
+updates.

+ConcurrentNamespaceSyncs[Required] int32

-concurrentServiceSyncs is the number of services that are
-allowed to sync concurrently. Larger number = more responsive service
-management, but more CPU (and network) load.
+concurrentNamespaceSyncs is the number of namespace objects that are
+allowed to sync concurrently.
-NodeController holds configuration for node controller
-related features.
+secondaryServiceCIDR is the CIDR range for Services in the cluster. This is used in
+dual-stack clusters. SecondaryServiceCIDR must be of a different IP family than ServiceCIDR.

+nodeMonitorGracePeriod is the amount of time which we allow a running node to be
+unresponsive before marking it unhealthy. Must be N times more than kubelet's
+nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet
+to post node status.
+podEvictionTimeout is the grace period for deleting pods on failed nodes.

+LargeClusterSizeThreshold[Required] int32

+secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal
+to largeClusterSizeThreshold.

+UnhealthyZoneThreshold[Required] float32

+A zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least
+unhealthyZoneThreshold (no less than 3) of the Nodes in the zone are NotReady.
-externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external".
-It is currently used by the in repo cloud providers to handle node and volume control in the KCM.
+volumeConfiguration holds configuration for volume related features.
-UseServiceAccountCredentials[Required] bool
+VolumeHostCIDRDenylist[Required] []string

-useServiceAccountCredentials indicates whether controllers should be run with
-individual service account credentials.
+DEPRECATED: VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the
+controller from plugins.
-AllowUntaggedCloud[Required] bool
+VolumeHostAllowLocalLoopback[Required] bool

-run with untagged cloud instances
+DEPRECATED: VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc.)
+should be allowed from plugins.
-nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.
+minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler
+pod.
-ClusterName[Required] string
+PodTemplateFilePathNFS[Required] string

-clusterName is the instance prefix for the cluster.
+podTemplateFilePathNFS is the file path to a pod definition used as a template for
+NFS persistent volume recycling.
-ClusterCIDR[Required] string
+IncrementTimeoutNFS[Required] int32

-clusterCIDR is CIDR Range for Pods in cluster.
+incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds
+for an NFS scrubber pod.
-AllocateNodeCIDRs[Required] bool
+PodTemplateFilePathHostPath[Required] string

-AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if
-ConfigureCloudRoutes is true, to be set on the cloud provider.
+podTemplateFilePathHostPath is the file path to a pod definition used as a template for
+HostPath persistent volume recycling. This is for development and testing only and
+will not work in a multi-node cluster.
-CIDRAllocatorType[Required] string
+MinimumTimeoutHostPath[Required] int32

-CIDRAllocatorType determines what kind of pod CIDR allocator will be used.
+minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath
+Recycler pod. This is for development and testing only and will not work in a multi-node
+cluster.
-ConfigureCloudRoutes[Required] bool
+IncrementTimeoutHostPath[Required] int32

-configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs
-to be configured on the cloud provider.
+incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds
+for a HostPath scrubber pod. This is for development and testing only and will not work
+in a multi-node cluster.
+PodGCControllerConfiguration contains elements describing PodGCController.

Field        Description

+TerminatedPodGCThreshold[Required] int32

-nodeSyncPeriod is the period for syncing nodes from cloudprovider. Longer
-periods will result in fewer calls to cloud provider, but may delay addition
-of new nodes to cluster.
+terminatedPodGCThreshold is the number of terminated pods that can exist
+before the terminated pod garbage collector starts deleting terminated pods.
+If <= 0, the terminated pod garbage collector is disabled.
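As a quick illustration, this threshold is commonly set via the corresponding kube-controller-manager flag; a one-line sketch, assuming the flag spelling `--terminated-pod-gc-threshold`:

```shell
# Keep at most 1000 terminated pods before the pod garbage collector kicks in.
kube-controller-manager --terminated-pod-gc-threshold=1000
```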
-Webhooks is the list of webhooks to enable or disable
-'*' means "all enabled by default webhooks"
-'foo' means "enable 'foo'"
-'-foo' means "disable 'foo'"
-first item for a particular name wins
+concurrentRSSyncs is the number of replica sets that are allowed to sync
+concurrently. Larger number = more responsive replica management, but more
+CPU (and network) load.
-ControllerLeaders contains a list of migrating leader lock configurations
+concurrentRCSyncs is the number of replication controllers that are
+allowed to sync concurrently. Larger number = more responsive replica
+management, but more CPU (and network) load.
-Name is the name of the controller being migrated
-E.g. service-controller, route-controller, cloud-node-controller, etc
+resourceQuotaSyncPeriod is the period for syncing quota usage status
+in the system.
-component[Required] string
+ConcurrentResourceQuotaSyncs[Required] int32

-Component is the name of the component in which the controller should be running.
-E.g. kube-controller-manager, cloud-controller-manager, etc.
-Or '*' meaning the controller can be run under any component that participates in the migration
+concurrentResourceQuotaSyncs is the number of resource quotas that are
+allowed to sync concurrently. Larger number = more responsive quota
+management, but more CPU (and network) load.
+StatefulSetControllerConfiguration contains elements describing StatefulSetController.

Field        Description

+ConcurrentStatefulSetSyncs[Required] int32

-ClientConnection specifies the kubeconfig file and client connection
-settings for the proxy server to use when communicating with the apiserver.
+concurrentStatefulSetSyncs is the number of statefulset objects that are
+allowed to sync concurrently. Larger number = more responsive statefulsets,
+but more CPU (and network) load.
+ValidatingAdmissionPolicyStatusControllerConfiguration contains elements describing ValidatingAdmissionPolicyStatusController.

Field        Description

+ConcurrentPolicySyncs[Required] int32

-leaderElection defines the configuration of leader election client.
+ConcurrentPolicySyncs is the number of policy objects that are
+allowed to sync concurrently. Larger number = quicker type checking,
+but more CPU (and network) load.
+The default value is 5.
+VolumeConfiguration contains all enumerated flags meant to configure all volume
+plugins. From this config, the controller-manager binary will create many instances of
+volume.VolumeConfig, each containing only the configuration needed for that plugin which
+are then passed to the appropriate plugin. The ControllerManager binary is the only part
+of the code which knows what plugins are supported and which flags correspond to each plugin.

Field        Description

+EnableHostPathProvisioning[Required] bool

-Controllers is the list of controllers to enable or disable
-'*' means "all enabled by default controllers"
-'foo' means "enable 'foo'"
-'-foo' means "disable 'foo'"
-first item for a particular name wins
+enableHostPathProvisioning enables HostPath PV provisioning when running without a
+cloud provider. This allows testing and development of provisioning features. HostPath
+provisioning is not supported in any way, won't work in a multi-node cluster, and
+should not be used for anything other than testing or development.
-DebuggingConfiguration holds configuration for Debugging related features.
+enableDynamicProvisioning enables the provisioning of volumes when running within an environment
+that supports dynamic provisioning. Defaults to true.
+ClientConnectionConfiguration contains details for constructing a client.

Field        Description

+kubeconfig[Required] string

+kubeconfig is the path to a KubeConfig file.

+acceptContentTypes[Required] string

+acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
+default value of 'application/json'. This field will control all connections to the server used by a particular
+client.

+contentType[Required] string

+contentType is the content type used when sending data to the server from this client.

+qps[Required] float32

+qps controls the number of queries per second allowed for this connection.

+burst[Required] int32

+burst allows extra queries to accumulate when a client is exceeding its rate.
+LeaderElectionConfiguration defines the configuration of leader election
+clients for components that can run with leader election enabled.

Field        Description

+leaderElect[Required] bool

+leaderElect enables a leader election client to gain leadership
+before executing the main loop. Enable this when running replicated
+components for high availability.

+leaseDuration is the duration that non-leader candidates will wait
+after observing a leadership renewal until attempting to acquire
+leadership of a led but unrenewed leader slot. This is effectively the
+maximum duration that a leader can be stopped before it is replaced
+by another candidate. This is only applicable if leader election is
+enabled.

+renewDeadline is the interval between attempts by the acting master to
+renew a leadership slot before it stops leading. This must be less
+than or equal to the lease duration. This is only applicable if leader
+election is enabled.

+retryPeriod is the duration the clients should wait between attempting
+acquisition and renewal of a leadership. This is only applicable if
+leader election is enabled.

+resourceLock[Required] string

+resourceLock indicates the resource object type that will be used to lock
+during leader election cycles.

+resourceName[Required] string

+resourceName indicates the name of the resource object that will be used to lock
+during leader election cycles.

+resourceNamespace[Required] string

+resourceNamespace indicates the namespace of the resource object that will be used to lock
+during leader election cycles.
+
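To see how the two generic blocks above fit together, here is a minimal sketch of a controller-manager component config; the apiVersion, the nesting under `generic`, the file path, and all values are assumptions for illustration, not a definitive schema:

```shell
# Illustrative only: assumed kubecontrollermanager.config.k8s.io/v1alpha1 layout.
cat <<'EOF' > /tmp/kube-controller-manager-config.yaml
apiVersion: kubecontrollermanager.config.k8s.io/v1alpha1
kind: KubeControllerManagerConfiguration
generic:
  clientConnection:
    kubeconfig: /etc/kubernetes/controller-manager.conf  # path to a KubeConfig file
    qps: 50      # queries per second allowed for this connection
    burst: 100   # extra queries that may accumulate over the rate
  leaderElection:
    leaderElect: true
    leaseDuration: 15s
    renewDeadline: 10s   # must be <= leaseDuration
    retryPeriod: 2s
    resourceLock: leases
    resourceName: kube-controller-manager
    resourceNamespace: kube-system
EOF
```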
## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta3-DefaultPreemptionArgs}
@@ -1074,180 +1250,4 @@ Weight defaults to 1 if not specified or explicitly set to 0.
-## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
-
-**Appears in:**
-
-- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
-
-ClientConnectionConfiguration contains details for constructing a client.
-
-Field        Description
-
-kubeconfig[Required] string
-kubeconfig is the path to a KubeConfig file.
-
-acceptContentTypes[Required] string
-acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the
-default value of 'application/json'. This field will control all connections to the server used by a particular
-client.
-
-contentType[Required] string
-contentType is the content type used when sending data to the server from this client.
-
-qps[Required] float32
-qps controls the number of queries per second allowed for this connection.
-
-burst[Required] int32
-burst allows extra queries to accumulate when a client is exceeding its rate.
-
-LeaderElectionConfiguration defines the configuration of leader election
-clients for components that can run with leader election enabled.
-
-Field        Description
-
-leaderElect[Required] bool
-leaderElect enables a leader election client to gain leadership
-before executing the main loop. Enable this when running replicated
-components for high availability.
-
-leaseDuration is the duration that non-leader candidates will wait
-after observing a leadership renewal until attempting to acquire
-leadership of a led but unrenewed leader slot. This is effectively the
-maximum duration that a leader can be stopped before it is replaced
-by another candidate. This is only applicable if leader election is
-enabled.
-
-renewDeadline is the interval between attempts by the acting master to
-renew a leadership slot before it stops leading. This must be less
-than or equal to the lease duration. This is only applicable if leader
-election is enabled.
-
-retryPeriod is the duration the clients should wait between attempting
-acquisition and renewal of a leadership. This is only applicable if
-leader election is enabled.
-
-resourceLock[Required] string
-resourceLock indicates the resource object type that will be used to lock
-during leader election cycles.
-
-resourceName[Required] string
-resourceName indicates the name of the resource object that will be used to lock
-during leader election cycles.
-
-resourceNamespace[Required] string
-resourceNamespace indicates the namespace of the resource object that will be used to lock
-during leader election cycles.
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
index 3972691620bb0..9d94c614de168 100644
--- a/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
+++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta3.md
@@ -264,6 +264,109 @@ node only (e.g. the node ip).
- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)
+
+
+## `BootstrapToken` {#BootstrapToken}
+
+
+**Appears in:**
+
+- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
+
+
+
+BootstrapToken describes one bootstrap token, stored as a Secret in the cluster.

+expires specifies the timestamp when this token expires. Defaults to being set
+dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

+usages []string

+usages describes the ways in which this token can be used. Can by default be used
+for establishing bidirectional trust, but that can be changed here.

+groups []string

+groups specifies the extra groups that this token will authenticate as when/if
+used for authentication.

+BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
+both to validate the API server from a joining node's point of view and as an
+authentication method for the node in the bootstrap phase of "kubeadm join".
+This token is and should be short-lived.
-expires specifies the timestamp when this token expires. Defaults to being set
-dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.
-
-usages []string
-
-usages describes the ways in which this token can be used. Can by default be used
-for establishing bidirectional trust, but that can be changed here.
-
-groups []string
-
-groups specifies the extra groups that this token will authenticate as when/if
-used for authentication
-
-BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
-for both validation of the practically of the API server from a joining node's point
-of view and as an authentication method for the node in the bootstrap phase of
-"kubeadm join". This token is and should be short-lived.
-
-Field        Description
-
-- [Required] string
-  No description provided.
-
-- [Required] string
-  No description provided.
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md b/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md
index f7349db30c606..1689232505839 100644
--- a/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md
+++ b/content/en/docs/reference/config-api/kubeadm-config.v1beta4.md
@@ -291,6 +291,111 @@ node only (e.g. the node ip).
- [ResetConfiguration](#kubeadm-k8s-io-v1beta4-ResetConfiguration)
+
+
+## `BootstrapToken` {#BootstrapToken}
+
+
+**Appears in:**
+
+- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
+
+- [InitConfiguration](#kubeadm-k8s-io-v1beta4-InitConfiguration)
+
+
+
+BootstrapToken describes one bootstrap token, stored as a Secret in the cluster.

+expires specifies the timestamp when this token expires. Defaults to being set
+dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.

+usages []string

+usages describes the ways in which this token can be used. Can by default be used
+for establishing bidirectional trust, but that can be changed here.

+groups []string

+groups specifies the extra groups that this token will authenticate as when/if
+used for authentication.

+BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
+both to validate the API server from a joining node's point of view and as an
+authentication method for the node in the bootstrap phase of "kubeadm join".
+This token is and should be short-lived.
-expires specifies the timestamp when this token expires. Defaults to being set
-dynamically at runtime based on the ttl. expires and ttl are mutually exclusive.
-
-usages []string
-
-usages describes the ways in which this token can be used. Can by default be used
-for establishing bidirectional trust, but that can be changed here.
-
-groups []string
-
-groups specifies the extra groups that this token will authenticate as when/if
-used for authentication
-
-BootstrapTokenString is a token of the format abcdef.abcdef0123456789 that is used
-for both validation of the practically of the API server from a joining node's point
-of view and as an authentication method for the node in the bootstrap phase of
-"kubeadm join". This token is and should be short-lived.
-
-Field        Description
-
-- [Required] string
-  No description provided.
-
-- [Required] string
-  No description provided.
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/reference/config-api/kubeconfig.v1.md b/content/en/docs/reference/config-api/kubeconfig.v1.md
index 42cf3bd7cc9c6..72a5c63358ce8 100644
--- a/content/en/docs/reference/config-api/kubeconfig.v1.md
+++ b/content/en/docs/reference/config-api/kubeconfig.v1.md
@@ -11,6 +11,83 @@ auto_generated: true
- [Config](#Config)
+
+
+## `Config` {#Config}
+
+
+
+
+Config holds the information needed to build connections to remote Kubernetes clusters as a given user.

Field        Description

+apiVersion string   /v1
+kind string         Config

+kind string

+Legacy field from pkg/api/types.go TypeMeta.
+TODO(jlowdermilk): remove this after eliminating downstream dependencies.

+apiVersion string

+Legacy field from pkg/api/types.go TypeMeta.
+TODO(jlowdermilk): remove this after eliminating downstream dependencies.
Each entry in matchImages is a pattern which can optionally contain a port and a path.
Globs can be used in the domain, but not in the port or the path. Globs are supported
as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
-Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
+Matching partial subdomains like 'app.k8s.io' is also supported. Each glob can only match
a single subdomain segment, so *.io does not match *.k8s.io.
A match exists between an image and a matchImage when all of the below are true:
+JSONOptions contains options for logging format "json".

Field        Description

+splitStream[Required] bool

+[Alpha] SplitStream redirects error messages to stderr while
+info messages go to stdout, with buffering. The default is to write
+both to stdout, without buffering. Only available when
+the LoggingAlphaOptions feature gate is enabled.

+[Alpha] InfoBufferSize sets the size of the info stream when
+using split streams. The default is zero, which disables buffering.
+Only available when the LoggingAlphaOptions feature gate is enabled.

+Maximum time between log flushes.
+If a string, parsed as a duration (i.e. "1s").
+If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000).
+Ignored if the selected logging backend writes log messages without buffering.

+Verbosity is the threshold that determines which log messages are
+logged. Default is zero which logs only the most important
+messages. Higher values enable additional messages. Error messages
+are always logged.

+[Alpha] Options holds additional parameters that are specific
+to the different logging formats. Only the options for the selected
+format get used, but all of them get validated.
+Only available when the LoggingAlphaOptions feature gate is enabled.

+## `LoggingOptions` {#LoggingOptions}

+LoggingOptions can be used with ValidateAndApplyWithOptions to override
+certain global defaults.
+TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.

Field        Description

+endpoint string

+Endpoint of the collector this component will report traces to.
+The connection is insecure, and does not currently support TLS.
+Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.

+samplingRatePerMillion int32

+SamplingRatePerMillion is the number of samples to collect per million spans.
+Recommended is unset. If unset, sampler respects its parent span's sampling
+rate, but otherwise never samples.
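As a rough sketch of how a component consumes these two fields, here is a kubelet configuration embedding TracingConfiguration under its `tracing` key; the feature-gate name and the values are assumptions for illustration:

```shell
cat <<'EOF' > /tmp/kubelet-tracing.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true            # assumed gate on releases where tracing is not yet GA
tracing:
  endpoint: localhost:4317        # the otlp grpc default mentioned above
  samplingRatePerMillion: 10000   # sample roughly 1% of spans
EOF
```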
-registerWithTaints are an array of taints to add to a node object when
-the kubelet registers itself. This only takes effect when registerNode
-is true and upon the initial registration of the node.
-Default: nil
-
-registerNode bool
-
-registerNode enables automatic registration with the apiserver.
-Default: true
-
-Tracing specifies the versioned configuration for OpenTelemetry tracing clients.
-See https://kep.k8s.io/2832 for more details.
-Default: nil
-
-localStorageCapacityIsolation bool
-
-LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true.
-This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way
-as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk
-usage from the volume exceeds the limit.
-This feature depends on the capability of detecting correct root file system disk usage. For certain systems,
-such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be
-disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir.
-Default: true
-
-containerRuntimeEndpoint[Required] string
-
-ContainerRuntimeEndpoint is the endpoint of container runtime.
-Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.
-Examples: 'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'
-
-imageServiceEndpoint string
-
-ImageServiceEndpoint is the endpoint of container image service.
-Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.
-Examples: 'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'.
-If not specified, the value in containerRuntimeEndpoint is used.
-
-SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource.
-This type is used internally by the Kubelet for tracking checkpointed dynamic configs.
-It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.
-
-CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only
-invoked when an image being pulled matches the images handled by the plugin (see matchImages).
-
-Field        Description
-
-name[Required] string
-
-name is the required name of the credential provider. It must match the name of the
-provider executable as seen by the kubelet. The executable must be in the kubelet's
-bin directory (set by the --image-credential-provider-bin-dir flag).
-
-matchImages[Required] []string
-
-matchImages is a required list of strings used to match against images in order to
-determine if this provider should be invoked. If one of the strings matches the
-requested image from the kubelet, the plugin will be invoked and given a chance
-to provide credentials. Images are expected to contain the registry domain
-and URL path.
-
-Each entry in matchImages is a pattern which can optionally contain a port and a path.
-Globs can be used in the domain, but not in the port or the path. Globs are supported
-as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
-Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
-a single subdomain segment, so *.io does not match *.k8s.io.
-
-A match exists between an image and a matchImage when all of the below are true:
-
-- Both contain the same number of domain parts and each part matches.
-- The URL path of an imageMatch must be a prefix of the target image URL path.
-- If the imageMatch contains a port, then the port must match in the image as well.
-
-defaultCacheDuration is the default duration the plugin will cache credentials in-memory
-if a cache duration is not provided in the plugin response. This field is required.
-
-apiVersion[Required] string
-
-Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse
-MUST use the same encoding version as the input. Current supported values are:
-
-- credentialprovider.kubelet.k8s.io/v1beta1
-
-args []string
-
-Arguments to pass to the command when executing it.
-
-Env defines additional environment variables to expose to the process. These
-are unioned with the host's environment, as well as variables client-go uses
-to pass argument to the plugin.
-
-enabled allows anonymous requests to the kubelet server.
-Requests that are not rejected by another authentication method are treated as
-anonymous requests.
-Anonymous requests have a username of system:anonymous, and a group name of
-system:unauthenticated.
+anonymous contains settings related to anonymous authentication.

+LocalStorageCapacityIsolation enables local ephemeral storage isolation feature. The default setting is true.
+This feature allows users to set request/limit for container's ephemeral storage and manage it in a similar way
+as cpu and memory. It also allows setting sizeLimit for emptyDir volume, which will trigger pod eviction if disk
+usage from the volume exceeds the limit.
+This feature depends on the capability of detecting correct root file system disk usage. For certain systems,
+such as kind rootless, if this capability cannot be supported, the feature LocalStorageCapacityIsolation should be
+disabled. Once disabled, user should not set request/limit for container's ephemeral storage, or sizeLimit for emptyDir.
+Default: true

-mode is the authorization mode to apply to requests to the kubelet server.
-Valid values are AlwaysAllow and Webhook.
-Webhook mode uses the SubjectAccessReview API to determine authorization.

+ContainerRuntimeEndpoint is the endpoint of container runtime.
+Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.
+Examples: 'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'

-webhook contains settings related to Webhook authorization.

+ImageServiceEndpoint is the endpoint of container image service.
+Unix Domain Sockets are supported on Linux, while npipe and tcp endpoints are supported on Windows.
+Examples: 'unix:///path/to/runtime.sock', 'npipe:////./pipe/runtime'.
+If not specified, the value in containerRuntimeEndpoint is used.

+SerializedNodeConfigSource allows us to serialize v1.NodeConfigSource.
+This type is used internally by the Kubelet for tracking checkpointed dynamic configs.
+It exists in the kubeletconfig API group because it is classified as a versioned input to the Kubelet.
Field        Description

+apiVersion string   kubelet.config.k8s.io/v1beta1
+kind string         SerializedNodeConfigSource
-enabled bool
-
-enabled allows bearer token authentication backed by the
-tokenreviews.authentication.k8s.io API.

+CredentialProvider represents an exec plugin to be invoked by the kubelet. The plugin is only
+invoked when an image being pulled matches the images handled by the plugin (see matchImages).

-cacheAuthorizedTTL is the duration to cache 'authorized' responses from the
-webhook authorizer.

+name is the required name of the credential provider. It must match the name of the
+provider executable as seen by the kubelet. The executable must be in the kubelet's
+bin directory (set by the --image-credential-provider-bin-dir flag).

-cacheUnauthorizedTTL

+matchImages[Required] []string

+matchImages is a required list of strings used to match against images in order to
+determine if this provider should be invoked. If one of the strings matches the
+requested image from the kubelet, the plugin will be invoked and given a chance
+to provide credentials. Images are expected to contain the registry domain
+and URL path.

+Each entry in matchImages is a pattern which can optionally contain a port and a path.
+Globs can be used in the domain, but not in the port or the path. Globs are supported
+as subdomains like '*.k8s.io' or 'k8s.*.io', and top-level-domains such as 'k8s.*'.
+Matching partial subdomains like 'app*.k8s.io' is also supported. Each glob can only match
+a single subdomain segment, so *.io does not match *.k8s.io.

+A match exists between an image and a matchImage when all of the below are true:
+
+- Both contain the same number of domain parts and each part matches.
+- The URL path of an imageMatch must be a prefix of the target image URL path.
+- If the imageMatch contains a port, then the port must match in the image as well.

-cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from
-the webhook authorizer.

+defaultCacheDuration is the default duration the plugin will cache credentials in-memory
+if a cache duration is not provided in the plugin response. This field is required.

-clientCAFile is the path to a PEM-encoded certificate bundle. If set, any request
-presenting a client certificate signed by one of the authorities in the bundle
-is authenticated with a username corresponding to the CommonName,
-and groups corresponding to the Organization in the client certificate.

+Required input version of the exec CredentialProviderRequest. The returned CredentialProviderResponse
+MUST use the same encoding version as the input. Current supported values are:
+
+- credentialprovider.kubelet.k8s.io/v1beta1

+Env defines additional environment variables to expose to the process. These
+are unioned with the host's environment, as well as variables client-go uses
+to pass argument to the plugin.

+ExecEnvVar is used for setting environment variables when executing an exec-based
+credential plugin.
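Tying the matchImages rules together, a minimal sketch of a credential provider config; the provider name, registry globs, and file path are placeholders, and the kubelet would be pointed at them via --image-credential-provider-config and --image-credential-provider-bin-dir:

```shell
cat <<'EOF' > /etc/kubernetes/credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: example-provider              # placeholder; must match the executable name
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
    matchImages:
      - "*.registry.example.com"        # each glob matches one subdomain segment
      - "registry.example.com/team/*"   # URL paths match as prefixes
    defaultCacheDuration: "12h"
EOF
```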
Field        Description

-swapBehavior string
+name[Required] string

-swapBehavior configures swap memory available to container workloads. May be one of
-"", "LimitedSwap": workload combined memory and swap usage cannot exceed pod memory limit
-"UnlimitedSwap": workloads can use unlimited swap, up to the allocatable limit.

-priority is the priority value associated with the shutdown grace period

-shutdownGracePeriodSeconds[Required] int64
+enabled bool

-shutdownGracePeriodSeconds is the shutdown grace period in seconds
+enabled allows anonymous requests to the kubelet server.
+Requests that are not rejected by another authentication method are treated as
+anonymous requests.
+Anonymous requests have a username of system:anonymous, and a group name of
+system:unauthenticated.

-[Alpha] SplitStream redirects error messages to stderr while
-info messages go to stdout, with buffering. The default is to write
-both to stdout, without buffering. Only available when
-the LoggingAlphaOptions feature gate is enabled.
+mode is the authorization mode to apply to requests to the kubelet server.
+Valid values are AlwaysAllow and Webhook.
+Webhook mode uses the SubjectAccessReview API to determine authorization.

-[Alpha] InfoBufferSize sets the size of the info stream when
-using split streams. The default is zero, which disables buffering.
-Only available when the LoggingAlphaOptions feature gate is enabled.
+webhook contains settings related to Webhook authorization.

-Maximum time between log flushes.
-If a string, parsed as a duration (i.e. "1s").
-If an int, the maximum number of nanoseconds (i.e. 1s = 1000000000).
-Ignored if the selected logging backend writes log messages without buffering.
+enabled allows bearer token authentication backed by the
+tokenreviews.authentication.k8s.io API.

-Verbosity is the threshold that determines which log messages are
-logged. Default is zero which logs only the most important
-messages. Higher values enable additional messages. Error messages
-are always logged.
+cacheTTL enables caching of authentication results

-[Alpha] Options holds additional parameters that are specific
-to the different logging formats. Only the options for the selected
-format get used, but all of them get validated.
-Only available when the LoggingAlphaOptions feature gate is enabled.
+cacheUnauthorizedTTL is the duration to cache 'unauthorized' responses from
+the webhook authorizer.

-InfoStream can be used to override the os.Stdout default.
+clientCAFile is the path to a PEM-encoded certificate bundle. If set, any request
+presenting a client certificate signed by one of the authorities in the bundle
+is authenticated with a username corresponding to the CommonName,
+and groups corresponding to the Organization in the client certificate.

-SerializeAsString controls whether the value is serialized as a string or an integer
+ No description provided.
-## `TracingConfiguration` {#TracingConfiguration}
+## `MemorySwapConfiguration` {#kubelet-config-k8s-io-v1beta1-MemorySwapConfiguration}
**Appears in:**
@@ -1913,60 +1905,69 @@ flushFrequency field, and new fields should use metav1.Duration.
- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
-TracingConfiguration provides versioned configuration for OpenTelemetry tracing clients.
-
-Field        Description
-
-endpoint string
-
-Endpoint of the collector this component will report traces to.
-The connection is insecure, and does not currently support TLS.
-Recommended is unset, and endpoint is the otlp grpc default, localhost:4317.
-
-samplingRatePerMillion int32
-
-SamplingRatePerMillion is the number of samples to collect per million spans.
-Recommended is unset. If unset, sampler respects its parent span's sampling
-rate, but otherwise never samples.
+
+Field        Description
+
+swapBehavior string
+
+swapBehavior configures swap memory available to container workloads. May be one of
+"", "LimitedSwap": workload combined memory and swap usage cannot exceed pod memory limit
+"UnlimitedSwap": workloads can use unlimited swap, up to the allocatable limit.
-## `VModuleConfiguration` {#VModuleConfiguration}
+## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy}
-(Alias of `[]k8s.io/component-base/logs/api/v1.VModuleItem`)
+(Alias of `string`)
**Appears in:**
-- [LoggingConfiguration](#LoggingConfiguration)
+- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
-VModuleConfiguration is a collection of individual file names or patterns
-and the corresponding verbosity threshold.
+ResourceChangeDetectionStrategy denotes a mode in which internal
+managers (secret, configmap) are discovering object changes.
From 8d49270fede09357b038bbf6b72183c0f24ed504 Mon Sep 17 00:00:00 2001
From: Mohammed Affan
Date: Mon, 2 Oct 2023 16:38:44 +0530
Subject: [PATCH 056/229] Add statement for explaining the example image name
---
.../extend-kubernetes/configure-multiple-schedulers.md | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
index 4967ff8f27cc2..a24dad21cf30e 100644
--- a/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
+++ b/content/en/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
@@ -52,11 +52,13 @@ Save the file as `Dockerfile`, build the image and push it to a registry. This e
pushes the image to
[Google Container Registry (GCR)](https://cloud.google.com/container-registry/).
For more details, please read the GCR
-[documentation](https://cloud.google.com/container-registry/docs/).
+[documentation](https://cloud.google.com/container-registry/docs/). Alternatively,
+you can use [Docker Hub](https://hub.docker.com/search?q=). For more details,
+refer to the Docker Hub [documentation](https://docs.docker.com/docker-hub/repos/create/#create-a-repository).
```shell
-docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 .
-gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0
+docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 .     # The image name and the repository
+gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0 # used here are just an example
```
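If you choose the Docker Hub route instead, the equivalent steps would look roughly like this; `myrepo` is a placeholder account and repository name:

```shell
docker build -t myrepo/my-kube-scheduler:1.0 .   # placeholder Docker Hub repository
docker login                                     # authenticate before pushing
docker push myrepo/my-kube-scheduler:1.0
```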
## Define a Kubernetes Deployment for the scheduler
From e8225026544d001fb16a1b73ca965de072a61507 Mon Sep 17 00:00:00 2001
From: steve-hardman <132999137+steve-hardman@users.noreply.github.com>
Date: Mon, 2 Oct 2023 23:58:12 +0100
Subject: [PATCH 057/229] Update jq link
---
content/en/blog/_posts/2020-09-03-warnings/index.md | 2 +-
.../en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md | 2 +-
content/en/docs/reference/kubectl/cheatsheet.md | 2 +-
.../docs/tasks/administer-cluster/verify-signed-artifacts.md | 2 +-
content/en/docs/tasks/tls/managing-tls-in-a-cluster.md | 4 ++--
5 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/content/en/blog/_posts/2020-09-03-warnings/index.md b/content/en/blog/_posts/2020-09-03-warnings/index.md
index a5cfb9f710db7..5d31aedef2f41 100644
--- a/content/en/blog/_posts/2020-09-03-warnings/index.md
+++ b/content/en/blog/_posts/2020-09-03-warnings/index.md
@@ -63,7 +63,7 @@ This metric has labels for the API `group`, `version`, `resource`, and `subresou
and a `removed_release` label that indicates the Kubernetes release in which the API will no longer be served.
This is an example query using `kubectl`, [prom2json](https://github.com/prometheus/prom2json),
-and [jq](https://stedolan.github.io/jq/) to determine which deprecated APIs have been requested
+and [jq](https://jqlang.github.io/jq/) to determine which deprecated APIs have been requested
from the current instance of the API server:
```sh
diff --git a/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md b/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md
index 2c60c12f3f079..1e1b32b265ce3 100644
--- a/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md
+++ b/content/en/blog/_posts/2021-10-18-kpng-specialized-proxiers.md
@@ -210,7 +210,7 @@ podip=$(cat /tmp/out | jq -r '.Endpoints[]|select(.Local == true)|select(.IPs.V6
ip6tables -t nat -A PREROUTING -d $xip/128 -j DNAT --to-destination $podip
```
-Assuming the JSON output above is stored in `/tmp/out` ([jq](https://stedolan.github.io/jq/) is an *awesome* program!).
+Assuming the JSON output above is stored in `/tmp/out` ([jq](https://jqlang.github.io/jq/) is an *awesome* program!).
As this is an example we make it really simple for ourselves by using
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index b933c6eaeecd8..fa7b9c3eb2912 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -213,7 +213,7 @@ kubectl get pods --field-selector=status.phase=Running
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# List Names of Pods that belong to Particular RC
-# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
+# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://jqlang.github.io/jq/
sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
diff --git a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md
index 660b4e903bc96..45f18cec89160 100644
--- a/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md
+++ b/content/en/docs/tasks/administer-cluster/verify-signed-artifacts.md
@@ -15,7 +15,7 @@ You will need to have the following tools installed:
- `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/))
- `curl` (often provided by your operating system)
-- `jq` ([download jq](https://stedolan.github.io/jq/download/))
+- `jq` ([download jq](https://jqlang.github.io/jq/download/))
## Verifying binary signatures
diff --git a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
index f8705587dab70..57a1cf510388d 100644
--- a/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
+++ b/content/en/docs/tasks/tls/managing-tls-in-a-cluster.md
@@ -36,7 +36,7 @@ You need the `cfssl` tool. You can download `cfssl` from
Some steps in this page use the `jq` tool. If you don't have `jq`, you can
install it via your operating system's software sources, or fetch it from
-[https://stedolan.github.io/jq/](https://stedolan.github.io/jq/).
+[https://jqlang.github.io/jq/](https://jqlang.github.io/jq/).
@@ -267,7 +267,7 @@ kubectl get csr my-svc.my-namespace -o json | \
```
{{< note >}}
-This uses the command line tool [`jq`](https://stedolan.github.io/jq/) to populate the base64-encoded
+This uses the command line tool [`jq`](https://jqlang.github.io/jq/) to populate the base64-encoded
content in the `.status.certificate` field.
If you do not have `jq`, you can also save the JSON output to a file, populate this field manually, and
upload the resulting file.
From 27e6da11d9e30a3dfe715fce06c29ec4e6537a1b Mon Sep 17 00:00:00 2001
From: abhiram royals <110195480+Royal-Dragon@users.noreply.github.com>
Date: Wed, 4 Oct 2023 19:18:14 +0530
Subject: [PATCH 058/229] changed "search" to "search this site" #43291
a feature request to replace "search" with "search this site"
label #43291
---
data/i18n/en/en.toml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/data/i18n/en/en.toml b/data/i18n/en/en.toml
index 8ec60129fa5a6..9ba8d0e51d284 100644
--- a/data/i18n/en/en.toml
+++ b/data/i18n/en/en.toml
@@ -430,7 +430,7 @@ other = """🛇 This item links to a third party project or product that is
other = """
Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details.
You should read the content guide before proposing a change that adds an extra third-party link.
"""
[ui_search_placeholder]
-other = "Search"
+other = "Search this site"
[version_check_mustbe]
other = "Your Kubernetes server must be version "
From 1e3b672e7211b9ad4141534c16f511e1e54089f5 Mon Sep 17 00:00:00 2001
From: "Kenneth J. Miller"
Date: Thu, 28 Sep 2023 12:35:21 +0200
Subject: [PATCH 059/229] content: es: Fix incorrect letter case for data
storage units
Gib (Gibibit) was used where GiB (Gibibyte) is intended.
---
content/es/docs/concepts/workloads/controllers/statefulset.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/es/docs/concepts/workloads/controllers/statefulset.md b/content/es/docs/concepts/workloads/controllers/statefulset.md
index 95e86a7a3f674..7ed24d2b7edaf 100644
--- a/content/es/docs/concepts/workloads/controllers/statefulset.md
+++ b/content/es/docs/concepts/workloads/controllers/statefulset.md
@@ -153,7 +153,7 @@ The Cluster Domain value will be set to `cluster.local` unless
Kubernetes creates a [PersistentVolume](/docs/concepts/storage/persistent-volumes/) for each
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
-with a StorageClass of `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass is specified,
+with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass is specified,
then the default StorageClass will be used. When a Pod is (re)scheduled
onto a node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that the PersistentVolumes associated with the
From 894a2215c8e8b2fdeb55c259c14d8b6be7e209bb Mon Sep 17 00:00:00 2001
From: "Kenneth J. Miller"
Date: Thu, 28 Sep 2023 12:35:21 +0200
Subject: [PATCH 060/229] content: id: Fix incorrect letter case for data
storage units
Gib (Gibibit) was used where GiB (Gibibyte) is intended.
---
content/id/docs/concepts/workloads/controllers/statefulset.md | 3 +--
.../manage-resources/memory-constraint-namespace.md | 2 +-
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/content/id/docs/concepts/workloads/controllers/statefulset.md b/content/id/docs/concepts/workloads/controllers/statefulset.md
index a309e223a3693..5c091d3620939 100644
--- a/content/id/docs/concepts/workloads/controllers/statefulset.md
+++ b/content/id/docs/concepts/workloads/controllers/statefulset.md
@@ -154,7 +154,7 @@ The cluster domain will be set to `cluster.local` unless
Kubernetes creates a [PersistentVolume](/id/docs/concepts/storage/persistent-volumes/) for each
VolumeClaimTemplate. In the nginx example above, each Pod will receive a single PersistentVolume
-with the StorageClass `my-storage-class` and 1 Gib of provisioned storage. If no StorageClass
+with the StorageClass `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod is (re)scheduled
onto a Node, its `volumeMounts` mount the PersistentVolumes associated with its
PersistentVolume Claims. Note that the PersistentVolumes associated with
@@ -275,4 +275,3 @@ The StatefulSet will then start creating Pods with the revised configuration template.
* Follow the example in [how to deploy Cassandra with StatefulSets](/docs/tutorials/stateful-application/cassandra/).
-
diff --git a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
index 1aae3e38f009b..406363d0e2882 100644
--- a/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
+++ b/content/id/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace.md
@@ -202,7 +202,7 @@ from the LimitRange.
At this point, your Container might be running or it might not be running. Recall that a prerequisite
for this task is that each Node has at least 1 GiB of memory. If each Node has only
-1 GiB of memory, then there is not enough allocatable memory on any Node to satisfy a request of 1 Gib of memory. If you happen to be using Nodes with 2 GiB of memory, then you probably have enough space to satisfy the 1 GiB request.
+1 GiB of memory, then there is not enough allocatable memory on any Node to satisfy a request of 1 GiB of memory. If you happen to be using Nodes with 2 GiB of memory, then you probably have enough space to satisfy the 1 GiB request.
Deleting the Pod:
From 2b61bd9cfeacc3852cb7ee174526847c6b9dcb3d Mon Sep 17 00:00:00 2001
From: Mohammed Affan
Date: Thu, 5 Oct 2023 12:04:48 +0530
Subject: [PATCH 061/229] Remove misleading docs around eviction and image
garbage collection
---
.../scheduling-eviction/node-pressure-eviction.md | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md
index 80367800153a6..381e291488fa1 100644
--- a/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md
+++ b/content/en/docs/concepts/scheduling-eviction/node-pressure-eviction.md
@@ -105,13 +105,11 @@ does not support other configurations.
Some kubelet garbage collection features are deprecated in favor of eviction:
-| Existing Flag | New Flag | Rationale |
-| ------------- | -------- | --------- |
-| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
-| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
-| `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context |
-| `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context |
-| `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context |
+| Existing Flag | Rationale |
+| ------------- | --------- |
+| `--maximum-dead-containers` | deprecated once old logs are stored outside of container's context |
+| `--maximum-dead-containers-per-container` | deprecated once old logs are stored outside of container's context |
+| `--minimum-container-ttl-duration` | deprecated once old logs are stored outside of container's context |
### Eviction thresholds
From 45644aa65c6410f698e5d288a81fc5b4999cc6cf Mon Sep 17 00:00:00 2001
From: Michael
Date: Thu, 5 Oct 2023 20:56:08 +0800
Subject: [PATCH 062/229] [zh] Clean up kubeadm_init_phase_kubeconfig files
---
.../kubeadm_init_phase_kubeconfig.md | 18 +--------------
.../kubeadm_init_phase_kubeconfig_admin.md | 20 ++---------------
.../kubeadm_init_phase_kubeconfig_all.md | 19 +++-------------
...nit_phase_kubeconfig_controller-manager.md | 20 ++---------------
.../kubeadm_init_phase_kubeconfig_kubelet.md | 22 +++----------------
...kubeadm_init_phase_kubeconfig_scheduler.md | 20 ++---------------
6 files changed, 13 insertions(+), 106 deletions(-)
diff --git a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md
index 5f1cf4bef421d..d2ec9d5d351e1 100644
--- a/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md
+++ b/content/zh-cn/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_kubeconfig.md
@@ -1,29 +1,16 @@
-Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
+Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file.

### Synopsis

This command is not meant to be run on its own. See the list of available subcommands.
```
@@ -33,7 +20,6 @@ kubeadm init phase kubeconfig [flags]
-
### Options
@@ -135,7 +121,7 @@ Don't apply any changes; just output what would be done.
-Help command for the scheduler operation
+Help command for the scheduler operation.
@@ -179,7 +165,6 @@ Don't apply any changes; just output what would be done.
-
### Options inherited from parent commands
@@ -203,4 +188,3 @@ Don't apply any changes; just output what would be done.
-
From 78bc401fbcf49c345431c9199d5492b4245a186b Mon Sep 17 00:00:00 2001
From: Wilson Wu
Date: Tue, 3 Oct 2023 23:04:43 +0800
Subject: [PATCH 063/229] Init translate
---
.../2023-08-15-pkgs-k8s-io-introduction.md | 324 ++++++++++++++++++
1 file changed, 324 insertions(+)
create mode 100644 content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md
diff --git a/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md b/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md
new file mode 100644
index 0000000000000..2dd837d0cf8d7
--- /dev/null
+++ b/content/zh-cn/blog/_posts/2023-08-15-pkgs-k8s-io-introduction.md
@@ -0,0 +1,324 @@
+---
+layout: blog
+title: "pkgs.k8s.io: Introducing Kubernetes Community-Owned Package Repositories"
+date: 2023-08-15T20:00:00+0000
+slug: pkgs-k8s-io-introduction
+---
+
+
+
+**Author**: Marko Mudrinić (Kubermatic)
+
+**Translator**: Wilson Wu (DaoCloud)
+
+
+On behalf of Kubernetes SIG Release, I am pleased to introduce the Kubernetes
+community-owned Debian and RPM software repositories: `pkgs.k8s.io`!
+These brand-new repositories replace the Google-hosted repositories
+(`apt.kubernetes.io` and `yum.kubernetes.io`) that we have been using since Kubernetes v1.5.
+
+
+This blog post covers what these new package repositories contain, what the change means for end users, and how to migrate to the new repositories.
+
+
+**ℹ️ Update (August 31, 2023): the legacy Google-hosted repositories are deprecated and will be frozen starting September 13, 2023.**
+See the [deprecation announcement](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/) for more details about this change.
+
+
+## What you need to know about the new package repositories {#what-you-need-to-know-about-the-new-package-repositories}
+
+
+**(updated on August 31, 2023)**
+
+
+- This is an **opt-in change**; you're required to manually migrate from the
+  Google-hosted repositories to the Kubernetes community-owned repositories.
+  See [how to migrate](#how-to-migrate) later in this announcement for migration
+  information and instructions.
+
+- The legacy Google-hosted repositories are **deprecated as of August 31, 2023**,
+  and will be **frozen approximately as of September 13, 2023**.
+  The freeze will happen immediately following the patch releases that are scheduled for September 2023.
+  Freezing the legacy repositories means that we will publish packages for the Kubernetes
+  project only to the community-owned repositories beyond the September 13, 2023 cut-off point.
+  Check out the [deprecation announcement](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)
+  for more details about this change.
+
+- Existing packages in the legacy repositories will remain available for the foreseeable future.
+  However, the Kubernetes project can't provide any guarantees on how long that will be.
+  The deprecated legacy repositories, and their contents, might be removed at any time in the
+  future and without a further notice period.
+
+- Given that no new releases will be published to the legacy repositories after the
+  September 13, 2023 cut-off point, you will not be able to upgrade to any patch or minor
+  release made after that date if you don't migrate to the new Kubernetes repositories.
+  That said, we recommend migrating to the new Kubernetes repositories **as soon as possible**.
+
+- The new Kubernetes repositories contain packages for the Kubernetes versions that were
+  still supported when the community took over package building.
+  This means that anything before v1.24.0 is available only in the Google-hosted repositories.
+
+- There's a dedicated package repository for each Kubernetes minor release.
+  When upgrading to a different minor release, you must keep in mind that the repository details change as well.
+
+
+## Why are we introducing new package repositories? {#why-are-we-introducing-new-package-repositories}
+
+
+As the Kubernetes project is growing, we want to make sure that end users get the best possible experience.
+The Google-hosted repositories have served us well for many years, but we started facing
+some problems that require significant changes to how we publish packages.
+Another goal that we have is to use community-owned infrastructure for all critical components, and that includes repositories.
+
+
+Publishing packages to the Google-hosted repositories is a manual process that can be done
+only by a team of Google employees called [Google Build Admins](/zh-cn/releases/release-managers/#build-admins).
+[The Kubernetes Release Managers team](/zh-cn/releases/release-managers/#release-managers) is a very diverse team,
+especially in terms of the time zones that we work in. Given this constraint, we have to do very careful
+planning for every release to ensure that we have both a Release Manager and a Google Build Admin available to carry out the release.
+
+
+Another problem is that we only have a single package repository. Because of this, we're not
+able to publish packages for prerelease versions (alpha, beta, and rc). This makes it harder for
+anyone interested to test Kubernetes prereleases. The feedback that we receive from people testing
+these releases is critical to ensure the best quality of releases, so we want to make testing these
+releases as easy as possible. On top of that, having only one repository also limited how we could
+publish dependencies such as `cri-tools` and `kubernetes-cni`.
+
+
+Regardless of these issues, we're very grateful to Google and the Google Build Admins for their involvement, support, and help over all these years!
+
+
+## How the new package repositories work {#how-the-new-package-repositories-work}
+
+
+The new Debian and RPM repositories are hosted at `pkgs.k8s.io`.
+At the moment, this domain points to a CloudFront CDN backed by an S3 bucket that contains repositories and packages.
+However, we plan to onboard additional mirrors in the future, giving other companies the possibility to help us serve packages.
+
+
+Packages are built and published via the [OpenBuildService (OBS) platform](http://openbuildservice.org).
+After a long period of evaluating different solutions, we decided to use OpenBuildService as the
+platform for managing our repositories and packages. First of all, OpenBuildService is an open
+source platform used by a large number of open source projects and companies such as openSUSE,
+VideoLAN, Dell, Intel, and more. OpenBuildService has many features making it very flexible and
+easy to integrate with our existing release tooling. It also allows us to build packages in a
+similar way as for the Google-hosted repositories, making the migration process as seamless as possible.
+
+
+SUSE sponsors the Kubernetes project with access to their reference OpenBuildService environment
+([`build.opensuse.org`](http://build.opensuse.org)),
+as well as with technical support for integrating OBS with our release processes.
+
+
+We use SUSE's OBS instance for building and publishing packages. Upon building a new release,
+our tooling automatically pushes the needed artifacts and package specifications to `build.opensuse.org`.
+That triggers the build process that builds packages for all supported architectures (AMD64, ARM64, PPC64LE, S390X).
+At the end, the generated packages are automatically pushed to our community-owned S3 bucket, making them available to all users.
+
+
+We want to take this opportunity to thank SUSE for allowing us to use `build.opensuse.org`
+and for their generous support that made this integration possible!
+
+
+## What are significant differences between the Google-hosted and Kubernetes package repositories? {#what-are-significant-differences-between-the-google-hosted-and-kubernetes-package-repositories}
+
+
+There are three significant differences that you should be aware of:
+
+
+- There's a dedicated package repository for each Kubernetes minor release. For example,
+  the repository called `core:/stable:/v1.28` hosts only packages for stable Kubernetes v1.28 releases.
+  This means you can install v1.28.0 from this repository, but you can't install v1.27.0 or
+  any other minor release besides v1.28. Upon upgrading to another minor release, you have to
+  add a new repository and can optionally remove the old one (see the `sed` sketch after this list).
+
+- The available versions of the `cri-tools` and `kubernetes-cni` packages differ between Kubernetes repositories:
+  - These two packages are dependencies for `kubelet` and `kubeadm`
+  - The Kubernetes repositories for v1.24 to v1.27 have the same versions of these packages as the Google-hosted repositories
+  - The Kubernetes repositories for v1.28 and onwards only publish the versions that are used by that Kubernetes minor release
+  - Speaking of v1.28, only kubernetes-cni 1.2.0 and cri-tools v1.28 are available in the repository for Kubernetes v1.28
+  - Similarly for v1.29, we only plan on publishing cri-tools v1.29 and whatever kubernetes-cni version Kubernetes v1.29 is going to use
+
+- The revision part of the package version (the `-00` part in `1.28.0-00`) is now autogenerated
+  by the OpenBuildService platform and has a different format. The revision is now in the
+  format of `-x.y`, for example, `1.28.0-1.1`.
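+
+As an illustration of the first point, switching a Debian-based machine from the v1.28
+repository to the v1.29 repository amounts to rewriting the repository definition and
+refreshing the package index. The following is only a sketch; it assumes the definition
+lives in `/etc/apt/sources.list.d/kubernetes.list`, as set up in the migration steps below:
+
+```shell
+# point apt at the v1.29 repository instead of v1.28, then refresh the index
+sudo sed -i 's|/core:/stable:/v1.28/deb/|/core:/stable:/v1.29/deb/|' /etc/apt/sources.list.d/kubernetes.list
+sudo apt-get update
+```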
+
+
+## Does this in any way affect existing Google-hosted repositories? {#does-this-in-any-way-affect-existing-google-hosted-repositories}
+
+
+The Google-hosted repositories, and all packages published to them, are still available, the same as before.
+There are no changes in how we build and publish packages to the Google-hosted repositories;
+all newly-introduced changes affect only the packages published to the community-owned repositories.
+
+
+However, as mentioned at the beginning of this blog post, we plan to stop publishing packages to the Google-hosted repositories in the future.
+
+
+## How to migrate to the Kubernetes community-owned repositories? {#how-to-migrate}
+
+
+### Debian, Ubuntu, and operating systems using `apt`/`apt-get` {#how-to-migrate-deb}
+
+
+1. Replace the `apt` repository definition so that `apt` points to the new repository
+   instead of the Google-hosted repository. Make sure to replace the Kubernetes minor
+   version in the command below with the minor version that you're currently using:
+
+ ```shell
+ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
+ ```
+
+
+2. Download the public signing key for the Kubernetes package repositories.
+   The same signing key is used for all repositories, so you can disregard the version in the URL:
+
+ ```shell
+ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
+ ```
+
+
+3. Update the `apt` package index:
+
+ ```shell
+ sudo apt-get update
+ ```
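+
+As a quick sanity check, you can then ask `apt` where it would fetch a package from;
+the candidate version should be listed as coming from `pkgs.k8s.io`
+(`kubeadm` below is only an example package):
+
+```shell
+# show which repository provides the candidate version of kubeadm
+apt-cache policy kubeadm
+```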
+
+
+### CentOS, Fedora, RHEL, and operating systems using `rpm`/`dnf` {#how-to-migrate-rpm}
+
+
+1. Replace the `yum` repository definition so that `yum` points to the new repository
+   instead of the Google-hosted repository. Make sure to replace the Kubernetes minor
+   version in the command below with the minor version that you're currently using:
+
+   ```shell
+   cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
+   [kubernetes]
+   name=Kubernetes
+   baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
+   enabled=1
+   gpgcheck=1
+   gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
+   EOF
+   ```
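+
+As a quick sanity check, you can refresh the package metadata and confirm that the new
+repository is listed (shown with `dnf`; `yum` behaves the same way):
+
+```shell
+# rebuild the metadata cache and confirm the kubernetes repository is enabled
+sudo dnf makecache
+sudo dnf repolist | grep -i kubernetes
+```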
+## Can I rollback to the Google-hosted repository after migrating to the Kubernetes repositories? {#can-i-rollback-to-the-google-hosted-repository-after-migrating-to-the-kubernetes-repositories}
+
+
+In general, yes. Just take the same steps as when migrating, but use parameters for the Google-hosted repository.
+You can find those parameters in a document like ["Installing kubeadm"](/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm).
+
+
+## Why isn't there a stable list of domains/IPs? Why can't I restrict package downloads? {#why-isn-t-there-a-stable-list-of-domains-ips-why-can-t-i-restrict-package-downloads}
+
+
+Our plan for `pkgs.k8s.io` is to make it work as a redirector to a set of backends (package mirrors) based on the user's location.
+The nature of this change means that a user downloading a package could be redirected to any mirror at any time.
+Given the architecture and our plans to onboard additional mirrors in the near future,
+we can't provide a list of IP addresses or domains that you could add to an allow list.
+
+
+Restrictive control mechanisms like man-in-the-middle proxies or network policies that restrict access to a specific list of IPs/domains will break with this change.
+For these scenarios, we encourage you to mirror the release packages to a local repository that you have strict control over.
+
+
+## What should I do if I detect some abnormality with the new repositories? {#what-should-i-do-if-i-detect-some-abnormality-with-the-new-repositories}
+
+
+If you encounter any issue with the new Kubernetes repositories,
+please file an issue in the [`kubernetes/release` repository](https://github.com/kubernetes/release/issues/new/choose).
From 3729d4a75a97bad0dfa048aa5c655f24281335c6 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Fri, 6 Oct 2023 00:36:28 +0300
Subject: [PATCH 064/229] [ja] update the range of pod-deletion-cost
---
content/ja/docs/concepts/workloads/controllers/replicaset.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/concepts/workloads/controllers/replicaset.md b/content/ja/docs/concepts/workloads/controllers/replicaset.md
index c3d5282fa5c15..1f73fe842f911 100644
--- a/content/ja/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/ja/docs/concepts/workloads/controllers/replicaset.md
@@ -275,7 +275,7 @@ ReplicaSetは、ただ`.spec.replicas`フィールドを更新することによ
Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost) annotation, users can set a preference for which Pods should be removed first when a ReplicaSet is scaled down.
-The annotation must be set on the Pod, and its range is [-2147483647, 2147483647]. It represents the cost of deleting the Pod compared to the other Pods that belong to the same ReplicaSet. Pods with a lower deletion cost are deleted preferentially over Pods with a higher deletion cost.
+The annotation must be set on the Pod, and its range is [-2147483648, 2147483647]. It represents the cost of deleting the Pod compared to the other Pods that belong to the same ReplicaSet. Pods with a lower deletion cost are deleted preferentially over Pods with a higher deletion cost.
Pods that do not set this annotation implicitly have a deletion cost of 0; negative values are permitted.
Invalid values are rejected by the API server.
From f2cfc91486c1d88eefbe861393194b43d5e7e4b5 Mon Sep 17 00:00:00 2001
From: Shannon Kularathna
Date: Tue, 15 Aug 2023 19:54:37 +0000
Subject: [PATCH 065/229] Fix the case of Secrets wherever it refers to the
Kubernetes object
---
.../en/docs/concepts/configuration/secret.md | 28 +++++++++----------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index d546ec12e4964..a7c38d4d4a5d5 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -6,8 +6,8 @@ content_type: concept
feature:
title: Secret and configuration management
description: >
- Deploy and update secrets and application configuration without rebuilding your image
- and without exposing secrets in your stack configuration.
+ Deploy and update Secrets and application configuration without rebuilding your image
+ and without exposing Secrets in your stack configuration.
weight: 30
---
@@ -68,7 +68,7 @@ help automate node registration.
### Use case: dotfiles in a secret volume
You can make your data "hidden" by defining a key that begins with a dot.
-This key represents a dotfile or "hidden" file. For example, when the following secret
+This key represents a dotfile or "hidden" file. For example, when the following Secret
is mounted into a volume, `secret-volume`, the volume will contain a single file,
called `.secret-file`, and the `dotfile-test-container` will have this file
present at the path `/etc/secret-volume/.secret-file`.
@@ -135,8 +135,8 @@ Here are some of your options:
[ServiceAccount](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
and its tokens to identify your client.
- There are third-party tools that you can run, either within or outside your cluster,
- that provide secrets management. For example, a service that Pods access over HTTPS,
- that reveals a secret if the client correctly authenticates (for example, with a ServiceAccount
+ that provide Secrets management. For example, a service that Pods access over HTTPS,
+ that reveals a Secret if the client correctly authenticates (for example, with a ServiceAccount
token).
- For authentication, you can implement a custom signer for X.509 certificates, and use
[CertificateSigningRequests](/docs/reference/access-authn-authz/certificate-signing-requests/)
@@ -505,7 +505,7 @@ data:
A bootstrap token Secret has the following keys specified under `data`:
- `token-id`: A random 6 character string as the token identifier. Required.
-- `token-secret`: A random 16 character string as the actual token secret. Required.
+- `token-secret`: A random 16 character string as the actual token Secret. Required.
- `description`: A human-readable string that describes what the token is
used for. Optional.
- `expiration`: An absolute UTC time using [RFC3339](https://datatracker.ietf.org/doc/html/rfc3339) specifying when the token
@@ -568,9 +568,9 @@ precedence.
#### Size limit {#restriction-data-size}
-Individual secrets are limited to 1MiB in size. This is to discourage creation
-of very large secrets that could exhaust the API server and kubelet memory.
-However, creation of many smaller secrets could also exhaust memory. You can
+Individual Secrets are limited to 1MiB in size. This is to discourage creation
+of very large Secrets that could exhaust the API server and kubelet memory.
+However, creation of many smaller Secrets could also exhaust memory. You can
use a [resource quota](/docs/concepts/policy/resource-quotas/) to limit the
number of Secrets (or other resources) in a namespace.
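
For example, a minimal quota that caps the number of Secrets in a namespace might look
like this; the quota name, namespace, and the limit of 50 are arbitrary placeholders:

```shell
# object-count quota: at most 50 Secret objects in the my-namespace namespace
kubectl create quota secret-count --hard=secrets=50 --namespace=my-namespace
```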
@@ -708,17 +708,17 @@ LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT
0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames kubelet, 127.0.0.1 Keys [1badkey, 2alsobad] from the EnvFrom secret default/mysecret were skipped since they are considered invalid environment variable names.
```
-### Container image pull secrets {#using-imagepullsecrets}
+### Container image pull Secrets {#using-imagepullsecrets}
If you want to fetch container images from a private repository, you need a way for
the kubelet on each node to authenticate to that repository. You can configure
-_image pull secrets_ to make this possible. These secrets are configured at the Pod
+_image pull Secrets_ to make this possible. These Secrets are configured at the Pod
level.
#### Using imagePullSecrets
-The `imagePullSecrets` field is a list of references to secrets in the same namespace.
-You can use an `imagePullSecrets` to pass a secret that contains a Docker (or other) image registry
+The `imagePullSecrets` field is a list of references to Secrets in the same namespace.
+You can use an `imagePullSecrets` to pass a Secret that contains a Docker (or other) image registry
password to the kubelet. The kubelet uses this information to pull a private image on behalf of your Pod.
See the [PodSpec API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
for more information about the `imagePullSecrets` field.
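
One common way to create such a Secret is directly from registry credentials; a sketch
with placeholder values:

```shell
# create a kubernetes.io/dockerconfigjson Secret from registry credentials
kubectl create secret docker-registry my-registry-key \
  --docker-server=registry.example.com \
  --docker-username=developer \
  --docker-password='S3cr3t!' \
  --docker-email=developer@example.com
```

You can then reference `my-registry-key` from the `imagePullSecrets` field of a Pod.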
@@ -787,7 +787,7 @@ Secrets it expects to interact with, other apps within the same namespace can
render those assumptions invalid.
A Secret is only sent to a node if a Pod on that node requires it.
-For mounting secrets into Pods, the kubelet stores a copy of the data into a `tmpfs`
+For mounting Secrets into Pods, the kubelet stores a copy of the data into a `tmpfs`
so that the confidential data is not written to durable storage.
Once the Pod that depends on the Secret is deleted, the kubelet deletes its local copy
of the confidential data from the Secret.
From cc62cbfda3fb47bf5a0585e7db237a23cf868bd4 Mon Sep 17 00:00:00 2001
From: Shannon Kularathna
Date: Tue, 15 Aug 2023 20:11:43 +0000
Subject: [PATCH 066/229] Move YAML snippets to examples directory and include
with code shortcode
---
.../en/docs/concepts/configuration/secret.md | 154 ++----------------
.../en/examples/secret/basicauth-secret.yaml | 8 +
.../secret/bootstrap-token-secret-base64.yaml | 13 ++
.../bootstrap-token-secret-literal.yaml | 18 ++
.../en/examples/secret/dockercfg-secret.yaml | 8 +
.../en/examples/secret/dotfile-secret.yaml | 27 +++
.../en/examples/secret/optional-secret.yaml | 17 ++
.../secret/serviceaccount-token-secret.yaml | 9 +
.../en/examples/secret/ssh-auth-secret.yaml | 9 +
.../en/examples/secret/tls-auth-secret.yaml | 28 ++++
10 files changed, 148 insertions(+), 143 deletions(-)
create mode 100644 content/en/examples/secret/basicauth-secret.yaml
create mode 100644 content/en/examples/secret/bootstrap-token-secret-base64.yaml
create mode 100644 content/en/examples/secret/bootstrap-token-secret-literal.yaml
create mode 100644 content/en/examples/secret/dockercfg-secret.yaml
create mode 100644 content/en/examples/secret/dotfile-secret.yaml
create mode 100644 content/en/examples/secret/optional-secret.yaml
create mode 100644 content/en/examples/secret/serviceaccount-token-secret.yaml
create mode 100644 content/en/examples/secret/ssh-auth-secret.yaml
create mode 100644 content/en/examples/secret/tls-auth-secret.yaml
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index a7c38d4d4a5d5..e523c38ee7347 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -24,7 +24,7 @@ Because Secrets can be created independently of the Pods that use them, there
is less risk of the Secret (and its data) being exposed during the workflow of
creating, viewing, and editing Pods. Kubernetes, and applications that run in
your cluster, can also take additional precautions with Secrets, such as avoiding
-writing secret data to nonvolatile storage.
+writing sensitive data to nonvolatile storage.
Secrets are similar to {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}}
but are specifically intended to hold confidential data.
@@ -78,35 +78,7 @@ Files beginning with dot characters are hidden from the output of `ls -l`;
you must use `ls -la` to see them when listing directory contents.
{{< /note >}}
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: dotfile-secret
-data:
- .secret-file: dmFsdWUtMg0KDQo=
----
-apiVersion: v1
-kind: Pod
-metadata:
- name: secret-dotfiles-pod
-spec:
- volumes:
- - name: secret-volume
- secret:
- secretName: dotfile-secret
- containers:
- - name: dotfile-test-container
- image: registry.k8s.io/busybox
- command:
- - ls
- - "-l"
- - "/etc/secret-volume"
- volumeMounts:
- - name: secret-volume
- readOnly: true
- mountPath: "/etc/secret-volume"
-```
+{{% code language="yaml" file="secret/dotfile-secret.yaml" %}}
### Use case: Secret visible to one container in a Pod
@@ -135,7 +107,7 @@ Here are some of your options:
[ServiceAccount](/docs/reference/access-authn-authz/authentication/#service-account-tokens)
and its tokens to identify your client.
- There are third-party tools that you can run, either within or outside your cluster,
- that provide Secrets management. For example, a service that Pods access over HTTPS,
+ that manage sensitive data. For example, a service that Pods access over HTTPS,
that reveals a Secret if the client correctly authenticates (for example, with a ServiceAccount
token).
- For authentication, you can implement a custom signer for X.509 certificates, and use
@@ -251,18 +223,7 @@ fills in some other fields such as the `kubernetes.io/service-account.uid` annot
The following example configuration declares a ServiceAccount token Secret:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-sa-sample
- annotations:
- kubernetes.io/service-account.name: "sa-name"
-type: kubernetes.io/service-account-token
-data:
- # You can include additional key value pairs as you do with Opaque Secrets
- extra: YmFyCg==
-```
+{{% code language="yaml" file="secret/serviceaccount-token-secret.yaml" %}}
After creating the Secret, wait for Kubernetes to populate the `token` key in the `data` field.
@@ -290,16 +251,7 @@ you must use one of the following `type` values for that Secret:
Below is an example for a `kubernetes.io/dockercfg` type of Secret:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-dockercfg
-type: kubernetes.io/dockercfg
-data:
- .dockercfg: |
- ""
-```
+{{% code language="yaml" file="secret/dockercfg-secret.yaml" %}}
{{< note >}}
If you do not want to perform the base64 encoding, you can choose to use the
@@ -369,16 +321,7 @@ Secret manifest.
The following manifest is an example of a basic authentication Secret:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-basic-auth
-type: kubernetes.io/basic-auth
-stringData:
- username: admin # required field for kubernetes.io/basic-auth
- password: t0p-Secret # required field for kubernetes.io/basic-auth
-```
+{{% code language="yaml" file="secret/basicauth-secret.yaml" %}}
The basic authentication Secret type is provided only for convenience.
You can create an `Opaque` type for credentials used for basic authentication.
@@ -397,17 +340,7 @@ as the SSH credential to use.
The following manifest is an example of a Secret used for SSH public/private
key authentication:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-ssh-auth
-type: kubernetes.io/ssh-auth
-data:
- # the data is abbreviated in this example
- ssh-privatekey: |
- MIIEpQIBAAKCAQEAulqb/Y ...
-```
+{{% code language="yaml" file="secret/ssh-auth-secret.yaml" %}}
The SSH authentication Secret type is provided only for convenience.
You can create an `Opaque` type for credentials used for SSH authentication.
@@ -440,21 +373,7 @@ the base64 encoded certificate and private key. For details, see
The following YAML contains an example config for a TLS Secret:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-tls
-type: kubernetes.io/tls
-stringData:
- # the data is abbreviated in this example
- tls.crt: |
- --------BEGIN CERTIFICATE-----
- MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
- tls.key: |
- -----BEGIN RSA PRIVATE KEY-----
- MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
-```
+{{% code language="yaml" file="secret/tls-auth-secret.yaml" %}}
The TLS Secret type is provided only for convenience.
You can create an `Opaque` type for credentials used for TLS authentication.
@@ -486,21 +405,7 @@ string of the token ID.
As a Kubernetes manifest, a bootstrap token Secret might look like the
following:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: bootstrap-token-5emitj
- namespace: kube-system
-type: bootstrap.kubernetes.io/token
-data:
- auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
- expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=
- token-id: NWVtaXRq
- token-secret: a3E0Z2lodnN6emduMXAwcg==
- usage-bootstrap-authentication: dHJ1ZQ==
- usage-bootstrap-signing: dHJ1ZQ==
-```
+{{% code language="yaml" file="secret/bootstrap-token-secret-base64.yaml" %}}
A bootstrap token Secret has the following keys specified under `data`:
@@ -518,26 +423,7 @@ A bootstrap token Secret has the following keys specified under `data`:
You can alternatively provide the values in the `stringData` field of the Secret
without base64 encoding them:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- # Note how the Secret is named
- name: bootstrap-token-5emitj
- # A bootstrap token Secret usually resides in the kube-system namespace
- namespace: kube-system
-type: bootstrap.kubernetes.io/token
-stringData:
- auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token"
- expiration: "2020-09-13T04:39:10Z"
- # This token ID is used in the name
- token-id: "5emitj"
- token-secret: "kq4gihvszzgn1p0r"
- # This token can be used for authentication
- usage-bootstrap-authentication: "true"
- # and it can be used for signing
- usage-bootstrap-signing: "true"
-```
+{{% code language="yaml" file="secret/bootstrap-token-secret-literal.yaml" %}}
## Working with Secrets
@@ -613,25 +499,7 @@ When you reference a Secret in a Pod, you can mark the Secret as _optional_,
such as in the following example. If an optional Secret doesn't exist,
Kubernetes ignores it.
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: redis
- volumeMounts:
- - name: foo
- mountPath: "/etc/foo"
- readOnly: true
- volumes:
- - name: foo
- secret:
- secretName: mysecret
- optional: true
-```
+{{% code language="yaml" file="secret/optional-secret.yaml" %}}
By default, Secrets are required. None of a Pod's containers will start until
all non-optional Secrets are available.
diff --git a/content/en/examples/secret/basicauth-secret.yaml b/content/en/examples/secret/basicauth-secret.yaml
new file mode 100644
index 0000000000000..a854b267a01a5
--- /dev/null
+++ b/content/en/examples/secret/basicauth-secret.yaml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-basic-auth
+type: kubernetes.io/basic-auth
+stringData:
+ username: admin # required field for kubernetes.io/basic-auth
+ password: t0p-Secret # required field for kubernetes.io/basic-auth
\ No newline at end of file
diff --git a/content/en/examples/secret/bootstrap-token-secret-base64.yaml b/content/en/examples/secret/bootstrap-token-secret-base64.yaml
new file mode 100644
index 0000000000000..98233758e2e7c
--- /dev/null
+++ b/content/en/examples/secret/bootstrap-token-secret-base64.yaml
@@ -0,0 +1,13 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: bootstrap-token-5emitj
+ namespace: kube-system
+type: bootstrap.kubernetes.io/token
+data:
+ auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
+ expiration: MjAyMC0wOS0xM1QwNDozOToxMFo=
+ token-id: NWVtaXRq
+ token-secret: a3E0Z2lodnN6emduMXAwcg==
+ usage-bootstrap-authentication: dHJ1ZQ==
+ usage-bootstrap-signing: dHJ1ZQ==
\ No newline at end of file
diff --git a/content/en/examples/secret/bootstrap-token-secret-literal.yaml b/content/en/examples/secret/bootstrap-token-secret-literal.yaml
new file mode 100644
index 0000000000000..6aec11ce870fc
--- /dev/null
+++ b/content/en/examples/secret/bootstrap-token-secret-literal.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ # Note how the Secret is named
+ name: bootstrap-token-5emitj
+ # A bootstrap token Secret usually resides in the kube-system namespace
+ namespace: kube-system
+type: bootstrap.kubernetes.io/token
+stringData:
+ auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token"
+ expiration: "2020-09-13T04:39:10Z"
+ # This token ID is used in the name
+ token-id: "5emitj"
+ token-secret: "kq4gihvszzgn1p0r"
+ # This token can be used for authentication
+ usage-bootstrap-authentication: "true"
+ # and it can be used for signing
+ usage-bootstrap-signing: "true"
\ No newline at end of file
diff --git a/content/en/examples/secret/dockercfg-secret.yaml b/content/en/examples/secret/dockercfg-secret.yaml
new file mode 100644
index 0000000000000..ccf73bc306f24
--- /dev/null
+++ b/content/en/examples/secret/dockercfg-secret.yaml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-dockercfg
+type: kubernetes.io/dockercfg
+data:
+ .dockercfg: |
+ eyJhdXRocyI6eyJodHRwczovL2V4YW1wbGUvdjEvIjp7ImF1dGgiOiJvcGVuc2VzYW1lIn19fQo=
\ No newline at end of file
diff --git a/content/en/examples/secret/dotfile-secret.yaml b/content/en/examples/secret/dotfile-secret.yaml
new file mode 100644
index 0000000000000..5c7900ad97479
--- /dev/null
+++ b/content/en/examples/secret/dotfile-secret.yaml
@@ -0,0 +1,27 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: dotfile-secret
+data:
+ .secret-file: dmFsdWUtMg0KDQo=
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: secret-dotfiles-pod
+spec:
+ volumes:
+ - name: secret-volume
+ secret:
+ secretName: dotfile-secret
+ containers:
+ - name: dotfile-test-container
+ image: registry.k8s.io/busybox
+ command:
+ - ls
+ - "-l"
+ - "/etc/secret-volume"
+ volumeMounts:
+ - name: secret-volume
+ readOnly: true
+ mountPath: "/etc/secret-volume"
\ No newline at end of file
diff --git a/content/en/examples/secret/optional-secret.yaml b/content/en/examples/secret/optional-secret.yaml
new file mode 100644
index 0000000000000..cc510b9078130
--- /dev/null
+++ b/content/en/examples/secret/optional-secret.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: redis
+ volumeMounts:
+ - name: foo
+ mountPath: "/etc/foo"
+ readOnly: true
+ volumes:
+ - name: foo
+ secret:
+ secretName: mysecret
+ optional: true
\ No newline at end of file
diff --git a/content/en/examples/secret/serviceaccount-token-secret.yaml b/content/en/examples/secret/serviceaccount-token-secret.yaml
new file mode 100644
index 0000000000000..8ec8fb577d547
--- /dev/null
+++ b/content/en/examples/secret/serviceaccount-token-secret.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-sa-sample
+ annotations:
+ kubernetes.io/service-account.name: "sa-name"
+type: kubernetes.io/service-account-token
+data:
+ extra: YmFyCg==
\ No newline at end of file
diff --git a/content/en/examples/secret/ssh-auth-secret.yaml b/content/en/examples/secret/ssh-auth-secret.yaml
new file mode 100644
index 0000000000000..9f79cbfb065fd
--- /dev/null
+++ b/content/en/examples/secret/ssh-auth-secret.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-ssh-auth
+type: kubernetes.io/ssh-auth
+data:
+ # the data is abbreviated in this example
+ ssh-privatekey: |
+ UG91cmluZzYlRW1vdGljb24lU2N1YmE=
\ No newline at end of file
diff --git a/content/en/examples/secret/tls-auth-secret.yaml b/content/en/examples/secret/tls-auth-secret.yaml
new file mode 100644
index 0000000000000..1e14b8e00ac47
--- /dev/null
+++ b/content/en/examples/secret/tls-auth-secret.yaml
@@ -0,0 +1,28 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: secret-tls
+type: kubernetes.io/tls
+data:
+ # values are base64 encoded, which obscures them but does NOT provide
+ # any useful level of confidentiality
+ tls.crt: |
+ LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNVakNDQWJzQ0FnMytNQTBHQ1NxR1NJYjNE
+ UUVCQlFVQU1JR2JNUXN3Q1FZRFZRUUdFd0pLVURFT01Bd0cKQTFVRUNCTUZWRzlyZVc4eEVEQU9C
+ Z05WQkFjVEIwTm9kVzh0YTNVeEVUQVBCZ05WQkFvVENFWnlZVzVyTkVSRQpNUmd3RmdZRFZRUUxF
+ dzlYWldKRFpYSjBJRk4xY0hCdmNuUXhHREFXQmdOVkJBTVREMFp5WVc1ck5FUkVJRmRsCllpQkRR
+ VEVqTUNFR0NTcUdTSWIzRFFFSkFSWVVjM1Z3Y0c5eWRFQm1jbUZ1YXpSa1pDNWpiMjB3SGhjTk1U
+ TXcKTVRFeE1EUTFNVE01V2hjTk1UZ3dNVEV3TURRMU1UTTVXakJMTVFzd0NRWURWUVFHREFKS1VE
+ RVBNQTBHQTFVRQpDQXdHWEZSdmEzbHZNUkV3RHdZRFZRUUtEQWhHY21GdWF6UkVSREVZTUJZR0Ex
+ VUVBd3dQZDNkM0xtVjRZVzF3CmJHVXVZMjl0TUlHYU1BMEdDU3FHU0liM0RRRUJBUVVBQTRHSUFE
+ Q0JoQUo5WThFaUhmeHhNL25PbjJTbkkxWHgKRHdPdEJEVDFKRjBReTliMVlKanV2YjdjaTEwZjVN
+ Vm1UQllqMUZTVWZNOU1vejJDVVFZdW4yRFljV29IcFA4ZQpqSG1BUFVrNVd5cDJRN1ArMjh1bklI
+ QkphVGZlQ09PekZSUFY2MEdTWWUzNmFScG04L3dVVm16eGFLOGtCOWVaCmhPN3F1TjdtSWQxL2pW
+ cTNKODhDQXdFQUFUQU5CZ2txaGtpRzl3MEJBUVVGQUFPQmdRQU1meTQzeE15OHh3QTUKVjF2T2NS
+ OEtyNWNaSXdtbFhCUU8xeFEzazlxSGtyNFlUY1JxTVQ5WjVKTm1rWHYxK2VSaGcwTi9WMW5NUTRZ
+ RgpnWXcxbnlESnBnOTduZUV4VzQyeXVlMFlHSDYyV1hYUUhyOVNVREgrRlowVnQvRGZsdklVTWRj
+ UUFEZjM4aU9zCjlQbG1kb3YrcE0vNCs5a1h5aDhSUEkzZXZ6OS9NQT09Ci0tLS0tRU5EIENFUlRJ
+ RklDQVRFLS0tLS0K
+ # In this example, the key data is not a real PEM-encoded private key
+ tls.key: |
+ RXhhbXBsZSBkYXRhIGZvciB0aGUgVExTIGNydCBmaWVsZA==
\ No newline at end of file
From 9e1201fb4a8666ce4ee6064f116ec274b88a576e Mon Sep 17 00:00:00 2001
From: Chun-Wei Chang
Date: Thu, 5 Oct 2023 07:10:00 -0700
Subject: [PATCH 067/229] fix: link text in glossary cri
---
.../en/docs/reference/glossary/container-runtime-interface.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/reference/glossary/container-runtime-interface.md b/content/en/docs/reference/glossary/container-runtime-interface.md
index e3ad8f5b092a0..c2dab628efb04 100644
--- a/content/en/docs/reference/glossary/container-runtime-interface.md
+++ b/content/en/docs/reference/glossary/container-runtime-interface.md
@@ -17,6 +17,6 @@ The main protocol for the communication between the {{< glossary_tooltip text="k
The Kubernetes Container Runtime Interface (CRI) defines the main
[gRPC](https://grpc.io) protocol for the communication between the
-[cluster components](/docs/concepts/overview/components/#node-components)
+[node components](/docs/concepts/overview/components/#node-components)
{{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and
{{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}.
From c9cd9a269347af941267ef5da7d518b00f5f2f5b Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sat, 7 Oct 2023 01:27:04 +0300
Subject: [PATCH 068/229] [pt] update the range of pod-deletion-cost
---
content/pt-br/docs/concepts/workloads/controllers/replicaset.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md
index dcd22d1f77000..440187b4b9460 100644
--- a/content/pt-br/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/pt-br/docs/concepts/workloads/controllers/replicaset.md
@@ -280,7 +280,7 @@ Se o Pod obedecer todos os items acima simultaneamente, a seleção é aleatóri
Using the [`controller.kubernetes.io/pod-deletion-cost`](/docs/reference/labels-annotations-taints/#pod-deletion-cost) annotation,
users can set a preference for which Pods should be removed first if the ReplicaSet needs to scale down.
-The annotation must be set on the Pod, with a range of [-2147483647, 2147483647]. It represents the cost of deleting a Pod compared to other Pods belonging to this same ReplicaSet. Pods with a lower deletion cost are chosen for deletion before Pods with a higher cost.
+The annotation must be set on the Pod, with a range of [-2147483648, 2147483647]. It represents the cost of deleting a Pod compared to other Pods belonging to this same ReplicaSet. Pods with a lower deletion cost are chosen for deletion before Pods with a higher cost.
The implicit value of this annotation for Pods that do not set it is 0; negative values are permitted.
Invalid values will be rejected by the API server.
From 318ff2e797d991d9551670f22554c22131215ac7 Mon Sep 17 00:00:00 2001
From: Michael
Date: Fri, 6 Oct 2023 19:24:16 +0800
Subject: [PATCH 069/229] Clean up kubelet-tls-bootstrapping.md
---
.../kubelet-tls-bootstrapping.md | 152 ++++++++++--------
1 file changed, 88 insertions(+), 64 deletions(-)
diff --git a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
index c1b33647407c1..c4393b261e205 100644
--- a/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
+++ b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
@@ -11,31 +11,35 @@ weight: 120
-In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need to communicate with Kubernetes control plane components, specifically kube-apiserver.
-In order to ensure that communication is kept private, not interfered with, and ensure that each component of the cluster is talking to another trusted component, we strongly
+In a Kubernetes cluster, the components on the worker nodes - kubelet and kube-proxy - need
+to communicate with Kubernetes control plane components, specifically kube-apiserver.
+In order to ensure that communication is kept private, not interfered with, and ensure that
+each component of the cluster is talking to another trusted component, we strongly
recommend using client TLS certificates on nodes.
-The normal process of bootstrapping these components, especially worker nodes that need certificates so they can communicate safely with kube-apiserver,
-can be a challenging process as it is often outside of the scope of Kubernetes and requires significant additional work.
+The normal process of bootstrapping these components, especially worker nodes that need certificates
+so they can communicate safely with kube-apiserver, can be a challenging process as it is often outside
+of the scope of Kubernetes and requires significant additional work.
This in turn, can make it challenging to initialize or scale a cluster.
-In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request and signing API. The proposal can be
-found [here](https://github.com/kubernetes/kubernetes/pull/20439).
+In order to simplify the process, beginning in version 1.4, Kubernetes introduced a certificate request
+and signing API. The proposal can be found [here](https://github.com/kubernetes/kubernetes/pull/20439).
This document describes the process of node initialization, how to set up TLS client certificate bootstrapping for
kubelets, and how it works.
-## Initialization Process
+## Initialization process
When a worker node starts up, the kubelet does the following:
1. Look for its `kubeconfig` file
-2. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file
-3. Attempt to communicate with the API server using the credentials.
+1. Retrieve the URL of the API server and credentials, normally a TLS key and signed certificate from the `kubeconfig` file
+1. Attempt to communicate with the API server using the credentials.
-Assuming that the kube-apiserver successfully validates the kubelet's credentials, it will treat the kubelet as a valid node, and begin to assign pods to it.
+Assuming that the kube-apiserver successfully validates the kubelet's credentials,
+it will treat the kubelet as a valid node, and begin to assign pods to it.
Note that the above process depends upon:
@@ -45,35 +49,36 @@ Note that the above process depends upon:
All of the following are responsibilities of whoever sets up and manages the cluster:
1. Creating the CA key and certificate
-2. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running
-3. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet
-4. Signing the kubelet certificate using the CA key
-5. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running
+1. Distributing the CA certificate to the control plane nodes, where kube-apiserver is running
+1. Creating a key and certificate for each kubelet; strongly recommended to have a unique one, with a unique CN, for each kubelet
+1. Signing the kubelet certificate using the CA key
+1. Distributing the kubelet key and signed certificate to the specific node on which the kubelet is running
-The TLS Bootstrapping described in this document is intended to simplify, and partially or even completely automate, steps 3 onwards, as these are the most common when initializing or scaling
+The TLS Bootstrapping described in this document is intended to simplify, and partially or even
+completely automate, steps 3 onwards, as these are the most common when initializing or scaling
a cluster.
-### Bootstrap Initialization
+### Bootstrap initialization
In the bootstrap initialization process, the following occurs:
1. kubelet begins
-2. kubelet sees that it does _not_ have a `kubeconfig` file
-3. kubelet searches for and finds a `bootstrap-kubeconfig` file
-4. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token"
-5. kubelet connects to the API server, authenticates using the token
-6. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR)
-7. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet`
-8. CSR is approved in one of two ways:
+1. kubelet sees that it does _not_ have a `kubeconfig` file
+1. kubelet searches for and finds a `bootstrap-kubeconfig` file
+1. kubelet reads its bootstrap file, retrieving the URL of the API server and a limited usage "token"
+1. kubelet connects to the API server, authenticates using the token
+1. kubelet now has limited credentials to create and retrieve a certificate signing request (CSR)
+1. kubelet creates a CSR for itself with the signerName set to `kubernetes.io/kube-apiserver-client-kubelet`
+1. CSR is approved in one of two ways:
* If configured, kube-controller-manager automatically approves the CSR
* If configured, an outside process, possibly a person, approves the CSR using the Kubernetes API or via `kubectl`
-9. Certificate is created for the kubelet
-10. Certificate is issued to the kubelet
-11. kubelet retrieves the certificate
-12. kubelet creates a proper `kubeconfig` with the key and signed certificate
-13. kubelet begins normal operation
-14. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry
-15. The renewed certificate is approved and issued, either automatically or manually, depending on configuration.
+1. Certificate is created for the kubelet
+1. Certificate is issued to the kubelet
+1. kubelet retrieves the certificate
+1. kubelet creates a proper `kubeconfig` with the key and signed certificate
+1. kubelet begins normal operation
+1. Optional: if configured, kubelet automatically requests renewal of the certificate when it is close to expiry
+1. The renewed certificate is approved and issued, either automatically or manually, depending on configuration.
The rest of this document describes the necessary steps to configure TLS Bootstrapping, and its limitations.
@@ -90,13 +95,16 @@ In addition, you need your Kubernetes Certificate Authority (CA).
## Certificate Authority
-As without bootstrapping, you will need a Certificate Authority (CA) key and certificate. As without bootstrapping, these will be used
-to sign the kubelet certificate. As before, it is your responsibility to distribute them to control plane nodes.
+As without bootstrapping, you will need a Certificate Authority (CA) key and certificate.
+As without bootstrapping, these will be used to sign the kubelet certificate. As before,
+it is your responsibility to distribute them to control plane nodes.
-For the purposes of this document, we will assume these have been distributed to control plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key).
+For the purposes of this document, we will assume these have been distributed to control
+plane nodes at `/var/lib/kubernetes/ca.pem` (certificate) and `/var/lib/kubernetes/ca-key.pem` (key).
We will refer to these as "Kubernetes CA certificate and key".
-All Kubernetes components that use these certificates - kubelet, kube-apiserver, kube-controller-manager - assume the key and certificate to be PEM-encoded.
+All Kubernetes components that use these certificates - kubelet, kube-apiserver,
+kube-controller-manager - assume the key and certificate to be PEM-encoded.
## kube-apiserver configuration
@@ -116,24 +124,27 @@ containing the signing certificate, for example
### Initial bootstrap authentication
-In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate, it must first authenticate to the server.
-You can use any [authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet.
+In order for the bootstrapping kubelet to connect to kube-apiserver and request a certificate,
+it must first authenticate to the server. You can use any
+[authenticator](/docs/reference/access-authn-authz/authentication/) that can authenticate the kubelet.
While any authentication strategy can be used for the kubelet's initial
bootstrap credentials, the following two authenticators are recommended for ease
of provisioning.
1. [Bootstrap Tokens](#bootstrap-tokens)
-2. [Token authentication file](#token-authentication-file)
+1. [Token authentication file](#token-authentication-file)
-Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets, and does not require any additional flags when starting kube-apiserver.
+Using bootstrap tokens is a simpler and more easily managed method to authenticate kubelets,
+and does not require any additional flags when starting kube-apiserver.
Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:
1. create and retrieve CSRs
-2. be automatically approved to request node client certificates, if automatic approval is enabled.
+1. be automatically approved to request node client certificates, if automatic approval is enabled.
-A kubelet authenticating using bootstrap tokens is authenticated as a user in the group `system:bootstrappers`, which is the standard method to use.
+A kubelet authenticating using bootstrap tokens is authenticated as a user in the group
+`system:bootstrappers`, which is the standard method to use.
As this feature matures, you
should ensure tokens are bound to a Role Based Access Control (RBAC) policy
@@ -144,17 +155,20 @@ particular bootstrap group's access when you are done provisioning the nodes.
#### Bootstrap tokens
-Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/). These are tokens that are stored as secrets in the Kubernetes cluster,
-and then issued to the individual kubelet. You can use a single token for an entire cluster, or issue one per worker node.
+Bootstrap tokens are described in detail [here](/docs/reference/access-authn-authz/bootstrap-tokens/).
+These are tokens that are stored as secrets in the Kubernetes cluster, and then issued to the individual kubelet.
+You can use a single token for an entire cluster, or issue one per worker node.
The process is two-fold:
1. Create a Kubernetes secret with the token ID, secret and scope(s).
-2. Issue the token to the kubelet
+1. Issue the token to the kubelet
From the kubelet's perspective, one token is like another and has no special meaning.
-From the kube-apiserver's perspective, however, the bootstrap token is special. Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token,
-and grants anyone authenticating with that token special bootstrap rights, notably treating them as a member of the `system:bootstrappers` group. This fulfills a basic requirement
+From the kube-apiserver's perspective, however, the bootstrap token is special.
+Due to its `type`, `namespace` and `name`, kube-apiserver recognizes it as a special token,
+and grants anyone authenticating with that token special bootstrap rights, notably treating
+them as a member of the `system:bootstrappers` group. This fulfills a basic requirement
for TLS bootstrapping.
The details for creating the secret are available [here](/docs/reference/access-authn-authz/bootstrap-tokens/).
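
As a rough sketch of what creating such a secret looks like with `kubectl`; the token ID
and token secret below are illustrative placeholders, not real credentials:

```shell
# bootstrap token secrets must be named bootstrap-token-<token-id>
# and live in the kube-system namespace
kubectl -n kube-system create secret generic bootstrap-token-07401b \
  --type 'bootstrap.kubernetes.io/token' \
  --from-literal 'token-id=07401b' \
  --from-literal 'token-secret=f395accd246ae52d' \
  --from-literal 'usage-bootstrap-authentication=true' \
  --from-literal 'usage-bootstrap-signing=true'
```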
@@ -198,7 +212,8 @@ certificate signing request (CSR) as well as retrieve it when done.
Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and
only these) permissions, `system:node-bootstrapper`.
-To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`.
+To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers`
+group to the cluster role `system:node-bootstrapper`.
```yaml
# enable bootstrapping nodes to create CSR
@@ -237,9 +252,10 @@ In order for the controller-manager to sign certificates, it needs the following
As described earlier, you need to create a Kubernetes CA key and certificate, and distribute it to the control plane nodes.
These will be used by the controller-manager to sign the kubelet certificates.
-Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet to kube-apiserver, it is important that the CA
-provided to the controller-manager at this stage also be trusted by kube-apiserver for authentication. This is provided to kube-apiserver
-with the flag `--client-ca-file=FILENAME` (for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section.
+Since these signed certificates will, in turn, be used by the kubelet to authenticate as a regular kubelet
+to kube-apiserver, it is important that the CA provided to the controller-manager at this stage also be
+trusted by kube-apiserver for authentication. This is provided to kube-apiserver with the flag `--client-ca-file=FILENAME`
+(for example, `--client-ca-file=/var/lib/kubernetes/ca.pem`), as described in the kube-apiserver configuration section.
To provide the Kubernetes CA key and certificate to kube-controller-manager, use the following flags:
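
Using the example paths from earlier in this document, that typically means passing
flags along these lines to kube-controller-manager:

```shell
# CA pair used by the controller-manager's signer to issue kubelet certificates
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem
```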
@@ -266,10 +282,14 @@ RBAC permissions to the correct group.
There are two distinct sets of permissions:
-* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet. It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`.
-* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition), which it uses continuously to authenticate as part of the group `system:nodes`.
+* `nodeclient`: If a node is creating a new certificate for a node, then it does not have a certificate yet.
+ It is authenticating using one of the tokens listed above, and thus is part of the group `system:bootstrappers`.
+* `selfnodeclient`: If a node is renewing its certificate, then it already has a certificate (by definition),
+ which it uses continuously to authenticate as part of the group `system:nodes`.
-To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`:
+To enable the kubelet to request and receive a new certificate, create a `ClusterRoleBinding` that binds
+the group in which the bootstrapping node is a member `system:bootstrappers` to the `ClusterRole` that
+grants it permission, `system:certificates.k8s.io:certificatesigningrequests:nodeclient`:
```yaml
# Approve all CSRs for the group "system:bootstrappers"
@@ -287,7 +307,8 @@ roleRef:
apiGroup: rbac.authorization.k8s.io
```
-To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that
+To enable the kubelet to renew its own client certificate, create a `ClusterRoleBinding` that binds
+the group in which the fully functioning node is a member `system:nodes` to the `ClusterRole` that
grants it permission, `system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`:
```yaml
@@ -316,10 +337,10 @@ built-in approver doesn't explicitly deny CSRs. It only ignores unauthorized
requests. The controller also prunes expired certificates as part of garbage
collection.
-
## kubelet configuration
-Finally, with the control plane nodes properly set up and all of the necessary authentication and authorization in place, we can configure the kubelet.
+Finally, with the control plane nodes properly set up and all of the necessary
+authentication and authorization in place, we can configure the kubelet.
The kubelet requires the following configuration to bootstrap:
@@ -385,7 +406,7 @@ referencing the generated key and obtained certificate is written to the path
specified by `--kubeconfig`. The certificate and key file will be placed in the
directory specified by `--cert-dir`.
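
Putting those pieces together, a bootstrapping kubelet invocation might look roughly like
the following sketch; the paths are illustrative, and only the bootstrap-related flags are shown:

```shell
# bootstrap-kubeconfig holds the token; kubeconfig and cert-dir receive the results
kubelet \
  --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cert-dir=/var/lib/kubelet/pki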
-### Client and Serving Certificates
+### Client and serving certificates
All of the above relate to kubelet _client_ certificates, specifically, the certificates a kubelet
uses to authenticate to kube-apiserver.
@@ -402,7 +423,7 @@ be used as serving certificates, or `server auth`.
However, you _can_ enable its server certificate, at least partially, via certificate rotation.
-### Certificate Rotation
+### Certificate rotation
Kubernetes v1.8 and higher kubelet implements features for enabling
rotation of its client and/or serving certificates. Note, rotation of serving
@@ -420,7 +441,7 @@ or pass the following command line argument to the kubelet (deprecated):
Enabling `RotateKubeletServerCertificate` causes the kubelet **both** to request a serving
certificate after bootstrapping its client credentials **and** to rotate that
-certificate. To enable this behavior, use the field `serverTLSBootstrap` of
+certificate. To enable this behavior, use the field `serverTLSBootstrap` of
the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/)
or pass the following command line argument to the kubelet (deprecated):
@@ -430,8 +451,8 @@ or pass the following command line argument to the kubelet (deprecated):
{{< note >}}
The CSR approving controllers implemented in core Kubernetes do not
-approve node _serving_ certificates for [security
-reasons](https://github.com/kubernetes/community/pull/1982). To use
+approve node _serving_ certificates for
+[security reasons](https://github.com/kubernetes/community/pull/1982). To use
`RotateKubeletServerCertificate` operators need to run a custom approving
controller, or manually approve the serving certificate requests.
@@ -439,9 +460,9 @@ A deployment-specific approval process for kubelet serving certificates should t
1. are requested by nodes (ensure the `spec.username` field is of the form
`system:node:` and `spec.groups` contains `system:nodes`)
-2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
+1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
optionally contains `digital signature` and `key encipherment`, and contains no other usages)
-3. only have IP and DNS subjectAltNames that belong to the requesting node,
+1. only have IP and DNS subjectAltNames that belong to the requesting node,
and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request
in `spec.request` to verify `subjectAltNames`)
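
Whichever process performs those checks, the manual action itself is a pair of standard
`kubectl` commands; `<csr-name>` is a placeholder for the CSR you have vetted:

```shell
kubectl get csr                          # list pending requests
kubectl certificate approve <csr-name>   # approve one you have verified
```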
@@ -457,8 +478,11 @@ Like the kubelet, these other components also require a method of authenticating
You have several options for generating these credentials:
* The old way: Create and distribute certificates the same way you did for kubelet before TLS bootstrapping
-* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services, you can run kube-proxy and other node-specific services not as a standalone process, but rather as a daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service account with appropriate permissions to perform its activities. This may be the simplest way to configure such services.
-
+* DaemonSet: Since the kubelet itself is loaded on each node, and is sufficient to start base services,
+ you can run kube-proxy and other node-specific services not as a standalone process, but rather as a
+ daemonset in the `kube-system` namespace. Since it will be in-cluster, you can give it a proper service
+ account with appropriate permissions to perform its activities. This may be the simplest way to configure
+ such services.
## kubectl approval
From 8b72e69169f8824774bc486556318da94d154cf6 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Thu, 5 Oct 2023 09:24:15 +0800
Subject: [PATCH 070/229] [zh] Resync cluster-administration concepts
---
.../concepts/cluster-administration/_index.md | 12 +++++++
.../cluster-administration/networking.md | 31 +++++++++++++------
.../cluster-administration/system-traces.md | 17 +++++++---
3 files changed, 45 insertions(+), 15 deletions(-)
diff --git a/content/zh-cn/docs/concepts/cluster-administration/_index.md b/content/zh-cn/docs/concepts/cluster-administration/_index.md
index f261dcc037f02..cec70f46da945 100644
--- a/content/zh-cn/docs/concepts/cluster-administration/_index.md
+++ b/content/zh-cn/docs/concepts/cluster-administration/_index.md
@@ -5,6 +5,12 @@ content_type: concept
description: >
Lower-level detail relevant to creating or administering a Kubernetes cluster.
no_list: true
+card:
+ name: setup
+ weight: 60
+ anchors:
+ - anchor: "#securing-a-cluster"
+    title: Securing a cluster
---
diff --git a/content/zh-cn/docs/concepts/cluster-administration/networking.md b/content/zh-cn/docs/concepts/cluster-administration/networking.md
index 18f13acd5cfd9..a71411adf13c1 100644
--- a/content/zh-cn/docs/concepts/cluster-administration/networking.md
+++ b/content/zh-cn/docs/concepts/cluster-administration/networking.md
The purpose of Kubernetes is to share machines among applications.
Rather than deal with these problems, Kubernetes takes a different approach.
To learn about the Kubernetes network model, see [here](/zh-cn/docs/concepts/services-networking/).
+
## How to implement the Kubernetes network model {#how-to-implement-the-kubernetes-network-model}
-The network model is implemented by the container runtime on each node. The most common container runtimes
-use [Container Network Interface](https://github.com/containernetworking/cni) (CNI) plugins to manage their network and security features.
-Many different CNI plugins exist from many different vendors. Some of these provide only basic features of adding and removing network interfaces,
+The network model is implemented by the container runtime on each node. The most common container runtimes
+use [Container Network Interface](https://github.com/containernetworking/cni) (CNI) plugins to manage their network and security capabilities.
+Many different CNI plugins exist from different vendors. Some of these provide only basic features of adding and removing network interfaces,
while others provide more sophisticated solutions, such as integration with other container orchestration systems, running multiple CNI plugins, advanced IPAM features, and more.
+
See [this page](/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy)
for a non-exhaustive list of networking addons supported by Kubernetes.
## {{% heading "whatsnext" %}}
-The early design of the network model, its rationale, and some future plans
-are described in more detail in the [networking design document](https://git.k8s.io/design-proposals-archive/network/networking.md).
+The early design of the network model and its rationale are described in detail in the
+[networking design document](https://git.k8s.io/design-proposals-archive/network/networking.md).
+For future plans and some on-going efforts that aim to improve Kubernetes networking, please refer to the SIG Network
+[KEPs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network).
diff --git a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md
index 56a4373bf06e0..aefeb8e90a2ed 100644
--- a/content/zh-cn/docs/concepts/cluster-administration/system-traces.md
+++ b/content/zh-cn/docs/concepts/cluster-administration/system-traces.md
@@ -215,12 +215,19 @@ span will be sent to the exporter.
-The kubelet in Kubernetes v{{< skew currentVersion >}} collects spans from garbage collection, Pod
-synchronization routines, and every gRPC method. Connected container runtimes such as CRI-O and containerd
-can link the traces to their exported spans to provide additional context.
+The kubelet in Kubernetes v{{< skew currentVersion >}} collects spans related to garbage collection, Pod
+synchronization routines, and every gRPC method.
+The kubelet propagates the trace context with gRPC requests so that container runtimes with trace
+instrumentation, such as CRI-O and containerd, can associate their exported spans with the trace
+context from the kubelet. The resulting traces will have parent-child links between kubelet and
+container runtime spans, providing helpful context when debugging node issues.
{{< glossary_definition term_id="workload" length="short" >}}
@@ -19,29 +26,24 @@ Whether your workload is a single component or several that work together, on Ku
it inside a set of [_pods_](/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running
{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.
-
-A Pod has a defined lifecycle. For example, once a Pod is running in your cluster then
-a critical failure on the {{< glossary_tooltip text="node" term_id="node" >}} where that
-Pod is running means that all the Pods on that node fail. Kubernetes treats that level
-of failure as final: you would need to create a new Pod even if the node later recovers.
-->
In Kubernetes, whether your workload is a single component or several that work together,
you run it inside a set of [_Pods_](/zh-cn/docs/concepts/workloads/pods).
In Kubernetes, a Pod represents a set of running
-{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.
+{{< glossary_tooltip text="containers" term_id="container" >}} on your cluster.
Kubernetes Pods follow a [defined lifecycle](/zh-cn/docs/concepts/workloads/pods/pod-lifecycle/).
For example, once a Pod is running in your cluster, a critical failure on the
{{< glossary_tooltip text="node" term_id="node" >}} where that Pod is running
means that all the Pods on that node fail. Kubernetes treats that level of failure as final:
-you would need to create a new `Pod` to recover, even if the node later becomes healthy.
+you would need to create a new Pod to recover, even if the node later becomes healthy.
-* [`Deployment`](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和
- [`ReplicaSet`](/zh-cn/docs/concepts/workloads/controllers/replicaset/)
+* [Deployment](/zh-cn/docs/concepts/workloads/controllers/deployment/) 和
+ [ReplicaSet](/zh-cn/docs/concepts/workloads/controllers/replicaset/)
(替换原来的资源 {{< glossary_tooltip text="ReplicationController" term_id="replication-controller" >}})。
- `Deployment` 很适合用来管理你的集群上的无状态应用,`Deployment` 中的所有
- `Pod` 都是相互等价的,并且在需要的时候被替换。
+  Deployment 很适合用来管理你的集群上的无状态应用,Deployment 中的所有
+  Pod 都是相互等价的,并且在需要的时候被替换(参见此列表之后的示例)。
* [StatefulSet](/zh-cn/docs/concepts/workloads/controllers/statefulset/)
让你能够运行一个或者多个以某种方式跟踪应用状态的 Pod。
- 例如,如果你的负载会将数据作持久存储,你可以运行一个 `StatefulSet`,将每个
- `Pod` 与某个 [`PersistentVolume`](/zh-cn/docs/concepts/storage/persistent-volumes/)
- 对应起来。你在 `StatefulSet` 中各个 `Pod` 内运行的代码可以将数据复制到同一
- `StatefulSet` 中的其它 `Pod` 中以提高整体的服务可靠性。
+ 例如,如果你的负载会将数据作持久存储,你可以运行一个 StatefulSet,将每个
+ Pod 与某个 [PersistentVolume](/zh-cn/docs/concepts/storage/persistent-volumes/)
+ 对应起来。你在 StatefulSet 中各个 Pod 内运行的代码可以将数据复制到同一
+ StatefulSet 中的其它 Pod 中以提高整体的服务可靠性。
* [DaemonSet](/zh-cn/docs/concepts/workloads/controllers/daemonset/)
- 定义提供节点本地支撑设施的 `Pod`。这些 Pod 可能对于你的集群的运维是
+ 定义提供节点本地支撑设施的 Pod。这些 Pod 可能对于你的集群的运维是
非常重要的,例如作为网络链接的辅助工具或者作为网络
{{< glossary_tooltip text="插件" term_id="addons" >}}
的一部分等等。每次你向集群中添加一个新节点时,如果该节点与某 `DaemonSet`
- 的规约匹配,则控制平面会为该 `DaemonSet` 调度一个 `Pod` 到该新节点上运行。
+ 的规约匹配,则控制平面会为该 DaemonSet 调度一个 Pod 到该新节点上运行。
* [Job](/zh-cn/docs/concepts/workloads/controllers/job/) 和
[CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)。
- 定义一些一直运行到结束并停止的任务。`Job` 用来执行一次性任务,而
- `CronJob` 用来执行的根据时间规划反复运行的任务。
+ 定义一些一直运行到结束并停止的任务。
+ 你可以使用 [Job](/zh-cn/docs/concepts/workloads/controllers/job/)
+ 来定义只需要执行一次并且执行后即视为完成的任务。你可以使用
+ [CronJob](/zh-cn/docs/concepts/workloads/controllers/cron-jobs/)
+ 来根据某个排期表来多次运行同一个 Job。
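
As referenced in the Deployment item above, here is a minimal sketch of a stateless application managed by a Deployment; the resource name, labels, and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical name
spec:
  replicas: 3          # three interchangeable Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # assumed example image
```

All replicas are equivalent and replaceable, which is what makes a Deployment a good fit for stateless workloads.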
在庞大的 Kubernetes 生态系统中,你还可以找到一些提供额外操作的第三方工作负载相关的资源。
通过使用[定制资源定义(CRD)](/zh-cn/docs/concepts/extend-kubernetes/api-extension/custom-resources/),
你可以添加第三方工作负载资源,以完成原本不是 Kubernetes 核心功能的工作。
-例如,如果你希望运行一组 `Pod`,但要求**所有** Pod 都可用时才执行操作
+例如,如果你希望运行一组 Pod,但要求**所有** Pod 都可用时才执行操作
(比如针对某种高吞吐量的分布式任务),你可以基于定制资源实现一个能够满足这一需求的扩展,
并将其安装到集群中运行。
@@ -127,23 +135,23 @@ then you can implement or install an extension that does provide that feature.
除了阅读了解每类资源外,你还可以了解与这些资源相关的任务:
-* [使用 `Deployment` 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/)
+* [使用 Deployment 运行一个无状态的应用](/zh-cn/docs/tasks/run-application/run-stateless-application-deployment/)
* 以[单实例](/zh-cn/docs/tasks/run-application/run-single-instance-stateful-application/)或者[多副本集合](/zh-cn/docs/tasks/run-application/run-replicated-stateful-application/)
的形式运行有状态的应用;
-* [使用 `CronJob` 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/)
+* [使用 CronJob 运行自动化的任务](/zh-cn/docs/tasks/job/automated-tasks-with-cron-jobs/)
-要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置部分](/zh-cn/docs/concepts/configuration/)。
+要了解 Kubernetes 将代码与配置分离的实现机制,可参阅[配置](/zh-cn/docs/concepts/configuration/)节。
一旦你的应用处于运行状态,你就可能想要以
-[`Service`](/zh-cn/docs/concepts/services-networking/service/)
+[Service](/zh-cn/docs/concepts/services-networking/service/)
的形式使之可在互联网上访问;或者对于 Web 应用而言,使用
-[`Ingress`](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。
+[Ingress](/zh-cn/docs/concepts/services-networking/ingress) 资源将其暴露到互联网上。
diff --git a/content/zh-cn/docs/concepts/workloads/controllers/job.md b/content/zh-cn/docs/concepts/workloads/controllers/job.md
index 482c2c737419a..06ec1cfd7660d 100644
--- a/content/zh-cn/docs/concepts/workloads/controllers/job.md
+++ b/content/zh-cn/docs/concepts/workloads/controllers/job.md
@@ -890,7 +890,7 @@ These are some requirements and semantics of the API:
are ignored. When no rule matches the Pod failure, the default
handling applies.
- you may want to restrict a rule to a specific container by specifying its name
- in`spec.podFailurePolicy.rules[*].containerName`. When not specified the rule
+  in `spec.podFailurePolicy.rules[*].onExitCodes.containerName`. When not specified, the rule
   applies to all containers. When specified, it should match one of the container
   or `initContainer` names in the Pod template.
- you may specify the action taken when a Pod failure policy is matched by
@@ -910,9 +910,9 @@ These are some requirements and semantics of the API:
- 在 `spec.podFailurePolicy.rules` 中设定的 Pod 失效策略规则将按序评估。
一旦某个规则与 Pod 失效策略匹配,其余规则将被忽略。
当没有规则匹配 Pod 失效策略时,将会采用默认的处理方式。
-- 你可能希望在 `spec.podFailurePolicy.rules[*].containerName`
- 中通过指定的名称将规则限制到特定容器。
- 如果不设置,规则将适用于所有容器。
+- 你可能希望在 `spec.podFailurePolicy.rules[*].onExitCodes.containerName`
+ 中通过指定的名称限制只能针对特定容器应用对应的规则。
+ 如果不设置此属性,规则将适用于所有容器。
如果指定了容器名称,它应该匹配 Pod 模板中的一个普通容器或一个初始容器(Init Container)。
- 你可以在 `spec.podFailurePolicy.rules[*].action` 指定当 Pod 失效策略发生匹配时要采取的操作。
可能的值为:
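
As a hedged sketch of the rule semantics above — `onExitCodes.containerName` restricting a rule to one container, and `action` choosing the handling — consider the Job below; the Job name, container name, and exit code are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-pod-failure-policy   # hypothetical name
spec:
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob          # one possible action: fail the whole Job
      onExitCodes:
        containerName: main    # hypothetical name; omit to match all containers
        operator: In
        values: [42]
  template:
    spec:
      restartPolicy: Never     # required when using podFailurePolicy
      containers:
      - name: main
        image: busybox:1.36    # assumed example image
        command: ["sh", "-c", "exit 0"]
```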
@@ -1155,17 +1155,13 @@ consume.
## Job 模式 {#job-patterns}
-Job 对象可以用来支持多个 Pod 的可靠的并发执行。
-Job 对象不是设计用来支持相互通信的并行进程的,后者一般在科学计算中应用较多。
-Job 的确能够支持对一组相互独立而又有所关联的**工作条目**的并行处理。
+Job 对象可以用来处理一组相互独立而又彼此关联的“工作条目”。
这类工作条目可能是要发送的电子邮件、要渲染的视频帧、要编解码的文件、NoSQL
数据库中要扫描的主键范围等等。
@@ -1182,25 +1178,35 @@ The tradeoffs are:
并行计算的模式有好多种,每种都有自己的强项和弱点。这里要权衡的因素有:
+- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。
+ 为每个工作条目创建一个 Job 的做法会给用户带来一些额外的负担,系统需要管理大量的 Job 对象。
+ 用一个 Job 对象来完成所有工作条目的做法更适合处理大量工作条目的场景。
+- 创建数目与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。
+ 当 Pod 个数与工作条目数目相等时,通常不需要在 Pod 中对现有代码和容器做较大改动;
+ 让每个 Pod 能够处理多个工作条目的做法更适合于工作条目数量较大的场合。
+
-- 每个工作条目对应一个 Job 或者所有工作条目对应同一 Job 对象。
- 后者更适合处理大量工作条目的场景;
- 前者会给用户带来一些额外的负担,而且需要系统管理大量的 Job 对象。
-- 创建与工作条目相等的 Pod 或者令每个 Pod 可以处理多个工作条目。
- 前者通常不需要对现有代码和容器做较大改动;
- 后者则更适合工作条目数量较大的场合,原因同上。
- 有几种技术都会用到工作队列。这意味着需要运行一个队列服务,
并修改现有程序或容器使之能够利用该工作队列。
与之比较,其他方案在修改现有容器化应用以适应需求方面可能更容易一些。
+- 当 Job 与某个[无头 Service](/zh-cn/docs/concepts/services-networking/service/#headless-services)
+ 之间存在关联时,你可以让 Job 中的 Pod 之间能够相互通信,从而协作完成计算。
下面是对这些权衡的汇总,第 2 到 4 列对应上面的权衡比较。
模式的名称对应了相关示例和更详细描述的链接。
@@ -1222,8 +1228,8 @@ The pattern names are also links to examples and more detailed description.
| [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 有时 |
| [Pod 数量可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | |
| [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | ✓ | | ✓ |
-| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ |
| [带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/) | ✓ | 有时 | 有时 |
+| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | | | ✓ |
| 模式 | `.spec.completions` | `.spec.parallelism` |
| ----- |:-------------------:|:--------------------:|
| [每工作条目一 Pod 的队列](/zh-cn/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | 任意值 |
| [Pod 个数可变的队列](/zh-cn/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | 任意值 |
| [静态任务分派的带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static) | W | | 任意值 |
-| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 |
| [带 Pod 间通信的 Job](/zh-cn/docs/tasks/job/job-with-pod-to-pod-communication/) | W | W |
+| [Job 模板扩展](/zh-cn/docs/tasks/job/parallel-processing-expansion/) | 1 | 应该为 1 |
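
As a sketch of the "indexed Job with static work assignment" pattern from the tables above, the manifest below uses `completionMode: Indexed` with `.spec.completions` set to the number of work items (W); each Pod reads its assigned item from the `JOB_COMPLETION_INDEX` environment variable. The name and image are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo     # hypothetical name
spec:
  completions: 5         # W: the number of work items
  parallelism: 2
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36   # assumed example image
        command: ["sh", "-c", "echo processing work item $JOB_COMPLETION_INDEX"]
```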
-### 有序索引 {#ordinal-index}
+### 序号索引 {#ordinal-index}
对于具有 N 个[副本](#replicas)的 StatefulSet,该 StatefulSet 中的每个 Pod 将被分配一个整数序号,
-该序号在此 StatefulSet 上是唯一的。默认情况下,这些 Pod 将被从 0 到 N-1 的序号。
+该序号在此 StatefulSet 中是唯一的。默认情况下,这些 Pod 将被赋予从 0 到 N-1 的序号。
+StatefulSet 的控制器也会添加一个包含此索引的 Pod 标签:`apps.kubernetes.io/pod-index`。
+### Pod 索引标签 {#pod-index-label}
+
+{{< feature-state for_k8s_version="v1.28" state="beta" >}}
+
+
+当 StatefulSet {{< glossary_tooltip text="控制器" term_id="controller" >}}创建一个 Pod 时,
+新的 Pod 会被打上 `apps.kubernetes.io/pod-index` 标签。标签的取值为 Pod 的序号索引。
+此标签使你能够将流量路由到特定索引值的 Pod、使用 Pod 索引标签来过滤日志或度量值等等。
+注意,要使用这一特性,需要启用特性门控 `PodIndexLabel`,而该门控默认是被启用的。
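
As a minimal illustration of routing traffic to a Pod with a specific index, a Service can select on the label that the StatefulSet controller adds; the Service name and the `app` label are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-index-0          # hypothetical name
spec:
  selector:
    app: web                             # assumed Pod template label
    apps.kubernetes.io/pod-index: "0"    # label set by the StatefulSet controller
  ports:
  - port: 80
```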
+
节点鉴权是一种特殊用途的鉴权模式,专门对 kubelet 发出的 API 请求进行授权。
-
* services
* endpoints
@@ -57,8 +58,10 @@ Write operations:
写入操作:
* 节点和节点状态(启用 `NodeRestriction` 准入插件以限制 kubelet 只能修改自己的节点)
@@ -71,8 +74,11 @@ Auth-related operations:
身份认证与鉴权相关的操作:
* 对于基于 TLS 的启动引导过程时使用的
[certificationsigningrequests API](/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/)
@@ -80,25 +86,33 @@ Auth-related operations:
* 为委派的身份验证/鉴权检查创建 TokenReview 和 SubjectAccessReview 的能力
在将来的版本中,节点鉴权器可能会添加或删除权限,以确保 kubelet 具有正确操作所需的最小权限集。
-为了获得节点鉴权器的授权,kubelet 必须使用一个凭证以表示它在 `system:nodes`
+为了获得节点鉴权器的授权,kubelet 必须使用一个凭据以表示它在 `system:nodes`
组中,用户名为 `system:node:<nodeName>`。上述的组名和用户名格式要与
[kubelet TLS 启动引导](/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/)
过程中为每个 kubelet 创建的标识相匹配。
`<nodeName>` 的值**必须**与 kubelet 注册的节点名称精确匹配。默认情况下,节点名称是由
`hostname` 提供的主机名,或者通过 kubelet `--hostname-override`
@@ -114,7 +128,10 @@ To enable the Node authorizer, start the apiserver with `--authorization-mode=No
要启用节点鉴权器,请使用 `--authorization-mode=Node` 启动 API 服务器。
要限制 kubelet 可以写入的 API 对象,请使用
`--enable-admission-plugins=...,NodeRestriction,...` 启动 API 服务器,从而启用
@@ -132,8 +149,9 @@ To limit the API objects kubelets are able to write, enable the [NodeRestriction
### 在 `system:nodes` 组之外的 kubelet {#kubelets-outside-the-system-nodes-group}
`system:nodes` 组之外的 kubelet 不会被 `Node` 鉴权模式授权,并且需要继续通过当前授权它们的机制来授权。
@@ -151,7 +169,7 @@ because they do not have a username in the `system:node:...` format.
These kubelets would not be authorized by the `Node` authorization mode,
and would need to continue to be authorized via whatever mechanism currently authorizes them.
-->
-在一些部署中,kubelet 具有 `system:nodes` 组的凭证,
+在一些部署中,kubelet 具有 `system:nodes` 组的凭据,
但是无法给出它们所关联的节点的标识,因为它们没有 `system:node:...` 格式的用户名。
这些 kubelet 不会被 `Node` 鉴权模式授权,并且需要继续通过当前授权它们的任何机制来授权。
@@ -161,65 +179,3 @@ since the default node identifier implementation would not consider that a node
-->
因为默认的节点标识符实现不会把它当作节点身份标识,`NodeRestriction`
准入插件会忽略来自这些 kubelet 的请求。
-
-
-### 相对于以前使用 RBAC 的版本的更新 {#upgrades-from-previous-versions-using-rbac}
-
-
-升级的 1.7 之前的使用 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/)
-的集群将继续按原样运行,因为 `system:nodes` 组绑定已经存在。
-
-
-如果集群管理员希望开始使用 `Node` 鉴权器和 `NodeRestriction` 准入插件来限制节点对
-API 的访问,这一需求可以通过下列操作来完成且不会影响已部署的应用:
-
-
-1. 启用 `Node` 鉴权模式 (`--authorization-mode=Node,RBAC`) 和 `NodeRestriction` 准入插件
-2. 确保所有 kubelet 的凭据符合组/用户名要求
-3. 审核 API 服务器日志以确保 `Node` 鉴权器不会拒绝来自 kubelet 的请求(日志中没有持续的 `NODE DENY` 消息)
-4. 删除 `system:node` 集群角色绑定
-
-
-### RBAC 节点权限 {#rbac-node-permissions}
-
-
-在 1.6 版本中,当使用 [RBAC 鉴权模式](/zh-cn/docs/reference/access-authn-authz/rbac/)
-时,`system:nodes` 集群角色会被自动绑定到 `system:node` 组。
-
-
-在 1.7 版本中,不再推荐将 `system:nodes` 组自动绑定到 `system:node`
-角色,因为节点鉴权器通过对 Secret 和 ConfigMap 访问的额外限制完成了相同的任务。
-如果同时启用了 `Node` 和 `RBAC` 鉴权模式,1.7 版本则不会创建 `system:nodes`
-组到 `system:node` 角色的自动绑定。
-
-
-在 1.8 版本中,绑定将根本不会被创建。
-
-
-使用 RBAC 时,将继续创建 `system:node` 集群角色,以便与将其他用户或组绑定到该角色的部署方法兼容。
From dd7930f5d7fd8956fe0697a68620cbf2c2d45bf5 Mon Sep 17 00:00:00 2001
From: xin gu <418294249@qq.com>
Date: Sat, 7 Oct 2023 13:12:46 +0800
Subject: [PATCH 073/229] sync configure-upgrade-etc kubeadm-certs
verify-signed-artifacts
---
.../administer-cluster/configure-upgrade-etcd.md | 11 +++++++++++
.../tasks/administer-cluster/kubeadm/kubeadm-certs.md | 5 +++--
.../administer-cluster/verify-signed-artifacts.md | 4 ++--
3 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md b/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md
index 9545872007f26..89d76cca43508 100644
--- a/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md
+++ b/content/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd.md
@@ -21,6 +21,17 @@ weight: 270
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+你需要有一个 Kubernetes 集群,并且必须配置 kubectl 命令行工具以与你的集群通信。
+建议在包含至少两个不充当控制平面的节点的集群上运行此任务。如果你还没有集群,
+你可以使用 [minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/) 创建一个。
+
在集群创建过程中,kubeadm 对 `admin.conf` 中的证书进行签名时,将其配置为
`Subject: O = system:masters, CN = kubernetes-admin`。
[`system:masters`](/zh-cn/docs/reference/access-authn-authz/rbac/#user-facing-roles)
-是一个例外的超级用户组,可以绕过鉴权层(例如 RBAC)。
+是一个例外的超级用户组,可以绕过鉴权层(例如 [RBAC](/zh-cn/docs/reference/access-authn-authz/rbac/))。
强烈建议不要将 `admin.conf` 文件与任何人共享。
你需要安装以下工具:
- `cosign`([安装指南](https://docs.sigstore.dev/cosign/installation/))
- `curl`(通常由你的操作系统提供)
-- `jq`([下载 jq](https://stedlan.github.io/jq/download/))
+- `jq`([下载 jq](https://jqlang.github.io/jq/download/))
关键的设计思想是在 Pod 的卷来源中允许使用
-[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1alpha1-core)。
+[卷申领的参数](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#ephemeralvolumesource-v1-core)。
PersistentVolumeClaim 的标签、注解和整套字段集均被支持。
-创建这样一个 Pod 后,
-临时卷控制器在 Pod 所属的命名空间中创建一个实际的 PersistentVolumeClaim 对象,
-并确保删除 Pod 时,同步删除 PersistentVolumeClaim。
+创建这样一个 Pod 后,临时卷控制器在 Pod 所属的命名空间中创建一个实际的
+PersistentVolumeClaim 对象,并确保删除 Pod 时,同步删除 PersistentVolumeClaim。
如上设置将触发卷的绑定与/或制备,相应动作或者在
{{< glossary_tooltip text="StorageClass" term_id="storage-class" >}}
-使用即时卷绑定时立即执行,
-或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。
+使用即时卷绑定时立即执行,或者当 Pod 被暂时性调度到某节点时执行 (`WaitForFirstConsumer` 卷绑定模式)。
对于通用的临时卷,建议采用后者,这样调度器就可以自由地为 Pod 选择合适的节点。
对于即时绑定,调度器则必须选出一个节点,使得在卷可用时,能立即访问该卷。
@@ -355,8 +353,8 @@ and in this case you need to ensure that volume clean up happens separately.
拥有通用临时存储的 Pod 是提供临时存储 (ephemeral storage) 的 PersistentVolumeClaim 的所有者。
当 Pod 被删除时,Kubernetes 垃圾收集器会删除 PVC,
然后 PVC 通常会触发卷的删除,因为存储类的默认回收策略是删除卷。
-你可以使用带有 `retain` 回收策略的 StorageClass 创建准临时 (quasi-ephemeral) 本地存储:
-该存储比 Pod 寿命长,在这种情况下,你需要确保单独进行卷清理。
+你可以使用带有 `retain` 回收策略的 StorageClass 创建准临时 (Quasi-Ephemeral) 本地存储:
+该存储比 Pod 寿命长,所以在这种情况下,你需要确保单独进行卷清理。
自动创建的 PVC 采取确定性的命名机制:名称是 Pod 名称和卷名称的组合,中间由连字符(`-`)连接。
-在上面的示例中,PVC 将命名为 `my-app-scratch-volume` 。
+在上面的示例中,PVC 将被命名为 `my-app-scratch-volume`。
这种确定性的命名机制使得与 PVC 交互变得更容易,因为一旦知道 Pod 名称和卷名,就不必搜索它。
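
A minimal sketch matching the naming example above: a Pod named `my-app` with a generic ephemeral volume named `scratch-volume` produces a PVC named `my-app-scratch-volume`; the image and storage size are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: busybox:1.36        # assumed example image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch-volume
      mountPath: /scratch
  volumes:
  - name: scratch-volume
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi     # assumed size
```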
-这种命名机制也引入了潜在的冲突,
-不同的 Pod 之间(名为 “Pod-a” 的 Pod 挂载名为 "scratch" 的卷,
-和名为 "pod" 的 Pod 挂载名为 “a-scratch” 的卷,这两者均会生成名为
-"pod-a-scratch" 的 PVC),或者在 Pod 和手工创建的 PVC 之间可能出现冲突。
+这种命名机制也引入了潜在的冲突,不同的 Pod 之间(名为 “Pod-a” 的
+Pod 挂载名为 "scratch" 的卷,和名为 "pod" 的 Pod 挂载名为 “a-scratch” 的卷,
+这两者均会生成名为 "pod-a-scratch" 的 PVC),或者在 Pod 和手工创建的
+PVC 之间可能出现冲突。
-以下冲突会被检测到:如果 PVC 是为 Pod 创建的,那么它只用于临时卷。
+这类冲突会被检测到:如果 PVC 是为 Pod 创建的,那么它只用于临时卷。
此检测基于所有权关系。现有的 PVC 不会被覆盖或修改。
但这并不能解决冲突,因为如果没有正确的 PVC,Pod 就无法启动。
+{{< caution >}}
-{{< caution >}}
在同一个命名空间中命名 Pod 和卷时要小心,以防止发生此类冲突。
{{< /caution >}}
@@ -461,7 +459,7 @@ See [local ephemeral storage](/docs/concepts/configuration/manage-resources-cont
- 有关设计的更多信息,参阅
[Ephemeral Inline CSI volumes KEP](https://github.com/kubernetes/enhancements/blob/ad6021b3d61a49040a3f835e12c8bb5424db2bbb/keps/sig-storage/20190122-csi-inline-volumes.md)。
-- 本特性下一步开发的更多信息,参阅
+- 关于本特性下一步开发的更多信息,参阅
[enhancement tracking issue #596](https://github.com/kubernetes/enhancements/issues/596)。
旧版本的 Kubernetes 仍支持这些“树内(In-Tree)”持久卷类型:
@@ -943,13 +943,10 @@ Older versions of Kubernetes also supported the following in-tree PersistentVolu
* [`cinder`](/zh-cn/docs/concepts/storage/volumes/#cinder) - Cinder (OpenStack block storage)
(v1.27 开始**不可用**)
* `photonPersistentDisk` - Photon 控制器持久化盘。(从 v1.15 版本开始将**不可用**)
-* [`scaleIO`](/zh-cn/docs/concepts/storage/volumes/#scaleio) - ScaleIO 卷(v1.21 之后**不可用**)
-* [`flocker`](/zh-cn/docs/concepts/storage/volumes/#flocker) - Flocker 存储
- (v1.25 之后**不可用**)
-* [`quobyte`](/zh-cn/docs/concepts/storage/volumes/#quobyte) - Quobyte 卷
- (v1.25 之后**不可用**)
-* [`storageos`](/zh-cn/docs/concepts/storage/volumes/#storageos) - StorageOS 卷
- (v1.25 之后**不可用**)
+* `scaleIO` - ScaleIO 卷(v1.21 之后**不可用**)
+* `flocker` - Flocker 存储 (v1.25 之后**不可用**)
+* `quobyte` - Quobyte 卷 (v1.25 之后**不可用**)
+* `storageos` - StorageOS 卷 (v1.25 之后**不可用**)
### 带有 Secret、DownwardAPI 和 ConfigMap 的配置示例 {#example-configuration-secret-downwardapi-configmap}
-{{< codenew file="pods/storage/projected-secret-downwardapi-configmap.yaml" >}}
+{{% code_sample file="pods/storage/projected-secret-downwardapi-configmap.yaml" %}}
### 带有非默认权限模式设置的 Secret 的配置示例 {#example-configuration-secrets-nondefault-permission-mode}
-{{< codenew file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" >}}
+{{% code_sample file="pods/storage/projected-secrets-nondefault-permission-mode.yaml" %}}
StorageClass 对象的命名很重要,用户使用这个命名来请求生成一个特定的类。
-当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数,
-一旦创建了对象就不能再对其更新。
+当创建 StorageClass 对象时,管理员设置 StorageClass 对象的命名和其他参数。
### 删除策略 {#deletion-policy}
-卷快照类具有 `deletionPolicy` 属性。用户可以配置当所绑定的 VolumeSnapshot
-对象将被删除时,如何处理 VolumeSnapshotContent 对象。
+卷快照类具有 [deletionPolicy](/zh-cn/docs/concepts/storage/volume-snapshots/#delete) 属性。
+用户可以配置当所绑定的 VolumeSnapshot 对象将被删除时,如何处理 VolumeSnapshotContent 对象。
卷快照类的这个策略可以是 `Retain` 或者 `Delete`。这个策略字段必须指定。
如果删除策略是 `Delete`,那么底层的存储快照会和 VolumeSnapshotContent 对象
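
A hedged sketch of a volume snapshot class with an explicit deletion policy; this assumes the external snapshot CRDs are installed, and the class name and CSI driver are hypothetical:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass            # hypothetical name
driver: hostpath.csi.k8s.io      # assumed CSI driver
deletionPolicy: Delete           # or Retain; this field is required
```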
From a6f2e7d26666335cd157fe6ec9f5163a632c64c2 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Sun, 8 Oct 2023 14:09:27 +0800
Subject: [PATCH 075/229] [zh] Sync kubelet-tls-bootstrapping.md
---
.../kubelet-tls-bootstrapping.md | 207 ++++++++++--------
1 file changed, 120 insertions(+), 87 deletions(-)
diff --git a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
index 9cfd2cf49dc86..5ad0658471c08 100644
--- a/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
+++ b/content/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md
@@ -17,8 +17,10 @@ weight: 120
在一个 Kubernetes 集群中,工作节点上的组件(kubelet 和 kube-proxy)需要与
@@ -27,8 +29,9 @@ Kubernetes 控制平面组件通信,尤其是 kube-apiserver。
我们强烈建议使用节点上的客户端 TLS 证书。
启动引导这些组件的正常过程,尤其是需要证书来与 kube-apiserver 安全通信的工作节点,
@@ -36,8 +39,8 @@ This in turn, can make it challenging to initialize or scale a cluster.
这也使得初始化或者扩缩一个集群的操作变得具有挑战性。
@@ -61,16 +64,17 @@ When a worker node starts up, the kubelet does the following:
1. 寻找自己的 `kubeconfig` 文件
-2. 检索 API 服务器的 URL 和凭据,通常是来自 `kubeconfig` 文件中的
+1. 检索 API 服务器的 URL 和凭据,通常是来自 `kubeconfig` 文件中的
TLS 密钥和已签名证书
-3. 尝试使用这些凭据来与 API 服务器通信
+1. 尝试使用这些凭据来与 API 服务器通信
负责部署和管理集群的人有以下责任:
1. 创建 CA 密钥和证书
-2. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上
-3. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、
+1. 将 CA 证书发布到 kube-apiserver 运行所在的控制平面节点上
+1. 为每个 kubelet 创建密钥和证书;强烈建议为每个 kubelet 使用独一无二的、
CN 取值与众不同的密钥和证书
-4. 使用 CA 密钥对 kubelet 证书签名
-5. 将 kubelet 密钥和签名的证书发布到 kubelet 运行所在的特定节点上
+1. 使用 CA 密钥对 kubelet 证书签名
+1. 将 kubelet 密钥和签名的证书发布到 kubelet 运行所在的特定节点上
本文中描述的 TLS 启动引导过程有意简化甚至完全自动化上述过程,
@@ -121,16 +126,16 @@ In the bootstrap initialization process, the following occurs:
1. kubelet 启动
2. kubelet 看到自己**没有**对应的 `kubeconfig` 文件
@@ -145,12 +150,12 @@ In the bootstrap initialization process, the following occurs:
来批复该 CSR
9. kubelet 所需要的证书被创建
10. 证书被发放给 kubelet
11. kubelet 取回该证书
@@ -190,8 +195,9 @@ In addition, you need your Kubernetes Certificate Authority (CA).
## 证书机构 {#certificate-authority}
@@ -200,10 +206,12 @@ to sign the kubelet certificate. As before, it is your responsibility to distrib
如前所述,将证书机构密钥和证书发布到控制平面节点是你的责任。
就本文而言,我们假定这些数据被发布到控制平面节点上的 `/var/lib/kubernetes/ca.pem`(证书)和
`/var/lib/kubernetes/ca-key.pem`(密钥)文件中。
@@ -247,8 +255,9 @@ containing the signing certificate, for example
### 初始启动引导认证 {#initial-bootstrap-authentication}
@@ -262,16 +271,17 @@ bootstrap credentials, the following two authenticators are recommended for ease
of provisioning.
1. [Bootstrap Tokens](#bootstrap-tokens)
-2. [Token authentication file](#token-authentication-file)
+1. [Token authentication file](#token-authentication-file)
-->
尽管所有身份认证策略都可以用来对 kubelet 的初始启动凭据来执行认证,
-出于容易准备的因素,建议使用如下两个身份认证组件:
+但出于容易准备的因素,建议使用如下两个身份认证组件:
1. [启动引导令牌(Bootstrap Token)](#bootstrap-tokens)
2. [令牌认证文件](#token-authentication-file)
启动引导令牌是一种对 kubelet 进行身份认证的方法,相对简单且容易管理,
且不需要在启动 kube-apiserver 时设置额外的标志。
@@ -280,15 +290,16 @@ Using bootstrap tokens is a simpler and more easily managed method to authentica
Whichever method you choose, the requirement is that the kubelet be able to authenticate as a user with the rights to:
1. create and retrieve CSRs
-2. be automatically approved to request node client certificates, if automatic approval is enabled.
+1. be automatically approved to request node client certificates, if automatic approval is enabled.
-->
无论选择哪种方法,这里的需求是 kubelet 能够被身份认证为某个具有如下权限的用户:
1. 创建和读取 CSR
-2. 在启用了自动批复时,能够在请求节点客户端证书时得到自动批复
+1. 在启用了自动批复时,能够在请求节点客户端证书时得到自动批复
使用启动引导令牌执行身份认证的 kubelet 会被认证为 `system:bootstrappers`
组中的用户。这是使用启动引导令牌的一种标准方法。
@@ -301,38 +312,41 @@ requests related to certificate provisioning. With RBAC in place, scoping the
tokens to a group allows for great flexibility. For example, you could disable a
particular bootstrap group's access when you are done provisioning the nodes.
-->
-随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC)
-策略上,从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/))
+随着这个功能特性的逐渐成熟,你需要确保令牌绑定到某基于角色的访问控制(RBAC)策略上,
+从而严格限制请求(使用[启动引导令牌](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/))
仅限于客户端申请提供证书。当 RBAC 被配置启用时,可以将令牌限制到某个组,
从而提高灵活性。例如,你可以在准备节点期间禁止某特定启动引导组的访问。
#### 启动引导令牌 {#bootstrap-tokens}
-启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)
-详述。启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。
+启动引导令牌的细节在[这里](/zh-cn/docs/reference/access-authn-authz/bootstrap-tokens/)详述。
+启动引导令牌在 Kubernetes 集群中存储为 Secret 对象,被发放给各个 kubelet。
你可以在整个集群中使用同一个令牌,也可以为每个节点发放单独的令牌。
这一过程有两个方面:
1. 基于令牌 ID、机密数据和范畴信息创建 Kubernetes Secret
-2. 将令牌发放给 kubelet
+1. 将令牌发放给 kubelet
从 kubelet 的角度,所有令牌看起来都很像,没有特别的含义。
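
As a sketch of the first step above — creating the Secret — a bootstrap token is stored like this; the token ID, secret, and extra group are placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must be "bootstrap-token-<token-id>"
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "07401b"                 # placeholder
  token-secret: "f395accd246ae52d"   # placeholder
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:worker   # assumed extra group
```

A kubelet presenting this token is authenticated into the `system:bootstrappers` group, as described above.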
@@ -407,7 +421,8 @@ certificate signing request (CSR) as well as retrieve it when done.
Fortunately, Kubernetes ships with a `ClusterRole` with precisely these (and
only these) permissions, `system:node-bootstrapper`.
-To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers` group to the cluster role `system:node-bootstrapper`.
+To do this, you only need to create a `ClusterRoleBinding` that binds the `system:bootstrappers`
+group to the cluster role `system:node-bootstrapper`.
-->
### 授权 kubelet 创建 CSR {#authorize-kubelet-to-create-csr}
@@ -419,6 +434,9 @@ To do this, you only need to create a `ClusterRoleBinding` that binds the `syste
为了实现这一点,你只需要创建 `ClusterRoleBinding`,将 `system:bootstrappers`
组绑定到集群角色 `system:node-bootstrapper`。
+
```yaml
# 允许启动引导节点创建 CSR
apiVersion: rbac.authorization.k8s.io/v1
@@ -443,7 +461,7 @@ the controller-manager is responsible for issuing actual signed certificates.
-->
## kube-controller-manager 配置 {#kube-controller-manager-configuration}
-API 服务器从 kubelet 收到证书请求并对这些请求执行身份认证,
+尽管 API 服务器从 kubelet 收到证书请求并对这些请求执行身份认证,
但真正负责发放签名证书的是控制器管理器(controller-manager)。
由于这些被签名的证书反过来会被 kubelet 用来在 kube-apiserver 执行普通的
kubelet 身份认证,很重要的一点是为控制器管理器所提供的 CA 也被 kube-apiserver
信任用来执行身份认证。CA 密钥和证书是通过 kube-apiserver 的标志
-`--client-ca-file=FILENAME`(例如,`--client-ca-file=/var/lib/kubernetes/ca.pem`),
-来设定的,正如 kube-apiserver 配置节所述。
+`--client-ca-file=FILENAME`(例如 `--client-ca-file=/var/lib/kubernetes/ca.pem`)来设定的,
+正如 kube-apiserver 配置节所述。
要将 Kubernetes CA 密钥和证书提供给 kube-controller-manager,可使用以下标志:
@@ -530,23 +549,30 @@ RBAC permissions to the correct group.
许可权限有两组:
* `nodeclient`:如果节点在为节点创建新的证书,则该节点还没有证书。
- 该节点使用前文所列的令牌之一来执行身份认证,因此是组 `system:bootstrappers` 组的成员。
+ 该节点使用前文所列的令牌之一来执行身份认证,因此是 `system:bootstrappers` 组的成员。
* `selfnodeclient`:如果节点在对证书执行续期操作,则该节点已经拥有一个证书。
节点持续使用现有的证书将自己认证为 `system:nodes` 组的成员。
要允许 kubelet 请求并接收新的证书,可以创建一个 `ClusterRoleBinding`
将启动引导节点所处的组 `system:bootstrappers` 绑定到为其赋予访问权限的 `ClusterRole`
`system:certificates.k8s.io:certificatesigningrequests:nodeclient`:
+
```yaml
# 批复 "system:bootstrappers" 组的所有 CSR
apiVersion: rbac.authorization.k8s.io/v1
@@ -564,13 +590,17 @@ roleRef:
```
要允许 kubelet 对其客户端证书执行续期操作,可以创建一个 `ClusterRoleBinding`
将正常工作的节点所处的组 `system:nodes` 绑定到为其授予访问许可的 `ClusterRole`
`system:certificates.k8s.io:certificatesigningrequests:selfnodeclient`:
+
```yaml
# 批复 "system:nodes" 组的 CSR 续约请求
apiVersion: rbac.authorization.k8s.io/v1
@@ -602,14 +632,14 @@ collection.
的一部分的 `csrapproving` 控制器是自动被启用的。
该控制器使用 [`SubjectAccessReview` API](/zh-cn/docs/reference/access-authn-authz/authorization/#checking-api-access)
来确定给定用户是否被授权请求 CSR,之后基于鉴权结果执行批复操作。
-为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSRs。
-该组件仅是忽略未被授权的请求。
-控制器也会作为垃圾收集的一部分清除已过期的证书。
+为了避免与其它批复组件发生冲突,内置的批复组件不会显式地拒绝任何 CSR。
+该组件仅是忽略未被授权的请求。控制器也会作为垃圾收集的一部分清除已过期的证书。
## kubelet 配置 {#kubelet-configuration}
@@ -640,7 +670,7 @@ Its format is identical to a normal `kubeconfig` file. A sample file might look
启动引导 `kubeconfig` 文件应该放在一个 kubelet 可访问的路径下,例如
`/var/lib/kubelet/bootstrap-kubeconfig`。
-其格式与普通的 `kubeconfig` 文件完全相同。实例文件可能看起来像这样:
+其格式与普通的 `kubeconfig` 文件完全相同。示例文件可能看起来像这样:
```yaml
apiVersion: v1
@@ -721,12 +751,12 @@ directory specified by `--cert-dir`.
证书和密钥文件会被放到 `--cert-dir` 所指定的目录中。
-### 客户和服务证书 {#client-and-serving-certificates}
+### 客户端和服务证书 {#client-and-serving-certificates}
前文所述的内容都与 kubelet **客户端**证书相关,尤其是 kubelet 用来向
kube-apiserver 认证自身身份的证书。
@@ -758,7 +788,7 @@ TLS 启动引导所提供的客户端证书默认被签名为仅用于 `client a
不过,你可以启用服务器证书,至少可以部分地通过证书轮换来实现这点。
@@ -818,9 +848,9 @@ A deployment-specific approval process for kubelet serving certificates should t
1. are requested by nodes (ensure the `spec.username` field is of the form
   `system:node:<nodeName>` and `spec.groups` contains `system:nodes`)
-2. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
+1. request usages for a serving certificate (ensure `spec.usages` contains `server auth`,
optionally contains `digital signature` and `key encipherment`, and contains no other usages)
-3. only have IP and DNS subjectAltNames that belong to the requesting node,
+1. only have IP and DNS subjectAltNames that belong to the requesting node,
and have no URI and Email subjectAltNames (parse the x509 Certificate Signing Request
in `spec.request` to verify `subjectAltNames`)
-->
@@ -828,9 +858,9 @@ A deployment-specific approval process for kubelet serving certificates should t
1. 由节点发出的请求(确保 `spec.username` 字段形式为 `system:node:<nodeName>`
且 `spec.groups` 包含 `system:nodes`)
-2. 请求中包含服务证书用法(确保 `spec.usages` 中包含 `server auth`,可选地也可包含
+1. 请求中包含服务证书用法(确保 `spec.usages` 中包含 `server auth`,可选地也可包含
`digital signature` 和 `key encipherment`,且不包含其它用法)
-3. 仅包含隶属于请求节点的 IP 和 DNS 的 `subjectAltNames`,没有 URI 和 Email
+1. 仅包含隶属于请求节点的 IP 和 DNS 的 `subjectAltNames`,没有 URI 和 Email
形式的 `subjectAltNames`(解析 `spec.request` 中的 x509 证书签名请求可以检查
`subjectAltNames`)
{{< /note >}}
@@ -857,7 +887,11 @@ You have several options for generating these credentials:
* 较老的方式:和 kubelet 在 TLS 启动引导之前所做的一样,用类似的方式创建和分发证书。
* DaemonSet:由于 kubelet 自身被加载到所有节点之上,并且有足够能力来启动基本服务,
@@ -874,7 +908,7 @@ manager.
-->
## kubectl 批复 {#kubectl-approval}
-CSR 可以在编译进控制器内部的批复工作流之外被批复。
+CSR 可以在编译进控制器管理器内部的批复工作流之外被批复。
+
+
+**作者**:Frederico Muñoz (SAS Institute)
+
+**译者**:[Michael Yao](https://github.com/windsonsea) (DaoCloud)
+
+
+**这是 SIG Architecture 焦点访谈系列的首次采访,这一系列访谈将涵盖多个子项目。
+我们从 SIG Architecture:Conformance 子项目开始。**
+
+在本次 [SIG Architecture](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md)
+访谈中,我们与 [Riaan Kleinhans](https://github.com/Riaankl) (ii-Team) 进行了对话,他是
+[Conformance 子项目](https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#conformance-definition-1)的负责人。
+
+
+## 关于 SIG Architecture 和 Conformance 子项目
+
+**Frederico (FSM)**:你好 Riaan,欢迎!首先,请介绍一下你自己,你的角色以及你是如何参与 Kubernetes 的。
+
+**Riaan Kleinhans (RK)**:嗨!我叫 Riaan Kleinhans,我住在南非。
+我是新西兰 [ii-Team](https://ii.nz) 的项目经理。在我加入 ii 时,本来计划在 2020 年 4 月搬到新西兰,
+然后新冠疫情爆发了。幸运的是,作为一个灵活和富有活力的团队,我们能够在各个不同的时区以远程方式协作。
+
+
+ii 团队负责管理 Kubernetes Conformance 测试的技术债务,并编写测试内容来消除这些技术债务。
+我担任项目经理的角色,成为监控、测试内容编写和社区之间的桥梁。通过这项工作,我有幸在最初的几个月里结识了
+[Dan Kohn](https://github.com/dankohn),他对我们的工作充满热情,给了我很大的启发。
+
+
+**FSM**:谢谢!所以,你参与 SIG Architecture 是因为合规性的工作?
+
+**RK**:SIG Architecture 负责管理 Kubernetes Conformance 子项目。
+最初,我大部分时间直接与 SIG Architecture 交流 Conformance 子项目。
+然而,随着我们开始按 SIG 来组织工作任务,我们开始直接与各个 SIG 进行协作。
+与拥有未被测试的 API 的这些 SIG 的协作帮助我们加快了工作进度。
+
+
+**FSM**:你如何描述 Conformance 子项目的主要目标和介入的领域?
+
+**RK**: Kubernetes Conformance 子项目专注于通过开发和维护全面的合规性测试套件来确保兼容性并遵守
+Kubernetes 规范。其主要目标包括确保不同 Kubernetes 实现之间的兼容性,验证 API 规范的遵守情况,
+通过鼓励合规性认证来支持生态体系,并促进 Kubernetes 社区内的合作。
+通过提供标准化的测试并促进一致的行为和功能,
+Conformance 子项目为开发人员和用户提供了一个可靠且兼容的 Kubernetes 生态体系。
+
+
+## 关于 Conformance Test Suite 的更多内容
+
+**FSM**:我认为,提供这些标准化测试的一部分工作在于
+[Conformance Test Suite](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)。
+你能解释一下它是什么以及其重要性吗?
+
+**RK**:Kubernetes Conformance Test Suite 检查 Kubernetes 发行版是否符合项目的规范,
+确保在不同的实现之间的兼容性。它涵盖了诸如 API、联网、存储、调度和安全等各个特性。
+能够通过测试,则表示实现合理,便于推动构建一致且可移植的容器编排平台。
+
+
+**FSM**:是的,这些测试很重要,因为它们定义了所有 Kubernetes 集群必须支持的最小特性集合。
+你能描述一下决定将哪些特性包含在内的过程吗?在最小特性集的思路与其他 SIG 提案之间是否有所冲突?
+
+**RK**:SIG Architecture 针对经受合规性测试的每个端点的要求,都有明确的定义。
+API 端点只有正式发布且不是可选的特性,才会被(进一步)考虑是否合规。
+多年来,关于合规性配置文件已经进行了若干讨论,
+探讨将被大多数终端用户广泛使用的可选端点(例如 RBAC)纳入特定配置文件中的可能性。
+然而,这一方面仍在不断改进中。
+
+
+不满足合规性标准的端点被列在
+[ineligible_endpoints.yaml](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/ineligible_endpoints.yaml) 中,
+该文件放在 Kubernetes 代码仓库中,可以公开访问。
+随着这些端点的状态或要求发生变化,此文件可能会被更新以添加或删除端点。
+不合格的端点也可以在 [APISnoop](https://apisnoop.cncf.io/) 上看到。
+
+对于 SIG Architecture 来说,确保透明度并纳入社区意见以确定端点的合格或不合格状态是至关重要的。
+
+
+**FSM**:为新特性编写测试内容通常需要某种强制执行方式。
+你如何看待 Kubernetes 中这方面的演变?是否有人在努力改进这个流程,
+使得必须具备测试成为头等要务,或许这从来都不是一个问题?
+
+**RK**:在 2018 年开始围绕 Kubernetes 合规性计划进行讨论时,只有大约 11% 的端点被测试所覆盖。
+那时,CNCF 的管理委员会提出一个要求,如果要提供资金覆盖缺失的合规性测试,Kubernetes 社区应采取一个策略,
+即如果新特性没有包含稳定 API 的合规性测试,则不允许添加此特性。
+
+
+SIG Architecture 负责监督这一要求,[APISnoop](https://apisnoop.cncf.io/)
+在此方面被证明是一个非常有价值的工具。通过自动化流程,APISnoop 在每个周末生成一个 PR,
+以突出 Conformance 覆盖范围的变化。如果有端点在没有进行合规性测试的情况下进阶至正式发布,
+将会被迅速识别发现。这种方法有助于防止积累新的技术债务。
+
+此外,我们计划在不久的将来创建一个发布通知任务,作用是添加额外一层防护,以防止产生新的技术债务。
+
+
+**FSM**:我明白了,工具化和自动化在其中起着重要的作用。
+在你看来,就合规性而言,还有哪些领域需要做一些工作?
+换句话说,目前标记为优先改进的领域有哪些?
+
+**RK**:在 1.27 版本中,我们已完成了 “100% 合规性测试” 的里程碑!
+
+
+当时,社区重新审视了所有被列为不合规的端点。这个列表是收集多年的社区意见后填充的。
+之前被认为不合规的几个端点已被挑选出来并迁移到一个新的专用列表中,
+该列表中包含目前合规性测试开发的焦点。同样,可以在 apisnoop.cncf.io 上查阅此列表。
+
+
+为了确保在合规性项目中避免产生新的技术债务,我们计划建立一个发布通知任务作为额外的预防措施。
+
+虽然 APISnoop 目前被托管在 CNCF 基础设施上,但此项目已慷慨地捐赠给了 Kubernetes 社区。
+因此,它将在 2023 年底之前转移到社区自治的基础设施上。
+
+
+**FSM**:这是个好消息!对于想要提供帮助的人们,你能否重点说明一下协作的价值所在?
+参与贡献是否需要对 Kubernetes 有很扎实的知识,或者是否有办法让一些新人也能为此项目做出贡献?
+
+**RK**:参与合规性测试就像“洗碗”一样,它可能不太显眼,但仍然非常重要。
+这需要对 Kubernetes 有深入的理解,特别是在需要对端点进行测试的领域。
+这就是为什么与负责测试 API 端点的每个 SIG 进行协作会如此重要。
+
+
+我们的承诺是让所有人都能参与测试内容编写,作为这一承诺的一部分,
+ii 团队目前正在开发一个 “点击即部署(click and deploy)” 的解决方案。
+此解决方案旨在使所有人都能在几分钟内快速创建一个在真实硬件上工作的环境。
+我们将在准备好后分享有关此项开发的更新。
+
+
+**FSM**:那会非常有帮助,谢谢。最后你还想与我们的读者分享些什么见解吗?
+
+**RK**:合规性测试是一个协作性的社区工作,涉及各个 SIG 之间的广泛合作。
+SIG Architecture 在推动倡议并提供指导方面起到了领头作用。然而,
+工作的进展在很大程度上依赖于所有 SIG 在审查、增强和认可测试方面的支持。
+
+
+我要衷心感谢 ii 团队多年来对解决技术债务的坚定承诺。
+特别要感谢 [Hippie Hacker](https://github.com/hh) 的指导和对愿景的引领作用,这是非常宝贵的。
+此外,我还要特别表扬 Stephen Heywood 在最近几个版本中承担了大部分测试内容编写工作而做出的贡献,
+还有 Zach Mandeville 对 APISnoop 也做了很好的贡献。
+
+
+**FSM**:非常感谢你参加本次访谈并分享你的深刻见解,我本人从中获益良多,我相信读者们也会同样受益。
From 2a51003aec5ceb3938a115c1d4b7e648436699c1 Mon Sep 17 00:00:00 2001
From: Edith Puclla <58795858+edithturn@users.noreply.github.com>
Date: Sun, 8 Oct 2023 08:26:15 +0100
Subject: [PATCH 077/229] Update
content/es/docs/concepts/storage/projected-volumes.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Yes, agreed with this, Rodolfo!
Co-authored-by: Rodolfo Martínez Vega
---
content/es/docs/concepts/storage/projected-volumes.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
index e496094a1d90c..11c8149df46c5 100644
--- a/content/es/docs/concepts/storage/projected-volumes.md
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -20,9 +20,9 @@ Un volumen `proyectado` asigna varias fuentes de volúmenes existentes al mismo
Actualmente se pueden proyectar los siguientes tipos de fuentes de volumen:
-- [`secret`](/docs/concepts/storage/volumes/#secret)
-- [`downwardAPI`](/docs/concepts/storage/volumes/#downwardapi)
-- [`configMap`](/docs/concepts/storage/volumes/#configmap)
+- [`secret`](/es/docs/concepts/storage/volumes/#secret)
+- [`downwardAPI`](/es/docs/concepts/storage/volumes/#downwardapi)
+- [`configMap`](/es/docs/concepts/storage/volumes/#configmap)
- [`serviceAccountToken`](#serviceaccounttoken)
Se requiere que todas las fuentes estén en el mismo espacio de nombres que el Pod. Para más detalles,
From 30b7c6f19485483132adf774c3ed44cdbe844b73 Mon Sep 17 00:00:00 2001
From: Edith Puclla
Date: Sun, 8 Oct 2023 09:09:00 +0100
Subject: [PATCH 078/229] Taking feedback in docs
---
.../es/docs/concepts/storage/projected-volumes.md | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/content/es/docs/concepts/storage/projected-volumes.md b/content/es/docs/concepts/storage/projected-volumes.md
index 11c8149df46c5..1540d5df97b54 100644
--- a/content/es/docs/concepts/storage/projected-volumes.md
+++ b/content/es/docs/concepts/storage/projected-volumes.md
@@ -48,12 +48,14 @@ en un Pod en una ruta especificada. Por ejemplo:
{{% code_sample file="pods/storage/projected-service-account-token.yaml" %}}
-El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado. Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [the pod's ServiceAccount](/docs/tasks/configure-pod-container/configure-service-account/).
+El Pod de ejemplo tiene un volumen proyectado que contiene el token de cuenta de servicio inyectado.
+Los contenedores en este Pod pueden usar ese token para acceder al servidor API de Kubernetes, autenticándose con la identidad de [la ServiceAccount del Pod](/docs/tasks/configure-pod-container/configure-service-account/).
El campo `audience` contiene la audiencia prevista del
token. Un destinatario del token debe identificarse con un identificador especificado en la audiencia del token y, de lo contrario, debe rechazar el token. Este campo es opcional y de forma predeterminada es el identificador del servidor API.
-The `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos). Un administrador
+El campo `expirationSeconds` es la duración esperada de validez del token de la cuenta de servicio. El valor predeterminado es 1 hora y debe durar al menos 10 minutos (600 segundos).
+Un administrador
también puede limitar su valor máximo especificando la opción `--service-account-max-token-expiration`
para el servidor API. El campo `path` especifica una ruta relativa al punto de montaje del volumen proyectado.
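
A minimal sketch of a Pod using the fields discussed above; the audience, expiry, and path shown are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo               # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36          # assumed example image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: token-vol
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token-vol
    projected:
      sources:
      - serviceAccountToken:
          audience: api            # assumed intended audience
          expirationSeconds: 3600  # 1 hour; minimum is 600
          path: token              # relative to the mount point
```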
@@ -87,7 +89,11 @@ Si los permisos de volumen `serviceAccountToken` de un Pod se establecieron en `
### Windows
-En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows. Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM). Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando. Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual. Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`.
+En los pods de Windows que tienen un volumen proyectado y `RunAsUsername` configurado en el pod `SecurityContext`, la propiedad no se aplica debido a la forma en que se administran las cuentas de usuario en Windows.
+Windows almacena y administra cuentas de grupos y usuarios locales en un archivo de base de datos llamado Administrador de cuentas de seguridad (SAM).
+Cada contenedor mantiene su propia instancia de la base de datos SAM, de la cual el host no tiene visibilidad mientras el contenedor se está ejecutando.
+Los contenedores de Windows están diseñados para ejecutar la parte del modo de usuario del sistema operativo de forma aislada del host, de ahí el mantenimiento de una base de datos SAM virtual.
+Como resultado, el kubelet que se ejecuta en el host no tiene la capacidad de configurar dinámicamente la propiedad de los archivos del host para cuentas de contenedores virtualizados. Se recomienda que, si los archivos de la máquina host se van a compartir con el contenedor, se coloquen en su propio montaje de volumen fuera de `C:\`.
De forma predeterminada, los archivos proyectados tendrán la siguiente propiedad, como se muestra en un archivo de volumen proyectado de ejemplo:
From c9e4030a0806d14ba4f9c4f55f07aca82de5d174 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Sun, 8 Oct 2023 11:12:06 +0300
Subject: [PATCH 079/229] [pl] Updated resources of README.md
---
README-pl.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README-pl.md b/README-pl.md
index 7544de45835a6..62dc2d0ee22f3 100644
--- a/README-pl.md
+++ b/README-pl.md
@@ -43,7 +43,7 @@ make container-image
make container-serve
```
-Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOSX](https://docs.docker.com/docker-for-mac/#resources) i [Windows](https://docs.docker.com/docker-for-windows/#resources)).
+Jeśli widzisz błędy, prawdopodobnie kontener z Hugo nie dysponuje wystarczającymi zasobami. Aby rozwiązać ten problem, zwiększ ilość dostępnych zasobów CPU i pamięci dla Dockera na Twojej maszynie ([MacOS](https://docs.docker.com/desktop/settings/mac/) i [Windows](https://docs.docker.com/desktop/settings/windows/)).
Aby obejrzeć zawartość serwisu, otwórz w przeglądarce adres . Po każdej zmianie plików źródłowych, Hugo automatycznie aktualizuje stronę i odświeża jej widok w przeglądarce.
From 6f4039bcef85945533f1ffc8209597962f038709 Mon Sep 17 00:00:00 2001
From: Sarthak Patel <76515568+Community-Programmer@users.noreply.github.com>
Date: Sun, 8 Oct 2023 01:24:51 -0700
Subject: [PATCH 080/229] Add KCSA Certification to Training page (#43318)
* Add KCSA certification to Training page
Co-Authored-By: Tim Bannister
* Change CSS for training
Account for the addition of KCSA.
Co-Authored-By: Tim Bannister
---------
Co-authored-by: Tim Bannister
---
content/en/training/_index.html | 39 +++--
static/css/training.css | 59 ++++++-
.../images/training/kubernetes-cksa-white.svg | 147 ++++++++++++++++++
3 files changed, 231 insertions(+), 14 deletions(-)
create mode 100644 static/images/training/kubernetes-cksa-white.svg
diff --git a/content/en/training/_index.html b/content/en/training/_index.html
index 74880486a6bf2..28fe46cfc0c43 100644
--- a/content/en/training/_index.html
+++ b/content/en/training/_index.html
@@ -14,17 +14,22 @@
Build your cloud native career
Kubernetes is at the core of the cloud native movement. Training and certifications from the Linux Foundation and our training partners let you invest in your career, learn Kubernetes, and make your cloud native projects successful.
+ Kubernetes and Cloud Native Security Associate (KCSA)
+
+
The KCSA is a pre-professional certification designed for candidates interested in advancing to the professional level through a demonstrated understanding of foundational knowledge and skills of security technologies in the cloud native ecosystem.
+
A certified KCSA will confirm an understanding of the baseline security configuration of Kubernetes clusters to meet compliance objectives.
The Certified Kubernetes Administrator (CKA) program provides assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.
A certified Kubernetes administrator has demonstrated the ability to do basic installation as well as configuring and managing production-grade Kubernetes clusters.
@@ -115,6 +131,7 @@
Certified Kubernetes Security Specialist (CKS)
+
The Certified Kubernetes Security Specialist program provides assurance that the holder is comfortable and competent with a broad range of best practices. CKS certification covers skills for securing container-based applications and Kubernetes platforms during build, deployment and runtime.
Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS.
From 0d8d697861cb9da6debd19d0e158be74ce6a1697 Mon Sep 17 00:00:00 2001
From: YAMADA Kazuaki
Date: Mon, 9 Oct 2023 15:09:39 +0900
Subject: [PATCH 087/229] fix: ambiguous translation of diagnostic failed
---
content/ja/docs/concepts/workloads/pods/pod-lifecycle.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md
index 89d521098c809..b6bc7598f67b5 100644
--- a/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md
+++ b/content/ja/docs/concepts/workloads/pods/pod-lifecycle.md
@@ -206,7 +206,7 @@ probeを使ってコンテナをチェックする4つの異なる方法があ
: コンテナの診断が失敗しました。
`Unknown`
-: コンテナの診断が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。
+: コンテナの診断自体が失敗しました(何も実行する必要はなく、kubeletはさらにチェックを行います)。
### Probeの種類 {#types-of-probe}
From d6817aeab52490df9ec2e2fbc85208f775b66eb3 Mon Sep 17 00:00:00 2001
From: Maciej Filocha
Date: Sun, 8 Oct 2023 15:05:05 +0200
Subject: [PATCH 088/229] Update main Polish index page
---
content/pl/_index.html | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/content/pl/_index.html b/content/pl/_index.html
index 1fa0f72c4cd08..06312f7197954 100644
--- a/content/pl/_index.html
+++ b/content/pl/_index.html
@@ -4,9 +4,10 @@
cid: home
sitemap:
priority: 1.0
-
---
+{{< site-searchbar >}}
+
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
[Kubernetes]({{< relref "/docs/concepts/overview/" >}}), znany też jako K8s, to otwarte oprogramowanie służące do automatyzacji procesów uruchamiania, skalowania i zarządzania aplikacjami w kontenerach.
@@ -58,5 +59,3 @@
The Challenges of Migrating 150+ Microservices to Kubernetes
{{< /blocks/section >}}
-
-{{< blocks/kubernetes-features >}}
From 926770351c839258ef021432401973c9cce65cff Mon Sep 17 00:00:00 2001
From: Raul Mahiques <18713435+rmahique@users.noreply.github.com>
Date: Mon, 9 Oct 2023 09:05:47 +0200
Subject: [PATCH 089/229] Added instructions for SUSE-based distributions
(#42913)
* Update install-kubectl-linux.md
Added instructions for SUSE based distributions
* Update change-package-repository.md
Added a section for openSUSE and SLES distributions
* Update content/en/docs/tasks/tools/install-kubectl-linux.md
Co-authored-by: Michael
* Update content/en/docs/tasks/tools/install-kubectl-linux.md
Co-authored-by: Michael
* Update content/en/docs/tasks/tools/install-kubectl-linux.md
Co-authored-by: Michael
---------
Co-authored-by: Michael
---
.../kubeadm/change-package-repository.md | 26 ++++++++++++++++
.../docs/tasks/tools/install-kubectl-linux.md | 30 +++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
index d39f2a4891e6e..db633d15ee253 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
@@ -66,6 +66,32 @@ exclude=kubelet kubeadm kubectl
**You're using the Kubernetes package repositories and this guide applies to you.**
Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
+{{% /tab %}}
+
+{{% tab name="openSUSE or SLES" %}}
+
+Print the contents of the file that defines the Kubernetes `zypper` repository:
+
+```shell
+# On your system, this configuration file could have a different name
+cat /etc/zypp/repos.d/kubernetes.repo
+```
+
+If you see a `baseurl` similar to the `baseurl` in the output below:
+
+```
+[kubernetes]
+name=Kubernetes
+baseurl=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/
+enabled=1
+gpgcheck=1
+gpgkey=https://pkgs.k8s.io/core:/stable:/v{{< skew currentVersionAddMinor -1 "." >}}/rpm/repodata/repomd.xml.key
+exclude=kubelet kubeadm kubectl
+```
+
+**You're using the Kubernetes package repositories and this guide applies to you.**
+Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
+
{{% /tab %}}
{{< /tabs >}}
diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md
index 684f904b14bda..dafb88fa025a6 100644
--- a/content/en/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/en/docs/tasks/tools/install-kubectl-linux.md
@@ -192,6 +192,36 @@ To upgrade kubectl to another minor release, you'll need to bump the version in
sudo yum install -y kubectl
```
+{{% /tab %}}
+
+{{% tab name="SUSE-based distributions" %}}
+
+1. Add the Kubernetes `zypper` repository. If you want to use a Kubernetes version
+   other than {{< param "version" >}}, replace {{< param "version" >}} with
+   the desired minor version in the command below.
+
+ ```bash
+ # This overwrites any existing configuration in /etc/zypp/repos.d/kubernetes.repo
+   cat <<EOF | sudo tee /etc/zypp/repos.d/kubernetes.repo
+   [kubernetes]
+   name=Kubernetes
+   baseurl=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key
+   EOF
+   ```
+
+ {{< note >}}
+ To upgrade kubectl to another minor release, you'll need to bump the version in `/etc/zypp/repos.d/kubernetes.repo`
+ before running `zypper update`. This procedure is described in more detail in
+ [Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/).
+ {{< /note >}}
+
+1. Install kubectl using `zypper`:
+
+ ```bash
+   sudo zypper install -y kubectl
+   ```
+
{{% /tab %}}
{{< /tabs >}}
From aabb993812d84497822fef09dd23166eb9d7ac26 Mon Sep 17 00:00:00 2001
From: Shubham
Date: Mon, 9 Oct 2023 15:39:01 +0530
Subject: [PATCH 090/229] Fix hyperlinks in Ingress concept (#43110)
* Fixed the Hyperlinks under the Ingress docs.
* Modified the hyperlink for Ingress spec.
---
content/en/docs/concepts/services-networking/ingress.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md
index 836373b3b8898..cad103a7d05d0 100644
--- a/content/en/docs/concepts/services-networking/ingress.md
+++ b/content/en/docs/concepts/services-networking/ingress.md
@@ -84,7 +84,7 @@ is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/b
Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations.
Review the documentation for your choice of Ingress controller to learn which annotations are supported.
-The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
+The [Ingress spec](/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec)
has all the information needed to configure a load balancer or proxy server. Most importantly, it
contains a list of rules matched against all incoming requests. An Ingress resource only supports rules
for directing HTTP(S) traffic.
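
As a hedged sketch of such a rule list, here is a minimal Ingress that routes one HTTP path prefix to a backend Service; the Ingress class, path, and Service name are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress        # hypothetical name
spec:
  ingressClassName: nginx      # assumed installed IngressClass
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test         # assumed backend Service
            port:
              number: 80
```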
@@ -94,8 +94,8 @@ should be defined.
There are some ingress controllers that work without the definition of a
default `IngressClass`. For example, the Ingress-NGINX controller can be
-configured with a [flag](https://kubernetes.github.io/ingress-nginx/#what-is-the-flag-watch-ingress-without-class)
-`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do) though, to specify the
+configured with a [flag](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class)
+`--watch-ingress-without-class`. It is [recommended](https://kubernetes.github.io/ingress-nginx/user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do) though, to specify the
default `IngressClass` as shown [below](#default-ingress-class).
### Ingress rules
From 0c63fb814b55ecac2828943d7665847d21cea387 Mon Sep 17 00:00:00 2001
From: Meenu Yadav <116630390+MeenuyD@users.noreply.github.com>
Date: Mon, 9 Oct 2023 15:47:27 +0530
Subject: [PATCH 091/229] fix:Mismatch of footer colour at Case-Study pages
(#43021)
---
assets/scss/_case-studies.scss | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/assets/scss/_case-studies.scss b/assets/scss/_case-studies.scss
index 4f44864127525..5c907d1c08809 100644
--- a/assets/scss/_case-studies.scss
+++ b/assets/scss/_case-studies.scss
@@ -1,7 +1,7 @@
// SASS for Case Studies pages go here:
hr {
- background-color: #999999;
+ background-color: #303030;
margin-top: 0;
}
From 4e0c6cbf9b9dc52415203ee08aab62fbce86ddb3 Mon Sep 17 00:00:00 2001
From: alok0277 <120774363+alok0277@users.noreply.github.com>
Date: Mon, 9 Oct 2023 15:53:07 +0530
Subject: [PATCH 092/229] Fix banner image scaling on Case Studies pages
(#43056)
* image flow is synchronized with the responsive design.
* Update single.html
* Update quote.html
* fixed banner image
---
layouts/case-studies/single.html | 2 +-
layouts/shortcodes/case-studies/quote.html | 2 +-
static/css/new-case-studies.css | 3 ++-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/layouts/case-studies/single.html b/layouts/case-studies/single.html
index ad0335685acfd..b5bf43b7b8ba7 100644
--- a/layouts/case-studies/single.html
+++ b/layouts/case-studies/single.html
@@ -4,7 +4,7 @@
{{ else }}
{{ if .Params.new_case_study_styles }}
-
{{ end }}
diff --git a/static/css/new-case-studies.css b/static/css/new-case-studies.css
index fbc6edb143596..9db8be16195df 100644
--- a/static/css/new-case-studies.css
+++ b/static/css/new-case-studies.css
@@ -66,7 +66,8 @@ h4[id]:before {
padding-left: 11.9%;
padding-right: 11.9%;
font-size: 1.2em;
- background-size: 100% auto;
+ background-position: center;
+ background-size: cover;
background-color: #666;
background-repeat: no-repeat;
}
From d9a9011a7bc4ec6a0a04b7729c6ae3aa214812f2 Mon Sep 17 00:00:00 2001
From: Roman Bednar
Date: Mon, 9 Oct 2023 14:30:33 +0200
Subject: [PATCH 093/229] Add blog for PersistentVolume last phase transition
time
---
...23-10-23-pv-last-phase-transtition-time.md | 105 ++++++++++++++++++
1 file changed, 105 insertions(+)
create mode 100644 content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
diff --git a/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md b/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
new file mode 100644
index 0000000000000..819e8f76353b7
--- /dev/null
+++ b/content/en/blog/_posts/2023-10-23-pv-last-phase-transtition-time.md
@@ -0,0 +1,105 @@
+---
+layout: blog
+title: PersistentVolume Last Phase Transition Time in Kubernetes
+date: 2023-10-23
+slug: persistent-volume-last-phase-transition-time
+---
+
+**Author:** Roman Bednář (Red Hat)
+
+In the recent Kubernetes v1.28 release, we (SIG Storage) introduced a new alpha feature that aims to improve PersistentVolume (PV)
+storage management and help cluster administrators gain better insights into the lifecycle of PVs.
+With the addition of the `lastPhaseTransitionTime` field into the status of a PV,
+cluster administrators are now able to track the last time a PV transitioned to a different
+[phase](/docs/concepts/storage/persistent-volumes/#phase), allowing for more efficient
+and informed resource management.
+
+## Why do we need a new PV field? {#why-new-field}
+
+PersistentVolumes in Kubernetes play a crucial role in providing storage resources to workloads running in the cluster.
+However, managing these PVs effectively can be challenging, especially when it comes
+to determining the last time a PV transitioned between different phases, such as
+`Pending`, `Bound` or `Released`.
+Administrators often need to know when a PV was last used or transitioned to certain
+phases; for instance, to implement retention policies, perform cleanup, or monitor storage health.
+
+In the past, Kubernetes users have faced data loss issues when using the `Delete` reclaim policy and had to resort to the safer `Retain` policy.
+When we planned the work to introduce the new `lastPhaseTransitionTime` field, we
+wanted to provide a more generic solution that can be used for various use cases,
+including manual cleanup based on the time a volume was last used or producing alerts based on phase transition times.
+
+## How lastPhaseTransitionTime helps
+
+Provided you've enabled the feature gate (see [How to use it](#how-to-use-it)), the new `.status.lastPhaseTransitionTime` field of a PersistentVolume (PV)
+is updated every time that PV transitions from one phase to another.
+Whether it's transitioning from `Pending` to `Bound`, `Bound` to `Released`, or any other phase transition, the `lastPhaseTransitionTime` will be recorded.
+For newly created PVs the phase will be set to `Pending` and the `lastPhaseTransitionTime` will be recorded as well.
+
+This feature allows cluster administrators to:
+
+1. Implement Retention Policies
+
+ With the `lastPhaseTransitionTime`, administrators can now track when a PV was last used or transitioned to the `Released` phase.
+ This information can be crucial for implementing retention policies to clean up resources that have been in the `Released` phase for a specific duration.
+ For example, it is now trivial to write a script or a policy that deletes all PVs that have been in the `Released` phase for a week.
+
+2. Monitor Storage Health
+
+ By analyzing the phase transition times of PVs, administrators can monitor storage health more effectively.
+ For example, they can identify PVs that have been in the `Pending` phase for an unusually long time, which may indicate underlying issues with the storage provisioner.
+
+## How to use it
+
+The `lastPhaseTransitionTime` field is alpha starting from Kubernetes v1.28, so it requires
+the `PersistentVolumeLastPhaseTransitionTime` feature gate to be enabled.
+
+If you want to test the feature whilst it's alpha, you need to enable this feature gate on the `kube-controller-manager` and the `kube-apiserver`.
+
+Use the `--feature-gates` command line argument:
+
+```shell
+--feature-gates="...,PersistentVolumeLastPhaseTransitionTime=true"
+```
+
+Keep in mind that enabling the feature does not have an immediate effect; the new field is populated whenever a PV is updated and transitions between phases.
+Administrators can then access the new field through the PV status, which can be retrieved using standard Kubernetes API calls or through Kubernetes client libraries.
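+
+For illustration, the relevant portion of a PV's status might then look like this (the timestamp is an example value):
+
+```yaml
+status:
+  phase: Bound
+  lastPhaseTransitionTime: "2023-10-23T11:43:57Z"   # example timestamp
+```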
+
+Here is an example of how to retrieve the `lastPhaseTransitionTime` for a specific PV using the `kubectl` command-line tool:
+
+```shell
+kubectl get pv -o jsonpath='{.status.lastPhaseTransitionTime}'
+```
+
+## Going forward
+
+This feature was initially introduced as an alpha feature, behind a feature gate that is disabled by default.
+During the alpha phase, we (Kubernetes SIG Storage) will collect feedback from the end user community and address any issues or improvements identified.
+
+Once sufficient feedback has been received, or no significant issues have been reported, the feature can move to beta.
+The beta phase will allow us to further validate the implementation and ensure its stability.
+
+At least two Kubernetes releases will happen between the release where this field graduates
+to beta and the release that graduates the field to general availability (GA). That means that
+the earliest release where this field could be generally available is Kubernetes 1.32,
+likely to be scheduled for early 2025.
+
+## Getting involved
+
+We always welcome new contributors so if you would like to get involved you can
+join our [Kubernetes Storage Special-Interest-Group](https://github.com/kubernetes/community/tree/master/sig-storage) (SIG).
+
+If you would like to share feedback, you can do so on our
+[public Slack channel](https://app.slack.com/client/T09NY5SBT/C09QZFCE5).
+If you're not already part of that Slack workspace, you can visit https://slack.k8s.io/ for an invitation.
+
+Special thanks to all the contributors that provided great reviews, shared valuable insight and helped implement this feature (alphabetical order):
+
+- Han Kang ([logicalhan](https://github.com/logicalhan))
+- Jan Šafránek ([jsafrane](https://github.com/jsafrane))
+- Jordan Liggitt ([liggitt](https://github.com/liggitt))
+- Kiki ([carlory](https://github.com/carlory))
+- Michelle Au ([msau42](https://github.com/msau42))
+- Tim Bannister ([sftim](https://github.com/sftim))
+- Wojciech Tyczynski ([wojtek-t](https://github.com/wojtek-t))
+- Xing Yang ([xing-yang](https://github.com/xing-yang))
From f66b8966d8fda50292c3be1e1ec8efc181957f15 Mon Sep 17 00:00:00 2001
From: Tamilselvan
Date: Mon, 9 Oct 2023 18:30:10 +0530
Subject: [PATCH 094/229] Fix for Label nfd.node.kubernetes.io/node-name
(#42997)
* rebase-1 Fix for Label nfd.node.kubernetes.io/node-name
Fix for Label nfd.node.kubernetes.io/node-name
Co-authored-by: Qiming Teng
* Update _index.md
---------
Co-authored-by: Qiming Teng
---
.../labels-annotations-taints/_index.md | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index bdb54e504db1c..8e6255daf7b26 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -1466,10 +1466,23 @@ This annotation records a comma-separated list of
managed by [Node Feature Discovery](https://kubernetes-sigs.github.io/node-feature-discovery/) (NFD).
NFD uses this for an internal mechanism. You should not edit this annotation yourself.
+### nfd.node.kubernetes.io/node-name
+
+Type: Label
+
+Example: `nfd.node.kubernetes.io/node-name: node-1`
+
+Used on: Nodes
+
+It specifies which node the NodeFeature object is targeting.
+Creators of NodeFeature objects must set this label, and
+consumers of the objects should use the label to filter
+features designated for a certain node.
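+
+A NodeFeature object carrying this label might look like the following sketch
+(this assumes the `nfd.k8s-sigs.io/v1alpha1` API served by recent NFD releases;
+the feature label shown is a placeholder):
+
+```yaml
+apiVersion: nfd.k8s-sigs.io/v1alpha1
+kind: NodeFeature
+metadata:
+  name: node-1-vendor-features
+  labels:
+    # Target this object at the node named "node-1"
+    nfd.node.kubernetes.io/node-name: node-1
+spec:
+  labels:
+    feature.example.com/my-feature: "true"
+```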
+
{{< note >}}
-These annotations only applies to nodes where NFD is running.
-To learn more about NFD and its components go to its official
-[documentation](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/).
+These Node Feature Discovery (NFD) labels or annotations only apply to
+the nodes where NFD is running. To learn more about NFD and
+its components go to its official [documentation](https://kubernetes-sigs.github.io/node-feature-discovery/stable/get-started/).
{{< /note >}}
### service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval (beta) {#service-beta-kubernetes-io-aws-load-balancer-access-log-emit-interval}
From a2586b9d24cad7df7fab5269fde3f476507e9840 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E6=BE=84=E6=BD=AD?=
Date: Mon, 9 Oct 2023 21:17:30 +0800
Subject: [PATCH 095/229] Update ingress-controllers.md (#43367)
* Update ingress-controllers.md
Add more additional controllers.
* Update ingress-controllers.md
---
.../en/docs/concepts/services-networking/ingress-controllers.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md
index 924604e5dd8a7..3b7c50378f63a 100644
--- a/content/en/docs/concepts/services-networking/ingress-controllers.md
+++ b/content/en/docs/concepts/services-networking/ingress-controllers.md
@@ -28,6 +28,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
{{% thirdparty-content %}}
* [AKS Application Gateway Ingress Controller](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview).
+* [Alibaba Cloud MSE Ingress](https://www.alibabacloud.com/help/en/mse/user-guide/overview-of-mse-ingress-gateways) is an ingress controller that configures the [Alibaba Cloud Native Gateway](https://www.alibabacloud.com/help/en/mse/product-overview/cloud-native-gateway-overview?spm=a2c63.p38356.0.0.20563003HJK9is), which is also the commercial version of [Higress](https://github.com/alibaba/higress).
* [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller.
* [Avi Kubernetes Operator](https://github.com/vmware/load-balancer-and-ingress-services-for-kubernetes) provides L4-L7 load-balancing using [VMware NSX Advanced Load Balancer](https://avinetworks.com/).
* [BFE Ingress Controller](https://github.com/bfenetworks/ingress-bfe) is a [BFE](https://www.bfe-networks.net)-based ingress controller.
@@ -46,6 +47,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet
which offers API gateway functionality.
* [HAProxy Ingress](https://haproxy-ingress.github.io/) is an ingress controller for
[HAProxy](https://www.haproxy.org/#desc).
+* [Higress](https://github.com/alibaba/higress) is an [Envoy](https://www.envoyproxy.io) based API gateway that can run as an ingress controller.
* The [HAProxy Ingress Controller for Kubernetes](https://github.com/haproxytech/kubernetes-ingress#readme)
is also an ingress controller for [HAProxy](https://www.haproxy.org/#desc).
* [Istio Ingress](https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/)
From c85b46f20386e946aaefc20891079c697c6eb402 Mon Sep 17 00:00:00 2001
From: Satyam Soni <94950988+satyampsoni@users.noreply.github.com>
Date: Mon, 9 Oct 2023 18:54:01 +0530
Subject: [PATCH 096/229] Document annotation
container.apparmor.security.beta.kubernetes.io/ (#43227)
* Document annotation container.apparmor.security.beta.kubernetes.io/
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Tim Bannister
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Qiming Teng
---------
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
Co-authored-by: Tim Bannister
Co-authored-by: Qiming Teng
---
.../labels-annotations-taints/_index.md | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index 8e6255daf7b26..d591f10852961 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -299,6 +299,23 @@ This annotation is part of the Kubernetes Resource Model (KRM) Functions Specifi
which is used by Kustomize and similar third-party tools.
For example, Kustomize removes objects with this annotation from its final build output.
+
+### container.apparmor.security.beta.kubernetes.io/* (beta) {#container-apparmor-security-beta-kubernetes-io}
+
+Type: Annotation
+
+Example: `container.apparmor.security.beta.kubernetes.io/my-container: my-custom-profile`
+
+Used on: Pods
+
+This annotation allows you to specify the AppArmor security profile for a container within a
+Kubernetes pod.
+To learn more, see the [AppArmor](/docs/tutorials/security/apparmor/) tutorial.
+The tutorial illustrates using AppArmor to restrict a container's abilities and access.
+
+The profile specified dictates the set of rules and restrictions that the containerized process must
+adhere to. This helps enforce security policies and isolation for your containers.
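+
+For illustration, a Pod using this annotation might look like the following sketch
+(the profile name and image are placeholders; `localhost/` refers to a profile already
+loaded on the node, and `runtime/default` or `unconfined` are also valid values):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hello-apparmor
+  annotations:
+    # Apply the node-local profile "my-custom-profile" to the container "my-container"
+    container.apparmor.security.beta.kubernetes.io/my-container: localhost/my-custom-profile
+spec:
+  containers:
+  - name: my-container
+    image: busybox:1.28
+    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
+```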
+
### internal.config.kubernetes.io/* (reserved prefix) {#internal.config.kubernetes.io-reserved-wildcard}
Type: Annotation
From fe706baae5c29a5952b573168847c9bc58e2e5ab Mon Sep 17 00:00:00 2001
From: Meenu Yadav <116630390+MeenuyD@users.noreply.github.com>
Date: Mon, 9 Oct 2023 19:07:45 +0530
Subject: [PATCH 097/229] improve: Update alt attributes for images on the
front page (#43088)
* improve: Update alt attributes for images on the front page
* case studies image alt attribute
* Update case-studies.html
---
layouts/shortcodes/blocks/feature.html | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/layouts/shortcodes/blocks/feature.html b/layouts/shortcodes/blocks/feature.html
index 39fab4f20bfc3..106e077f46075 100644
--- a/layouts/shortcodes/blocks/feature.html
+++ b/layouts/shortcodes/blocks/feature.html
@@ -8,7 +8,7 @@
{{ if $imageName }}{{- template "shortcodes-blocks_getimage" (dict "name" $imageName "ctx" . "target" "feature-image") -}}{{ end }}
{{- $image := $.Scratch.Get "feature-image" -}}
- {{ with $image }}{{ end }}
+ {{ with $image }}{{ end }}
{{ $.Inner }}
From edd262d10d87b9b80d7171ebeb31b11958d972f4 Mon Sep 17 00:00:00 2001
From: Prasanna Karunanayaka
Date: Mon, 9 Oct 2023 19:11:34 +0530
Subject: [PATCH 098/229] Update kubectl cheat sheet with command to identify
Nodes in Ready state (#43114)
* Update cheatsheet.md
Added - Check which nodes are ready with custom-columns
* Update content/en/docs/reference/kubectl/cheatsheet.md
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
---------
Co-authored-by: Ritika <52399571+Ritikaa96@users.noreply.github.com>
---
content/en/docs/reference/kubectl/cheatsheet.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index fa7b9c3eb2912..41a2d557159de 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -224,6 +224,9 @@ kubectl get pods --show-labels
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
+# Check which nodes are ready with custom-columns
+kubectl get node -o custom-columns='NODE_NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].status'
+
# Output decoded secrets without external tools
kubectl get secret my-secret -o go-template='{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'
From 2e51dc03f6a2388213ddb531e30dd262bbc8acd0 Mon Sep 17 00:00:00 2001
From: Ritika <52399571+Ritikaa96@users.noreply.github.com>
Date: Mon, 9 Oct 2023 19:14:03 +0530
Subject: [PATCH 099/229] Added, registered aws-lb-security-groups annotation
(#43009)
Signed-off-by: Ritikaa96
---
.../labels-annotations-taints/_index.md | 20 +++++++++++++++++++
1 file changed, 20 insertions(+)
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index d591f10852961..344e4c7504d24 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -1820,6 +1820,26 @@ uses this annotation.
See [annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
in the AWS load balancer controller documentation.
+### service.beta.kubernetes.io/aws-load-balancer-security-groups (deprecated) {#service-beta-kubernetes-io-aws-load-balancer-security-groups}
+
+Example: `service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f,sg-8725gr62r"`
+
+Used on: Service
+
+The AWS load balancer controller uses this annotation to specify a comma-separated list
+of security groups you want to attach to an AWS load balancer. Both the name and the ID of a
+security group are supported, where a name matches a `Name` tag, not the `groupName` attribute.
+
+When this annotation is added to a Service, the load-balancer controller attaches the security groups
+referenced by the annotation to the load balancer. If you omit this annotation, the AWS load balancer
+controller automatically creates a new security group and attaches it to the load balancer.
+
+{{< note >}}
+Kubernetes v1.27 and later do not directly set or read this annotation. However, the AWS
+load balancer controller (part of the Kubernetes project) does still use the
+`service.beta.kubernetes.io/aws-load-balancer-security-groups` annotation.
+{{< /note >}}
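+
+As an illustration, a Service using this annotation might look like this sketch
+(the security group IDs are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: example-service
+  annotations:
+    # Attach these pre-existing security groups instead of auto-creating one
+    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f,sg-8725gr62r"
+spec:
+  type: LoadBalancer
+  selector:
+    app: example
+  ports:
+  - port: 80
+    targetPort: 8080
+```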
+
### service.beta.kubernetes.io/load-balancer-source-ranges (deprecated) {#service-beta-kubernetes-io-load-balancer-source-ranges}
Example: `service.beta.kubernetes.io/load-balancer-source-ranges: "192.0.2.0/25"`
From 60d9f83f594d7d3b40ba3e7421a6925a2ff8fd85 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Mon, 9 Oct 2023 21:53:16 +0800
Subject: [PATCH 100/229] Update secret.md
---
content/en/docs/concepts/configuration/secret.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 071e0ed8361cc..ac74aaaf54e54 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -388,7 +388,7 @@ stringData:
```
{{< note >}}
-`stringData` for a Secret does not work well with server-side apply
+The `stringData` field for a Secret does not work well with server-side apply.
{{< /note >}}
The basic authentication Secret type is provided only for convenience.
@@ -550,7 +550,7 @@ stringData:
```
{{< note >}}
-`stringData` for a Secret does not work well with server-side apply
+The `stringData` field for a Secret does not work well with server-side apply.
{{< /note >}}
## Working with Secrets
From cf837603093dee27329b8e4a965e61d1d481ba2f Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Mon, 9 Oct 2023 21:53:43 +0800
Subject: [PATCH 101/229] Update managing-secret-using-config-file.md
---
.../configmap-secret/managing-secret-using-config-file.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md
index 17696ae16e6c4..2f3dd8dc0ae92 100644
--- a/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md
+++ b/content/en/docs/tasks/configmap-secret/managing-secret-using-config-file.md
@@ -110,7 +110,7 @@ stringData:
```
{{< note >}}
-`stringData` for a Secret does not work well with server-side apply
+The `stringData` field for a Secret does not work well with server-side apply.
{{< /note >}}
When you retrieve the Secret data, the command returns the encoded values,
@@ -157,7 +157,7 @@ stringData:
```
{{< note >}}
-`stringData` for a Secret does not work well with server-side apply
+The `stringData` field for a Secret does not work well with server-side apply.
{{< /note >}}
The `Secret` object is created as follows:
From 7349a0f693cb1385ce4412d6f0c0d39fc6121933 Mon Sep 17 00:00:00 2001
From: Saumya
Date: Mon, 9 Oct 2023 19:33:19 +0530
Subject: [PATCH 102/229] updated doc for key type and size (#42981)
* updated the changes
Signed-off-by: SaumyaBhushan
* removed the note section and added a plain text
Signed-off-by: SaumyaBhushan
* Update high-availability.md
---------
Signed-off-by: SaumyaBhushan
Co-authored-by: Qiming Teng
---
.../production-environment/tools/kubeadm/high-availability.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md
index c82a84d616823..98ba9069ea324 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/high-availability.md
@@ -218,8 +218,10 @@ option. Your cluster requirements may need a different configuration.
kubeadm certs certificate-key
```
+ The certificate key is a 32-byte AES key, encoded as a hexadecimal string.
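+
+ If you only need a key in that format (for example, to pass via `--certificate-key`),
+ a minimal alternative sketch is to generate 32 random bytes as hex yourself;
+ `kubeadm certs certificate-key` remains the supported command:
+
+ ```shell
+ openssl rand -hex 32
+ ```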
+
{{< note >}}
- The `kubeadm-certs` Secret and decryption key expire after two hours.
+ The `kubeadm-certs` Secret and the decryption key expire after two hours.
{{< /note >}}
{{< caution >}}
From 62640d9920f2e8c6b03824100f1646d3c7af23c2 Mon Sep 17 00:00:00 2001
From: Patrick Ohly
Date: Mon, 9 Oct 2023 19:36:55 +0200
Subject: [PATCH 103/229] system-logs: warn about log output changes
This is not a new policy: there never has been any promise of stability and
log output has changed frequently (whenever a log call got modified), including
changes that affect all log calls (when changing the underlying output
code). We just have not explicitly warned about this.
---
.../en/docs/concepts/cluster-administration/system-logs.md | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/content/en/docs/concepts/cluster-administration/system-logs.md b/content/en/docs/concepts/cluster-administration/system-logs.md
index d2a9d46bbd7e6..1feeecd3db7e5 100644
--- a/content/en/docs/concepts/cluster-administration/system-logs.md
+++ b/content/en/docs/concepts/cluster-administration/system-logs.md
@@ -17,6 +17,13 @@ scheduler decisions).
+{{< warning >}}
+In contrast to the command line flags described here, the *log
+output* itself does *not* fall under the Kubernetes API stability guarantees:
+individual log entries and their formatting may change from one release
+to the next!
+{{< /warning >}}
+
## Klog
klog is the Kubernetes logging library. [klog](https://github.com/kubernetes/klog)
From 6ad9d415457b3cf4b2ac89e12cfcef5be937226b Mon Sep 17 00:00:00 2001
From: Meenu Yadav <116630390+MeenuyD@users.noreply.github.com>
Date: Tue, 10 Oct 2023 01:05:26 +0530
Subject: [PATCH 104/229] Document annotation
service.kubernetes.io/topology-mode (#43390)
* fix: Annotation service.kubernetes.io/topology-mode not documented / registered
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Tim Bannister
* Update content/en/docs/reference/labels-annotations-taints/_index.md
Co-authored-by: Tim Bannister
---------
Co-authored-by: Tim Bannister
---
.../labels-annotations-taints/_index.md | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index 344e4c7504d24..f49673ad19c43 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -957,6 +957,22 @@ works in that release.
There are no other valid values for this annotation. If you don't want topology aware hints
for a Service, don't add this annotation.
+### service.kubernetes.io/topology-mode
+
+Type: Annotation
+
+Example: `service.kubernetes.io/topology-mode: Auto`
+
+Used on: Service
+
+This annotation provides a way to define how Services handle network topology;
+for example, you can configure a Service so that Kubernetes prefers keeping traffic between
+a client and server within a single topology zone.
+In some cases this can help reduce costs or improve network performance.
+
+See [Topology Aware Routing](/docs/concepts/services-networking/topology-aware-routing/)
+for more details.
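+
+For example, a Service opting in to topology aware routing might look like this
+(a sketch; the selector and port are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: example-service
+  annotations:
+    # Ask Kubernetes to keep traffic within a topology zone where it can
+    service.kubernetes.io/topology-mode: Auto
+spec:
+  selector:
+    app: example
+  ports:
+  - port: 80
+```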
+
### kubernetes.io/service-name {#kubernetesioservice-name}
Type: Label
From 16fdb345407fb1983391e51519254601f0ec3547 Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Tue, 10 Oct 2023 10:57:07 +0800
Subject: [PATCH 105/229] Update pull-image-private-registry.md
---
.../pull-image-private-registry.md | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
index 312cdb6248536..fc84b3426e43f 100644
--- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
+++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md
@@ -38,7 +38,8 @@ docker login
When prompted, enter your Docker ID, and then the credential you want to use (access token,
or the password for your Docker ID).
-The login process creates or updates a `config.json` file that holds an authorization token. Review [how Kubernetes interprets this file](/docs/concepts/containers/images#config-json).
+The login process creates or updates a `config.json` file that holds an authorization token.
+Review [how Kubernetes interprets this file](/docs/concepts/containers/images#config-json).
View the `config.json` file:
@@ -60,7 +61,8 @@ The output contains a section similar to this:
{{< note >}}
If you use a Docker credentials store, you won't see that `auth` entry but a `credsStore` entry with the name of the store as value.
-In that case, you can create a secret directly. See [Create a Secret by providing credentials on the command line](#create-a-secret-by-providing-credentials-on-the-command-line).
+In that case, you can create a secret directly.
+See [Create a Secret by providing credentials on the command line](#create-a-secret-by-providing-credentials-on-the-command-line).
{{< /note >}}
## Create a Secret based on existing credentials {#registry-secret-existing-credentials}
@@ -215,8 +217,10 @@ To use image pull secrets for a Pod (or a Deployment, or other object that
has a pod template that you are using), you need to make sure that the appropriate
Secret does exist in the right namespace. The namespace to use is the same
namespace where you defined the Pod.
+{{< /note >}}
Also, in case the Pod fails to start with the status `ImagePullBackOff`, view the Pod events:
+
```shell
kubectl describe pod private-reg
```
@@ -234,12 +238,6 @@ Events:
... FailedToRetrieveImagePullSecret ... Unable to retrieve some image pull secrets (); attempting to pull the image may not succeed.
```
-
-{{< /note >}}
-
-
-
-
## {{% heading "whatsnext" %}}
* Learn more about [Secrets](/docs/concepts/configuration/secret/)
@@ -247,4 +245,4 @@ Events:
* Learn more about [using a private registry](/docs/concepts/containers/images/#using-a-private-registry).
* Learn more about [adding image pull secrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account).
* See [kubectl create secret docker-registry](/docs/reference/generated/kubectl/kubectl-commands/#-em-secret-docker-registry-em-).
-* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod
\ No newline at end of file
+* See the `imagePullSecrets` field within the [container definitions](/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers) of a Pod
From 76756833c11d93c980405bf8d36662a7829bd513 Mon Sep 17 00:00:00 2001
From: Aritra Ghosh
Date: Mon, 9 Oct 2023 20:04:38 -0700
Subject: [PATCH 106/229] Update taint-and-toleration.md with types of taint
effects (#42315)
* Update taint-and-toleration.md
Revamped the taint effect to introduce the effects early in the article
* Update taint-and-toleration.md
* Update taint-and-toleration.md
* Update taint-and-toleration.md
* Update taint-and-toleration.md
* Update content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
Co-authored-by: Mauren Berti <698465+stormqueen1990@users.noreply.github.com>
* Update content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
Co-authored-by: Mauren Berti <698465+stormqueen1990@users.noreply.github.com>
* Update content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
Co-authored-by: Mauren Berti <698465+stormqueen1990@users.noreply.github.com>
* Update taint-and-toleration.md
* Update taint-and-toleration.md
* Update taint-and-toleration.md
* Update taint-and-toleration.md
* Apply suggestions from code review
Co-authored-by: Tim Bannister
* Update content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
Co-authored-by: Qiming Teng
* Apply suggestions from code review
Co-authored-by: Qiming Teng
---------
Co-authored-by: Mauren Berti <698465+stormqueen1990@users.noreply.github.com>
Co-authored-by: Tim Bannister
Co-authored-by: Qiming Teng
---
.../taint-and-toleration.md | 35 +++++++++++++------
1 file changed, 24 insertions(+), 11 deletions(-)
diff --git a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
index 1b44a1fab4ccb..c9afb795a11c2 100644
--- a/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
+++ b/content/en/docs/concepts/scheduling-eviction/taint-and-toleration.md
@@ -85,9 +85,27 @@ An empty `effect` matches all effects with key `key1`.
{{< /note >}}
The above example used `effect` of `NoSchedule`. Alternatively, you can use `effect` of `PreferNoSchedule`.
-This is a "preference" or "soft" version of `NoSchedule` -- the system will *try* to avoid placing a
-pod that does not tolerate the taint on the node, but it is not required. The third kind of `effect` is
-`NoExecute`, described later.
+
+
+The allowed values for the `effect` field are:
+
+`NoExecute`
+: This affects pods that are already running on the node as follows:
+ * Pods that do not tolerate the taint are evicted immediately
+ * Pods that tolerate the taint without specifying `tolerationSeconds` in
+ their toleration specification remain bound forever
+ * Pods that tolerate the taint with a specified `tolerationSeconds` remain
+ bound for the specified amount of time. After that time elapses, the node
+ lifecycle controller evicts the Pods from the node.
+
+`NoSchedule`
+: No new Pods will be scheduled on the tainted node unless they have a matching
+ toleration. Pods currently running on the node are **not** evicted.
+
+`PreferNoSchedule`
+: `PreferNoSchedule` is a "preference" or "soft" version of `NoSchedule`.
+ The control plane will *try* to avoid placing a Pod that does not tolerate
+ the taint on the node, but it is not guaranteed.
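+
+For example, a Pod that tolerates a `NoExecute` taint for a limited time might
+declare the following (a sketch; the key is a placeholder):
+
+```yaml
+tolerations:
+- key: "example-key"
+  operator: "Exists"
+  effect: "NoExecute"
+  # Stay bound for one hour after the taint is added, then get evicted
+  tolerationSeconds: 3600
+```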
You can put multiple taints on the same node and multiple tolerations on the same pod.
The way Kubernetes processes multiple taints and tolerations is like a filter: start
@@ -194,14 +212,7 @@ when there are node problems, which is described in the next section.
{{< feature-state for_k8s_version="v1.18" state="stable" >}}
-The `NoExecute` taint effect, mentioned above, affects pods that are already
-running on the node as follows
- * pods that do not tolerate the taint are evicted immediately
- * pods that tolerate the taint without specifying `tolerationSeconds` in
- their toleration specification remain bound forever
- * pods that tolerate the taint with a specified `tolerationSeconds` remain
- bound for the specified amount of time
The node controller automatically taints a Node when certain conditions
are true. The following taints are built in:
@@ -221,7 +232,9 @@ are true. The following taints are built in:
this node, the kubelet removes this taint.
In case a node is to be drained, the node controller or the kubelet adds relevant taints
-with `NoExecute` effect. If the fault condition returns to normal the kubelet or node
+with `NoExecute` effect. This effect is added by default for the
+`node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` taints.
+If the fault condition returns to normal, the kubelet or node
controller can remove the relevant taint(s).
In some cases when the node is unreachable, the API server is unable to communicate
From 17f080049278a691d417c44decddbdb297f744b5 Mon Sep 17 00:00:00 2001
From: Utkarsh Singh <96516301+utkarsh-singh1@users.noreply.github.com>
Date: Tue, 10 Oct 2023 08:56:10 +0530
Subject: [PATCH 107/229] Add diagram to Cluster Architecture concept page
(#42272)
* Added componenets-of-kubernetes svg in content/en/docs/concepts/architecture
Signed-off-by: utkarsh-singh1
* Updated cloud-controller-manager svg
Signed-off-by: utkarsh-singh1
* Updated image caption in /docs/concepts/architecture/_index.md
Signed-off-by: utkarsh-singh1
* Added kubernetes-cluster-architecture svg in content/en/docs/concepts/architecture
Signed-off-by: utkarsh-singh1
---------
Signed-off-by: utkarsh-singh1
---
content/en/docs/concepts/architecture/_index.md | 1 +
static/images/docs/kubernetes-cluster-architecture.svg | 4 ++++
2 files changed, 5 insertions(+)
create mode 100644 static/images/docs/kubernetes-cluster-architecture.svg
diff --git a/content/en/docs/concepts/architecture/_index.md b/content/en/docs/concepts/architecture/_index.md
index 61fb48e7142b7..7c9a45c71e294 100644
--- a/content/en/docs/concepts/architecture/_index.md
+++ b/content/en/docs/concepts/architecture/_index.md
@@ -5,3 +5,4 @@ description: >
The architectural concepts behind Kubernetes.
---
+{{< figure src="/images/docs/kubernetes-cluster-architecture.svg" alt="Components of Kubernetes" caption="Kubernetes cluster architecture" class="diagram-large" >}}
diff --git a/static/images/docs/kubernetes-cluster-architecture.svg b/static/images/docs/kubernetes-cluster-architecture.svg
new file mode 100644
index 0000000000000..841c8cf277f72
--- /dev/null
+++ b/static/images/docs/kubernetes-cluster-architecture.svg
@@ -0,0 +1,4 @@
+
+
+
+
\ No newline at end of file
From 69c80491cf2dceb45edf72c7fcc216d4ae2ac7a9 Mon Sep 17 00:00:00 2001
From: Mohammed Affan <72978371+Affan-7@users.noreply.github.com>
Date: Tue, 10 Oct 2023 09:00:22 +0530
Subject: [PATCH 108/229] Fix explanation in service concept (#42198)
* Fix explanation in service concept
* Revert "Fix explanation in service concept"
This reverts commit 221cd695971f0af25ee885c0250ac3576b6f2a67.
* Update service.md
---
content/en/docs/concepts/services-networking/service.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 1cb64bb99e3ba..b9d809f8bfdca 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -437,8 +437,10 @@ The available `type` values and their behaviors are:
No proxying of any kind is set up.
The `type` field in the Service API is designed as nested functionality - each level
-adds to the previous. This is not strictly required on all cloud providers, but
-the Kubernetes API design for Service requires it anyway.
+adds to the previous. However, there is an exception to this nested design: you can
+define a `LoadBalancer` Service by
+[disabling the load balancer `NodePort` allocation](/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation).
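+
+As an illustration, such a Service might look like this (a sketch, not tied to
+any particular cloud provider):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: example-lb
+spec:
+  type: LoadBalancer
+  # Skip the NodePort layer that a LoadBalancer Service normally builds on
+  allocateLoadBalancerNodePorts: false
+  selector:
+    app: example
+  ports:
+  - port: 80
+    targetPort: 8080
+```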
+
### `type: ClusterIP` {#type-clusterip}
From d50d91f4920eb82de59ffcc46078ee876a7bc40b Mon Sep 17 00:00:00 2001
From: Qiming Teng
Date: Tue, 10 Oct 2023 14:20:18 +0800
Subject: [PATCH 109/229] Update network-plugins.md
---
.../compute-storage-net/network-plugins.md | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
index a58f41abde860..0487ca61ca29d 100644
--- a/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
+++ b/content/en/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins.md
@@ -172,9 +172,7 @@ metadata:
## {{% heading "whatsnext" %}}
-* Learn about [Network Policies](/docs/concepts/services-networking/network-policies/) using network
- plugins
-* Learn about [Cluster Networking](/docs/concepts/cluster-administration/networking/)
- with network plugins
-* Learn about the [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)
+- Learn more about [Cluster Networking](/docs/concepts/cluster-administration/networking/)
+- Learn more about [Network Policies](/docs/concepts/services-networking/network-policies/)
+- Learn about the [Troubleshooting CNI plugin-related errors](/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/)
From 5f4fa222599836232ba9091bc1cae400a77d1f74 Mon Sep 17 00:00:00 2001
From: Marlow Weston
Date: Tue, 10 Oct 2023 01:54:22 -0500
Subject: [PATCH 110/229] Small mistake between sections of the document
(#42089)
* Small mistake between sections of the document
The note for --kube-reserved-cgroup should match formatting for --system-reserved-cgroup. This changes helps those match.
* Update reserve-compute-resources.md
---------
Co-authored-by: Qiming Teng
---
.../docs/tasks/administer-cluster/reserve-compute-resources.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
index 66c0fc8f2c655..fedc88f2b2757 100644
--- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
+++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
@@ -96,7 +96,7 @@ system daemon should ideally run within its own child control group. Refer to
for more details on recommended control group hierarchy.
Note that Kubelet **does not** create `--kube-reserved-cgroup` if it doesn't
-exist. Kubelet will fail if an invalid cgroup is specified. With `systemd`
+exist. The kubelet will fail to start if an invalid cgroup is specified. With `systemd`
cgroup driver, you should follow a specific pattern for the name of the cgroup you
define: the name should be the value you set for `--kube-reserved-cgroup`,
with `.slice` appended.
From 0f2189338c9f21a703e05992db028ccf95f80d3d Mon Sep 17 00:00:00 2001
From: harsh
Date: Tue, 10 Oct 2023 12:41:04 +0530
Subject: [PATCH 111/229] Fixed the reference for 'secret permissions with
default mode'
---
content/en/docs/concepts/storage/windows-storage.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/storage/windows-storage.md b/content/en/docs/concepts/storage/windows-storage.md
index 1aa3941a1f208..6bfc117029d8e 100644
--- a/content/en/docs/concepts/storage/windows-storage.md
+++ b/content/en/docs/concepts/storage/windows-storage.md
@@ -41,7 +41,7 @@ As a result, the following storage functionality is not supported on Windows nod
* Block device mapping
* Memory as the storage medium (for example, `emptyDir.medium` set to `Memory`)
* File system features like uid/gid; per-user Linux filesystem permissions
-* Setting [secret permissions with DefaultMode](/docs/concepts/configuration/secret/#secret-files-permissions) (due to UID/GID dependency)
+* Setting [secret permissions with DefaultMode](/docs/tasks/inject-data-application/distribute-credentials-secure/#set-posix-permissions-for-secret-keys) (due to UID/GID dependency)
* NFS based storage/volume support
* Expanding the mounted volume (resizefs)
From f7df84d3e9f11e1e9c6eaf155143f7fac4d5af32 Mon Sep 17 00:00:00 2001
From: Kundan Kumar
Date: Mon, 17 Jul 2023 19:13:43 +0530
Subject: [PATCH 112/229] documented annotation leader
---
.../labels-annotations-taints/_index.md | 21 +++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md
index 0e650582e4d4e..dd8222e7c2f2e 100644
--- a/content/en/docs/reference/labels-annotations-taints/_index.md
+++ b/content/en/docs/reference/labels-annotations-taints/_index.md
@@ -1086,6 +1086,27 @@ has been truncated to 1000.
If the number of backend endpoints falls below 1000, the control plane removes this annotation.
+### control-plane.alpha.kubernetes.io/leader (deprecated) {#control-plane-alpha-kubernetes-io-leader}
+
+Type: Annotation
+
+Example: `control-plane.alpha.kubernetes.io/leader={"holderIdentity":"controller-0","leaseDurationSeconds":15,"acquireTime":"2023-01-19T13:12:57Z","renewTime":"2023-01-19T13:13:54Z","leaderTransitions":1}`
+
+Used on: Endpoints
+
+The {{< glossary_tooltip text="control plane" term_id="control-plane" >}} previously set this annotation on
+an [Endpoints](/docs/concepts/services-networking/service/#endpoints) object. The annotation provided
+the following details:
+
+- The identity of the current leader.
+- The time when the current leadership was acquired.
+- The duration of the lease (of the leadership), in seconds.
+- The time when the current lease (the current leadership) should be renewed.
+- The number of leadership transitions that happened in the past.
+
+Kubernetes now uses [Leases](/docs/concepts/architecture/leases/) to
+manage leader assignment for the Kubernetes control plane.
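+
+On clusters old enough to still use this mechanism, you could inspect the record
+with a jsonpath query such as the following (illustrative only):
+
+```shell
+kubectl -n kube-system get endpoints kube-scheduler \
+  -o jsonpath="{.metadata.annotations['control-plane\.alpha\.kubernetes\.io/leader']}"
+```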
+
### batch.kubernetes.io/job-tracking (deprecated) {#batch-kubernetes-io-job-tracking}
Type: Annotation
From 972a46738e7e9c4b125c0c23888c22af6b940474 Mon Sep 17 00:00:00 2001
From: Hasan Rashid
Date: Tue, 10 Oct 2023 03:19:02 -0400
Subject: [PATCH 113/229] Remove redundant steps from instruction (#41953)
* Remove redundant steps from instruction
* Update content/en/docs/tutorials/services/connect-applications-service.md
Co-authored-by: Michael
* Update content/en/docs/tutorials/services/connect-applications-service.md
Co-authored-by: Michael
* Update content/en/docs/tutorials/services/connect-applications-service.md
Co-authored-by: Michael
* Update content/en/docs/tutorials/services/connect-applications-service.md
Co-authored-by: Michael
* Update content/en/docs/tutorials/services/connect-applications-service.md
Co-authored-by: Tim Bannister
* Update content/en/docs/tutorials/services/connect-applications-service.md
Co-authored-by: Michael
* Change line 313, and add missing line
* Fix missing command end on line 313
* Update connect-applications-service.md
---------
Co-authored-by: Michael
Co-authored-by: Tim Bannister
Co-authored-by: Qiming Teng
---
.../services/connect-applications-service.md | 51 +++++++++++++++++--
1 file changed, 48 insertions(+), 3 deletions(-)
diff --git a/content/en/docs/tutorials/services/connect-applications-service.md b/content/en/docs/tutorials/services/connect-applications-service.md
index 2c7cdf94f8558..771149566b422 100644
--- a/content/en/docs/tutorials/services/connect-applications-service.md
+++ b/content/en/docs/tutorials/services/connect-applications-service.md
@@ -292,6 +292,10 @@ And also the configmap:
```shell
kubectl create configmap nginxconfigmap --from-file=default.conf
```
+
+You can find an example for `default.conf` in
+[the Kubernetes examples project repo](https://github.com/kubernetes/examples/tree/bc9ca4ca32bb28762ef216386934bef20f1f9930/staging/https-nginx/).
+
```
configmap/nginxconfigmap created
```
@@ -302,6 +306,49 @@ kubectl get configmaps
NAME DATA AGE
nginxconfigmap 1 114s
```
+
+You can view the details of the `nginxconfigmap` ConfigMap using the following command:
+
+```shell
+kubectl describe configmap nginxconfigmap
+```
+
+The output is similar to:
+
+```console
+Name: nginxconfigmap
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+default.conf:
+----
+server {
+ listen 80 default_server;
+ listen [::]:80 default_server ipv6only=on;
+
+ listen 443 ssl;
+
+ root /usr/share/nginx/html;
+ index index.html;
+
+ server_name localhost;
+ ssl_certificate /etc/nginx/ssl/tls.crt;
+ ssl_certificate_key /etc/nginx/ssl/tls.key;
+
+ location / {
+ try_files $uri $uri/ =404;
+ }
+}
+
+BinaryData
+====
+
+Events:  <none>
+```
+
Following are the manual steps to follow in case you run into problems running make (on windows for example):
```shell
@@ -311,7 +358,7 @@ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -ou
cat /d/tmp/nginx.crt | base64
cat /d/tmp/nginx.key | base64
```
-
+
Use the output from the previous commands to create a yaml file as follows.
The base64 encoded value should all be on a single line.
@@ -476,5 +523,3 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el
* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/)
* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
-
-
From f39b1e1398e67974f74319876b1ca16a97d451d1 Mon Sep 17 00:00:00 2001
From: xin gu <418294249@qq.com>
Date: Tue, 10 Oct 2023 18:39:08 +0800
Subject: [PATCH 114/229] sync list-all-running-container-images
reserve-compute-resources managing-tls-in-a-cluster
---
.../list-all-running-container-images.md | 8 ++++----
.../administer-cluster/reserve-compute-resources.md | 4 ++--
.../zh-cn/docs/tasks/tls/managing-tls-in-a-cluster.md | 10 +++++-----
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/content/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images.md
index 571b51b5dda7c..e3882cab542ce 100644
--- a/content/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images.md
+++ b/content/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images.md
@@ -36,7 +36,7 @@ of Containers for each.
- Fetch all Pods in all namespaces using `kubectl get pods --all-namespaces`
- Format the output to include only the list of Container image names
- using `-o jsonpath={.items[*].spec.containers[*].image}`. This will recursively parse out the
+ using `-o jsonpath={.items[*].spec['initContainers', 'containers'][*].image}`. This will recursively parse out the
`image` field from the returned json.
- See the [jsonpath reference](/docs/reference/kubectl/jsonpath/)
for further information on how to use jsonpath.
@@ -48,7 +48,7 @@ of Containers for each.
## 列出所有命名空间下的所有容器镜像 {#list-all-container-images-in-all-namespaces}
- 使用 `kubectl get pods --all-namespaces` 获取所有命名空间下的所有 Pod
-- 使用 `-o jsonpath={.items[*].spec.containers[*].image}` 来格式化输出,以仅包含容器镜像名称。
+- 使用 `-o jsonpath={.items[*].spec['initContainers', 'containers'][*].image}` 来格式化输出,以仅包含容器镜像名称。
这将以递归方式从返回的 json 中解析出 `image` 字段。
- 参阅 [jsonpath 说明](/zh-cn/docs/reference/kubectl/jsonpath/)
获取更多关于如何使用 jsonpath 的信息。
@@ -69,14 +69,14 @@ The jsonpath is interpreted as follows:
- `.items[*]`: for each returned value
- `.spec`: get the spec
-- `.containers[*]`: for each container
+- `['initContainers', 'containers'][*]`: for each container
- `.image`: get the image
-->
jsonpath 解释如下:
- `.items[*]`: 对于每个返回的值
- `.spec`: 获取 spec
-- `.containers[*]`: 对于每个容器
+- `['initContainers', 'containers'][*]`: 对于每个容器
- `.image`: 获取镜像
请注意,如果 `--kube-reserved-cgroup` 不存在,Kubelet 将 **不会** 创建它。
-如果指定了一个无效的 cgroup,Kubelet 将会失败。就 `systemd` cgroup 驱动而言,
+如果指定了一个无效的 cgroup,Kubelet 将会无法启动。就 `systemd` cgroup 驱动而言,
你要为所定义的 cgroup 设置名称时要遵循特定的模式:
所设置的名字应该是你为 `--kube-reserved-cgroup` 所给的参数值加上 `.slice` 后缀。
diff --git a/content/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster.md b/content/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster.md
index 8774c6152d628..dcf79de54c92c 100644
--- a/content/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster.md
+++ b/content/zh-cn/docs/tasks/tls/managing-tls-in-a-cluster.md
@@ -48,14 +48,14 @@ You need the `cfssl` tool. You can download `cfssl` from
Some steps in this page use the `jq` tool. If you don't have `jq`, you can
install it via your operating system's software sources, or fetch it from
-[https://stedolan.github.io/jq/](https://stedolan.github.io/jq/).
+[https://jqlang.github.io/jq/](https://jqlang.github.io/jq/).
-->
你需要 `cfssl` 工具。
你可以从 [https://github.com/cloudflare/cfssl/releases](https://github.com/cloudflare/cfssl/releases)
下载 `cfssl`。
本文中某些步骤使用 `jq` 工具。如果你没有 `jq`,你可以通过操作系统的软件源安装,
-或者从 [https://stedolan.github.io/jq/](https://stedolan.github.io/jq/) 获取。
+或者从 [https://jqlang.github.io/jq/](https://jqlang.github.io/jq/) 获取。
@@ -346,7 +346,7 @@ This produces a certificate authority key file (`ca-key.pem`) and certificate (`
### 颁发证书
-{{% code file="tls/server-signing-config.json" %}}
+{{% code_sample file="tls/server-signing-config.json" %}}
-这使用命令行工具 [`jq`](https://stedolan.github.io/jq/)
+这使用命令行工具 [`jq`](https://jqlang.github.io/jq/)
在 `.status.certificate` 字段中填充 base64 编码的内容。
如果你没有 `jq` 工具,你还可以将 JSON 输出保存到文件中,手动填充此字段,然后上传结果文件。
{{< /note >}}
From 4ea5c843e05470b6157ec4a938d11eebca4eda9a Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Tue, 10 Oct 2023 16:35:49 +0800
Subject: [PATCH 115/229] [zh] Sync connect-applications-service.md
---
.../services/connect-applications-service.md | 102 +++++++++++++++---
1 file changed, 90 insertions(+), 12 deletions(-)
diff --git a/content/zh-cn/docs/tutorials/services/connect-applications-service.md b/content/zh-cn/docs/tutorials/services/connect-applications-service.md
index 1be3682f11cc0..060c6088b3ed5 100644
--- a/content/zh-cn/docs/tutorials/services/connect-applications-service.md
+++ b/content/zh-cn/docs/tutorials/services/connect-applications-service.md
@@ -55,7 +55,7 @@ Create an nginx Pod, and note that it has a container port specification:
我们在之前的示例中已经做过,然而让我们以网络连接的视角再重做一遍。
创建一个 Nginx Pod,注意其中包含一个容器端口的规约:
-{{< code file="service/networking/run-my-nginx.yaml" >}}
+{{% code_sample file="service/networking/run-my-nginx.yaml" %}}
你应该能够通过 ssh 登录到集群中的任何一个节点上,并使用诸如 `curl` 之类的工具向这两个 IP 地址发出查询请求。
需要注意的是,容器 **不会** 使用该节点上的 80 端口,也不会使用任何特定的 NAT 规则去路由流量到 Pod 上。
-这意味着可以在同一个节点上运行多个 Nginx Pod,使用相同的 `containerPort`,并且可以从集群中任何其他的
-Pod 或节点上使用 IP 的方式访问到它们。
+这意味着你可以使用相同的 `containerPort` 在同一个节点上运行多个 Nginx Pod,
+并且可以从集群中任何其他的 Pod 或节点上使用为 Pod 分配的 IP 地址访问到它们。
如果你想的话,你依然可以将宿主节点的某个端口的流量转发到 Pod 中,但是出于网络模型的原因,你不必这么做。
-如果对此好奇,请参考 [Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)。
+如果对此好奇,请参考
+[Kubernetes 网络模型](/zh-cn/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)。
这等价于使用 `kubectl create -f` 命令及如下的 yaml 文件创建:
-{{< code file="service/networking/nginx-svc.yaml" >}}
+{{% code_sample file="service/networking/nginx-svc.yaml" %}}
+你可以在
+[Kubernetes examples 项目代码仓库](https://github.com/kubernetes/examples/tree/bc9ca4ca32bb28762ef216386934bef20f1f9930/staging/https-nginx/)中找到
+`default.conf` 示例。
+
```
configmap/nginxconfigmap created
```
+
```shell
kubectl get configmaps
```
+
```
NAME DATA AGE
nginxconfigmap 1 114s
```
+
+你可以使用以下命令来查看 `nginxconfigmap` ConfigMap 的细节:
+
+```shell
+kubectl describe configmap nginxconfigmap
+```
+
+
+输出类似于:
+
+```console
+Name: nginxconfigmap
+Namespace: default
+Labels:       <none>
+Annotations:  <none>
+
+Data
+====
+default.conf:
+----
+server {
+ listen 80 default_server;
+ listen [::]:80 default_server ipv6only=on;
+
+ listen 443 ssl;
+
+ root /usr/share/nginx/html;
+ index index.html;
+
+ server_name localhost;
+ ssl_certificate /etc/nginx/ssl/tls.crt;
+ ssl_certificate_key /etc/nginx/ssl/tls.key;
+
+ location / {
+ try_files $uri $uri/ =404;
+ }
+}
+
+BinaryData
+====
+
+Events:  <none>
+```
+
@@ -493,6 +566,7 @@ Now create the secrets using the file:
kubectl apply -f nginxsecrets.yaml
kubectl get secrets
```
+
```
NAME TYPE DATA AGE
nginxsecret kubernetes.io/tls 2 1m
@@ -504,7 +578,7 @@ in the secret, and the Service, to expose both ports (80 and 443):
-->
现在修改 Nginx 副本以启动一个使用 Secret 中的证书的 HTTPS 服务器以及相应的用于暴露其端口(80 和 443)的 Service:
-{{< code file="service/networking/nginx-secure-app.yaml" >}}
+{{% code_sample file="service/networking/nginx-secure-app.yaml" %}}
@@ -40,7 +40,7 @@ Because Secrets can be created independently of the Pods that use them, there
is less risk of the Secret (and its data) being exposed during the workflow of
creating, viewing, and editing Pods. Kubernetes, and applications that run in
your cluster, can also take additional precautions with Secrets, such as avoiding
-writing secret data to nonvolatile storage.
+writing sensitive data to nonvolatile storage.
Secrets are similar to {{< glossary_tooltip text="ConfigMaps" term_id="configmap" >}}
but are specifically intended to hold confidential data.
@@ -48,7 +48,7 @@ but are specifically intended to hold confidential data.
由于创建 Secret 可以独立于使用它们的 Pod,
因此在创建、查看和编辑 Pod 的工作流程中暴露 Secret(及其数据)的风险较小。
Kubernetes 和在集群中运行的应用程序也可以对 Secret 采取额外的预防措施,
-例如避免将机密数据写入非易失性存储。
+例如避免将敏感数据写入非易失性存储。
Secret 类似于 {{< glossary_tooltip text="ConfigMap" term_id="configmap" >}}
但专门用于保存机密数据。
@@ -124,7 +124,7 @@ Kubernetes 控制面也使用 Secret;
### Use case: dotfiles in a secret volume
You can make your data "hidden" by defining a key that begins with a dot.
-This key represents a dotfile or "hidden" file. For example, when the following secret
+This key represents a dotfile or "hidden" file. For example, when the following Secret
is mounted into a volume, `secret-volume`, the volume will contain a single file,
called `.secret-file`, and the `dotfile-test-container` will have this file
present at the path `/etc/secret-volume/.secret-file`.
@@ -146,35 +146,7 @@ you must use `ls -la` to see them when listing directory contents.
列举目录内容时你必须使用 `ls -la` 才能看到它们。
{{< /note >}}
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: dotfile-secret
-data:
- .secret-file: dmFsdWUtMg0KDQo=
----
-apiVersion: v1
-kind: Pod
-metadata:
- name: secret-dotfiles-pod
-spec:
- volumes:
- - name: secret-volume
- secret:
- secretName: dotfile-secret
- containers:
- - name: dotfile-test-container
- image: registry.k8s.io/busybox
- command:
- - ls
- - "-l"
- - "/etc/secret-volume"
- volumeMounts:
- - name: secret-volume
- readOnly: true
- mountPath: "/etc/secret-volume"
-```
+{{% code language="yaml" file="secret/dotfile-secret.yaml" %}}
- 如果你的云原生组件需要执行身份认证来访问你所知道的、在同一 Kubernetes 集群中运行的另一个应用,
@@ -458,32 +430,7 @@ Secret 的其它字段,例如 `kubernetes.io/service-account.uid` 注解和
下面的配置实例声明了一个 ServiceAccount 令牌 Secret:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-sa-sample
- annotations:
- kubernetes.io/service-account.name: "sa-name"
-type: kubernetes.io/service-account-token
-data:
- # 你可以像 Opaque Secret 一样在这里添加额外的键/值偶对
- extra: YmFyCg==
-```
+{{% code language="yaml" file="secret/serviceaccount-token-secret.yaml" %}}
下面是一个 `kubernetes.io/dockercfg` 类型 Secret 的示例:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-dockercfg
-type: kubernetes.io/dockercfg
-data:
- .dockercfg: |
- ""
-```
+{{% code language="yaml" file="secret/dockercfg-secret.yaml" %}}
{{< note >}}
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-basic-auth
-type: kubernetes.io/basic-auth
-stringData:
- username: admin # kubernetes.io/basic-auth 类型的必需字段
- password: t0p-Secret # kubernetes.io/basic-auth 类型的必需字段
-```
+Secret 的 `stringData` 字段不能很好地与服务器端应用配合使用。
+{{< /note >}}
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-ssh-auth
-type: kubernetes.io/ssh-auth
-data:
- # 此例中的实际数据被截断
- ssh-privatekey: |
- MIIEpQIBAAKCAQEAulqb/Y ...
-```
+{{% code language="yaml" file="secret/ssh-auth-secret.yaml" %}}
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: secret-tls
-type: kubernetes.io/tls
-stringData:
- # 此例中的数据被截断
- tls.crt: |
- --------BEGIN CERTIFICATE-----
- MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
- tls.key: |
- -----BEGIN RSA PRIVATE KEY-----
- MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
-```
+{{% code language="yaml" file="secret/tls-auth-secret.yaml" %}}
你也可以在 Secret 的 `stringData` 字段中提供值,而无需对其进行 base64 编码:
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
- # 注意 Secret 的命名方式
- name: bootstrap-token-5emitj
- # 启动引导令牌 Secret 通常位于 kube-system 名字空间
- namespace: kube-system
-type: bootstrap.kubernetes.io/token
-stringData:
- auth-extra-groups: "system:bootstrappers:kubeadm:default-node-token"
- expiration: "2020-09-13T04:39:10Z"
- # 此令牌 ID 被用于生成 Secret 名称
- token-id: "5emitj"
- token-secret: "kq4gihvszzgn1p0r"
- # 此令牌还可用于 authentication (身份认证)
- usage-bootstrap-authentication: "true"
- # 且可用于 signing (证书签名)
- usage-bootstrap-signing: "true"
-```
+{{% code language="yaml" file="secret/bootstrap-token-secret-literal.yaml" %}}
+
+{{< note >}}
+
+Secret 的 `stringData` 字段不能很好地与服务器端应用配合使用。
+{{< /note >}}
@@ -1127,24 +953,7 @@ Kubernetes ignores it.
当你在 Pod 中引用 Secret 时,你可以将该 Secret 标记为**可选**,就像下面例子中所展示的那样。
如果可选的 Secret 不存在,Kubernetes 将忽略它。
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: redis
- volumeMounts:
- mountPath: "/etc/foo"
- readOnly: true
- volumes:
- - name: foo
- secret:
- secretName: mysecret
- optional: true
-```
+{{% code language="yaml" file="secret/optional-secret.yaml" %}}
### 容器镜像拉取 Secret {#using-imagepullsecrets}
@@ -1311,8 +1120,8 @@ Secret 是在 Pod 层面来配置的。
+**出现在:**
+
+- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
+
+- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration)
+
+
From 7d706d992180841973c066d0561dec6074a18137 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marko=20Mudrini=C4=87?=
Date: Tue, 10 Oct 2023 14:36:26 +0200
Subject: [PATCH 124/229] Remove instructions for legacy package repos
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: Marko Mudrinić
---
.../tools/kubeadm/install-kubeadm.md | 102 ++----------------
.../kubeadm/change-package-repository.md | 29 ++---
.../kubeadm/kubeadm-upgrade.md | 6 +-
.../kubeadm/upgrading-linux-nodes.md | 6 +-
4 files changed, 30 insertions(+), 113 deletions(-)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 8125e3857f96c..eab7bd10410bc 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -149,29 +149,16 @@ For more information on version skews, see:
* Kubeadm-specific [version skew policy](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#version-skew-policy)
{{< note >}}
-Kubernetes has two different package repositories starting from August 2023.
-The Google-hosted repository is deprecated and it's being replaced with the
-Kubernetes (community-owned) package repositories. The Kubernetes project strongly
-recommends using the Kubernetes community-owned package repositories, because the
-project plans to stop publishing packages to the Google-hosted repository in the future.
-
-There are some important considerations for the Kubernetes package repositories:
-
-- The Kubernetes package repositories contain packages beginning with those
- Kubernetes versions that were still under support when the community took
- over the package builds. This means that anything before v1.24.0 will only be
- available in the Google-hosted repository.
-- There's a dedicated package repository for each Kubernetes minor version.
- When upgrading to a different minor release, you must bear in mind that
- the package repository details also change.
-
+Kubernetes has [new package repositories hosted at `pkgs.k8s.io`](/blog/2023/08/15/pkgs-k8s-io-introduction/)
+starting from August 2023. The legacy package repositories (`apt.kubernetes.io` and `yum.kubernetes.io`)
+have been frozen starting from September 13, 2023. Please read our
+[deprecation and freezing announcement](/blog/2023/08/31/legacy-package-repository-deprecation/)
+for more details.
{{< /note >}}
{{< tabs name="k8s_install" >}}
{{% tab name="Debian-based distributions" %}}
-### Kubernetes package repositories {#dpkg-k8s-package-repo}
-
These instructions are for Kubernetes {{< skew currentVersion >}}.
1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
@@ -208,49 +195,13 @@ In releases older than Debian 12 and Ubuntu 22.04, `/etc/apt/keyrings` does not
you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings`
{{< /note >}}
-### Google-hosted package repository (deprecated) {#dpkg-google-package-repo}
-
-These instructions are for Kubernetes {{< skew currentVersion >}}.
-
-1. Update the `apt` package index and install packages needed to use the Kubernetes `apt` repository:
-
- ```shell
- sudo apt-get update
- # apt-transport-https may be a dummy package; if so, you can skip that package
- sudo apt-get install -y apt-transport-https ca-certificates curl
- ```
-
-2. Download the Google Cloud public signing key:
-
- ```shell
- curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
- ```
-
-3. Add the Google-hosted `apt` repository:
-
- ```shell
- # This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
- echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
- ```
-
-4. Update the `apt` package index, install kubelet, kubeadm and kubectl, and pin their version:
-
- ```shell
- sudo apt-get update
- sudo apt-get install -y kubelet kubeadm kubectl
- sudo apt-mark hold kubelet kubeadm kubectl
- ```
-
-{{< note >}}
-In releases older than Debian 12 and Ubuntu 22.04, `/etc/apt/keyrings` does not exist by default;
-you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings`
-{{< /note >}}
-
{{% /tab %}}
{{% tab name="Red Hat-based distributions" %}}
1. Set SELinux to `permissive` mode:
+These instructions are for Kubernetes {{< skew currentVersion >}}.
+
```shell
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
@@ -266,10 +217,6 @@ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
settings that are not supported by kubeadm.
{{< /caution >}}
-### Kubernetes package repositories {#rpm-k8s-package-repo}
-
-These instructions are for Kubernetes {{< skew currentVersion >}}.
-
2. Add the Kubernetes `yum` repository. The `exclude` parameter in the
repository definition ensures that the packages related to Kubernetes are
not upgraded upon running `yum update` as there's a special procedure that
@@ -295,41 +242,6 @@ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
```
-### Google-hosted package repository (deprecated) {#rpm-google-package-repo}
-
-These instructions are for Kubernetes {{< skew currentVersion >}}.
-
-2. Add the Google-hosted `yum` repository. The `exclude` parameter in the
- repository definition ensures that the packages related to Kubernetes are
- not upgraded upon running `yum update` as there's a special procedure that
- must be followed for upgrading Kubernetes.
-
-```shell
-# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
-cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
-[kubernetes]
-name=Kubernetes
-baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
-enabled=1
-gpgcheck=1
-gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
-exclude=kubelet kubeadm kubectl
-EOF
-```
-
-3. Install kubelet, kubeadm and kubectl, and enable kubelet to ensure it's automatically started on startup:
-
-```shell
-sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
-sudo systemctl enable --now kubelet
-```
-
-{{< note >}}
-If the `baseurl` fails because your RPM-based distribution cannot interpret `$basearch`, replace `\$basearch` with your computer's architecture.
-Type `uname -m` to see that value.
-For example, the `baseurl` URL for `x86_64` could be: `https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64`.
-{{< /note >}}
-
{{% /tab %}}
{{% tab name="Without a package manager" %}}
Install CNI plugins (required for most pod network):
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
index db633d15ee253..07361c6a077ec 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/change-package-repository.md
@@ -6,21 +6,23 @@ weight: 120
-This page explains how to switch from one Kubernetes package repository to another
-when upgrading Kubernetes minor releases. Unlike deprecated Google-hosted
-repositories, the Kubernetes package repositories are structured in a way that
-there's a dedicated package repository for each Kubernetes minor version.
+This page explains how to enable a package repository for a new Kubernetes minor release
+for users of the community-owned package repositories hosted at `pkgs.k8s.io`.
+Unlike the legacy package repositories, the community-owned package repositories are
+structured in a way that there's a dedicated package repository for each Kubernetes
+minor version.
## {{% heading "prerequisites" %}}
-This document assumes that you're already using the Kubernetes community-owned
-package repositories. If that's not the case, it's strongly recommended to migrate
-to the Kubernetes package repositories.
+This document assumes that you're already using the community-owned
+package repositories (`pkgs.k8s.io`). If that's not the case, it's strongly
+recommended to migrate to the community-owned package repositories as described
+in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/).
### Verifying if the Kubernetes package repositories are used
-If you're unsure whether you're using the Kubernetes package repositories or the
-Google-hosted repository, take the following steps to verify:
+If you're unsure whether you're using the community-owned package repositories or the
+legacy package repositories, take the following steps to verify:
{{< tabs name="k8s_install_versions" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
@@ -39,7 +41,8 @@ deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io
```
**You're using the Kubernetes package repositories and this guide applies to you.**
-Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
+Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories
+as described in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/).
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
@@ -64,7 +67,8 @@ exclude=kubelet kubeadm kubectl
```
**You're using the Kubernetes package repositories and this guide applies to you.**
-Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
+Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories
+as described in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/).
{{% /tab %}}
@@ -90,7 +94,8 @@ exclude=kubelet kubeadm kubectl
```
**You're using the Kubernetes package repositories and this guide applies to you.**
-Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories.
+Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories
+as described in the [official announcement](/blog/2023/08/15/pkgs-k8s-io-introduction/).
{{% /tab %}}
{{< /tabs >}}
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
index 075b610b688b5..543b2d678923e 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade.md
@@ -54,9 +54,9 @@ The upgrade workflow at high level is the following:
## Changing the package repository
-If you're using the Kubernetes community-owned repositories, you need to change
-the package repository to one that contains packages for your desired Kubernetes
-minor version. This is explained in [Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/)
+If you're using the community-owned package repositories (`pkgs.k8s.io`), you need to
+enable the package repository for the desired Kubernetes minor release. This is explained in
+[Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/)
document.
## Determine which version to upgrade to
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md
index cff79362570c2..e5fc0a120d1b3 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes.md
@@ -19,9 +19,9 @@ upgrade the control plane nodes before upgrading your Linux Worker nodes.
## Changing the package repository
-If you're using the Kubernetes community-owned repositories, you need to change
-the package repository to one that contains packages for your desired Kubernetes
-minor version. This is explained in [Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/)
+If you're using the community-owned package repositories (`pkgs.k8s.io`), you need to
+enable the package repository for the desired Kubernetes minor release. This is explained in
+[Changing the Kubernetes package repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/)
document.
## Upgrading worker nodes
From b22fa3cbe82991d7443276aae4214671403527fd Mon Sep 17 00:00:00 2001
From: Arhell
Date: Wed, 11 Oct 2023 01:50:14 +0300
Subject: [PATCH 125/229] [es] Fix sha256 url missing '/release/' to download
kubectl
---
content/es/docs/tasks/tools/included/install-kubectl-linux.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/es/docs/tasks/tools/included/install-kubectl-linux.md b/content/es/docs/tasks/tools/included/install-kubectl-linux.md
index 10b756f079167..5ff454b8f9928 100644
--- a/content/es/docs/tasks/tools/included/install-kubectl-linux.md
+++ b/content/es/docs/tasks/tools/included/install-kubectl-linux.md
@@ -45,7 +45,7 @@ Por ejemplo, para descargar la versión {{< skew currentPatchVersion >}} en Linu
Descargue el archivo de comprobación de kubectl:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
Valide el binario kubectl con el archivo de comprobación:
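
A sketch of the standard validation step that follows (assuming the checksum file was downloaded next to the `kubectl` binary):

```bash
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```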
@@ -199,7 +199,7 @@ A continuación, se muestran los procedimientos para configurar el autocompletad
Descargue el archivo de comprobación kubectl-convert:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
Valide el binario kubectl-convert con el archivo de comprobación:
From 0b4b80800730549a09b5d07afd4374ad430f4ca6 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Wed, 11 Oct 2023 09:10:14 +0800
Subject: [PATCH 126/229] Clean up /kubeadm/install-kubeadm.md
---
.../tools/kubeadm/install-kubeadm.md | 84 ++++++++++---------
1 file changed, 43 insertions(+), 41 deletions(-)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 6c93053c9e486..2a6bc7a637edf 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -15,10 +15,8 @@ This page shows how to install the `kubeadm` toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process,
see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
-
## {{% heading "prerequisites" %}}
-
* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions
based on Debian and Red Hat, and those distributions without a package manager.
* 2 GB or more of RAM per machine (any less will leave little room for your apps).
@@ -59,6 +57,7 @@ If you have more than one network adapter, and your Kubernetes components are no
route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter.
## Check required ports
+
These [required ports](/docs/reference/networking/ports-and-protocols/)
need to be open in order for Kubernetes components to communicate with each other.
You can use tools like netcat to check if a port is open. For example:
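
A minimal sketch of such a check, assuming the API server listens on its default port 6443:

```shell
nc 127.0.0.1 6443 -v
```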
@@ -131,7 +130,7 @@ You will install these packages on all of your machines:
* `kubeadm`: the command to bootstrap the cluster.
* `kubelet`: the component that runs on all of the machines in your cluster
- and does things like starting pods and containers.
+ and does things like starting pods and containers.
* `kubectl`: the command line util to talk to your cluster.
@@ -159,7 +158,7 @@ For more information on version skews, see:
{{< note >}}
Kubernetes has [new package repositories hosted at `pkgs.k8s.io`](/blog/2023/08/15/pkgs-k8s-io-introduction/)
starting from August 2023. The legacy package repositories (`apt.kubernetes.io` and `yum.kubernetes.io`)
-have been frozen starting from September 13, 2023. Please read our
+have been frozen starting from September 13, 2023. Please read our
[deprecation and freezing announcement](/blog/2023/08/31/legacy-package-repository-deprecation/)
for more details.
{{< /note >}}
@@ -177,7 +176,8 @@ These instructions are for Kubernetes {{< skew currentVersion >}}.
sudo apt-get install -y apt-transport-https ca-certificates curl
```
-2. Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories so you can disregard the version in the URL:
+2. Download the public signing key for the Kubernetes package repositories.
+ The same signing key is used for all repositories so you can disregard the version in the URL:
```shell
curl -fsSL https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
@@ -208,47 +208,47 @@ you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings`
1. Set SELinux to `permissive` mode:
-These instructions are for Kubernetes {{< skew currentVersion >}}.
+ These instructions are for Kubernetes {{< skew currentVersion >}}.
-```shell
-# Set SELinux in permissive mode (effectively disabling it)
-sudo setenforce 0
-sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
-```
+ ```shell
+ # Set SELinux in permissive mode (effectively disabling it)
+ sudo setenforce 0
+ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
+ ```
-{{< caution >}}
-- Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
- effectively disables it. This is required to allow containers to access the host
- filesystem; for example, some cluster network plugins require that. You have to
- do this until SELinux support is improved in the kubelet.
-- You can leave SELinux enabled if you know how to configure it but it may require
- settings that are not supported by kubeadm.
-{{< /caution >}}
+ {{< caution >}}
+ - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
+ effectively disables it. This is required to allow containers to access the host
+ filesystem; for example, some cluster network plugins require that. You have to
+ do this until SELinux support is improved in the kubelet.
+ - You can leave SELinux enabled if you know how to configure it but it may require
+ settings that are not supported by kubeadm.
+ {{< /caution >}}
2. Add the Kubernetes `yum` repository. The `exclude` parameter in the
repository definition ensures that the packages related to Kubernetes are
not upgraded upon running `yum update` as there's a special procedure that
must be followed for upgrading Kubernetes.
-```shell
-# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
-cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
-[kubernetes]
-name=Kubernetes
-baseurl=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/
-enabled=1
-gpgcheck=1
-gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key
-exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
-EOF
-```
+ ```shell
+ # This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
+ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
+ [kubernetes]
+ name=Kubernetes
+ baseurl=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key
+ exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
+ EOF
+ ```
3. Install kubelet, kubeadm and kubectl, and enable kubelet to ensure it's automatically started on startup:
-```shell
-sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
-sudo systemctl enable --now kubelet
-```
+ ```shell
+ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
+ sudo systemctl enable --now kubelet
+ ```
{{% /tab %}}
{{% tab name="Without a package manager" %}}
@@ -262,7 +262,7 @@ sudo mkdir -p "$DEST"
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | sudo tar -C "$DEST" -xz
```
-Define the directory to download command files
+Define the directory to download command files:
{{< note >}}
The `DOWNLOAD_DIR` variable must be set to a writable directory.
@@ -274,7 +274,7 @@ DOWNLOAD_DIR="/usr/local/bin"
sudo mkdir -p "$DOWNLOAD_DIR"
```
-Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI))
+Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)):
```bash
CRICTL_VERSION="v1.28.0"
@@ -298,7 +298,8 @@ curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSIO
```
{{< note >}}
-Please refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions that do not include `glibc` by default.
+Please refer to the note in the [Before you begin](#before-you-begin) section for Linux distributions
+that do not include `glibc` by default.
{{< /note >}}
Install `kubectl` by following the instructions on [Install Tools page](/docs/tasks/tools/#kubectl).
@@ -312,12 +313,12 @@ systemctl enable --now kubelet
{{< note >}}
The Flatcar Container Linux distribution mounts the `/usr` directory as a read-only filesystem.
Before bootstrapping your cluster, you need to take additional steps to configure a writable directory.
-See the [Kubeadm Troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only/) to learn how to set up a writable directory.
+See the [Kubeadm Troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#usr-mounted-read-only/)
+to learn how to set up a writable directory.
{{< /note >}}
{{% /tab %}}
{{< /tabs >}}
-
The kubelet is now restarting every few seconds, as it waits in a crashloop for
kubeadm to tell it what to do.
@@ -335,7 +336,8 @@ See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configu
## Troubleshooting
-If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
+If you are running into difficulties with kubeadm, please consult our
+[troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).
## {{% heading "whatsnext" %}}
From e3f943b61866afd18ce8a99e305b0e38a1c15301 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Wed, 11 Oct 2023 10:25:49 +0800
Subject: [PATCH 127/229] [zh] update /controllers/deployment.md
---
.../workloads/controllers/deployment.md | 20 +++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md
index f96341f95d634..eb031308efdd7 100644
--- a/content/zh-cn/docs/concepts/workloads/controllers/deployment.md
+++ b/content/zh-cn/docs/concepts/workloads/controllers/deployment.md
@@ -2154,9 +2154,9 @@ Here are some Rolling Update Deployment examples that use the `maxUnavailable` a
以下是一些使用 `maxUnavailable` 和 `maxSurge` 的滚动更新 Deployment 的示例:
{{< tabs name="tab_with_md" >}}
-{{% tab name="Max Unavailable" %}}
+{{% tab name="最大不可用" %}}
- ```yaml
+```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -2182,12 +2182,12 @@ spec:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
- ```
+```
{{% /tab %}}
-{{% tab name="Max Surge" %}}
+{{% tab name="最大峰值" %}}
- ```yaml
+```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -2213,12 +2213,12 @@ spec:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
- ```
+```
{{% /tab %}}
-{{% tab name="Hybrid" %}}
+{{% tab name="两项混合" %}}
- ```yaml
+```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -2245,7 +2245,7 @@ spec:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
- ```
+```
{{% /tab %}}
{{< /tabs >}}
@@ -2261,7 +2261,7 @@ retrying the Deployment. This defaults to 600. In the future, once automatic rol
controller will roll back a Deployment as soon as it observes such a condition.
-->
### 进度期限秒数 {#progress-deadline-seconds}
-
+
`.spec.progressDeadlineSeconds` 是一个可选字段,用于指定系统在报告 Deployment
[进展失败](#failed-deployment) 之前等待 Deployment 取得进展的秒数。
这类报告会在资源状态中体现为 `type: Progressing`、`status: False`、
From 9942a736e5aaa1702f70e4c6f2221bbc8c41fb1d Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Tue, 10 Oct 2023 22:42:32 +0800
Subject: [PATCH 128/229] [zh-cn] sync releases/version-skew-policy.md
Signed-off-by: xin.li
---
content/zh-cn/releases/version-skew-policy.md | 131 ++++++++++++------
1 file changed, 88 insertions(+), 43 deletions(-)
diff --git a/content/zh-cn/releases/version-skew-policy.md b/content/zh-cn/releases/version-skew-policy.md
index 15b3fb6443864..113b5532b7cc4 100644
--- a/content/zh-cn/releases/version-skew-policy.md
+++ b/content/zh-cn/releases/version-skew-policy.md
@@ -31,10 +31,15 @@ Specific cluster deployment tools may place additional restrictions on version s
## 支持的版本 {#supported-versions}
@@ -48,8 +53,9 @@ Kubernetes 1.19 和更新的版本获得[大约 1 年的补丁支持](/zh-cn/rel
Kubernetes 1.18 及更早的版本获得了大约 9 个月的补丁支持。
### kubelet {#kubelet}
@@ -118,7 +126,9 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
+* `kubelet` is supported at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**,
+ and **{{< skew currentVersionAddMinor -3 >}}** (**{{< skew currentVersion >}}** is not supported because that
+ would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
-->
例如:
@@ -130,24 +140,32 @@ Example:
### kube-proxy {#kube-proxy}
-`kube-proxy` 不能比 `kube-apiserver` 新,并且可以比它旧两个次版本。
-`kube-proxy` 可以比一起运行的 `kubelet` 实例旧或新两个次版本。
+* `kube-proxy` 不能比 `kube-apiserver` 新。
+* `kube-proxy` 最多可以比 `kube-apiserver` 旧三个小版本(`kube-proxy` < 1.25 最多只能比 `kube-apiserver` 旧两个小版本)。
+* `kube-proxy` 可能比它旁边运行的 `kubelet` 实例旧或新最多三个次要版本(`kube-proxy` < 1.25 最多只能是比它并行运行的 `kubelet` 实例旧或新的两个次要版本)。
+
例如:
* `kube-apiserver` 的版本是 **{{< skew currentVersion >}}**
-* `kube-proxy` 支持的版本是 **{{< skew currentVersion >}}**,
- **{{< skew currentVersionAddMinor -1 >}}** 和 **{{< skew currentVersionAddMinor -2 >}}**
+* `kube-proxy` 支持的版本是 **{{< skew currentVersion >}}**、
+ **{{< skew currentVersionAddMinor -1 >}}** 、**{{< skew currentVersionAddMinor -2 >}}** 和
+ **{{< skew currentVersionAddMinor -3 >}}**
{{< note >}}
例如:
* `kube-apiserver` 实例的版本是 **{{< skew currentVersion >}}** 和 **{{< skew currentVersionAddMinor -1 >}}**
-* `kube-proxy` 版本为 **{{< skew currentVersionAddMinor -1 >}}** 和
- **{{< skew currentVersionAddMinor -2 >}}**。(**{{< skew currentVersion >}}** 将不被支持,
+* `kube-proxy` 版本为 **{{< skew currentVersionAddMinor -1 >}}**、
+ **{{< skew currentVersionAddMinor -2 >}}** 和 {{< skew currentVersionAddMinor -3 >}}。(**{{< skew currentVersion >}}** 将不被支持,
因为该版本将比 **{{< skew currentVersionAddMinor -1 >}}** 的 kube-apiserver 实例更新)
### kube-controller-manager、kube-scheduler 和 cloud-controller-manager {#kube-controller-manager-kube-scheduler-and-cloud-controller-manager}
@@ -194,7 +216,9 @@ Example:
{{< note >}}
如果 HA 集群中的 `kube-apiserver` 实例之间存在版本偏差,
并且这些组件可以与集群中的任何 `kube-apiserver`
@@ -205,8 +229,11 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, and
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer that can route to any `kube-apiserver` instance
-* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
+* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` communicate with a load balancer
+ that can route to any `kube-apiserver` instance
+* `kube-controller-manager`, `kube-scheduler`, and `cloud-controller-manager` are supported at
+ **{{< skew currentVersionAddMinor -1 >}}** (**{{< skew currentVersion >}}** is not supported
+ because that would be newer than the `kube-apiserver` instance at version **{{< skew currentVersionAddMinor -1 >}}**)
-->
例如:
@@ -226,7 +253,8 @@ Example:
Example:
* `kube-apiserver` is at **{{< skew currentVersion >}}**
-* `kubectl` is supported at **{{< skew nextMinorVersion >}}**, **{{< skew currentVersion >}}**, and **{{< skew currentVersionAddMinor -1 >}}**
+* `kubectl` is supported at **{{< skew nextMinorVersion >}}**, **{{< skew currentVersion >}}**,
+ and **{{< skew currentVersionAddMinor -1 >}}**
-->
### kubectl {#kubectl}
@@ -249,7 +277,8 @@ If version skew exists between `kube-apiserver` instances in an HA cluster, this
Example:
* `kube-apiserver` instances are at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
-* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}** (other versions would be more than one minor version skewed from one of the `kube-apiserver` components)
+* `kubectl` is supported at **{{< skew currentVersion >}}** and **{{< skew currentVersionAddMinor -1 >}}**
+ (other versions would be more than one minor version skewed from one of the `kube-apiserver` components)
-->
例如:
@@ -261,8 +290,10 @@ Example:
## 支持的组件升级顺序 {#supported-component-upgrade-order}
@@ -273,12 +304,12 @@ This section describes the order in which components must be upgraded to transit
### kube-apiserver {#kube-apiserver-1}
@@ -349,7 +387,9 @@ require `kube-apiserver` to not skip minor versions when upgrading, even in sing
Pre-requisites:
-* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}** (in HA clusters in which these control plane components can communicate with any `kube-apiserver` instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components)
+* The `kube-apiserver` instances these components communicate with are at **{{< skew currentVersion >}}**
+ (in HA clusters in which these control plane components can communicate with any `kube-apiserver`
+ instance in the cluster, all `kube-apiserver` instances must be upgraded before upgrading these components)
Upgrade `kube-controller-manager`, `kube-scheduler`, and
`cloud-controller-manager` to **{{< skew currentVersion >}}**. There is no
@@ -377,7 +417,8 @@ Pre-requisites:
* The `kube-apiserver` instances the `kubelet` communicates with are at **{{< skew currentVersion >}}**
-Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**)
+Optionally upgrade `kubelet` instances to **{{< skew currentVersion >}}** (or they can be left at
+**{{< skew currentVersionAddMinor -1 >}}**, **{{< skew currentVersionAddMinor -2 >}}**, or **{{< skew currentVersionAddMinor -3 >}}**)
-->
### kubelet {#kubelet-1}
@@ -399,7 +440,8 @@ In-place minor version `kubelet` upgrades are not supported.
{{< warning >}}
在一个集群中运行持续比 `kube-apiserver` 落后两个次版本的 `kubelet` 实例意味着在升级控制平面之前必须先升级它们。
{{< /warning >}}
@@ -411,7 +453,9 @@ Pre-requisites:
* The `kube-apiserver` instances `kube-proxy` communicates with are at **{{< skew currentVersion >}}**
-Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** (or they can be left at **{{< skew currentVersionAddMinor -1 >}}** or **{{< skew currentVersionAddMinor -2 >}}**)
+Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}**
+(or they can be left at **{{< skew currentVersionAddMinor -1 >}}**
+or **{{< skew currentVersionAddMinor -2 >}}**)
-->
### kube-proxy {#kube-proxy}
@@ -423,7 +467,8 @@ Optionally upgrade `kube-proxy` instances to **{{< skew currentVersion >}}** (or
{{< warning >}}
在一个集群中运行持续比 `kube-apiserver` 落后三个次版本的 `kube-proxy` 实例意味着在升级控制平面之前必须先升级它们。
{{< /warning >}}
From f2f7698d725b3ae15fdff04188a1d6fb6b12e29e Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Wed, 11 Oct 2023 08:55:08 +0800
Subject: [PATCH 129/229] [zh] Sync /kubeadm/install-kubeadm.md
---
.../tools/kubeadm/install-kubeadm.md | 240 +++++-------------
1 file changed, 67 insertions(+), 173 deletions(-)
diff --git a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 52a9925ef99c5..2073e54bf82fc 100644
--- a/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -40,8 +40,12 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too
* Full network connectivity between all machines in the cluster (public or private network is fine).
* Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details.
* Certain ports are open on your machines. See [here](#check-required-ports) for more details.
-* Swap disabled. You **MUST** disable swap in order for the kubelet to work properly.
- * For example, `sudo swapoff -a` will disable swapping temporarily. To make this change persistent across reboots, make sure swap is disabled in config files like `/etc/fstab`, `systemd.swap`, depending how it was configured on your system.
+* Swap configuration. The default behavior of a kubelet is to fail to start if swap memory is detected on a node.
+ Swap has been supported since v1.22. Since v1.28, swap is supported for cgroup v2 only; the NodeSwap feature
+ gate of the kubelet is beta but disabled by default.
+ * You **MUST** disable swap if the kubelet is not properly configured to use swap. For example, `sudo swapoff -a`
+ will disable swapping temporarily. To make this change persistent across reboots, make sure swap is disabled in
+ config files like `/etc/fstab` and `systemd.swap`, depending on how it was configured on your system.
-->
* 一台兼容的 Linux 主机。Kubernetes 项目为基于 Debian 和 Red Hat 的 Linux
发行版以及一些不提供包管理器的发行版提供通用的指令。
@@ -50,12 +54,30 @@ see the [Creating a cluster with kubeadm](/docs/setup/production-environment/too
* 集群中的所有机器的网络彼此均能相互连接(公网和内网都可以)。
* 节点之中不可以有重复的主机名、MAC 地址或 product_uuid。请参见[这里](#verify-mac-address)了解更多详细信息。
* 开启机器上的某些端口。请参见[这里](#check-required-ports)了解更多详细信息。
-* 禁用交换分区。为了保证 kubelet 正常工作,你**必须**禁用交换分区。
- * 例如,`sudo swapoff -a` 将暂时禁用交换分区。要使此更改在重启后保持不变,请确保在如
+* 交换分区的配置。kubelet 的默认行为是在节点上检测到交换内存时无法启动。
+ kubelet 自 v1.22 起已开始支持交换分区。自 v1.28 起,仅针对 cgroup v2 支持交换分区;
+ kubelet 的 NodeSwap 特性门控处于 Beta 阶段,但默认被禁用。
+ * 如果 kubelet 未被正确配置使用交换分区,则你**必须**禁用交换分区。
+ 例如,`sudo swapoff -a` 将暂时禁用交换分区。要使此更改在重启后保持不变,请确保在如
`/etc/fstab`、`systemd.swap` 等配置文件中禁用交换分区,具体取决于你的系统如何配置。
+{{< note >}}
+
+`kubeadm` 的安装是通过使用动态链接的二进制文件完成的,安装时假设你的目标系统提供 `glibc`。
+这个假设在许多 Linux 发行版(包括 Debian、Ubuntu、Fedora、CentOS 等)上是合理的,
+但对于不包含默认 `glibc` 的自定义和轻量级发行版(如 Alpine Linux),情况并非总是如此。
+预期的情况是,发行版要么包含 `glibc`,
+要么提供了一个[兼容层](https://wiki.alpinelinux.org/wiki/Running_glibc_programs)以提供所需的符号。
+{{< /note >}}
+
@@ -271,49 +293,25 @@ For more information on version skews, see:
{{< note >}}
-自2023年8月起,Kubernetes 有两个不同的软件包仓库。
-Google 托管的仓库已被弃用,并正在被 Kubernetes(由社区拥有)软件包仓库替代。
-Kubernetes 项目强烈建议使用 Kubernetes 社区拥有的软件包仓库,
-因为该项目计划将来停止向 Google 托管的仓库发布软件包。
-
-
-
-对于 Kubernetes 软件包仓库,有一些重要的考虑事项:
-
-- Kubernetes 软件包仓库包含从社区接管软件包构建时仍在支持范围内的 Kubernetes 版本开始的软件包。
- 这意味着v1.24.0之前的版本只在 Google 托管的仓库中提供。
-- 每个 Kubernetes 次要版本都有一个专用的软件包仓库。
- 当升级到不同的次要版本时,必须记住软件包仓库的详细信息也会发生变化。
+Kubernetes 从 2023 年 8 月开始使用托管在 `pkgs.k8s.io`
+上的[新软件包仓库](/zh-cn/blog/2023/08/15/pkgs-k8s-io-introduction/)。
+自 2023 年 9 月 13 日起,老旧的软件包仓库(`apt.kubernetes.io` 和 `yum.kubernetes.io`)已被冻结。
+更多细节参阅[弃用和冻结公告](/zh-cn/blog/2023/08/31/legacy-package-repository-deprecation/)。
{{< /note >}}
{{< tabs name="k8s_install" >}}
{{% tab name="基于 Debian 的发行版" %}}
-
-### Kubernetes 软件包仓库 {#dpkg-k8s-package-repo}
-
-这些说明适用于 Kubernetes {{< skew currentVersion >}}.
+以下指令适用于 Kubernetes {{< skew currentVersion >}}。
2. 下载用于 Kubernetes 软件包仓库的公共签名密钥。所有仓库都使用相同的签名密钥,因此你可以忽略URL中的版本:
@@ -356,66 +355,6 @@ These instructions are for Kubernetes {{< skew currentVersion >}}.
sudo apt-mark hold kubelet kubeadm kubectl
```
-{{< note >}}
-
-在低于 Debian 12 和 Ubuntu 22.04 的发行版本中,`/etc/apt/keyrings` 默认不存在。
-如有需要,你可以创建此目录,并将其设置为对所有人可读,但仅对管理员可写。
-{{< /note >}}
-
-
-### Google 托管的软件包仓库(已弃用) {#dpkg-google-package-repo}
-
-
-这些说明适用于 Kubernetes {{< skew currentVersion >}}.
-
-
-1. 更新 `apt` 软件包索引并安装使用 Kubernetes `apt` 仓库所需的软件包:
-
- ```shell
- sudo apt-get update
- # apt-transport-https 可能是一个虚拟包(dummy package);如果是的话,你可以跳过安装这个包
- sudo apt-get install -y apt-transport-https ca-certificates curl
- ```
-
-
-2. 下载 Google Cloud 公共签名密钥:
-
- ```shell
- curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
- ```
-
-
-3. 添加 Google 托管的 `apt` 仓库:
-
- ```shell
- # 此操作会覆盖 /etc/apt/sources.list.d/kubernetes.list 中现存的所有配置
- echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
- ```
-
-
-4. 更新 `apt` 软件包索引,安装 kubelet、kubeadm 和 kubectl,并锁定它们的版本:
-
- ```shell
- sudo apt-get update
- sudo apt-get install -y kubelet kubeadm kubectl
- sudo apt-mark hold kubelet kubeadm kubectl
- ```
-
{{< note >}}
1. 将 SELinux 设置为 `permissive` 模式:
+ 以下指令适用于 Kubernetes {{< skew currentVersion >}}。
+
```shell
# 将 SELinux 设置为 permissive 模式(相当于将其禁用)
sudo setenforce 0
@@ -455,16 +404,6 @@ you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings`
- 如果你知道如何配置 SELinux 则可以将其保持启用状态,但可能需要设定部分 kubeadm 不支持的配置。
{{< /caution >}}
-
-### Kubernetes 软件包仓库 {#rpm-k8s-package-repo}
-
-
-这些说明适用于 Kubernetes {{< skew currentVersion >}}.
-
-### Google 托管的软件包仓库(已弃用) {#rpm-google-package-repo}
-
-
-这些说明适用于 Kubernetes {{< skew currentVersion >}}.
-
-
-2. 添加 Google 托管的 `yum` 仓库。
- 仓库定义中的 `exclude` 参数确保了与 Kubernetes 相关的软件包在运行
- `yum update` 时不会升级,因为升级 Kubernetes 需要遵循特定的过程。"
-
- ```shell
- # 此操作会覆盖 /etc/yum.repos.d/kubernetes.repo 中现存的所有配置
- cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
- [kubernetes]
- name=Kubernetes
- baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
- enabled=1
- gpgcheck=1
- gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
- exclude=kubelet kubeadm kubectl
- EOF
- ```
-
-3. 安装 kubelet、kubeadm 和 kubectl,并启用 kubelet 以确保它在启动时自动启动:
-
- ```shell
- sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
- sudo systemctl enable --now kubelet
- ```
-
-{{< note >}}
-
-如果 `baseurl` 因为你的基于 RPM 的 Linux 发行版无法解释 `$basearch` 而失败,
-你需要将 `\$basearch` 替换为你的计算机的体系结构。
-输入 `uname -m` 命令来查看该值。
-例如,对于 `x86_64` 架构,`baseurl` URL 可能是:`https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64`。
-{{< /note >}}
-
{{% /tab %}}
{{% tab name="无包管理器的情况" %}}
-定义要下载命令文件的目录。
+定义要下载命令文件的目录:
{{< note >}}
-安装 crictl(kubeadm/kubelet 容器运行时接口(CRI)所需)
+安装 crictl(kubeadm/kubelet 容器运行时接口(CRI)所需):
```bash
CRICTL_VERSION="v1.28.0"
@@ -616,6 +500,14 @@ sudo mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```
+{{< note >}}
+
+对于默认不包括 `glibc` 的 Linux 发行版,请参阅[开始之前](#before-you-begin)一节的注释。
+{{< /note >}}
+
Flatcar Container Linux 发行版会将 `/usr/` 目录挂载为一个只读文件系统。
在启动引导你的集群之前,你需要执行一些额外的操作来配置一个可写入的目录。
@@ -652,13 +545,13 @@ kubelet 现在每隔几秒就会重启,因为它陷入了一个等待 kubeadm
## Configuring a cgroup driver
Both the container runtime and the kubelet have a property called
-["cgroup driver"](/docs/setup/production-environment/container-runtimes/), which is important
+["cgroup driver"](/docs/setup/production-environment/container-runtimes/#cgroup-drivers), which is important
for the management of cgroups on Linux machines.
-->
## 配置 cgroup 驱动程序 {#configuring-a-cgroup-driver}
容器运行时和 kubelet 都具有名字为
-["cgroup driver"](/zh-cn/docs/setup/production-environment/container-runtimes/)
+["cgroup driver"](/zh-cn/docs/setup/production-environment/container-runtimes/#cgroup-drivers)
的属性,该属性对于在 Linux 机器上管理 CGroups 而言非常重要。
{{< warning >}}
@@ -676,7 +569,8 @@ See [Configuring a cgroup driver](/docs/tasks/administer-cluster/kubeadm/configu
## 故障排查 {#troubleshooting}
From 4ed9c343e205f8bbee04c64e40be57dc5c19c0da Mon Sep 17 00:00:00 2001
From: Utkarsh Singh <96516301+utkarsh-singh1@users.noreply.github.com>
Date: Wed, 11 Oct 2023 11:02:47 +0000
Subject: [PATCH 130/229] Updated con
docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
Signed-off-by: Utkarsh Singh <96516301+utkarsh-singh1@users.noreply.github.com>
---
.../docs/contribute/generate-ref-docs/prerequisites-ref-docs.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
index ff87f63aa1271..e33239a5da429 100644
--- a/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
+++ b/content/en/docs/contribute/generate-ref-docs/prerequisites-ref-docs.md
@@ -12,7 +12,7 @@
- [PyYAML](https://pyyaml.org/) v5.1.2
- [make](https://www.gnu.org/software/make/)
- [gcc compiler/linker](https://gcc.gnu.org/)
- - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference, also kubernetes moving on from dockershim read [here](https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/))
+ - [Docker](https://docs.docker.com/engine/installation/) (Required only for `kubectl` command reference)
- Your `PATH` environment variable must include the required build tools, such as the `Go` binary and `python`.
From 4de9683e85e6a6c7e3b92ecd37fc0df34613fae2 Mon Sep 17 00:00:00 2001
From: "xin.li"
Date: Tue, 10 Oct 2023 22:50:28 +0800
Subject: [PATCH 131/229] [zh-cn] sync apiserver-config.v1beta1.md
Signed-off-by: xin.li
---
.../config-api/apiserver-config.v1beta1.md | 117 +++++++++---------
1 file changed, 59 insertions(+), 58 deletions(-)
diff --git a/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md b/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md
index 1d35c64e2e081..b4610b736dc8a 100644
--- a/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md
+++ b/content/zh-cn/docs/reference/config-api/apiserver-config.v1beta1.md
@@ -52,6 +52,65 @@ EgressSelectorConfiguration 为 Egress 选择算符客户端提供版本化的
\ No newline at end of file
From 44a6b0f501ab6dc6066974493983410ec49c010a Mon Sep 17 00:00:00 2001
From: Arhell
Date: Thu, 12 Oct 2023 00:21:59 +0300
Subject: [PATCH 132/229] [pt] Fix sha256 url missing '/release/' to download
kubectl
---
content/pt-br/docs/tasks/tools/install-kubectl-linux.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/pt-br/docs/tasks/tools/install-kubectl-linux.md b/content/pt-br/docs/tasks/tools/install-kubectl-linux.md
index 4c37e5f96b0fb..2656115f22b4c 100644
--- a/content/pt-br/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/pt-br/docs/tasks/tools/install-kubectl-linux.md
@@ -44,7 +44,7 @@ Por exemplo, para fazer download da versão {{< skew currentPatchVersion >}} no
Faça download do arquivo checksum de verificação do kubectl:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
Valide o binário kubectl em relação ao arquivo de verificação:
@@ -215,7 +215,7 @@ Abaixo estão os procedimentos para configurar o autocompletar para Bash, Fish e
Faça download do arquivo checksum de verificação do kubectl-convert:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
Valide o binário kubectl-convert com o arquivo de verificação:
From 298d7cd864de010e1d955c9066f1252aa4591cff Mon Sep 17 00:00:00 2001
From: steve-hardman <132999137+steve-hardman@users.noreply.github.com>
Date: Thu, 12 Oct 2023 00:22:32 +0100
Subject: [PATCH 133/229] Fix page layout
---
.../docs/tasks/tools/install-kubectl-linux.md | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md
index dafb88fa025a6..ef7a87889a935 100644
--- a/content/en/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/en/docs/tasks/tools/install-kubectl-linux.md
@@ -210,17 +210,19 @@ To upgrade kubectl to another minor release, you'll need to bump the version in
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key
EOF
+ ```
- {{< note >}}
- To upgrade kubectl to another minor release, you'll need to bump the version in `/etc/zypp/repos.d/kubernetes.repo`
- before running `zypper update`. This procedure is described in more detail in
- [Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/).
- {{< /note >}}
+{{< note >}}
+To upgrade kubectl to another minor release, you'll need to bump the version in `/etc/zypp/repos.d/kubernetes.repo`
+before running `zypper update`. This procedure is described in more detail in
+[Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/).
+{{< /note >}}
-1. Install kubectl using `zypper`:
+ 2. Install kubectl using `zypper`:
- ```bash
- sudo zypper install -y kubectl
+ ```bash
+ sudo zypper install -y kubectl
+ ```
{{% /tab %}}
{{< /tabs >}}
From caed9a192495c4dc9dc762d870042394d5033458 Mon Sep 17 00:00:00 2001
From: Kensei Nakada
Date: Thu, 12 Oct 2023 09:50:44 +0900
Subject: [PATCH 134/229] Modify `SchedulerQueueingHints` (#43427)
* Modify `SchedulerQueueingHints`
* fix: QueueingHint is enabled by default
* Add more context in the description
* format the description
Co-authored-by: Qiming Teng
* Update the description of QueueingHint feature
Co-authored-by: Aldo Culquicondor <1299064+alculquicondor@users.noreply.github.com>
---------
Co-authored-by: Qiming Teng
Co-authored-by: Aldo Culquicondor <1299064+alculquicondor@users.noreply.github.com>
---
.../command-line-tools-reference/feature-gates.md | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
index c389f320c8e31..7344efa8f59be 100644
--- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md
+++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md
@@ -186,7 +186,7 @@ For a reference to old feature gates that are removed, please refer to
| `SELinuxMountReadWriteOncePod` | `false` | Alpha | 1.25 | 1.26 |
| `SELinuxMountReadWriteOncePod` | `false` | Beta | 1.27 | 1.27 |
| `SELinuxMountReadWriteOncePod` | `true` | Beta | 1.28 | |
-| `SchedulerQueueingHints` | `false` | Alpha | 1.28 | |
+| `SchedulerQueueingHints` | `true` | Beta | 1.28 | |
| `SecurityContextDeny` | `false` | Alpha | 1.27 | |
| `ServiceNodePortStaticSubrange` | `false` | Alpha | 1.27 | 1.27 |
| `ServiceNodePortStaticSubrange` | `true` | Beta | 1.28 | |
@@ -701,8 +701,11 @@ Each feature gate is designed for enabling/disabling a specific feature:
- `SELinuxMountReadWriteOncePod`: Speeds up container startup by allowing kubelet to mount volumes
for a Pod directly with the correct SELinux label instead of changing each file on the volumes
recursively. The initial implementation focused on ReadWriteOncePod volumes.
-- `SchedulerQueueingHints`: Enables the scheduler's _queueing hints_ enhancement,
+- `SchedulerQueueingHints`: Enables [the scheduler's _queueing hints_ enhancement](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4247-queueinghint/README.md),
  which helps reduce useless requeueing.
+ The scheduler retries scheduling pods if something changes in the cluster that could make the pod schedulable.
+ Queueing hints are internal signals that allow the scheduler to filter the changes in the cluster
+ that are relevant to the unscheduled pod, based on previous scheduling attempts.
- `SeccompDefault`: Enables the use of `RuntimeDefault` as the default seccomp profile
for all workloads.
The seccomp profile is specified in the `securityContext` of a Pod and/or a Container.
From 858985f779f32eaeb4c48210524f86876b5df37c Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Thu, 12 Oct 2023 09:46:26 +0800
Subject: [PATCH 135/229] [zh] Sync kube-scheduler-config.v1beta3.md
---
.../kube-scheduler-config.v1beta3.md | 484 +++++++++---------
1 file changed, 242 insertions(+), 242 deletions(-)
diff --git a/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md
index 6db16ca75e85a..60fb03f3e9c54 100644
--- a/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md
+++ b/content/zh-cn/docs/reference/config-api/kube-scheduler-config.v1beta3.md
@@ -24,6 +24,248 @@ auto_generated: true
- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta3-PodTopologySpreadArgs)
- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta3-VolumeBindingArgs)
+## `ClientConnectionConfiguration` {#ClientConnectionConfiguration}
+
+
+**出现在:**
+
+- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta3-KubeSchedulerConfiguration)
+
+
+
From 1b56dc5ad5f5a1de263e2195484091f80f486115 Mon Sep 17 00:00:00 2001
From: Mohammed Affan <72978371+Affan-7@users.noreply.github.com>
Date: Thu, 12 Oct 2023 11:19:56 +0530
Subject: [PATCH 136/229] Add a task page for troubleshooting kubectl (#42347)
* Add troubleshoot-kubectl-connectivity.md
* Add overview
* Add body
* Update content/en/docs/tasks/debug/debug-cluster/_index.md
Co-authored-by: Michael
* Modify note in _index.md
* Add prerequisites section
* Fix API server section
* Fix TLS section
* Fix heading formatting
* Change the title
* Revise prerequisites
* Drop the note
* Fix some nits
---------
Co-authored-by: Michael
---
.../docs/tasks/debug/debug-cluster/_index.md | 3 +
.../debug-cluster/troubleshoot-kubectl.md | 158 ++++++++++++++++++
2 files changed, 161 insertions(+)
create mode 100644 content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md
diff --git a/content/en/docs/tasks/debug/debug-cluster/_index.md b/content/en/docs/tasks/debug/debug-cluster/_index.md
index 3278fdfa7d4ce..fcb7ba4a016c5 100644
--- a/content/en/docs/tasks/debug/debug-cluster/_index.md
+++ b/content/en/docs/tasks/debug/debug-cluster/_index.md
@@ -14,6 +14,9 @@ problem you are experiencing. See
the [application troubleshooting guide](/docs/tasks/debug/debug-application/) for tips on application debugging.
You may also visit the [troubleshooting overview document](/docs/tasks/debug/) for more information.
+For troubleshooting {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}, refer to
+[Troubleshooting kubectl](/docs/tasks/debug/debug-cluster/troubleshoot-kubectl/).
+
## Listing your cluster
diff --git a/content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md b/content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md
new file mode 100644
index 0000000000000..2166d204b3776
--- /dev/null
+++ b/content/en/docs/tasks/debug/debug-cluster/troubleshoot-kubectl.md
@@ -0,0 +1,158 @@
+---
+title: "Troubleshooting kubectl"
+content_type: task
+weight: 10
+---
+
+
+
+This documentation is about investigating and diagnosing
+{{< glossary_tooltip text="kubectl" term_id="kubectl" >}} related issues.
+If you encounter issues accessing `kubectl` or connecting to your cluster, this
+document outlines various common scenarios and potential solutions to help
+identify and address the likely cause.
+
+
+
+## {{% heading "prerequisites" %}}
+
+* You need to have a Kubernetes cluster.
+* You also need to have `kubectl` installed - see [install tools](/docs/tasks/tools/#kubectl).
+
+## Verify kubectl setup
+
+Make sure you have installed and configured `kubectl` correctly on your local machine.
+Check the `kubectl` version to ensure it is up-to-date and compatible with your cluster.
+
+Check kubectl version:
+
+```shell
+kubectl version
+```
+
+You'll see a similar output:
+
+```console
+Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4",GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean",BuildDate:"2023-07-19T12:20:54Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"linux/amd64"}
+Kustomize Version: v5.0.1
+Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3",GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean",BuildDate:"2023-06-14T09:47:40Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
+
+```
+
+If you see `Unable to connect to the server: dial tcp <server-ip>:8443: i/o timeout`,
+instead of `Server Version`, you need to troubleshoot kubectl connectivity with your cluster.
+
+Make sure you have installed kubectl by following the
+[official documentation for installing kubectl](/docs/tasks/tools/#kubectl), and that you have
+properly configured the `$PATH` environment variable.
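+
+A quick sketch for verifying both, assuming a Unix-like shell:
+
+```shell
+# Confirm which kubectl binary your shell resolves
+which kubectl
+
+# Confirm the client runs, reporting only the local version
+kubectl version --client
+```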
+
+## Check kubeconfig
+
+`kubectl` requires a `kubeconfig` file to connect to a Kubernetes cluster. The
+`kubeconfig` file is usually located at `~/.kube/config`. Make sure
+that you have a valid `kubeconfig` file. If you don't have a `kubeconfig` file, you can
+obtain it from your Kubernetes administrator, or you can copy it from
+`/etc/kubernetes/admin.conf` on your Kubernetes control plane. If you have deployed your
+Kubernetes cluster on a cloud platform and lost your `kubeconfig` file, you can
+re-generate it using your cloud provider's tools. Refer to the cloud provider's
+documentation for re-generating a `kubeconfig` file.
+
+Check if the `$KUBECONFIG` environment variable is configured correctly. You can set the
+`$KUBECONFIG` environment variable or use the `--kubeconfig` parameter with kubectl
+to specify the path of a `kubeconfig` file.
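+
+For example (a minimal sketch; the path is an assumption, substitute your own file):
+
+```shell
+# Point kubectl at a specific kubeconfig for this shell session
+export KUBECONFIG="$HOME/.kube/config"
+kubectl get nodes
+
+# Or pass the file explicitly for a single invocation
+kubectl --kubeconfig="$HOME/.kube/config" get nodes
+```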
+
+## Check VPN connectivity
+
+If you are using a Virtual Private Network (VPN) to access your Kubernetes cluster,
+make sure that your VPN connection is active and stable. Sometimes, VPN disconnections
+can lead to connection issues with the cluster. Reconnect to the VPN and try accessing
+the cluster again.
+
+## Authentication and authorization
+
+If you are using token-based authentication and kubectl returns an error
+regarding the authentication token or the authentication server address, validate that the
+Kubernetes authentication token and the authentication server address are configured
+properly.
+
+If kubectl returns an error regarding authorization, make sure that you are
+using valid user credentials, and that you have permission to access the resource
+you have requested.
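+
+As a quick check, `kubectl auth can-i` confirms what your current credentials are allowed to do:
+
+```shell
+# Check a single action
+kubectl auth can-i list pods --namespace default
+
+# Enumerate the permissions of the current user in a namespace
+kubectl auth can-i --list --namespace default
+```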
+
+## Verify contexts
+
+Kubernetes supports [multiple clusters and contexts](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
+Ensure that you are using the correct context to interact with your cluster.
+
+List available contexts:
+
+```shell
+kubectl config get-contexts
+```
+
+Switch to the appropriate context:
+
+```shell
+kubectl config use-context <context-name>
+```
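+
+To double-check which context is active, you can run:
+
+```shell
+kubectl config current-context
+```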
+
+## API server and load balancer
+
+The {{< glossary_tooltip text="API" term_id="kubernetes-api" >}} server is the
+central component of a Kubernetes cluster. If the API server or the load balancer that
+runs in front of your API servers is not reachable or not responding, you won't be able
+to interact with the cluster.
+
+Check if the API server's host is reachable by using the `ping` command. Check the cluster's
+network connectivity and firewall. If you are using a cloud provider to deploy
+the cluster, check your cloud provider's health check status for the cluster's
+API server.
+
+Verify the status of the load balancer (if used) to ensure it is healthy and forwarding
+traffic to the API server.
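+
+A rough sketch of a reachability probe (the `/livez` endpoint is served by recent API servers; `-k` skips TLS verification and is only for a quick connectivity test):
+
+```shell
+# Extract the API server endpoint from the active kubeconfig
+APISERVER=$(kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.server}')
+
+# Probe the endpoint directly
+curl -k "$APISERVER/livez?verbose"
+```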
+
+## TLS problems
+
+The Kubernetes API server only serves HTTPS requests by default. In that case, TLS problems
+may occur for various reasons, such as certificate expiry or chain-of-trust validity.
+
+You can find the TLS certificate in the kubeconfig file, located at `~/.kube/config`.
+The `certificate-authority` attribute contains the CA certificate and the
+`client-certificate` attribute contains the client certificate.
+
+Verify the expiry of these certificates:
+
+```shell
+openssl x509 -noout -dates -in $(kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.certificate-authority}')
+```
+
+output:
+```console
+notBefore=Sep 2 08:34:12 2023 GMT
+notAfter=Aug 31 08:34:12 2033 GMT
+```
+
+```shell
+openssl x509 -noout -dates -in $(kubectl config view --minify --output 'jsonpath={.users[0].user.client-certificate}')
+```
+
+output:
+```console
+notBefore=Sep 2 08:34:12 2023 GMT
+notAfter=Sep 2 08:34:12 2026 GMT
+```
+
+## Verify kubectl helpers
+
+Some kubectl authentication helpers provide easy access to Kubernetes clusters. If you
+have used such helpers and are facing connectivity issues, ensure that the necessary
+configurations are still present.
+
+Check kubectl configuration for authentication details:
+
+```shell
+kubectl config view
+```
+
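+A sketch for spotting an exec-based credential plugin in your kubeconfig (prints nothing if no helper is configured):
+
+```shell
+kubectl config view -o jsonpath='{.users[*].user.exec.command}'
+```
+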
+If you previously used a helper tool (for example, `kubectl-oidc-login`), ensure that it is still
+installed and configured correctly.
\ No newline at end of file
From 6573fead87ab7b73f3ed7b5eadf13d0c9d7f3566 Mon Sep 17 00:00:00 2001
From: Karena Angell
Date: Wed, 11 Oct 2023 23:50:41 -0600
Subject: [PATCH 137/229] Update pod-scheduling-readiness.md to beta
Pod Scheduling Readiness graduated to beta in 1.27, this minor update reflects that feature state.
---
.../concepts/scheduling-eviction/pod-scheduling-readiness.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md
index 24b5032c07f5b..0b671ecbfcbe7 100644
--- a/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md
+++ b/content/en/docs/concepts/scheduling-eviction/pod-scheduling-readiness.md
@@ -6,7 +6,7 @@ weight: 40
-{{< feature-state for_k8s_version="v1.26" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.27" state="beta" >}}
Pods were considered ready for scheduling once created. Kubernetes scheduler
does its due diligence to find nodes to place all pending Pods. However, in a
From 39d9d0a4c36abcfa6a519a70fca0c0242707e117 Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Tue, 10 Oct 2023 15:34:08 +0800
Subject: [PATCH 138/229] [zh] Sync taint-and-toleration.md
---
.../taint-and-toleration.md | 85 ++++++++++++-------
.../topology-spread-constraints.md | 36 ++++++--
2 files changed, 84 insertions(+), 37 deletions(-)
diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
index 1e4d8c2f8938d..9309db8b3bb4c 100644
--- a/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
+++ b/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
@@ -142,13 +142,51 @@ An empty `effect` matches all effects with key `key1`.
上述例子中 `effect` 使用的值为 `NoSchedule`,你也可以使用另外一个值 `PreferNoSchedule`。
-这是“优化”或“软”版本的 `NoSchedule` —— 系统会 **尽量** 避免将 Pod 调度到存在其不能容忍污点的节点上,
-但这不是强制的。`effect` 的值还可以设置为 `NoExecute`,下文会详细描述这个值。
+
+
+`effect` 字段的允许值包括:
+
+
+`NoExecute`
+: 这会影响已在节点上运行的 Pod,具体影响如下:
+ * 如果 Pod 不能容忍这类污点,会马上被驱逐。
+ * 如果 Pod 能够容忍这类污点,但是在容忍度定义中没有指定 `tolerationSeconds`,
+ 则 Pod 还会一直在这个节点上运行。
+ * 如果 Pod 能够容忍这类污点,而且指定了 `tolerationSeconds`,
+ 则 Pod 还能在这个节点上继续运行这个指定的时间长度。
+ 这段时间过去后,节点生命周期控制器从节点驱除这些 Pod。
+
+
+`NoSchedule`
+: 除非具有匹配的容忍度规约,否则新的 Pod 不会被调度到带有污点的节点上。
+ 当前正在节点上运行的 Pod **不会**被驱逐。
+
+
+`PreferNoSchedule`
+: `PreferNoSchedule` 是“偏好”或“软性”的 `NoSchedule`。
+ 控制平面将**尝试**避免将不能容忍污点的 Pod 调度到对应的节点上,但不能保证完全避免。
通常情况下,如果给一个节点添加了一个 effect 值为 `NoExecute` 的污点,
-则任何不能忍受这个污点的 Pod 都会马上被驱逐,任何可以忍受这个污点的 Pod 都不会被驱逐。
+则任何不能容忍这个污点的 Pod 都会马上被驱逐,任何可以容忍这个污点的 Pod 都不会被驱逐。
但是,如果 Pod 存在一个 effect 值为 `NoExecute` 的容忍度指定了可选属性
`tolerationSeconds` 的值,则表示在给节点添加了上述污点之后,
Pod 还能继续在节点上运行的时间。例如,
@@ -327,7 +365,7 @@ manually add tolerations to your pods.
* **Taint based Evictions**: A per-pod-configurable eviction behavior
when there are node problems, which is described in the next section.
-->
-* **基于污点的驱逐**: 这是在每个 Pod 中配置的在节点出现问题时的驱逐行为,
+* **基于污点的驱逐**:这是在每个 Pod 中配置的在节点出现问题时的驱逐行为,
接下来的章节会描述这个特性。
-前文提到过污点的效果值 `NoExecute` 会影响已经在节点上运行的如下 Pod:
-
-* 如果 Pod 不能忍受这类污点,Pod 会马上被驱逐。
-* 如果 Pod 能够忍受这类污点,但是在容忍度定义中没有指定 `tolerationSeconds`,
- 则 Pod 还会一直在这个节点上运行。
-* 如果 Pod 能够忍受这类污点,而且指定了 `tolerationSeconds`,
- 则 Pod 还能在这个节点上继续运行这个指定的时间长度。
-
在节点被排空时,节点控制器或者 kubelet 会添加带有 `NoExecute` 效果的相关污点。
+此效果被默认添加到 `node.kubernetes.io/not-ready` 和 `node.kubernetes.io/unreachable` 污点中。
如果异常状态恢复正常,kubelet 或节点控制器能够移除相关的污点。
```yaml
---
apiVersion: v1
@@ -164,12 +184,16 @@ your cluster. Those fields are:
{{< note >}}
-  The `minDomains` field is a beta field and is disabled by default in 1.25.
-  You can enable it by enabling the `MinDomainsInPodTopologySpread`
-  [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/).
+  The `MinDomainsInPodTopologySpread`
+  [feature gate](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)
+  enables `minDomains` for Pod topology spread. Starting from v1.28, the
+  `MinDomainsInPodTopologySpread` feature gate is enabled by default. In older
+  Kubernetes clusters, it might be explicitly disabled or this field might not be available.
{{< /note >}}
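One rough way to verify how the gate is set on a kubeadm-style cluster (an assumption; the component label varies by installer) is to inspect the scheduler's command line:

```bash
# Look for an explicit --feature-gates flag on the kube-scheduler static Pod
kubectl -n kube-system get pods -l component=kube-scheduler \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep -i feature-gates
```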
## Changing the package repository {#changing-the-package-repository}
-If you are using the Kubernetes community-owned package repositories,
-you need to change the package repository to one that contains packages for your desired Kubernetes minor version.
+If you are using the community-owned package repositories (pkgs.k8s.io),
+you need to enable the package repository for your desired Kubernetes minor version.
This is explained in detail in the [Changing the Kubernetes package repository](/zh-cn/docs/tasks/administer-cluster/kubeadm/change-package-repository/) document.
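As an illustration, on a Debian-based host the repository change amounts to editing the pinned minor version in the repo definition (a sketch; both versions are hypothetical):

```bash
# Repoint the apt definition from v1.27 to v1.28, then refresh the package index
sudo sed -i 's|/core:/stable:/v1.27/|/core:/stable:/v1.28/|' /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
```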
Below is the configuration file for the corresponding Pod:
-{{% code file="pods/lifecycle-events.yaml" %}}
+{{% code_sample file="pods/lifecycle-events.yaml" %}}
Create the Secret:
From 1bbf5df880165569e49286b956897b1c3fdd559d Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Fri, 6 Oct 2023 00:45:46 +0100
Subject: [PATCH 140/229] Fix typo
---
.../index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
index eebdcf6110a1d..43b06fb9f4d7d 100644
--- a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
+++ b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
@@ -35,7 +35,7 @@ While this single VM lab is a simplified example, the below diagram more approxi
{{< figure src="example_production_topology.svg" alt="Example production topology which shows 3 control plane Kubernetes nodes and 'n' worker nodes along with a Docker registry in an air-gapped environment. Additionally shows two workstations, one on each side of the air gap and an IT admin which physically carries the artifacts across." >}}
-Note, there is still intentional isolation between the envirnment and the internet. There are also some things that are not shown in order to keep the diagram simple, for example malware scanning on the secure side of the air gap.
+Note, there is still intentional isolation between the environment and the internet. There are also some things that are not shown in order to keep the diagram simple, for example malware scanning on the secure side of the air gap.
Back to the single VM lab environment.
From 9fa0ab491f6a1af97791e671e819c44900a2a3fe Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Fri, 6 Oct 2023 00:45:56 +0100
Subject: [PATCH 141/229] Fix Markdown formatting
---
.../index.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
index 43b06fb9f4d7d..ae3309bdb5dde 100644
--- a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
+++ b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
@@ -22,7 +22,7 @@ A real air-gapped network can take some effort to set up, so for this post, I wi
### Local topology
-This VM will have its network connectivity disabled but in a way that doesn't shut down the VM's virtual NIC. Instead, its network will be downed by injecting a default route to a dummy interface, making anything internet-hosted unreachable. However, the VM still has a connected route to the bridge interface on the host, which means that network connectivity to the host is still working. This posture means that data can be transferred from the host/laptop to the VM via scp, even with the default route on the VM black-holing all traffic that isn't destined for the local bridge subnet. This type of transfer is analogous to carrying data across the air gap and will be used throughout this post.
+This VM will have its network connectivity disabled but in a way that doesn't shut down the VM's virtual NIC. Instead, its network will be downed by injecting a default route to a dummy interface, making anything internet-hosted unreachable. However, the VM still has a connected route to the bridge interface on the host, which means that network connectivity to the host is still working. This posture means that data can be transferred from the host/laptop to the VM via `scp`, even with the default route on the VM black-holing all traffic that isn't destined for the local bridge subnet. This type of transfer is analogous to carrying data across the air gap and will be used throughout this post.
Other details about the lab setup:
@@ -612,7 +612,7 @@ export ZARF_VERSION=v0.28.3
curl -LO "https://github.com/defenseunicorns/zarf/releases/download/${ZARF_VERSION}/zarf_${ZARF_VERSION}_Linux_${K8s_ARCH}"
```
Zarf needs to bootstrap itself into a Kubernetes cluster through the use of an init package. That also needs to be transported across the air gap, so let's download it onto the host/laptop:
-```bash
+```bash
curl -LO "https://github.com/defenseunicorns/zarf/releases/download/${ZARF_VERSION}/zarf-init-${K8s_ARCH}-${ZARF_VERSION}.tar.zst"
```
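Before Zarf can be used on the VM, the downloaded binary has to be made executable and put on the `PATH`. One possible way, assuming the working directory still holds the download (the post may do this differently):

```bash
chmod +x "zarf_${ZARF_VERSION}_Linux_${K8s_ARCH}"
sudo mv "zarf_${ZARF_VERSION}_Linux_${K8s_ARCH}" /usr/local/bin/zarf
```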
The way that Zarf is declarative is through the use of a `zarf.yaml` file. Here is the zarf.yaml file that will be used for this Podinfo installation. Write it to whatever directory you have write access to on your host/laptop; your home directory is fine:
From 24e3ef93bc6e159f69787bae2d038bcb80d2376e Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Fri, 6 Oct 2023 16:42:20 +0100
Subject: [PATCH 142/229] Fix hyperlink
---
.../index.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
index ae3309bdb5dde..f10b4eeec7472 100644
--- a/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
+++ b/content/en/blog/_posts/2023-10-12-bootstrap-an-air-gapped-cluster-with-kubeadm/index.md
@@ -144,7 +144,7 @@ reboot
On the laptop/host machine, download all of the artifacts enumerated in the previous section. Since the air-gapped VM is running Fedora 37, all of the dependencies shown in this part are for Fedora 37. Note, this procedure will only work on AArch64 or AMD64 CPU architectures, as they are the most popular and widely available. You can execute this procedure anywhere you have write permissions; your home directory is a perfectly suitable choice.
-Note, operating system packages for the Kubernetes artifacts that need to be carried across can now be found at [pkgs.k8s.io](https://kubernetes.io/blog/2023/08/15/pkgs-k8s-io-introduction/). This blog post will use a combination of Fedora repositories and GitHub in order to download all of the required artifacts. When you’re doing this on your own cluster, you should decide whether to use the official Kubernetes packages, or the official packages from your operating system distribution - both are valid choices.
+Note, operating system packages for the Kubernetes artifacts that need to be carried across can now be found at [pkgs.k8s.io](/blog/2023/08/15/pkgs-k8s-io-introduction/). This blog post will use a combination of Fedora repositories and GitHub in order to download all of the required artifacts. When you’re doing this on your own cluster, you should decide whether to use the official Kubernetes packages, or the official packages from your operating system distribution - both are valid choices.
From 8ecb73f2df065410ab1b334016e70e8fee197772 Mon Sep 17 00:00:00 2001
From: Michael
Date: Thu, 12 Oct 2023 21:49:56 +0800
Subject: [PATCH 143/229] [zh] Sync /config-api/kubeadm-config.v1beta3.md
---
.../config-api/kubeadm-config.v1beta3.md | 270 +++++++++---------
1 file changed, 134 insertions(+), 136 deletions(-)
diff --git a/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md
index 87640bbce8e46..cdba0a0859166 100644
--- a/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md
+++ b/content/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3.md
@@ -446,6 +446,140 @@ node only (e.g. the node ip).
- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
- [JoinConfiguration](#kubeadm-k8s-io-v1beta3-JoinConfiguration)
+## `BootstrapToken` {#BootstrapToken}
+
+
+**Appears in:**
+
+- [InitConfiguration](#kubeadm-k8s-io-v1beta3-InitConfiguration)
+
+
+
-
\ No newline at end of file
From 51f26256253dbb4b67ed0b86c0845937904f7833 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marko=20Mudrini=C4=87?=
Date: Thu, 12 Oct 2023 16:10:58 +0200
Subject: [PATCH 144/229] Clarify repositories structure in install kubeadm
document
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: Marko Mudrinić
---
.../tools/kubeadm/install-kubeadm.md | 44 ++++++++++++++-----
1 file changed, 34 insertions(+), 10 deletions(-)
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
index 9f95d67e7e3af..505c6993bf71a 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/install-kubeadm.md
@@ -15,6 +15,14 @@ This page shows how to install the `kubeadm` toolbox.
For information on how to create a cluster with kubeadm once you have performed this installation process,
see the [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page.
+This installation guide is for Kubernetes {{< skew currentVersion >}}. If you want to use a different
+Kubernetes version, please refer to the following pages instead:
+
+- [Installing kubeadm for Kubernetes {{< skew currentVersionAddMinor -1 "." >}}](https://v{{< skew currentVersionAddMinor -1 "-" >}}.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
+- [Installing kubeadm for Kubernetes {{< skew currentVersionAddMinor -2 "." >}}](https://v{{< skew currentVersionAddMinor -2 "-" >}}.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
+- [Installing kubeadm for Kubernetes {{< skew currentVersionAddMinor -3 "." >}}](https://v{{< skew currentVersionAddMinor -3 "-" >}}.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
+- [Installing kubeadm for Kubernetes {{< skew currentVersionAddMinor -4 "." >}}](https://v{{< skew currentVersionAddMinor -4 "-" >}}.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
+
## {{% heading "prerequisites" %}}
* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions
@@ -163,6 +171,13 @@ have been frozen starting from September 13, 2023. Please read our
for more details.
{{< /note >}}
+{{< note >}}
+There's a dedicated package repository for each Kubernetes minor version. If you want to install
+a minor version other than {{< skew currentVersion >}}, please see the installation guide for
+your desired minor version. The official Kubernetes package repositories provide downloads for
+Kubernetes versions starting with v1.24.0.
+{{< /note >}}
+
{{< tabs name="k8s_install" >}}
{{% tab name="Debian-based distributions" %}}
@@ -183,7 +198,11 @@ These instructions are for Kubernetes {{< skew currentVersion >}}.
curl -fsSL https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
-3. Add the appropriate Kubernetes `apt` repository:
+3. Add the appropriate Kubernetes `apt` repository. Please note that this repository has packages
+   only for Kubernetes {{< skew currentVersion >}}; for other Kubernetes minor versions, you need to
+   change the Kubernetes minor version in the URL to match your desired minor version
+   (you should also check that you are reading the documentation for the version of Kubernetes
+   that you plan to install).
```shell
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
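# A sketch of the line this step writes (the URL pins a single minor version;
# change it to switch to a different minor version):
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list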
@@ -216,19 +235,24 @@ you can create it by running `sudo mkdir -m 755 /etc/apt/keyrings`
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
- {{< caution >}}
- - Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
- effectively disables it. This is required to allow containers to access the host
- filesystem; for example, some cluster network plugins require that. You have to
- do this until SELinux support is improved in the kubelet.
- - You can leave SELinux enabled if you know how to configure it but it may require
- settings that are not supported by kubeadm.
- {{< /caution >}}
+{{< caution >}}
+- Setting SELinux in permissive mode by running `setenforce 0` and `sed ...`
+effectively disables it. This is required to allow containers to access the host
+filesystem; for example, some cluster network plugins require that. You have to
+do this until SELinux support is improved in the kubelet.
+- You can leave SELinux enabled if you know how to configure it but it may require
+settings that are not supported by kubeadm.
+{{< /caution >}}
2. Add the Kubernetes `yum` repository. The `exclude` parameter in the
repository definition ensures that the packages related to Kubernetes are
not upgraded upon running `yum update` as there's a special procedure that
- must be followed for upgrading Kubernetes.
+  must be followed for upgrading Kubernetes. Please note that this repository
+  has packages only for Kubernetes {{< skew currentVersion >}}; for other
+  Kubernetes minor versions, you need to change the Kubernetes minor version
+  in the URL to match your desired minor version (you should also check that
+  you are reading the documentation for the version of Kubernetes that you
+  plan to install).
```shell
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
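# A sketch of the repo definition this step writes; the baseurl pins a single minor version:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF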
From a8d020978879b79c5d83191d3da82fe497670f73 Mon Sep 17 00:00:00 2001
From: Arhell
Date: Fri, 13 Oct 2023 02:30:25 +0300
Subject: [PATCH 145/229] [de] Fix sha256 url missing '/release/' to download
kubectl
---
content/de/docs/tasks/tools/install-kubectl-linux.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/content/de/docs/tasks/tools/install-kubectl-linux.md b/content/de/docs/tasks/tools/install-kubectl-linux.md
index 93126b9e4a7d3..78d31379f87ae 100644
--- a/content/de/docs/tasks/tools/install-kubectl-linux.md
+++ b/content/de/docs/tasks/tools/install-kubectl-linux.md
@@ -51,7 +51,7 @@ To install kubectl on Linux, the following options are available:
   Download the kubectl checksum file:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
```
Validate the kubectl binary against the checksum file:
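The validation command itself falls outside this hunk's context; for reference, it is the same as on the English page:

```bash
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```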
@@ -236,7 +236,7 @@ The following describes how the autocompletions for Fish and Zsh
   Download the kubectl-convert checksum file:
```bash
- curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
+ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
```
Validate the kubectl-convert binary against the checksum file:
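Again for reference, the validation command outside this hunk's context:

```bash
echo "$(cat kubectl-convert.sha256)  kubectl-convert" | sha256sum --check
```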
From 3f10e91511d05d022d8cd5ebed80a7fd76dd3e3d Mon Sep 17 00:00:00 2001
From: windsonsea
Date: Thu, 12 Oct 2023 09:33:47 +0800
Subject: [PATCH 146/229] Add a blog post recapping the 2023 China Kubernetes
 Contributor Summit
---
.../_posts/2023-10-20-kcs-shanghai/index.md | 114 ++++++++++++++++++
.../_posts/2023-10-20-kcs-shanghai/kcs03.jpeg | Bin 0 -> 2585255 bytes
.../_posts/2023-10-20-kcs-shanghai/kcs04.jpeg | Bin 0 -> 4057822 bytes
.../_posts/2023-10-20-kcs-shanghai/kcs06.jpeg | Bin 0 -> 3236494 bytes
4 files changed, 114 insertions(+)
create mode 100644 content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md
create mode 100644 content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs03.jpeg
create mode 100644 content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs04.jpeg
create mode 100644 content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs06.jpeg
diff --git a/content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md b/content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md
new file mode 100644
index 0000000000000..e32bdcb0df615
--- /dev/null
+++ b/content/en/blog/_posts/2023-10-20-kcs-shanghai/index.md
@@ -0,0 +1,114 @@
+---
+layout: blog
+title: "A Quick Recap of 2023 China Kubernetes Contributor Summit"
+slug: kcs-shanghai
+date: 2023-10-20
+canonicalUrl: https://www.kubernetes.dev/blog/2023/10/20/kcs-shanghai/
+---
+
+**Authors:** Paco Xu and Michael Yao (DaoCloud)
+
+On September 26, 2023, the first day of
+[KubeCon + CloudNativeCon + Open Source Summit China 2023](https://www.lfasiallc.com/kubecon-cloudnativecon-open-source-summit-china/),
+nearly 50 contributors gathered in Shanghai for the Kubernetes Contributor Summit.
+
+{{< figure src="/blog/2023/10/20/kcs-shanghai/kcs04.jpeg" alt="All participants in the 2023 Kubernetes Contributor Summit" caption="All participants in the 2023 Kubernetes Contributor Summit" >}}
+
+This marked the first in-person gathering held in China after three years of the pandemic.
+
+## A joyful meetup
+
+The event began with welcome speeches from [Kevin Wang](https://github.com/kevin-wangzefeng) from Huawei Cloud,
+one of the co-chairs of KubeCon, and [Puja](https://github.com/puja108) from Giant Swarm.
+
+Following the opening remarks, the contributors introduced themselves briefly. Most attendees were from China,
+while some contributors had made the journey from Europe and the United States specifically for the conference.
+Technical experts from companies such as Microsoft, Intel, and Huawei, as well as emerging forces like DaoCloud,
+were present. Laughter and cheerful voices filled the room, whether English was spoken with
+European or American accents or conversations were carried out in Chinese. This created
+an atmosphere of comfort, joy, respect, and anticipation. Past contributions brought everyone closer, and
+mutual recognition and accomplishments made this offline gathering possible.
+
+{{< figure src="/blog/2023/10/20/kcs-shanghai/kcs06.jpeg" alt="Face to face meeting in Shanghai" caption="Face to face meeting in Shanghai" >}}
+
+The attending contributors were no longer just GitHub IDs; they transformed into vivid faces.
+From sitting together and capturing group photos to attempting to identify "Who is who,"
+a loosely connected collective emerged. This team structure, although loosely knit and free-spirited,
+was established to pursue shared dreams.
+
+As the saying goes, "You reap what you sow." Each effort has been diligently documented within
+the Kubernetes community contributions. Regardless of the passage of time, the community will
+not erase those shining traces. Brilliance can be found in your PRs, issues, or comments.
+It can also be seen in the smiling faces captured in meetup photos or heard through stories
+passed down among contributors.
+
+## Technical sharing and discussions
+
+Next, there were three technical sharing sessions:
+
+- [sig-multi-cluster](https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md):
+ [Hongcai Ren](https://github.com/RainbowMango), a maintainer of Karmada, provided an introduction to
+ the responsibilities and roles of this SIG. Their focus is on designing, discussing, implementing,
+ and maintaining APIs, tools, and documentation related to multi-cluster management.
+ Cluster Federation, one of Karmada's core concepts, is also part of their work.
+
+- [helmfile](https://github.com/helmfile/helmfile): [yxxhero](https://github.com/yxxhero)
+ from [GitLab](https://gitlab.cn/) presented how to deploy Kubernetes manifests declaratively,
+ customize configurations, and leverage the latest features of Helm, including Helmfile.
+
+- [sig-scheduling](https://github.com/kubernetes/community/blob/master/sig-scheduling/README.md):
+ [william-wang](https://github.com/william-wang) from Huawei Cloud shared the recent updates and
+ future plans of SIG Scheduling. This SIG is responsible for designing, developing, and testing
+ components related to Pod scheduling.
+
+{{< figure src="/blog/2023/10/20/kcs-shanghai/kcs03.jpeg" alt="A technical session about sig-multi-cluster" caption="A technical session about sig-multi-cluster" >}}
+
+Following the sessions, a video featuring a call for contributors by [Sergey Kanzhelev](https://github.com/SergeyKanzhelev),
+the SIG-Node Chair, was played. The purpose was to encourage more contributors to join the Kubernetes community,
+with a special emphasis on the popular SIG-Node.
+
+Lastly, Kevin hosted an Unconference collective discussion session covering topics such as
+multi-cluster management, scheduling, elasticity, AI, and more. For detailed minutes of
+the Unconference meeting, please refer to .
+
+## China's contributor statistics
+
+The contributor summit took place in Shanghai, with 90% of the attendees being Chinese.
+Within the Cloud Native Computing Foundation (CNCF) ecosystem, contributions from China have been steadily increasing. Currently:
+
+- Chinese contributors account for 9% of the total.
+- Contributions from China make up 11.7% of the overall volume.
+- China ranks second globally in terms of contributions.
+
+{{< note >}}
+The data is from KubeCon keynotes by Chris Aniszczyk, CTO of Cloud Native Computing Foundation,
+on September 26, 2023. This probably understates Chinese contributions: many Chinese
+contributors use VPNs and may not be counted accurately as coming from China in these statistics.
+{{< /note >}}
+
+The Kubernetes Contributor Summit is an inclusive meetup that welcomes all community contributors, including:
+
+- New Contributors
+- Current Contributors
+ - docs
+ - code
+ - community management
+- Subproject members
+- Members of Special Interest Groups (SIGs) / Working Groups (WGs)
+- Active Contributors
+- Casual Contributors
+
+## Acknowledgments
+
+We would like to express our gratitude to the organizers of this event:
+
+- [Kevin Wang](https://github.com/kevin-wangzefeng), the co-chair of KubeCon and the lead of the Kubernetes Contributor Summit.
+- [Paco Xu](https://github.com/pacoxu), who coordinated the venue and meals, invited contributors from China and
+  abroad, and set up WeChat groups to collect agenda topics. They also shared details of the event
+  before and after it took place through [pre and post announcements](https://github.com/kubernetes/community/issues/7510).
+- [Mengjiao Liu](https://github.com/mengjiao-liu), who was responsible for organizing, coordinating,
+ and facilitating various matters related to the summit.
+
+We extend our appreciation to all the contributors who attended the China Kubernetes Contributor Summit in Shanghai.
+Your dedication and commitment to the Kubernetes community are invaluable.
+Together, we continue to push the boundaries of cloud native technology and shape the future of this ecosystem.
diff --git a/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs03.jpeg b/content/en/blog/_posts/2023-10-20-kcs-shanghai/kcs03.jpeg
new file mode 100644
index 0000000000000000000000000000000000000000..c6131bfc911f22d03436b7e2021f24639574cc11
GIT binary patch
literal 2585255