[zh-cn] sync kubeadm-reconfigure topology-manager configure-multiple-schedulers configure-liveness-readiness

Signed-off-by: xin.li <[email protected]>
my-git9 committed Oct 15, 2023
1 parent 52ecdb6 commit 3795662
Showing 4 changed files with 62 additions and 43 deletions.
content/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-reconfigure.md
@@ -241,7 +241,7 @@ The configuration is located under the `data.kubelet` key.
To reflect the change on kubeadm nodes you must do the following:
- Log in to a kubeadm node
- Run `kubeadm upgrade node phase kubelet-config` to download the latest `kubelet-config`
- ConfigMap contents into the local file `/var/lib/kubelet/config.conf`
+ ConfigMap contents into the local file `/var/lib/kubelet/config.yaml`
- Edit the file `/var/lib/kubelet/kubeadm-flags.env` to apply additional configuration with
flags
- Restart the kubelet service with `systemctl restart kubelet`
@@ -252,7 +252,7 @@ flags

- 登录到 kubeadm 节点
- 运行 `kubeadm upgrade node phase kubelet-config` 下载最新的
- `kubelet-config` ConfigMap 内容到本地文件 `/var/lib/kubelet/config.conf`
+ `kubelet-config` ConfigMap 内容到本地文件 `/var/lib/kubelet/config.yaml`
- 编辑文件 `/var/lib/kubelet/kubeadm-flags.env` 以使用标志来应用额外的配置
- 使用 `systemctl restart kubelet` 重启 kubelet 服务
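The steps above amount to something like the following shell session, a minimal sketch that assumes a systemd-managed kubelet and root access on the node:

```shell
# Minimal sketch of the reconfiguration steps above; assumes a systemd-managed kubelet.
# 1. Pull the cluster-wide kubelet configuration down to this node:
sudo kubeadm upgrade node phase kubelet-config

# 2. (Optional) apply node-specific overrides via flags:
sudo vi /var/lib/kubelet/kubeadm-flags.env

# 3. Restart the kubelet so the new configuration takes effect:
sudo systemctl restart kubelet
```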

@@ -266,15 +266,16 @@ Do these changes one node at a time to allow workloads to be rescheduled properly.
{{< note >}}
<!--
During `kubeadm upgrade`, kubeadm downloads the `KubeletConfiguration` from the
- `kubelet-config` ConfigMap and overwrite the contents of `/var/lib/kubelet/config.conf`.
+ `kubelet-config` ConfigMap and overwrite the contents of `/var/lib/kubelet/config.yaml`.
This means that node local configuration must be applied either by flags in
`/var/lib/kubelet/kubeadm-flags.env` or by manually updating the contents of
- `/var/lib/kubelet/config.conf` after `kubeadm upgrade`, and then restarting the kubelet.
+ `/var/lib/kubelet/config.yaml` after `kubeadm upgrade`, and then restarting the kubelet.
-->
`kubeadm upgrade` 期间,kubeadm 从 `kubelet-config` ConfigMap
- 下载 `KubeletConfiguration` 并覆盖 `/var/lib/kubelet/config.conf` 的内容。
+ 下载 `KubeletConfiguration` 并覆盖 `/var/lib/kubelet/config.yaml` 的内容。
这意味着节点本地配置必须通过 `/var/lib/kubelet/kubeadm-flags.env` 中的标志或在
- `kubeadm upgrade` 后手动更新`/var/lib/kubelet/config.conf`的内容来应用,然后重新启动 kubelet。
+ `kubeadm upgrade` 后手动更新 `/var/lib/kubelet/config.yaml` 的内容来应用,
+ 然后重新启动 kubelet。
{{< /note >}}

<!--
@@ -488,26 +489,26 @@ the set of node specific patches must be updated accordingly.
<!--
#### Persisting kubelet reconfiguration
- Any changes to the `KubeletConfiguration` stored in `/var/lib/kubelet/config.conf` will be overwritten on
+ Any changes to the `KubeletConfiguration` stored in `/var/lib/kubelet/config.yaml` will be overwritten on
`kubeadm upgrade` by downloading the contents of the cluster wide `kubelet-config` ConfigMap.
- To persist kubelet node specific configuration either the file `/var/lib/kubelet/config.conf`
+ To persist kubelet node specific configuration either the file `/var/lib/kubelet/config.yaml`
has to be updated manually post-upgrade or the file `/var/lib/kubelet/kubeadm-flags.env` can include flags.
The kubelet flags override the associated `KubeletConfiguration` options, but note that
some of the flags are deprecated.
- A kubelet restart will be required after changing `/var/lib/kubelet/config.conf` or
+ A kubelet restart will be required after changing `/var/lib/kubelet/config.yaml` or
`/var/lib/kubelet/kubeadm-flags.env`.
-->
#### 持久化 kubelet 重新配置

- 对存储在 `/var/lib/kubelet/config.conf` 中的 `KubeletConfiguration`
+ 对存储在 `/var/lib/kubelet/config.yaml` 中的 `KubeletConfiguration`
所做的任何更改都将在 `kubeadm upgrade` 时因为下载集群范围内的 `kubelet-config`
ConfigMap 的内容而被覆盖。
- 要持久保存 kubelet 节点特定的配置,文件 `/var/lib/kubelet/config.conf`
- 必须在升级后手动更新,或者文件 `/var/lib/kubelet/kubeadm-flags.env` 可以包含标志。
+ 要持久保存 kubelet 节点特定的配置,文件 `/var/lib/kubelet/config.yaml`
+ 必须在升级后手动更新,或者文件 `/var/lib/kubelet/kubeadm-flags.env` 可以包含标志。
kubelet 标志会覆盖相关的 `KubeletConfiguration` 选项,但请注意,有些标志已被弃用。

- 更改 `/var/lib/kubelet/config.conf` 或 `/var/lib/kubelet/kubeadm-flags.env`
+ 更改 `/var/lib/kubelet/config.yaml` 或 `/var/lib/kubelet/kubeadm-flags.env`
后需要重启 kubelet。
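As an illustration of the flag-based override described above, a node-local `/var/lib/kubelet/kubeadm-flags.env` might look like the sketch below; the specific flags and values are placeholders, not a recommendation:

```shell
# Hypothetical /var/lib/kubelet/kubeadm-flags.env; flag values are placeholders.
# Flags in KUBELET_KUBEADM_ARGS take precedence over the matching fields in
# /var/lib/kubelet/config.yaml and survive `kubeadm upgrade` rewriting that file.
KUBELET_KUBEADM_ARGS="--node-ip=10.0.0.5 --v=2"

# After editing, restart the kubelet for the flags to take effect:
# sudo systemctl restart kubelet
```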


41 changes: 24 additions & 17 deletions content/zh-cn/docs/tasks/administer-cluster/topology-manager.md
@@ -349,10 +349,10 @@ kubelet 将调用每个建议提供者以确定资源可用性。
如果亲和性不是首选,则拓扑管理器将存储该亲和性,并且无论如何都将 Pod 接纳到该节点。

<!--
- The *Hint Providers* can then use this information when making the
+ The *Hint Providers* can then use this information when making the
resource allocation decision.
-->
- 之后 **建议提供者** 可以在进行资源分配决策时使用这个信息。
+ 之后**建议提供者**可以在进行资源分配决策时使用这个信息。

<!--
### restricted policy {#policy-restricted}
@@ -382,10 +382,10 @@ have the `Topology Affinity` error.
还可以通过实现外部控制环,以触发重新部署具有 `Topology Affinity` 错误的 Pod。

<!--
- If the pod is admitted, the *Hint Providers* can then use this information when making the
+ If the pod is admitted, the *Hint Providers* can then use this information when making the
resource allocation decision.
-->
- 如果 Pod 被允许运行在某节点,则 **建议提供者** 可以在做出资源分配决定时使用此信息。
+ 如果 Pod 被允许运行在某节点,则**建议提供者**可以在做出资源分配决定时使用此信息。

<!--
### single-numa-node policy {#policy-single-numa-node}
@@ -421,30 +421,38 @@
### Topology manager policy options
Support for the Topology Manager policy options requires `TopologyManagerPolicyOptions`
- [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled.
+ [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to be enabled
+ (it is enabled by default).
-->
### 拓扑管理器策略选项 {#topology-manager-policy-options}

对拓扑管理器策略选项的支持需要启用 `TopologyManagerPolicyOptions`
- [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)。
+ [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/)(默认启用)。

<!--
You can toggle groups of options on and off based upon their maturity level using the following feature gates:
- * `TopologyManagerPolicyBetaOptions` default disabled. Enable to show beta-level options. Currently there are no beta-level options.
- * `TopologyManagerPolicyAlphaOptions` default disabled. Enable to show alpha-level options. You will still have to enable each option using the `TopologyManagerPolicyOptions` kubelet option.
+ * `TopologyManagerPolicyBetaOptions` default enabled. Enable to show beta-level options.
+ * `TopologyManagerPolicyAlphaOptions` default disabled. Enable to show alpha-level options.
-->
你可以使用以下特性门控根据成熟度级别打开和关闭这些选项组:
- * `TopologyManagerPolicyBetaOptions` 默认禁用。启用以显示 Beta 级别选项。目前没有 Beta 级别选项。
- * `TopologyManagerPolicyAlphaOptions` 默认禁用。启用以显示 Alpha 级别选项。你仍然需要使用
-   `TopologyManagerPolicyOptions` kubelet 选项来启用每个选项。
+ * `TopologyManagerPolicyBetaOptions` 默认启用。启用以显示 Beta 级别选项。
+ * `TopologyManagerPolicyAlphaOptions` 默认禁用。启用以显示 Alpha 级别选项。

+ <!--
+ You will still have to enable each option using the `TopologyManagerPolicyOptions` kubelet option.
+ -->
+ 你仍然需要使用 `TopologyManagerPolicyOptions` kubelet 选项来启用每个选项。

<!--
The following policy options exist:
- * `prefer-closest-numa-nodes` (alpha, invisible by default, `TopologyManagerPolicyOptions` and `TopologyManagerPolicyAlphaOptions` feature gates have to be enabled)(1.26 or higher)
+ * `prefer-closest-numa-nodes` (beta, visible by default, `TopologyManagerPolicyOptions` and `TopologyManagerPolicyBetaOptions` feature gates have to be enabled).
+   The `prefer-closest-numa-nodes` policy option is beta in Kubernetes {{< skew currentVersion >}}.
-->
存在以下策略选项:
- * `prefer-closest-numa-nodes`(Alpha,默认不可见,`TopologyManagerPolicyOptions` 和
-   `TopologyManagerPolicyAlphaOptions` 特性门控必须被启用)(1.26 或更高版本)
+ * `prefer-closest-numa-nodes`(Beta,默认可见,`TopologyManagerPolicyOptions` 和
+   `TopologyManagerPolicyBetaOptions` 特性门控必须被启用)。
+   `prefer-closest-numa-nodes` 策略选项在 Kubernetes {{< skew currentVersion >}}
+   中是 Beta 版。
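For concreteness, enabling this policy option on the kubelet command line might look like the sketch below. The flag spellings follow the kubelet's `--topology-manager-*` flags; in practice these settings usually go into the `KubeletConfiguration` file instead:

```shell
# Illustrative kubelet flags only; production setups normally set the equivalent
# fields in the KubeletConfiguration file rather than on the command line.
kubelet \
  --topology-manager-policy=best-effort \
  --topology-manager-policy-options=prefer-closest-numa-nodes=true \
  --feature-gates=TopologyManagerPolicyOptions=true,TopologyManagerPolicyBetaOptions=true
```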

<!--
If the `prefer-closest-numa-nodes` policy option is specified, the `best-effort` and `restricted`
@@ -580,7 +588,7 @@ This pod runs in the `BestEffort` QoS class because there are no CPU and memory requests.

<!--
The Topology Manager would consider the above pods. The Topology Manager would consult the Hint
- Providers, which are CPU and Device Manager to get topology hints for the pods.
+ Providers, which are CPU and Device Manager to get topology hints for the pods.

In the case of the `Guaranteed` pod with integer CPU request, the `static` CPU Manager policy
would return topology hints relating to the exclusive CPU and the Device Manager would send back
@@ -615,7 +623,7 @@ of the requested devices.
<!--
Using this information the Topology Manager calculates the optimal hint for the pod and stores
this information, which will be used by the Hint Providers when they are making their resource
- assignments.
+ assignments.
-->
基于此信息,拓扑管理器将为 Pod 计算最佳提示并存储该信息,并且供
提示提供程序在进行资源分配时使用。
@@ -636,4 +644,3 @@
1. 拓扑管理器所能处理的最大 NUMA 节点个数是 8。若 NUMA 节点数超过 8,
枚举可能的 NUMA 亲和性并为之生成提示时会发生状态爆炸。
2. 调度器无法感知拓扑,所以有可能一个 Pod 被调度到一个节点之后,会因为拓扑管理器的缘故在该节点上启动失败。

content/zh-cn/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md
@@ -330,7 +330,7 @@ can't, it is considered a failure.
<!--
As you can see, configuration for a TCP check is quite similar to an HTTP check.
This example uses both readiness and liveness probes. The kubelet will send the
- first readiness probe 5 seconds after the container starts. This will attempt to
+ first readiness probe 15 seconds after the container starts. This will attempt to
connect to the `goproxy` container on port 8080. If the probe succeeds, the Pod
will be marked as ready. The kubelet will continue to run this check every 10
seconds.
@@ -344,7 +344,7 @@ will be restarted.
To try the TCP liveness check, create a Pod:
-->
如你所见,TCP 检测的配置和 HTTP 检测非常相似。
- 下面这个例子同时使用就绪和存活探针。kubelet 会在容器启动 5 秒后发送第一个就绪探针。
+ 下面这个例子同时使用就绪和存活探针。kubelet 会在容器启动 15 秒后发送第一个就绪探针。
探针会尝试连接 `goproxy` 容器的 8080 端口。
如果探测成功,这个 Pod 会被标记为就绪状态,kubelet 将继续每隔 10 秒运行一次探测。
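The manifest this hunk refers to is not shown in the diff; a minimal sketch consistent with the updated text could look like this (the image tag and the liveness timings are assumptions, not the canonical example file):

```shell
# Sketch of the TCP readiness/liveness Pod described above; the image and the
# liveness timings are assumptions based on the surrounding text.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: goproxy
spec:
  containers:
  - name: goproxy
    image: registry.k8s.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15   # first readiness probe 15s after the container starts
      periodSeconds: 10         # repeated every 10 seconds
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
EOF
```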

@@ -635,8 +635,9 @@ liveness and readiness checks:
<!--
* `initialDelaySeconds`: Number of seconds after the container has started before startup,
liveness or readiness probes are initiated. If a startup probe is defined, liveness and
- readiness probe delays do not begin until the startup probe has succeeded.
- Defaults to 0 seconds. Minimum value is 0.
+ readiness probe delays do not begin until the startup probe has succeeded. If the value of
+ `periodSeconds` is greater than `initialDelaySeconds` then the `initialDelaySeconds` would be
+ ignored. Defaults to 0 seconds. Minimum value is 0.
* `periodSeconds`: How often (in seconds) to perform the probe. Defaults to 10 seconds.
The minimum value is 1.
* `timeoutSeconds`: Number of seconds after which the probe times out.
@@ -647,7 +648,8 @@ liveness and readiness checks:
-->
* `initialDelaySeconds`:容器启动后要等待多少秒后才启动启动、存活和就绪探针。
如果定义了启动探针,则存活探针和就绪探针的延迟将在启动探针已成功之后才开始计算。
- 默认是 0 秒,最小值是 0。
+ 如果 `periodSeconds` 的值大于 `initialDelaySeconds`,则 `initialDelaySeconds`
+ 将被忽略。默认是 0 秒,最小值是 0。
* `periodSeconds`:执行探测的时间间隔(单位是秒)。默认是 10 秒。最小值是 1。
* `timeoutSeconds`:探测的超时后等待多少秒。默认值是 1 秒。最小值是 1。
* `successThreshold`:探针在失败后,被视为成功的最小连续成功数。默认值是 1。
content/zh-cn/docs/tasks/extend-kubernetes/configure-multiple-schedulers.md
@@ -82,13 +82,23 @@ Save the file as `Dockerfile`, build the image and push it to a registry. This example
pushes the image to
[Google Container Registry (GCR)](https://cloud.google.com/container-registry/).
For more details, please read the GCR
- [documentation](https://cloud.google.com/container-registry/docs/).
+ [documentation](https://cloud.google.com/container-registry/docs/). Alternatively
+ you can also use [Docker Hub](https://hub.docker.com/search?q=). For more details
+ refer to the Docker Hub [documentation](https://docs.docker.com/docker-hub/repos/create/#create-a-repository).
-->
将文件保存为 `Dockerfile`,构建镜像并将其推送到镜像仓库。
此示例将镜像推送到 [Google 容器镜像仓库(GCR)](https://cloud.google.com/container-registry/)
有关详细信息,请阅读 GCR [文档](https://cloud.google.com/container-registry/docs/)
+ 或者,你也可以使用 [Docker Hub](https://hub.docker.com/search?q=)。
+ 有关更多详细信息,请参阅 Docker Hub
+ [文档](https://docs.docker.com/docker-hub/repos/create/#create-a-repository)。

+ <!--
+ # The image name and the repository
+ # used here are just examples
+ -->
```shell
+ # 这里使用的镜像名称和仓库只是一个例子
docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 .
gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0
```
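If you publish to Docker Hub instead of GCR, the equivalent commands would look roughly like the following; `my-dockerhub-user` is a placeholder namespace:

```shell
# Docker Hub variant; "my-dockerhub-user" is a placeholder namespace.
docker login
docker build -t my-dockerhub-user/my-kube-scheduler:1.0 .
docker push my-dockerhub-user/my-kube-scheduler:1.0
```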
@@ -326,7 +336,7 @@ scheduler in that pod spec. Let's look at three examples.
<!--
Verify that all three pods are running.
-->
- 确认所有三个 pod 都在运行。
+ 确认所有三个 Pod 都在运行。

```shell
kubectl get pods
```

@@ -337,7 +347,7 @@
<!--
### Verifying that the pods were scheduled using the desired schedulers
-->
- ### 验证是否使用所需的调度器调度了 pod
+ ### 验证是否使用所需的调度器调度了 Pod

<!--
In order to make it easier to work through these examples, we did not verify that the
@@ -352,15 +362,15 @@ scheduled as well.
为了更容易地完成这些示例,我们没有验证 Pod 实际上是使用所需的调度程序调度的。
我们可以通过更改 Pod 的顺序和上面的部署配置提交来验证这一点。
如果我们在提交调度器部署配置之前将所有 Pod 配置提交给 Kubernetes 集群,
- 我们将看到注解了 `annotation-second-scheduler` 的 Pod 始终处于 Pending 状态,
+ 我们将看到注解了 `annotation-second-scheduler` 的 Pod 始终处于 `Pending` 状态,
而其他两个 Pod 被调度。
一旦我们提交调度器部署配置并且我们的新调度器开始运行,注解了
- `annotation-second-scheduler` 的 pod 就能被调度。
+ `annotation-second-scheduler` 的 Pod 就能被调度。
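One quick way to confirm which scheduler a pod requested is to read its `.spec.schedulerName` field directly; the pod name below comes from the example above, and the jsonpath query is generic `kubectl`:

```shell
# Print the scheduler this pod asked for; expect the second scheduler's name.
kubectl get pod annotation-second-scheduler -o jsonpath='{.spec.schedulerName}'
```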
<!--
Alternatively, you can look at the "Scheduled" entries in the event logs to
verify that the pods were scheduled by the desired schedulers.
-->
- 或者,可以查看事件日志中的 Scheduled 条目,以验证是否由所需的调度器调度了 Pod。
+ 或者,可以查看事件日志中的 `Scheduled` 条目,以验证是否由所需的调度器调度了 Pod。

```shell
kubectl get events
```

@@ -372,5 +382,4 @@
on the relevant control plane nodes.
-->
你也可以使用[自定义调度器配置](/zh-cn/docs/reference/scheduling/config/#multiple-profiles)
- 或自定义容器镜像,用于集群的主调度器,方法是在相关控制平面节点上修改其静态 pod 清单。
+ 或自定义容器镜像,用于集群的主调度器,方法是在相关控制平面节点上修改其静态 Pod 清单。
