From 585bcee18811ee6ec6152f621c1d34a24500d3ca Mon Sep 17 00:00:00 2001
From: Jihoon Seo
Date: Tue, 22 Jun 2021 18:32:02 +0900
Subject: [PATCH 001/138] [vi] Remove exec permission on markdown files

---
 content/vi/docs/concepts/architecture/_index.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 mode change 100755 => 100644 content/vi/docs/concepts/architecture/_index.md

diff --git a/content/vi/docs/concepts/architecture/_index.md b/content/vi/docs/concepts/architecture/_index.md
old mode 100755
new mode 100644

From 79364dad1145ae9fad78c5c43ffbdf5584266be7 Mon Sep 17 00:00:00 2001
From: Akihito INOH
Date: Wed, 23 Jun 2021 14:35:44 +0900
Subject: [PATCH 002/138] Add script to print URLs updated by target PR

---
 scripts/get-changed-urls.sh | 48 +++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100755 scripts/get-changed-urls.sh

diff --git a/scripts/get-changed-urls.sh b/scripts/get-changed-urls.sh
new file mode 100755
index 0000000000000..4ccb57d7e1236
--- /dev/null
+++ b/scripts/get-changed-urls.sh
@@ -0,0 +1,48 @@
+#!/bin/bash
+
+# This script lists the URLs changed by the target PR. You can jump to each changed page quickly by clicking its URL.
+# Syntax: get-changed-urls.sh <pr-number> [<endpoint>]
+# Example:
+#   $ get-changed-urls.sh 28542
+#   http://localhost:1313/ja/docs/concepts/configuration/manage-resources-containers
+#   http://localhost:1313/ja/docs/concepts/policy/limit-range
+#   http://localhost:1313/ja/docs/concepts/policy/resource-quotas
+#   http://localhost:1313/ja/docs/reference/command-line-tools-reference/feature-gates
+#   http://localhost:1313/ja/docs/setup/best-practices/cluster-large
+#
+# If no <endpoint> is passed, it will assume "http://localhost:1313".
+# It's the default endpoint for "make container-serve" or "make serve".
+# Please look at the README (https://github.com/kubernetes/website#using-this-repository) to learn how to serve that endpoint.
+# You can also pass any string as <endpoint>.
+# For example, you can get URLs for upstream by passing "https://kubernetes.io" as <endpoint>.
+
+set -e
+
+if [[ ${1} == "" ]]
+then
+  echo "This script prints all URLs updated by the target PR number."
+  echo "You can use it to jump to the updated pages quickly during PR review."
+  echo "This is intended for use with \"make container-serve\"."
+  echo ""
+  echo "Syntax: get-changed-urls.sh <pr-number> [<endpoint>]"
+  echo "Example: get-changed-urls.sh 12345"
+  echo "If no <endpoint> is passed, it will assume \"http://localhost:1313\""
+  exit 1
+fi
+
+PAGE_TOP=${2:-"http://localhost:1313"}
+
+GITHUB_REPOSITORY="kubernetes/website"
+PR_NUM=${1}
+
+# URL for calling the GitHub API
+# For details: https://docs.github.com/en/rest/reference/pulls#list-pull-requests-files
+GITHUB_API_URL="https://api.github.com/repos/${GITHUB_REPOSITORY}/pulls/${PR_NUM}/files"
+
+# Get the files included in the target PR
+URLS=$(curl --silent -X GET ${GITHUB_API_URL} | jq -r '.[].filename' | \
+  sed -e "s/^content\/en//g" -e "s/^content//g" -e "s/.md$//g" | grep "docs" | xargs -I {} echo ${PAGE_TOP}{})
+for u in ${URLS}
+do
+  echo ${u}
+done

From e0fdee6b0d1c42be79a58cb3d440050b16988d94 Mon Sep 17 00:00:00 2001
From: "Jason Kim (Jun Chul Kim)"
Date: Mon, 22 Nov 2021 15:19:13 +0900
Subject: [PATCH 003/138] Update certificates.md

[kubelet has client and server certificates](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates).
But this page only mentions the kubelet client certificate.
I linked to the [page](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates) because I couldn't find the doc about what those `certain features` are.
Please suggest a better link if there are any.
---
 content/en/docs/setup/best-practices/certificates.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md
index defa8b59f6557..5b065021da44b 100644
--- a/content/en/docs/setup/best-practices/certificates.md
+++ b/content/en/docs/setup/best-practices/certificates.md
@@ -22,6 +22,7 @@ This page explains the certificates that your cluster requires.
Kubernetes requires PKI for the following operations:

* Client certificates for the kubelet to authenticate to the API server
+* Server certificates for the [kubelet endpoint](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates)
* Server certificate for the API server endpoint
* Client certificates for administrators of the cluster to authenticate to the API server
* Client certificates for the API server to talk to the kubelets

From 4650e1df6177a5c985c94711bf1196a4a00ee6be Mon Sep 17 00:00:00 2001
From: Shannon Kularathna
Date: Fri, 13 Aug 2021 21:43:40 +0000
Subject: [PATCH 004/138] Refactor Assign Pods to Nodes

---
 .../scheduling-eviction/assign-pod-node.md    | 462 ++++++++++--------
 content/en/examples/examples_test.go          |   1 +
 .../pods/pod-with-affinity-anti-affinity.yaml |  32 ++
 .../examples/pods/pod-with-node-affinity.yaml |   5 +-
 4 files changed, 286 insertions(+), 214 deletions(-)
 create mode 100644 content/en/examples/pods/pod-with-affinity-anti-affinity.yaml

diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
index 9216ec2ff9e86..14552f906cef9 100644
--- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
+++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -12,158 +12,181 @@ weight: 20

You can constrain a {{< glossary_tooltip text="Pod" term_id="pod" >}} so that it can
only run on particular set of
-{{< glossary_tooltip text="Node(s)" term_id="node" >}}.
+{{< glossary_tooltip text="node(s)" term_id="node" >}}.
There are several ways to do this and the recommended approaches all use
[label selectors](/docs/concepts/overview/working-with-objects/labels/) to facilitate the selection.
Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement
-(e.g. spread your pods across nodes so as not place the pod on a node with insufficient free resources, etc.)
-but there are some circumstances where you may want to control which node the pod deploys to - for example to ensure
-that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different
+(for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
+However, there are some circumstances where you may want to control which node
+the Pod deploys to, for example, to ensure that a Pod ends up on a node with an SSD attached to it, or to co-locate Pods from two different
services that communicate a lot into the same availability zone.
-
-## nodeSelector
+You can use any of the following methods to choose where Kubernetes schedules
+specific Pods:

-`nodeSelector` is the simplest recommended form of node selection constraint.
-`nodeSelector` is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible
-to run on a node, the node must have each of the indicated key-value pairs as labels (it can have
-additional labels as well). The most common usage is one key-value pair.
+ * [nodeSelector](#nodeselector) field matching against [node labels](#built-in-node-labels)
+ * [Affinity and anti-affinity](#affinity-and-anti-affinity)
+ * [nodeName](#nodename) field

-Let's walk through an example of how to use `nodeSelector`.
+## Node labels {#built-in-node-labels}

-### Step Zero: Prerequisites
+Like many other Kubernetes objects, nodes have
+[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node).
+Kubernetes also populates a standard set of labels on all nodes in a cluster. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/)
+for a list of common node labels.

-This example assumes that you have a basic understanding of Kubernetes pods and that you have [set up a Kubernetes cluster](/docs/setup/).
+{{< note >}}
+The value of these labels is cloud provider specific and is not guaranteed to be reliable.
+For example, the value of `kubernetes.io/hostname` may be the same as the node name in some environments
+and a different value in other environments.
+{{< /note >}}

-### Step One: Attach label to the node
+### Node isolation/restriction

-Run `kubectl get nodes` to get the names of your cluster's nodes. Pick out the one that you want to add a label to, and then run `kubectl label nodes <node-name> <label-key>=<label-value>` to add a label to the node you've chosen. For example, if my node name is 'kubernetes-foo-node-1.c.a-robinson.internal' and my desired label is 'disktype=ssd', then I can run `kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal disktype=ssd`.
+Adding labels to nodes allows you to target Pods for scheduling on specific
+nodes or groups of nodes. You can use this functionality to ensure that specific
+Pods only run on nodes with certain isolation, security, or regulatory
+properties.

-You can verify that it worked by re-running `kubectl get nodes --show-labels` and checking that the node now has a label. You can also use `kubectl describe node "nodename"` to see the full list of labels of the given node.
+If you use labels for node isolation, choose label keys that the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}}
+cannot modify. This prevents a compromised node from setting those labels on
+itself so that the scheduler schedules workloads onto the compromised node.

-### Step Two: Add a nodeSelector field to your pod configuration
+The [`NodeRestriction` admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
+prevents the kubelet from setting or modifying labels with a
+`node-restriction.kubernetes.io/` prefix.

-Take whatever pod config file you want to run, and add a nodeSelector section to it, like this. For example, if this is my pod config:
+To make use of that label prefix for node isolation:

-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: nginx
-  labels:
-    env: test
-spec:
-  containers:
-  - name: nginx
-    image: nginx
-```
+1. Ensure you are using the [Node authorizer](/docs/reference/access-authn-authz/node/) and have _enabled_ the `NodeRestriction` admission plugin.
+2. Add labels with the `node-restriction.kubernetes.io/` prefix to your nodes, and use those labels in your [node selectors](#nodeselector).
   For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`.

-Then add a nodeSelector like so:
+## nodeSelector

-{{< codenew file="pods/pod-nginx.yaml" >}}
+`nodeSelector` is the simplest recommended form of node selection constraint.
+You can add the `nodeSelector` field to your Pod specification and specify the
+[node labels](#built-in-node-labels) you want the target node to have.
+Kubernetes only schedules the Pod onto nodes that have each of the labels you
+specify.

-When you then run `kubectl apply -f https://k8s.io/examples/pods/pod-nginx.yaml`,
-the Pod will get scheduled on the node that you attached the label to. You can
-verify that it worked by running `kubectl get pods -o wide` and looking at the
-"NODE" that the Pod was assigned to.
+See [Assign Pods to Nodes](/docs/tasks/configure-pod-container/assign-pods-nodes) for more
+information.

-## Interlude: built-in node labels {#built-in-node-labels}
+## Affinity and anti-affinity

-In addition to labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated
-with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) for a list of these.
+`nodeSelector` is the simplest way to constrain Pods to nodes with specific
+labels. Affinity and anti-affinity expand the types of constraints you can
+define. Some of the benefits of affinity and anti-affinity include:

-{{< note >}}
-The value of these labels is cloud provider specific and is not guaranteed to be reliable.
-For example, the value of `kubernetes.io/hostname` may be the same as the Node name in some environments
-and a different value in other environments.
-{{< /note >}}
+* The affinity/anti-affinity language is more expressive. `nodeSelector` only
+  selects nodes with all the specified labels. Affinity/anti-affinity gives you
+  more control over the selection logic.
+* You can indicate that a rule is *soft* or *preferred*, so that the scheduler
+  still schedules the Pod even if it can't find a matching node.
+* You can constrain a Pod using labels on other Pods running on the node (or other topological domain),
+  instead of just node labels, which allows you to define rules for which Pods
+  can be co-located on a node.

-## Node isolation/restriction
+The affinity feature consists of two types of affinity:

-Adding labels to Node objects allows targeting pods to specific nodes or groups of nodes.
-This can be used to ensure specific pods only run on nodes with certain isolation, security, or regulatory properties.
-When using labels for this purpose, choosing label keys that cannot be modified by the kubelet process on the node is strongly recommended.
-This prevents a compromised node from using its kubelet credential to set those labels on its own Node object,
-and influencing the scheduler to schedule workloads to the compromised node.
+* *Node affinity* functions like the `nodeSelector` field but is more expressive and
+  allows you to specify soft rules.
+* *Inter-pod affinity/anti-affinity* allows you to constrain Pods against labels
+  on other Pods.
-The `NodeRestriction` admission plugin prevents kubelets from setting or modifying labels with a `node-restriction.kubernetes.io/` prefix.
-To make use of that label prefix for node isolation:
+### Node affinity

-1. Ensure you are using the [Node authorizer](/docs/reference/access-authn-authz/node/) and have _enabled_ the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction).
-2. Add labels under the `node-restriction.kubernetes.io/` prefix to your Node objects, and use those labels in your node selectors.
-For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`.
+Node affinity is conceptually similar to `nodeSelector`, allowing you to constrain which nodes your
+Pod can be scheduled on based on node labels. There are two types of node
+affinity:

-## Affinity and anti-affinity
+ * `requiredDuringSchedulingIgnoredDuringExecution`: The scheduler can't
+   schedule the Pod unless the rule is met. This functions like `nodeSelector`,
+   but with a more expressive syntax.
+ * `preferredDuringSchedulingIgnoredDuringExecution`: The scheduler tries to
+   find a node that meets the rule. If a matching node is not available, the
+   scheduler still schedules the Pod.

-`nodeSelector` provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity
-feature, greatly expands the types of constraints you can express. The key enhancements are
+{{< note >}}
+In the preceding types, `IgnoredDuringExecution` means that if the node labels
+change after Kubernetes schedules the Pod, the Pod continues to run.
+{{< /note >}}

-1. The affinity/anti-affinity language is more expressive. The language offers more matching rules
-   besides exact matches created with a logical AND operation;
-2. you can indicate that the rule is "soft"/"preference" rather than a hard requirement, so if the scheduler
-   can't satisfy it, the pod will still be scheduled;
-3. you can constrain against labels on other pods running on the node (or other topological domain),
-   rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located
+You can specify node affinities using the `.spec.affinity.nodeAffinity` field in
+your Pod spec.

-The affinity feature consists of two types of affinity, "node affinity" and "inter-pod affinity/anti-affinity".
-Node affinity is like the existing `nodeSelector` (but with the first two benefits listed above),
-while inter-pod affinity/anti-affinity constrains against pod labels rather than node labels, as
-described in the third item listed above, in addition to having the first and second properties listed above.
+For example, consider the following Pod spec:

-### Node affinity
+{{< codenew file="pods/pod-with-node-affinity.yaml" >}}

-Node affinity is conceptually similar to `nodeSelector` -- it allows you to constrain which nodes your
-pod is eligible to be scheduled on, based on labels on the node.
+In this example, the following rules apply:

-There are currently two types of node affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
-`preferredDuringSchedulingIgnoredDuringExecution`. You can think of them as "hard" and "soft" respectively,
-in the sense that the former specifies rules that *must* be met for a pod to be scheduled onto a node (similar to
-`nodeSelector` but using a more expressive syntax), while the latter specifies *preferences* that the scheduler
-will try to enforce but will not guarantee. The "IgnoredDuringExecution" part of the names means that, similar
-to how `nodeSelector` works, if labels on a node change at runtime such that the affinity rules on a pod are no longer
-met, the pod continues to run on the node. In the future we plan to offer
-`requiredDuringSchedulingRequiredDuringExecution` which will be identical to `requiredDuringSchedulingIgnoredDuringExecution`
-except that it will evict pods from nodes that cease to satisfy the pods' node affinity requirements.
+ * The node *must* have a label with the key `kubernetes.io/os` and the value
+   `linux`.
+ * The node *preferably* has a label with the key `another-node-label-key` and
+   the value `another-node-label-value`.

-Thus an example of `requiredDuringSchedulingIgnoredDuringExecution` would be "only run the pod on nodes with Intel CPUs"
-and an example `preferredDuringSchedulingIgnoredDuringExecution` would be "try to run this set of pods in failure
-zone XYZ, but if it's not possible, then allow some to run elsewhere".
+You can use the `operator` field to specify a logical operator for Kubernetes to use when
+interpreting the rules. You can use `In`, `NotIn`, `Exists`, `DoesNotExist`,
+`Gt` and `Lt`.

-Node affinity is specified as field `nodeAffinity` of field `affinity` in the PodSpec.
+`NotIn` and `DoesNotExist` allow you to define node anti-affinity behavior.
+Alternatively, you can use [node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/)
+to repel Pods from specific nodes.

-Here's an example of a pod that uses node affinity:
+{{< note >}}
+If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied
+for the Pod to be scheduled onto a node.

-{{< codenew file="pods/pod-with-node-affinity.yaml" >}}
+If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity`
+types, then the Pod can be scheduled onto a node if one of the specified `nodeSelectorTerms` can be
+satisfied.

-This node affinity rule says the pod can only be placed on a node with a label whose key is
-`kubernetes.io/e2e-az-name` and whose value is either `e2e-az1` or `e2e-az2`. In addition,
-among nodes that meet that criteria, nodes with a label whose key is `another-node-label-key` and whose
-value is `another-node-label-value` should be preferred.
+If you specify multiple `matchExpressions` associated with a single `nodeSelectorTerms`,
+then the Pod can be scheduled onto a node only if all the `matchExpressions` are
+satisfied.
+{{< /note >}}

-You can see the operator `In` being used in the example. The new node affinity syntax supports the following operators: `In`, `NotIn`, `Exists`, `DoesNotExist`, `Gt`, `Lt`.
-You can use `NotIn` and `DoesNotExist` to achieve node anti-affinity behavior, or use
-[node taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) to repel pods from specific nodes.
+See [Assign Pods to Nodes using Node Affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)
+for more information.

-If you specify both `nodeSelector` and `nodeAffinity`, *both* must be satisfied for the pod
-to be scheduled onto a candidate node.
+#### Node affinity weight

-If you specify multiple `nodeSelectorTerms` associated with `nodeAffinity` types, then the pod can be scheduled onto a node **if one of the** `nodeSelectorTerms` can be satisfied.
+You can specify a `weight` between 1 and 100 for each instance of the
+`preferredDuringSchedulingIgnoredDuringExecution` affinity type. When the
+scheduler finds nodes that meet all the other scheduling requirements of the Pod, the
+scheduler iterates through every preferred rule that the node satisfies and adds the
+value of the `weight` for that expression to a sum.

-If you specify multiple `matchExpressions` associated with `nodeSelectorTerms`, then the pod can be scheduled onto a node **only if all** `matchExpressions` is satisfied.
+The final sum is added to the score of other priority functions for the node.
+Nodes with the highest total score are prioritized when the scheduler makes a
+scheduling decision for the Pod.

-If you remove or change the label of the node where the pod is scheduled, the pod won't be removed. In other words, the affinity selection works only at the time of scheduling the pod.
+For example, consider the following Pod spec:

-The `weight` field in `preferredDuringSchedulingIgnoredDuringExecution` is in the range 1-100. For each node that meets all of the scheduling requirements (resource request, RequiredDuringScheduling affinity expressions, etc.), the scheduler will compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding MatchExpressions. This score is then combined with the scores of other priority functions for the node. The node(s) with the highest total score are the most preferred.
+{{< codenew file="pods/pod-with-affinity-anti-affinity.yaml" >}}
+
+If there are two possible nodes that match the
+`requiredDuringSchedulingIgnoredDuringExecution` rule, one with the
+`label-1:key-1` label and another with the `label-2:key-2` label, the scheduler
+considers the `weight` of each node and adds the weight to the other scores for
+that node, and schedules the Pod onto the node with the highest final score.
+
+{{< note >}}
+If you want Kubernetes to successfully schedule the Pods in this example, you
+must have existing nodes with the `kubernetes.io/os=linux` label.
+{{< /note >}}

#### Node affinity per scheduling profile

{{< feature-state for_k8s_version="v1.20" state="beta" >}}

When configuring multiple [scheduling profiles](/docs/reference/scheduling/config/#multiple-profiles), you can associate
-a profile with a Node affinity, which is useful if a profile only applies to a specific set of Nodes.
-To do so, add an `addedAffinity` to the args of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
+a profile with a node affinity, which is useful if a profile only applies to a specific set of nodes.
+To do so, add an `addedAffinity` to the `args` field of the [`NodeAffinity` plugin](/docs/reference/scheduling/config/#scheduling-plugins)
in the [scheduler configuration](/docs/reference/scheduling/config/). For example:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration

profiles:
  - schedulerName: default-scheduler
  - schedulerName: foo-scheduler
    pluginConfig:
      - name: NodeAffinity
        args:
          addedAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: scheduler-profile
                  operator: In
                  values:
                  - foo
```

@@ -188,29 +211,41 @@ profiles:
The `addedAffinity` is applied to all Pods that set `.spec.schedulerName` to `foo-scheduler`,
in addition to the NodeAffinity specified in the PodSpec.
-That is, in order to match the Pod, Nodes need to satisfy `addedAffinity` and the Pod's `.spec.NodeAffinity`.
+That is, in order to match the Pod, nodes need to satisfy `addedAffinity` and
+the Pod's `.spec.NodeAffinity`.

-Since the `addedAffinity` is not visible to end users, its behavior might be unexpected to them. We
-recommend to use node labels that have clear correlation with the profile's scheduler name.
+Since the `addedAffinity` is not visible to end users, its behavior might be
+unexpected to them. Use node labels that have a clear correlation to the
+scheduler profile name.
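+
+For example, a Pod that opts in to the `foo-scheduler` profile above might set
+`.spec.schedulerName` like this (a minimal sketch; the Pod and container names
+are illustrative, and this assumes the profile shown above is configured):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: scheduled-by-foo
+spec:
+  schedulerName: foo-scheduler
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:2.0
+```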
{{< note >}} -The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler) -is not aware of scheduling profiles. For this reason, it is recommended that you keep a scheduler profile, such as the -`default-scheduler`, without any `addedAffinity`. Then, the Daemonset's Pod template should use this scheduler name. -Otherwise, some Pods created by the Daemonset controller might remain unschedulable. +The DaemonSet controller, which [creates Pods for DaemonSets](/docs/concepts/workloads/controllers/daemonset/#scheduled-by-default-scheduler), +does not support scheduling profiles. When the DaemonSet controller creates +Pods, the default Kubernetes scheduler places those Pods and honors any +`nodeAffinity` rules in the DaemonSet controller. {{< /note >}} ### Inter-pod affinity and anti-affinity -Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled *based on -labels on pods that are already running on the node* rather than based on labels on nodes. The rules are of the form -"this pod should (or, in the case of anti-affinity, should not) run in an X if that X is already running one or more pods that meet rule Y". -Y is expressed as a LabelSelector with an optional associated list of namespaces; unlike nodes, because pods are namespaced -(and therefore the labels on pods are implicitly namespaced), -a label selector over pod labels must specify which namespaces the selector should apply to. Conceptually X is a topology domain -like node, rack, cloud provider zone, cloud provider region, etc. You express it using a `topologyKey` which is the -key for the node label that the system uses to denote such a topology domain; for example, see the label keys listed above -in the section [Interlude: built-in node labels](#built-in-node-labels). +Inter-pod affinity and anti-affinity allow you to constrain which nodes your +Pods can be scheduled on based on the labels of **Pods** already running on that +node, instead of the node labels. + +Inter-pod affinity and anti-affinity rules take the form "this +Pod should (or, in the case of anti-affinity, should not) run in an X if that X +is already running one or more Pods that meet rule Y", where X is a topology +domain like node, rack, cloud provider zone or region, or similar and Y is the +rule Kubernetes tries to satisfy. + +You express these rules (Y) as [label selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) +with an optional associated list of namespaces. Pods are namespaced objects in +Kubernetes, so Pod labels also implicitly have namespaces. Any label selectors +for Pod labels should specify the namespaces in which Kubernetes should look for those +labels. + +You express the topology domain (X) using a `topologyKey`, which is the key for +the node label that the system uses to denote the domain. For examples, see +[Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/). {{< note >}} Inter-pod affinity and anti-affinity require substantial amount of @@ -219,80 +254,106 @@ not recommend using them in clusters larger than several hundred nodes. {{< /note >}} {{< note >}} -Pod anti-affinity requires nodes to be consistently labelled, in other words every node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes are missing the specified `topologyKey` label, it can lead to unintended behavior. 
+Pod anti-affinity requires nodes to be consistently labelled, in other words,
+every node in the cluster must have an appropriate label matching `topologyKey`.
+If some or all nodes are missing the specified `topologyKey` label, it can lead
+to unintended behavior.
{{< /note >}}

-As with node affinity, there are currently two types of pod affinity and anti-affinity, called `requiredDuringSchedulingIgnoredDuringExecution` and
-`preferredDuringSchedulingIgnoredDuringExecution` which denote "hard" vs. "soft" requirements.
-See the description in the node affinity section earlier.
-An example of `requiredDuringSchedulingIgnoredDuringExecution` affinity would be "co-locate the pods of service A and service B
-in the same zone, since they communicate a lot with each other"
-and an example `preferredDuringSchedulingIgnoredDuringExecution` anti-affinity would be "spread the pods from this service across zones"
-(a hard requirement wouldn't make sense, since you probably have more pods than zones).
+#### Types of inter-pod affinity and anti-affinity
+
+Similar to [node affinity](#node-affinity), there are two types of Pod affinity
+and anti-affinity:
+
+ * `requiredDuringSchedulingIgnoredDuringExecution`
+ * `preferredDuringSchedulingIgnoredDuringExecution`

-Inter-pod affinity is specified as field `podAffinity` of field `affinity` in the PodSpec.
-And inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity` in the PodSpec.
+For example, you could use
+`requiredDuringSchedulingIgnoredDuringExecution` affinity to tell the scheduler to
+co-locate Pods of two services in the same cloud provider zone because they
+communicate with each other a lot. Similarly, you could use
+`preferredDuringSchedulingIgnoredDuringExecution` anti-affinity to spread Pods
+from a service across multiple cloud provider zones.
+
+To use inter-pod affinity, use the `affinity.podAffinity` field in the Pod spec.
+For inter-pod anti-affinity, use the `affinity.podAntiAffinity` field in the Pod
+spec.

-#### An example of a pod that uses pod affinity:
+#### Pod affinity example {#an-example-of-a-pod-that-uses-pod-affinity}
+
+Consider the following Pod spec:

{{< codenew file="pods/pod-with-pod-affinity.yaml" >}}

-The affinity on this pod defines one pod affinity rule and one pod anti-affinity rule. In this example, the
-`podAffinity` is `requiredDuringSchedulingIgnoredDuringExecution`
-while the `podAntiAffinity` is `preferredDuringSchedulingIgnoredDuringExecution`. The
-pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone
-as at least one already-running pod that has a label with key "security" and value "S1". (More precisely, the pod is eligible to run
-on node N if node N has a label with key `topology.kubernetes.io/zone` and some value V
-such that there is at least one node in the cluster with key `topology.kubernetes.io/zone` and
-value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
-rule says that the pod should not be scheduled onto a node if that node is in the same zone as a pod with
-label having key "security" and value "S2". See the
-[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
-for many more examples of pod affinity and anti-affinity, both the `requiredDuringSchedulingIgnoredDuringExecution`
-flavor and the `preferredDuringSchedulingIgnoredDuringExecution` flavor.

+This example defines one Pod affinity rule and one Pod anti-affinity rule. The
+Pod affinity rule uses the "hard"
+`requiredDuringSchedulingIgnoredDuringExecution`, while the anti-affinity rule
+uses the "soft" `preferredDuringSchedulingIgnoredDuringExecution`.

-The legal operators for pod affinity and anti-affinity are `In`, `NotIn`, `Exists`, `DoesNotExist`.
+The affinity rule says that the scheduler can only schedule a Pod onto a node if
+the node is in the same zone as one or more existing Pods with the label
+`security=S1`. More precisely, the scheduler must place the Pod on a node that has the
+`topology.kubernetes.io/zone=V` label, as long as there is at least one node in
+that zone that currently has one or more Pods with the Pod label `security=S1`.

-In principle, the `topologyKey` can be any legal label-key. However,
-for performance and security reasons, there are some constraints on topologyKey:
+The anti-affinity rule says that the scheduler should try to avoid scheduling
+the Pod onto a node that is in the same zone as one or more Pods with the label
+`security=S2`. More precisely, the scheduler should try to avoid placing the Pod on a node that has the
+`topology.kubernetes.io/zone=R` label if there are other nodes in the
+same zone currently running Pods with the `security=S2` Pod label.
+
+See the
+[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
+for many more examples of Pod affinity and anti-affinity.

-1. For pod affinity, empty `topologyKey` is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
-and `preferredDuringSchedulingIgnoredDuringExecution`.
-2. For pod anti-affinity, empty `topologyKey` is also not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
-and `preferredDuringSchedulingIgnoredDuringExecution`.
-3. For `requiredDuringSchedulingIgnoredDuringExecution` pod anti-affinity, the admission controller `LimitPodHardAntiAffinityTopology` was introduced to limit `topologyKey` to `kubernetes.io/hostname`. If you want to make it available for custom topologies, you may modify the admission controller, or disable it.
-4. Except for the above cases, the `topologyKey` can be any legal label-key.
+You can use the `In`, `NotIn`, `Exists` and `DoesNotExist` values in the
+`operator` field for Pod affinity and anti-affinity.
+
+In principle, the `topologyKey` can be any allowed label key with the following
+exceptions for performance and security reasons:
+
+* For Pod affinity and anti-affinity, an empty `topologyKey` field is not allowed in both `requiredDuringSchedulingIgnoredDuringExecution`
+  and `preferredDuringSchedulingIgnoredDuringExecution`.
+* For `requiredDuringSchedulingIgnoredDuringExecution` Pod anti-affinity rules,
+  the admission controller `LimitPodHardAntiAffinityTopology` limits
+  `topologyKey` to `kubernetes.io/hostname`. You can modify or disable the
+  admission controller if you want to allow custom topologies.

-In addition to `labelSelector` and `topologyKey`, you can optionally specify a list `namespaces`
-of namespaces which the `labelSelector` should match against (this goes at the same level of the definition as `labelSelector` and `topologyKey`).
-If omitted or empty, it defaults to the namespace of the pod where the affinity/anti-affinity definition appears.
+In addition to `labelSelector` and `topologyKey`, you can optionally specify a list
+of namespaces which the `labelSelector` should match against using the
+`namespaces` field at the same level as `labelSelector` and `topologyKey`.
+If omitted or empty, `namespaces` defaults to the namespace of the Pod where the
+affinity/anti-affinity definition appears.

-All `matchExpressions` associated with `requiredDuringSchedulingIgnoredDuringExecution` affinity and anti-affinity
-must be satisfied for the pod to be scheduled onto a node.
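+
+For example (a minimal sketch, assuming Pods labeled `security=S1` exist in a
+namespace named `team-a`; the namespace name is illustrative), an affinity term
+that matches Pods in one specific namespace might look like this fragment of a
+Pod spec:
+
+```yaml
+affinity:
+  podAffinity:
+    requiredDuringSchedulingIgnoredDuringExecution:
+    - labelSelector:
+        matchExpressions:
+        - key: security
+          operator: In
+          values:
+          - S1
+      namespaces:
+      - team-a
+      topologyKey: topology.kubernetes.io/zone
+```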
+In addition to `labelSelector` and `topologyKey`, you can optionally specify a list +of namespaces which the `labelSelector` should match against using the +`namespaces` field at the same level as `labelSelector` and `topologyKey`. +If omitted or empty, `namespaces` defaults to the namespace of the Pod where the +affinity/anti-affinity definition appears. #### Namespace selector {{< feature-state for_k8s_version="v1.22" state="beta" >}} -Users can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces. -The affinity term is applied to the union of the namespaces selected by `namespaceSelector` and the ones listed in the `namespaces` field. +You can also select matching namespaces using `namespaceSelector`, which is a label query over the set of namespaces. +The affinity term is applied to namespaces selected by both `namespaceSelector` and the `namespaces` field. Note that an empty `namespaceSelector` ({}) matches all namespaces, while a null or empty `namespaces` list and -null `namespaceSelector` means "this pod's namespace". +null `namespaceSelector` matches the namespace of the Pod where the rule is defined. +{{}} This feature is beta and enabled by default. You can disable it via the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) `PodAffinityNamespaceSelector` in both kube-apiserver and kube-scheduler. +{{}} -#### More Practical Use-cases +#### More practical use-cases -Interpod Affinity and AntiAffinity can be even more useful when they are used with higher -level collections such as ReplicaSets, StatefulSets, Deployments, etc. One can easily configure that a set of workloads should +Inter-pod affinity and anti-affinity can be even more useful when they are used with higher +level collections such as ReplicaSets, StatefulSets, Deployments, etc. These +rules allow you to configure that a set of workloads should be co-located in the same defined topology, eg., the same node. -##### Always co-located in the same node +Take, for example, a three-node cluster running a web application with an +in-memory cache like redis. You could use inter-pod affinity and anti-affinity +to co-locate the web servers with the cache as much as possible. -In a three node cluster, a web application has in-memory cache such as redis. We want the web-servers to be co-located with the cache as much as possible. - -Here is the yaml snippet of a simple redis deployment with three replicas and selector label `app=store`. The deployment has `PodAntiAffinity` configured to ensure the scheduler does not co-locate replicas on a single node. +In the following example Deployment for the redis cache, the replicas get the label `app=store`. The +`podAntiAffinity` rule tells the scheduler to avoid placing multiple replicas +with the `app=store` label on a single node. This creates each cache in a +separate node. ```yaml apiVersion: apps/v1 @@ -324,7 +385,10 @@ spec: image: redis:3.2-alpine ``` -The below yaml snippet of the webserver deployment has `podAntiAffinity` and `podAffinity` configured. This informs the scheduler that all its replicas are to be co-located with pods that have selector label `app=store`. This will also ensure that each web-server replica does not co-locate on a single node. +The following Deployment for the web servers creates replicas with the label `app=web-store`. The +Pod affinity rule tells the scheduler to place each replica on a node that has a +Pod with the label `app=store`. 
The Pod anti-affinity rule tells the scheduler +to avoid placing multiple `app=web-store` servers on a single node. ```yaml apiVersion: apps/v1 @@ -365,56 +429,37 @@ spec: image: nginx:1.16-alpine ``` -If we create the above two deployments, our three node cluster should look like below. +Creating the two preceding Deployments results in the following cluster layout, +where each web server is co-located with a cache, on three separate nodes. | node-1 | node-2 | node-3 | |:--------------------:|:-------------------:|:------------------:| | *webserver-1* | *webserver-2* | *webserver-3* | | *cache-1* | *cache-2* | *cache-3* | -As you can see, all the 3 replicas of the `web-server` are automatically co-located with the cache as expected. - -``` -kubectl get pods -o wide -``` -The output is similar to this: -``` -NAME READY STATUS RESTARTS AGE IP NODE -redis-cache-1450370735-6dzlj 1/1 Running 0 8m 10.192.4.2 kube-node-3 -redis-cache-1450370735-j2j96 1/1 Running 0 8m 10.192.2.2 kube-node-1 -redis-cache-1450370735-z73mh 1/1 Running 0 8m 10.192.3.1 kube-node-2 -web-server-1287567482-5d4dz 1/1 Running 0 7m 10.192.2.3 kube-node-1 -web-server-1287567482-6f7v5 1/1 Running 0 7m 10.192.4.3 kube-node-3 -web-server-1287567482-s330j 1/1 Running 0 7m 10.192.3.2 kube-node-2 -``` - -##### Never co-located in the same node - -The above example uses `PodAntiAffinity` rule with `topologyKey: "kubernetes.io/hostname"` to deploy the redis cluster so that -no two instances are located on the same host. -See [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) -for an example of a StatefulSet configured with anti-affinity for high availability, using the same technique. +See the [ZooKeeper tutorial](/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure) +for an example of a StatefulSet configured with anti-affinity for high +availability, using the same technique as this example. ## nodeName -`nodeName` is the simplest form of node selection constraint, but due -to its limitations it is typically not used. `nodeName` is a field of -PodSpec. If it is non-empty, the scheduler ignores the pod and the -kubelet running on the named node tries to run the pod. Thus, if -`nodeName` is provided in the PodSpec, it takes precedence over the -above methods for node selection. +`nodeName` is a more direct form of node selection than affinity or +`nodeSelector`. `nodeName` is a field in the Pod spec. If the `nodeName` field +is not empty, the scheduler ignores the Pod and the kubelet on the named node +tries to place the Pod on that node. Using `nodeName` overrules using +`nodeSelector` or affinity and anti-affinity rules. Some of the limitations of using `nodeName` to select nodes are: -- If the named node does not exist, the pod will not be run, and in +- If the named node does not exist, the Pod will not run, and in some cases may be automatically deleted. - If the named node does not have the resources to accommodate the - pod, the pod will fail and its reason will indicate why, + Pod, the Pod will fail and its reason will indicate why, for example OutOfmemory or OutOfcpu. - Node names in cloud environments are not always predictable or stable. -Here is an example of a pod config file using the `nodeName` field: +Here is an example of a Pod spec using the `nodeName` field: ```yaml apiVersion: v1 @@ -428,21 +473,16 @@ spec: nodeName: kube-01 ``` -The above pod will run on the node kube-01. - - +The above Pod will only run on the node `kube-01`. 
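+
+As a quick check (assuming a node named `kube-01` exists and the Pod above has
+been created), you can run `kubectl get pod nginx -o wide` and confirm from the
+`NODE` column that the Pod was placed on `kube-01`.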
## {{% heading "whatsnext" %}} - -[Taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) allow a Node to *repel* a set of Pods. - -The design documents for -[node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md) -and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md) contain extra background information about these features. - -Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-local resources. -The [topology manager](/docs/tasks/administer-cluster/topology-manager/) can take part in node-level -resource allocation decisions. +* Read more about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/) . +* Read the design docs for [node affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/nodeaffinity.md) + and for [inter-pod affinity/anti-affinity](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md). +* Learn about how the [topology manager](/docs/tasks/administer-cluster/topology-manager/) takes part in node-level + resource allocation decisions. +* Learn how to use [nodeSelector](/docs/tasks/configure-pod-container/assign-pods-nodes/). +* Learn how to use [affinity and anti-affinity](/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/). diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go index ac50cdfe4c641..eaf6e5808e2c3 100644 --- a/content/en/examples/examples_test.go +++ b/content/en/examples/examples_test.go @@ -556,6 +556,7 @@ func TestExampleObjectSchemas(t *testing.T) { "pod-projected-svc-token": {&api.Pod{}}, "pod-rs": {&api.Pod{}, &api.Pod{}}, "pod-single-configmap-env-variable": {&api.Pod{}}, + "pod-with-affinity-anti-affinity": {&api.Pod{}}, "pod-with-node-affinity": {&api.Pod{}}, "pod-with-pod-affinity": {&api.Pod{}}, "pod-with-toleration": {&api.Pod{}}, diff --git a/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml new file mode 100644 index 0000000000000..a7d14b2d6f755 --- /dev/null +++ b/content/en/examples/pods/pod-with-affinity-anti-affinity.yaml @@ -0,0 +1,32 @@ +apiVersion: v1 +kind: Pod +metadata: + name: with-affinity-anti-affinity +spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + preference: + matchExpressions: + - key: label-1 + operator: In + values: + - key-1 + - weight: 50 + preference: + matchExpressions: + - key: label-2 + operator: In + values: + - key-2 + containers: + - name: with-node-affinity + image: k8s.gcr.io/pause:2.0 \ No newline at end of file diff --git a/content/en/examples/pods/pod-with-node-affinity.yaml b/content/en/examples/pods/pod-with-node-affinity.yaml index 253d2b21ea917..e077f79883eff 100644 --- a/content/en/examples/pods/pod-with-node-affinity.yaml +++ b/content/en/examples/pods/pod-with-node-affinity.yaml @@ -8,11 +8,10 @@ spec: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - - key: kubernetes.io/e2e-az-name + - key: kubernetes.io/os operator: In values: - - e2e-az1 - - e2e-az2 + - linux preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: From d97e53f2a174c922c7bbe9b4be28b2bd77832de6 Mon Sep 17 
00:00:00 2001 From: Qiming Teng Date: Fri, 21 Jan 2022 16:28:21 +0800 Subject: [PATCH 005/138] [zh] Resync the configmap page --- .../docs/concepts/configuration/configmap.md | 77 ++++++++++--------- 1 file changed, 41 insertions(+), 36 deletions(-) diff --git a/content/zh/docs/concepts/configuration/configmap.md b/content/zh/docs/concepts/configuration/configmap.md index 47ceccda3f26d..2bb64570e916e 100644 --- a/content/zh/docs/concepts/configuration/configmap.md +++ b/content/zh/docs/concepts/configuration/configmap.md @@ -77,8 +77,8 @@ ConfigMap 是一个 API [对象](/zh/docs/concepts/overview/working-with-objects 让你可以存储其他对象所需要使用的配置。 和其他 Kubernetes 对象都有一个 `spec` 不同的是,ConfigMap 使用 `data` 和 `binaryData` 字段。这些字段能够接收键-值对作为其取值。`data` 和 `binaryData` -字段都是可选的。`data` 字段设计用来保存 UTF-8 字节序列,而 `binaryData` 则 -被设计用来保存二进制数据作为 base64 编码的字串。 +字段都是可选的。`data` 字段设计用来保存 UTF-8 字节序列,而 `binaryData` +则被设计用来保存二进制数据作为 base64 编码的字串。 ConfigMap 的名字必须是一个合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)。 @@ -92,11 +92,11 @@ Starting from v1.19, you can add an `immutable` field to a ConfigMap definition to create an [immutable ConfigMap](#configmap-immutable). --> `data` 或 `binaryData` 字段下面的每个键的名称都必须由字母数字字符或者 -`-`、`_` 或 `.` 组成。在 `data` 下保存的键名不可以与在 `binaryData` 下 -出现的键名有重叠。 +`-`、`_` 或 `.` 组成。在 `data` 下保存的键名不可以与在 `binaryData` +下出现的键名有重叠。 -从 v1.19 开始,你可以添加一个 `immutable` 字段到 ConfigMap 定义中,创建 -[不可变更的 ConfigMap](#configmap-immutable)。 +从 v1.19 开始,你可以添加一个 `immutable` 字段到 ConfigMap 定义中, +创建[不可变更的 ConfigMap](#configmap-immutable)。 ## ConfigMaps 和 Pods -你可以写一个引用 ConfigMap 的 Pod 的 `spec`,并根据 ConfigMap 中的数据 -在该 Pod 中配置容器。这个 Pod 和 ConfigMap 必须要在同一个 +你可以写一个引用 ConfigMap 的 Pod 的 `spec`,并根据 ConfigMap 中的数据在该 +Pod 中配置容器。这个 Pod 和 ConfigMap 必须要在同一个 {{< glossary_tooltip text="名字空间" term_id="namespace" >}} 中。 第四种方法意味着你必须编写代码才能读取 ConfigMap 和它的数据。然而, -由于你是直接使用 Kubernetes API,因此只要 ConfigMap 发生更改,你的 -应用就能够通过订阅来获取更新,并且在这样的情况发生的时候做出反应。 -通过直接进入 Kubernetes API,这个技术也可以让你能够获取到不同的名字空间 -里的 ConfigMap。 +由于你是直接使用 Kubernetes API,因此只要 ConfigMap 发生更改, +你的应用就能够通过订阅来获取更新,并且在这样的情况发生的时候做出反应。 +通过直接进入 Kubernetes API,这个技术也可以让你能够获取到不同的名字空间里的 ConfigMap。 下面是一个 Pod 的示例,它通过使用 `game-demo` 中的值来配置一个 Pod: @@ -243,16 +242,16 @@ definition specifies an `items` array in the `volumes` section. If you omit the `items` array entirely, every key in the ConfigMap becomes a file with the same name as the key, and you get 4 files. 
--> -ConfigMap 不会区分单行属性值和多行类似文件的值,重要的是 Pods 和其他对象 -如何使用这些值。 +ConfigMap 不会区分单行属性值和多行类似文件的值,重要的是 Pods +和其他对象如何使用这些值。 上面的例子定义了一个卷并将它作为 `/config` 文件夹挂载到 `demo` 容器内, 创建两个文件,`/config/game.properties` 和 `/config/user-interface.properties`, 尽管 ConfigMap 中包含了四个键。 这是因为 Pod 定义中在 `volumes` 节指定了一个 `items` 数组。 -如果你完全忽略 `items` 数组,则 ConfigMap 中的每个键都会变成一个与 -该键同名的文件,因此你会得到四个文件。 +如果你完全忽略 `items` 数组,则 ConfigMap 中的每个键都会变成一个与该键同名的文件, +因此你会得到四个文件。 ## 使用 ConfigMap {#using-configmaps} -ConfigMap 可以作为数据卷挂载。ConfigMap 也可被系统的其他组件使用,而 -不一定直接暴露给 Pod。例如,ConfigMap 可以保存系统中其他组件要使用 -的配置数据。 +ConfigMap 可以作为数据卷挂载。ConfigMap 也可被系统的其他组件使用, +而不一定直接暴露给 Pod。例如,ConfigMap 可以保存系统中其他组件要使用的配置数据。 你希望使用的每个 ConfigMap 都需要在 `spec.volumes` 中被引用到。 -如果 Pod 中有多个容器,则每个容器都需要自己的 `volumeMounts` 块,但针对 -每个 ConfigMap,你只需要设置一个 `spec.volumes` 块。 +如果 Pod 中有多个容器,则每个容器都需要自己的 `volumeMounts` 块,但针对每个 +ConfigMap,你只需要设置一个 `spec.volumes` 块。 ConfigMap 既可以通过 watch 操作实现内容传播(默认形式),也可实现基于 TTL 的缓存,还可以直接经过所有请求重定向到 API 服务器。 -因此,从 ConfigMap 被更新的那一刻算起,到新的主键被投射到 Pod 中去,这一 -时间跨度可能与 kubelet 的同步周期加上高速缓存的传播延迟相等。 +因此,从 ConfigMap 被更新的那一刻算起,到新的主键被投射到 Pod 中去, +这一时间跨度可能与 kubelet 的同步周期加上高速缓存的传播延迟相等。 这里的传播延迟取决于所选的高速缓存类型 (分别对应 watch 操作的传播延迟、高速缓存的 TTL 时长或者 0)。 @@ -391,6 +389,14 @@ ConfigMaps consumed as environment variables are not updated automatically and r 以环境变量方式使用的 ConfigMap 数据不会被自动更新。 更新这些数据需要重新启动 Pod。 +{{< note >}} + +将 ConfigMap 作为 [subPath](/docs/concepts/storage/volumes#using-subpath) +卷挂载的容器无法收到 ConfigMap 更新。 +{{< /note >}} + @@ -412,8 +418,8 @@ individual Secrets and ConfigMaps as immutable. For clusters that extensively us data has the following advantages: --> Kubernetes 特性 _不可变更的 Secret 和 ConfigMap_ 提供了一种将各个 -Secret 和 ConfigMap 设置为不可变更的选项。对于大量使用 ConfigMap 的 -集群(至少有数万个各不相同的 ConfigMap 给 Pod 挂载)而言,禁止更改 +Secret 和 ConfigMap 设置为不可变更的选项。对于大量使用 ConfigMap 的集群 +(至少有数万个各不相同的 ConfigMap 给 Pod 挂载)而言,禁止更改 ConfigMap 的数据有以下好处: - 保护应用,使之免受意外(不想要的)更新所带来的负面影响。 -- 通过大幅降低对 kube-apiserver 的压力提升集群性能,这是因为系统会关闭 - 对已标记为不可变更的 ConfigMap 的监视操作。 +- 通过大幅降低对 kube-apiserver 的压力提升集群性能, + 这是因为系统会关闭对已标记为不可变更的 ConfigMap 的监视操作。 此功能特性由 `ImmutableEphemeralVolumes` -[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -来控制。你可以通过将 `immutable` 字段设置为 `true` 创建不可变更的 ConfigMap。 +[特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/)来控制。 +你可以通过将 `immutable` 字段设置为 `true` 创建不可变更的 ConfigMap。 例如: ```yaml @@ -454,8 +460,7 @@ to the deleted ConfigMap, it is recommended to recreate these pods. --> 一旦某 ConfigMap 被标记为不可变更,则 _无法_ 逆转这一变化,,也无法更改 `data` 或 `binaryData` 字段的内容。你只能删除并重建 ConfigMap。 -因为现有的 Pod 会维护一个对已删除的 ConfigMap 的挂载点,建议重新创建 -这些 Pods。 +因为现有的 Pod 会维护一个对已删除的 ConfigMap 的挂载点,建议重新创建这些 Pods。 ## {{% heading "whatsnext" %}} @@ -466,6 +471,6 @@ to the deleted ConfigMap, it is recommended to recreate these pods. separating code from configuration. 
-->
* 阅读 [Secret](/zh/docs/concepts/configuration/secret/)。
-* 阅读 [配置 Pod 来使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
-* 阅读 [Twelve-Factor 应用](https://12factor.net/) 来了解将代码和配置分开的动机。
+* 阅读[配置 Pod 使用 ConfigMap](/zh/docs/tasks/configure-pod-container/configure-pod-configmap/)。
+* 阅读 [Twelve-Factor 应用](https://12factor.net/zh_cn/)来了解将代码和配置分开的动机。

From d78f8d2736b729b96b5fb95cf726b42e37971573 Mon Sep 17 00:00:00 2001
From: Mikhail Polivakha <68962645+Mikhail2048@users.noreply.github.com>
Date: Fri, 18 Feb 2022 11:10:47 +0300
Subject: [PATCH 006/138] Add example of Service targetPort binding by name

Add example of Service targetPort binding by name
---
 .../concepts/services-networking/service.md   | 45 ++++++++++++++++---
 1 file changed, 39 insertions(+), 6 deletions(-)

diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 0298854137f46..cc23725a3cc10 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -109,12 +109,45 @@ field.
{{< /note >}}

Port definitions in Pods have names, and you can reference these names in the
-`targetPort` attribute of a Service. This works even if there is a mixture
-of Pods in the Service using a single configured name, with the same network
-protocol available via different port numbers.
-This offers a lot of flexibility for deploying and evolving your Services.
-For example, you can change the port numbers that Pods expose in the next
-version of your backend software, without breaking clients.
+`targetPort` attribute of a Service. For example, you can bind the `targetPort`
+of the Service to the Pod port in the following way:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx
+  labels:
+    app.kubernetes.io/name: proxy
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.14.2
+    ports:
+      - containerPort: 80
+        name: http-web-service
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  selector:
+    app.kubernetes.io/name: proxy
+  ports:
+  - name: name-of-service-port
+    protocol: TCP
+    port: 80
+    targetPort: http-web-service
+```
+
+
+This works even if there is a mixture of Pods in the Service using a single
+configured name, with the same network protocol available via different
+port numbers. This offers a lot of flexibility for deploying and evolving
+your Services. For example, you can change the port numbers that Pods expose
+in the next version of your backend software, without breaking clients.

The default protocol for Services is TCP; you can also use any other
[supported protocol](#protocol-support).

From e993de5dc6aab717cbf81c4ba6376adefff6f840 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 8 Feb 2022 11:09:06 +0000
Subject: [PATCH 007/138] Tidy styles for community section

This change enables an opt-in switch to new styles for /community

It is opt-in to enable localizations to migrate to the new page on
their own timeline.
--- assets/scss/_custom.scss | 3 +- layouts/community/list.html | 3 +- layouts/partials/head.html | 8 + static/css/community.css | 411 +++++++++++++++++- ...{newcommunity.css => legacy_community.css} | 19 + 5 files changed, 428 insertions(+), 16 deletions(-) rename static/css/{newcommunity.css => legacy_community.css} (97%) diff --git a/assets/scss/_custom.scss b/assets/scss/_custom.scss index 9cf126a9e0588..ed4e16192a8f2 100644 --- a/assets/scss/_custom.scss +++ b/assets/scss/_custom.scss @@ -558,7 +558,8 @@ main.content { } } -/* COMMUNITY */ +/* COMMUNITY legacy styles */ +/* Leave these in place until localizations are caught up */ .newcommunitywrapper { .news { diff --git a/layouts/community/list.html b/layouts/community/list.html index a9eb9f73ce50e..9ce1091f30e5a 100644 --- a/layouts/community/list.html +++ b/layouts/community/list.html @@ -1,4 +1,3 @@ {{ define "main" }} - - {{ .Content }} + {{ .Content }} {{ end }} \ No newline at end of file diff --git a/layouts/partials/head.html b/layouts/partials/head.html index 9f25090d63cda..7f42139a68a17 100644 --- a/layouts/partials/head.html +++ b/layouts/partials/head.html @@ -92,6 +92,14 @@ {{ end }} +{{- if eq (lower .Params.cid) "community" -}} +{{- if eq .Params.community_styles_migrated true -}} + +{{- else -}} + +{{- end -}} + +{{- end -}} {{ with .Params.js }}{{ range (split . ",") }} {{ end }}{{ else }}{{ end }} diff --git a/static/css/community.css b/static/css/community.css index 9bd94d3dab95d..cfb6e3c86599a 100644 --- a/static/css/community.css +++ b/static/css/community.css @@ -1,19 +1,404 @@ -div.community_main h1, h2, h3 { - border-bottom: 1px solid #cccccc; - margin-bottom: 30px; - padding-bottom: 10px; - padding-top: 10px; +body.cid-community #banner { + aspect-ratio: 1500 / 293; /* match source image */ + display: block; + width: 100%; + margin: 0 0 2.5em 0; + max-height: min(calc(2.5vw + min(24em, calc(2 * 293px))), 50vh); + object-fit: cover; + overflow: clip; } -div.community_main { - padding: 50px 100px; +body.cid-community .community-section #h2 { + font-weight: 200; + margin-top: 1em; + margin-bottom: 0.5em; + text-align: center; + letter-spacing: 0.15em; + text-transform: uppercase; } -div.community_main ul, -div.community_main li { - list-style: disc; - list-style-position: inside; - padding: 10px 0; - font-size: 16px; +body.cid-community .community-section h2:before, +body.cid-community .community-section h2:after { + background-color: #aaaaaa; + content: ""; + display: inline-block; + height: 1px; + position: relative; + vertical-align: middle; + width: 35%; +} + +body.cid-community .community-section h2:before { + right: 0.5em; + margin-left: -50%; +} + +body.cid-community .community-section h2:after { + left: 0.5em; + margin-right: -50%; +} + +body.cid-community .community-section, body.cid-community #navigation-items { + max-width: min(85vw,100em); + margin-left: auto; + margin-right: auto; +} + +body.cid-community .community-section { + margin-top: 1em; + margin-bottom: 1em; + padding: 0.5em 0; + justify-content: space-evenly; + align-items: baseline; + align-content: space-between; + min-height: 10em; + text-align: center; /* overridden for paragraphs */ +} + +body.cid-community #navigation-items { + padding: 0.25em; + + width: 100vw; + max-width: initial; + + margin-top: 2.5em; + margin-bottom: 2.5em; + + gap: 1.25em; + + border-bottom: 1px solid #aaaaaa; + border-top: 1px solid #aaaaaa; + display: flex; + flex-direction: row; + flex-wrap: wrap; +} + +/* Allow fallback if calc() fails */ 
+body.cid-community #navigation-items { + padding-left: calc((100vw - min(85vw,120em))/2); + padding-right: calc((100vw - min(85vw,120em))/2); +} + +body.cid-community #navigation-items .community-nav-item { + flex-grow: 1; + text-align: center; + letter-spacing: 0.08em; + padding-top: 0.2em; + padding-bottom: 0.2em; + word-spacing: initial; + text-decoration: none; + text-transform: uppercase; font-weight: 400; + color: #303030; + background: #ffffff; + font-size: 1.1em; + padding: 0.2em; + margin: 0; + max-width: 75vw; + min-width: 10%; + min-height: 2em; +} + +body.cid-community .community-section > p:not(.community-simple) { + line-height: 1.5em; + text-align: initial; +} + +body.cid-community .community-section#introduction, +body.cid-community .community-section#introduction > p { + line-height: 1.75em; + font-weight: 300; + letter-spacing: 0.04em; +} + +body.cid-community #gallery { + display: flex; + max-width: 100vw; + gap: 0.75rem; + justify-content: center; + margin-left: auto; + margin-right: auto; +} + +body.cid-community #gallery img { + display: block; + flex-basis: 0; + flex-grow: 0; + height: min(20em, 90vh); +} + +/* see media queries later in file */ +body.cid-community #gallery img.community-gallery-mobile { + display: none; } + + + + +body.cid-community .community-section#events { + width: 100vw; + max-width: initial; + margin-bottom: 0; + + /* no events + background-image: url('/images/community/event-bg.jpg'); + background-size: 100% auto; + background-position: center; + color: #fff; + */ + display: none; +} + +body.cid-community .community-section#values { + width: 100vw; + max-width: initial; + background-image: url('/images/community/event-bg.jpg'); + color: #fff; + padding: 2em; + margin-top: 3em; +} +body.cid-community .community-section#values { + padding-left: calc((100vw - min(75vw,120em))/2); + padding-right: calc((100vw - min(75vw,120em))/2); +} + +body.cid-community .community-section#meetups { + width: 100vw; + max-width: initial; + margin-top: 0; + + background: url('/images/community/kubernetes-community-final.jpg'), url('/images/community/kubernetes-community-column.png'); + background-position: 80% center, left center; + background-repeat: no-repeat, repeat; + background-size: auto 100%, cover; + color: #fff; + + width: 100vw; + /* fallback in case calc() fails */ + padding: 5vw; + padding-bottom: 1em; + min-height: min(24em,50vh); +} + +body.cid-community .community-section#meetups { + padding-left: calc((100vw - min(75vw,100em))/2); + padding-right: calc((100vw - min(75vw,100em))/2); +} + +body.cid-community a.community-cta-button { + appearance: button; + display: inline-block; + margin: 0.75em auto 0 auto; /* gap before button */ + + background-color: #0662EE; + color: white; + + border-radius: 6px; + padding: 0.75em; + min-height: 3em; + min-width: max(5vw, 9em); + + text-align: center; +} + +body.cid-community a.community-cta-button > span.community-cta { + color: inherit; + background: transparent; + + letter-spacing: 0.02em; + font-weight: bold; + text-transform: uppercase; +} + +body.cid-community .fullbutton { + appearance: button; + display: inline-block; + margin: auto; + margin-top: 2rem; + background-color: #0662EE; + color: white; + font-size: 1.5em; + border-radius: 0.3333em; + padding: 0.5em; + letter-spacing: 0.07em; + font-weight: bold; +} + +body.cid-community #videos { + width: 100vw; + max-width: initial; + padding: 0.5em 5vw 5% 5vw; /* fallback in case calc() fails */ + background-color: #eeeeee; + margin-top: 4em; +} + 
+body.cid-community #videos { + padding-left: calc((100vw - min(95vw,160em))/2); + padding-right: calc((100vw - min(95vw,160em))/2); +} + +body.cid-community #videos .container { + display: flex; + flex-wrap: wrap; + gap: max(12px,2em); + max-width: 95vw; + justify-content: center; + margin-left: auto; + margin-right: auto; +} + + +body.cid-community .video { + width: min(80vw,max(31%, 24em)); + flex-basis: 31%; + flex-shrink: 1; +} + +body.cid-community .video .videocta { + display: block; + margin: 0.25em 0 0em 0; + text-align: center; + padding: 0.25em; + padding-bottom: 2em; + text-align: center; + color: #0662EE; + text-transform: uppercase; + font-weight: bold; + letter-spacing: 0.06em; + line-height: 1.25em; + clear: both; +} + +body.cid-community .video iframe { + min-width: 95%; + height: auto; + aspect-ratio: 16 / 9; +} + +body.cid-community #resources { + margin-top: 5%; + margin-bottom: 3%; +} + +body.cid-community #resources .container { + width: 100%; + display: flex; + flex-wrap: none; + gap: 2em; + justify-content: center; + margin-left: auto; + margin-right: auto; +} + + +body.cid-community #resources .container > .community-resource { + flex-basis: auto; + width: 100%; + flex-shrink: 1; +} + +body.cid-community #resources .container > .community-resource img { + max-height: min(6em, 50vh); + width: auto; + display: block; + margin: 1em auto 0.75em auto; +} + +body.cid-community #resources .container > .community-resource a { + text-transform: uppercase; +} + +body.cid-community .resourcebox { + height: 100%; + min-height: 370px; +} + + + + +body.cid-community .community-section.community-frame { + width: 100vw; +} + +body.cid-community .community-section.community-frame .twittercol1 { + width: 100%; +} + +body.cid-community details > summary { + color: #303030; +} + +body.cid-community .cncf_coc_container { + padding-left: calc((100vw - min(75vw,100em))/2); + padding-right: calc((100vw - min(75vw,100em))/2); + padding-bottom: 8em; +} + +body.cid-community .cncf_coc_container h1, +body.cid-community .cncf_coc_container h2 { + margin-top: 1.5em; + color: #0662EE; +} + +/* no need to repeat the heading */ +body.cid-community .cncf_coc_container h1:first-child { + visibility: hidden; + margin: 0; +} + +@media only screen and (max-width: 640px) { + body.cid-community #navigation-items { + justify-content: flex-start; + text-align: left; + gap: min(2px,0.125em); + } + body.cid-community #navigation-items div.community-nav-item { + width: 100%; + text-align: left; + min-height: initial; + flex-shrink: 0; + } + body.cid-community .video { + max-width: 80vw; + flex-basis: auto; + } + body.cid-community #resources .container { + flex-wrap: wrap; + } + body.cid-community #resources .container .community-resource { + max-width: min(80vw, 24rem); + } + body.cid-community a.community-cta-button { + font-size: 1.5rem; + } +} + +@media only screen and (max-width: 1024px) { + body.cid-community #gallery img.community-gallery-desktop { + display: none; + } + body.cid-community #gallery img.community-gallery-mobile { + display: initial; + max-width: 95vw; + height: auto; + } + body.cid-community .video { + flex-basis: max(30em,80vw); + max-width: max(32em, 75vw); + } + body.cid-community .video .videocta { + padding-bottom: 0.5em; + } +} + +@media only screen and (min-width: 1024px) { + body.cid-community br.optional { + display: none; + } + body.cid-community .community-section:not(:first-of-type) { + min-height: max(20em,18vh); + } + body.cid-community .community-section#meetups 
p:last-of-type { + margin-bottom: 6em; /* extra space for background */ + } +} \ No newline at end of file diff --git a/static/css/newcommunity.css b/static/css/legacy_community.css similarity index 97% rename from static/css/newcommunity.css rename to static/css/legacy_community.css index e62e4ab8c5973..80b0404344bc9 100644 --- a/static/css/newcommunity.css +++ b/static/css/legacy_community.css @@ -1,3 +1,22 @@ +div.community_main h1, h2, h3 { + border-bottom: 1px solid #cccccc; + margin-bottom: 30px; + padding-bottom: 10px; + padding-top: 10px; +} + +div.community_main { + padding: 50px 100px; +} + +div.community_main ul, +div.community_main li { + list-style: disc; + list-style-position: inside; + padding: 10px 0; + font-size: 16px; + font-weight: 400; +} .SandboxRoot.env-bp-430 .timeline-Tweet-text { font-size: 13pt !important; From afc338c0bf350a27104389438d4634baeb5603a9 Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 8 Feb 2022 11:26:12 +0000 Subject: [PATCH 008/138] Update and restyle community page - Use new styles (and remove inline CSS) - Fix some hyperlinks - Wording tweaks - Link to Server Fault, not Stack Overflow --- content/en/community/_index.html | 440 ++++++++---------- content/en/community/static/README.md | 5 +- static/css/community.css | 41 +- static/images/community/discuss.png | Bin 4944 -> 3984 bytes .../community/kubernetes-community-column.png | Bin 0 -> 4307 bytes static/images/community/serverfault.png | Bin 0 -> 1722 bytes static/images/community/slack.png | Bin 18595 -> 32845 bytes static/images/community/stack.png | Bin 17618 -> 16299 bytes static/images/community/twitter.png | Bin 9134 -> 7235 bytes 9 files changed, 218 insertions(+), 268 deletions(-) create mode 100644 static/images/community/kubernetes-community-column.png create mode 100644 static/images/community/serverfault.png diff --git a/content/en/community/_index.html b/content/en/community/_index.html index c08aa25ae0149..de7fcee9a45ee 100644 --- a/content/en/community/_index.html +++ b/content/en/community/_index.html @@ -1,257 +1,183 @@ ---- -title: Community -layout: basic -cid: community ---- - -
-
- Kubernetes Conference Gallery - Kubernetes Conference Gallery -
- -
-
-

The Kubernetes community -- users, contributors, and the culture we've built together -- is one of the biggest reasons for the meteoric rise of this open source project. Our culture and values continue to grow and change as the project itself grows and changes. We all work together toward constant improvement of the project and the ways we work on it. -

We are the people who file issues and pull requests, attend SIG meetings, Kubernetes meetups, and KubeCon, advocate for its adoption and innovation, run kubectl get pods, and contribute in a thousand other vital ways. Read on to learn how you can get involved and become part of this amazing community.

-
-
- -
- -Contributor Community      -Community Values      -Code of conduct       -Videos      -Discussions      -Events and meetups      -News      -Releases - -
-

-
-
-
- Kubernetes Conference Gallery -
- -
- Kubernetes Conference Gallery -
- -
- Kubernetes Conference Gallery -
- Kubernetes Conference Gallery - -
- -
-
-
-

-

-

Community Values

-The Kubernetes Community values are the keystone to the ongoing success of the project.
-These principles guide every aspect of the Kubernetes project. -
- -

- - READ MORE - -
-
-
-
- - - -
-
-

-

-

Code of Conduct

-The Kubernetes community values respect and inclusiveness, and enforces a Code of Conduct in all interactions. If you notice a violation of the Code of Conduct at an event or meeting, in Slack, or in another communication mechanism, reach out to the Kubernetes Code of Conduct Committee at conduct@kubernetes.io. All reports are kept confidential. You can read about the committee here. -
- -

- - -READ MORE - -
-
-
-
- - - -
-

-

-

Videos

- -
We're on YouTube, a lot. Subscribe for a wide range of topics.
- - -
- - -
-

-

-

Discussions

- -
We talk a lot. Find us and join the conversation on any of these platforms.
- -
- -
-Forum" - -forum ▶ - -
-Topic-based technical discussions that bridge docs, StackOverflow, and so much more -
-
- -
-Twitter - -twitter ▶ - -
Real-time announcements of blog posts, events, news, ideas -
-
- -
-GitHub - -github ▶ - -
-All the project and issue tracking, plus of course code -
-
- -
-Stack Overflow - -stack overflow ▶ - -
- Technical troubleshooting for any use case - -
-
- - - -
-
-
-

-

-
-

Upcoming Events

- {{< upcoming-events >}} -
-
- -
-
-
-

Global Community

-With over 150 meetups in the world and growing, go find your local kube people. If one isn't near, take charge and create your own. -
- -
-FIND A MEETUP -
-
- -
-
- - - - -
-

-

-

Recent News

- -
- - -
-



-
- -
+--- +title: Community +layout: basic +cid: community +community_styles_migrated: true +--- + + +
+

The Kubernetes community — users, contributors, and the culture we've + built together — is one of the biggest reasons for the meteoric rise of + this open source project. Our culture and values continue to grow and change + as the project itself grows and changes. We all work together toward constant + improvement of the project and the ways we work on it.

+

We are the people who file issues and pull requests, attend SIG meetings, + Kubernetes meetups, and KubeCon, advocate for its adoption and innovation, + run kubectl get pods, and contribute in a thousand other vital + ways. Read on to learn how you can get involved and become part of this amazing + community.

+
+ + + + + +
+

Community Values

+

The Kubernetes Community values are the keystone to the ongoing success of the project.
+ These principles guide every aspect of the Kubernetes project.

+ + Read more + +
+ +
+

Code of Conduct

+

The Kubernetes community values respect and inclusiveness, and enforces a Code of Conduct in all interactions.

+

If you notice a violation of the Code of Conduct at an event or meeting, in Slack, or in another communication mechanism, reach out to the Kubernetes Code of Conduct Committee at conduct@kubernetes.io. All reports are kept confidential. You can read about the committee in the Kubernetes community repository on GitHub.

+ + Read more + +
+ +
+

Videos

+ +

Kubernetes is on YouTube, a lot. Subscribe for a wide range of topics.

+ + +
+ +
+

Discussions

+ +

We talk a lot. Find us and join the conversation on any of these platforms.

+ +
+
+ + Forum + + Community forums ▶ +

Topic-based technical discussions that bridge docs, + troubleshooting, and so much more.

+
+ +
+ + Twitter + + Twitter ▶ +

#kubernetesio

+

Real-time announcements of blog posts, events, news, ideas.

+
+ +
+ + GitHub + + GitHub ▶ +

All the project and issue tracking, plus of course code.

+
+ +
+ + Server Fault + + Server Fault ▶ +

Kubernetes-related discussion on Server Fault. Ask a question, or answer one.

+
+ +
+ + Slack + + Slack ▶ +

With 170+ channels, you'll find one that fits your needs.

+
Need an invitation? + Visit https://slack.k8s.io/ + for an invitation.
+
+
+
+ +
+
+

Upcoming Events

+ {{< upcoming-events >}} +
+
+ +
+

Global community

+

+ With over 150 meetups in the world and growing, go find your local kube people. If one isn't near, take charge and create your own. +

+ + Find a meetup + +
+ +
+

Recent News

+ +
diff --git a/content/en/community/static/README.md b/content/en/community/static/README.md index ef8e8d5a3e6bc..bc44990c077f8 100644 --- a/content/en/community/static/README.md +++ b/content/en/community/static/README.md @@ -1,2 +1,5 @@ The files in this directory have been imported from other sources. Do not -edit them directly, except by replacing them with new versions. \ No newline at end of file +edit them directly, except by replacing them with new versions. + +Localization note: you do not need to create localized versions of any of + the files in this directory. \ No newline at end of file diff --git a/static/css/community.css b/static/css/community.css index cfb6e3c86599a..99ad2948fae3f 100644 --- a/static/css/community.css +++ b/static/css/community.css @@ -55,6 +55,10 @@ body.cid-community .community-section { text-align: center; /* overridden for paragraphs */ } +body.cid-community .community-section:first-child { + padding-top: max(3vh,1.5em); +} + body.cid-community #navigation-items { padding: 0.25em; @@ -328,24 +332,41 @@ body.cid-community details > summary { color: #303030; } -body.cid-community .cncf_coc_container { - padding-left: calc((100vw - min(75vw,100em))/2); - padding-right: calc((100vw - min(75vw,100em))/2); +body.cid-community #cncf-code-of-conduct-intro, +body.cid-community #cncf-code-of-conduct { + max-width: min(90vw, 100em); + padding-left: 0.5em; + padding-right: 0.5em; + margin-left: auto; + margin-right: auto; +} + +body.cid-community #cncf-code-of-conduct { padding-bottom: 8em; + padding-top: 0.25em; + margin-top: 0; +} + +/* duplication not needed */ +body.cid-community #values-legacy h1 { + display: none; } -body.cid-community .cncf_coc_container h1, -body.cid-community .cncf_coc_container h2 { - margin-top: 1.5em; +body.cid-community #values-legacy h2, +body.cid-community #cncf-code-of-conduct h2 { + margin-top: 0.25em; + margin-bottom: 1em; color: #0662EE; } -/* no need to repeat the heading */ -body.cid-community .cncf_coc_container h1:first-child { - visibility: hidden; - margin: 0; +body.cid-community #values-legacy h2:before, +body.cid-community #values-legacy h2:after, +body.cid-community #cncf-code-of-conduct h2:before, +body.cid-community #cncf-code-of-conduct h2:after { + display: none; /* skip decoration */ } + @media only screen and (max-width: 640px) { body.cid-community #navigation-items { justify-content: flex-start; diff --git a/static/images/community/discuss.png b/static/images/community/discuss.png index 9e83b8ee53be5286f4b26a3225ed0fc3c50db20f..41cb139b57e2bc6f29c1d6bdd8bf24255177425e 100644 GIT binary patch literal 3984 zcmeHKhf`DO77s<~5Wy9Y;w2(QA%&g*LP#h9f)Ju)!4{GL0a8dp6Qo9@iCMrN7*qrc zMnMpxfU=@gMHFyVM8yaq5@{}0e8E-U&YS1^1K#YNd%tq#{C?+r=lji^xp{#B-WsYa zRY4$-2F?ddkj)*7pR$5%Je;$&MmFj5JW0G@Rs=7R%%OqMR8}|*z%j^?Gy;uGO^)lN zxqv_l7IY$sN5cD~C@cn)yyyc>Vz6as5Xi+XiA|=&(0D*NEt1Z3g}k|P6#~$yu8sRu z?SaUmai|d}0@mwWEZNQ#62;@OQ7~9yVj?ur7Rus8!VpL#5(c+{+1OaiJgm9NOddJO zn#o=LodHYZQaE%rkIrHOi;U!ORy@xYBFprzBQV%MXqnt^Ws(&PmPBU55K#Eyk-j_P z@&DVE!T90LHA&iB6(bBW1p8jL{Wvf?=u*~LYyUJS)XVK_80kHsOfSh3%$7#PLk zvA9tzHh{q_R?P{pz>_I-=AzN^D+-TC;h0<=nMtAHu&xjp50p-)qU=4;wivjB7ZPdX zgg|&Yp&e|{p7!>3Xa_r6GzMw^9gAgA;u$n1?>m8~^}G+G6iKvtWsF*CQYT)WQ78VZ9WY;5i99UPsIC}$Vf^=|HH z4-D4J+Xv_C=Z_EA5Ew)VCT`pmLi#l{Z1a|_+qRR#DO6fSWE4FnmceAPIb2?RVp8&s zl+>LWnOWKVT{*jR^9l<06zvt1lb2J2ueaT}+1}CF)!lRJcJH0OyZ8DB2Jipz;Nhd8$HPyaJ{x&1 zc`-Wna{SfA>q+UGsp*;7xwn7LzkC1Ty4vQuxJ)C^}K8(0gPie|l z4Gt7S)`B5x30SCwS#eZ;SaiXmJiiF0xXQxpY4Kq{@rj_!k)>YH6Q?doiVdm+`esbM 
[GIT binary patch data omitted: image updates listed in the diffstat (discuss.png, kubernetes-community-column.png, serverfault.png, slack.png, stack.png, twitter.png).]
z0xqsyfn3>=XVdvN9X0`b-j%)_<(6V{k*usi*iQ@o)bu zXb)GK%geUByZt(wzBiCayE^*`l{87RRFFseGb$|Yh6BDL$S}GHl!RXCVehpkDe@Yg zqM}nnjpGo5$NcY(E!Bj2ffmQGqR>@^h^$fa`mIF1$pm>sB*K#B#UpmdqCKIds`tNj zjNO^y-eg>RXD1R&A__^Nw=$IBW8)8l$*eH(dU2o774r5fOzwSbsUE5}6LCq0(lS}$ zcX-Y_i!is{`G^qVYv0Qc!NkU9wc%BRczlJbcu=sfu*LpqK|p0{$bFw!Z!#%naGh(S zp{oj74JdJ1BGVq4T8qe8KH_y3PC&pFSrrfwZ)_Bou^`Zh? z;!+AbiIECJs-pd*Ie0xwkb&O62(EgpkOpWA;PJboxn6MJ$>366`sg$soziJN0}Q=S zQ@i$=Y=|o?!pDvqQc-AC5{Ha|ka*j``D<*Ik7MJzgl%iL;3e1+#Z;IsPSD9c>jJZq z02(Oo0@2R{mHxtX(ZRC3eE-gs0Q;V7cM$Pf_*wtZYRd0ar+m1hnSaO9%tF)VxCKYd zJ=xF~qmXS^oFhAU@1s${IJgNL8@dsn-yGY%>*28a-D!h@Ir}5-29Fm_Q(X=kr0bK^PgakE@m7i846S7cXe$Hbo|te68fAh zi``{Wcju`=l*9j&;q!NO07^6{uP^fdqAdx|I2w8t8a#v_@4geCwP2vW*QR5A>+;d% z%I~G?-Yl$W+uNN&jBCd88?VO&Ro`R=bgsUl9e^#&9NS*lH`dJIQ7GSMq#5s#R=Xdk z%nY^MBRqgu^zy|7m7lvlrlG$5KhUpdl)9L55DEciBVm{nK^=CuAlM}vk&%O3|5k~x z_x5HmX~%wJoznw4d4!F&R)`RAe`xE@4Iuv$*+tqhz}lr@uyA%4oaUC)!!~O>$b7+x z7deEcNrtBK^zt{oemIoMizNcf#34w5DHNNVe%nzknBb-O3Oy-As!~9rk&Mt#LPdJU z4_Me<%>44(p_+Uy-`v9jzO=|4|BNJr^b&!PW~n(n=1h+A&?F&}tmD${zoUE=9LJvr zd^BEViuQ>5I@L?1ciwIgg7T^n9lhD(_-_5Tz232MS;yqx73;g*!2TFTn-c7D42Dz1 z&%ZX$omMRFB}8y(KK5$B9x%~2+lT;uf$5LcD|3FmP$Vkq&fb#HZLRgD&;6Dw+M?1} zNBdvXZ{|dV`3__+wVCc9in!Rlm7>#k{oEE>?2lT0n=|6{TMF3szICvG2m{Lzbrit^ zMR*lFIOkmWbchn$dB+bY6(}cNAALUeH@O$P*Mk^Y^Rh*?O&Pd9?+6NqW?bTUuU)sr zG=%9wS2h`)o#1(n=rSf@?bDy9N20h(oD{?CoWu;ha|PI(`uP*co4nxuD5_koduc%_wgJ~y_(*;F-CohMx1C1Vg!M+O6l3#6jy%lvxwDm z3SFpfnRXuy|Bl~8!_^5ai%P7{CUOT%vkKr*QblhCLdjg@eqrFt=r`L%!;!t@t#68lLK9 zn#-UB;)~)6BK1? zwS8}7RCMvGzL_q3&Y9mgz4LB`s$V|pCv9JCDX_91ps}dWsD5okizW}*Yv4e9p>hx4 zhFNRL;1w`|@&aBmk41?o?M@_z3VtxlYG$*2ZnVFnl=InMx_(AGn=n)8_L<}K=%$KS zqK15OpiC%?e|N$_kyr4v%rxdBI{SSbogF8iKu;P#JfFBn69L9{?u=N4RuP|Vzv$%M zbUglj?Cg^Kf-6IO5%l%jzePt)d}w!ypXChx9l$Tn)*yRh9D;OBu=R8ZSmePbqCVai zv{lP3RH(vgT}o+faD!&k?9#>QeaJc0e5GiY2JC9Vh@Gt?D%)~!iO!_$a|P;;WUVV* zzOs^tU=2w3YxPU{@w4go*xm4eUVRnUtWuggcOkl8Os9`$Zs z*)*%I=$TPjs#|7J6Hc3H@RCmyaR8nkNHM)_XGD%#m1hWAkN-iZb}LS0{|S8(W)Vw% z0?vi;iilM`)^L(Bd8CVY#vO|ekd1G!|3waN=&b?-hGu;JV;B{T{7gqzaA0KxlzDs8 zzExdGDma~*5&2B-jz`7YWGMzNP_=mDx^u8tz0yPS9sETgudv=DR2qcS)sin4vR97E z1KcDZ{YA%KC&3E6rghIoHN`(}{{$xH8n@vp!@fmR(-XEN+*w|`2RO7dM$GXb7qSVX z8-z(Apw5~LZKxD*+*TI$dQcrJZ<*e?5__W4+1j1}Lemt(G5aemu-oANV6v5dCG0Y5W!X$%mAip7=toNK#J+w?-y zc_#2buP9rIaiEQ37{K$ig5r1E19gL_AZ0X2V;t2=Z#)ERAl&dYfe%wN!bQVHt$TLP zb9B0e96EtT-|-^@h9onBfsRf7>YC}A;A=h4?K<8cp#ART-tC=`8@cY5R|Gb8(*-G8 z)z)IvUuof-Z>r3UY*PQurA3+0<8+1SgCj|e(g)g0HDJeR$YSlLhjU+Q_pA0}hJ;EG zXzhuofh=1yok!FjRA=vnONUyE%%sVLDl2a!51^w=fKIOXEBY0`;C8v;h3mlb_Xy6C z+FPh0_PO`^Aa}boJ`u5@6-y)T;2%jY^QAFyLN<*@bFLffIphz~rc`(8p|~8*ge?C+ zfau*iM5+8I5GgaJF4AS*5by`BUVj{&qGfhL`HKo{Pvz?lBXQW@pt>*d0Unfxrj@UP zh|mF9Q}QLu@!?B)b!u-03Zd(w6z6E=Y1@H~6AYKD){Wn5FfxqRI??t@PRlE%i0LY$ zifv^bgORv=7yB&zztW=gOumR3<`&FZiHY7X??M*SCY8rX5r6-XVCD7Tta~s#LuINb zqmyPs%7aX630uFPuE|})DR3J@ks0}b69la~Y*bjh+nWUptPsHdTur(Lr*r@6XJKxY z{ofNeXs&CCb3%8E!AD&5`BL;SYYd=|}h_lZ0Joq2G@J{D`Gxyx~ zZ>~Ljf;^w!&Kq3R&RnUbk$~c?z%o;YrtyRvbSQC~Z%H3C1R&ryqqQ>$ShDgRGEmr_ z!AuI_e6$AGLi`RN6-tT7KK~Q2Nbo0qfn-nS=cafF$1a*z!WFhfu4Q3}yeK-CJh-8A zzW4;qAc7QoOANthAYy1;z)EYw?M<96Q_3JBt}Xj=iM27h%8k zd-w0$m$suBFZH*nLBhnF%Xh9vPdOdRYKbekR0*XbRAmlyiF2OA3Ov?| zzDgmWv|*s2PU|{us9CRT_3!`dxQ;z%3epb;QQ^oENfhN zm#qmLd!ll$4g?1-OQU5?K!Jq)bzJp_&2LcO>zzdCUlstJnJWue+V~+)c*$DEgBZod z%uxJ-HrEB#w(SHLCUDbycHihV4Cy5?Y(u0FOVA0I#aGBN3(gYZTDls1e*4pVfQ|Ll zvs@+KjXluNgAp(=QGh7ptWGPK5!XllWc0_-qTg~MapK!B#ssimmeX#KJw2H9ujk5ilS%d=&8056PD?&c`=Ljf;h{6%f7oEtc`O?W@l! 
z3RZp17$mzh*G<6qRlIa0mgY{){Y&i{Q=hXO#Q{i750}0}Gi2JnF0c1_rgj!dpj}re z+e#HC^+~GgJX+rsD&+lMkc2ZlCB`U)P#|*<2QOucqr3_C{V6nO9<&tbpfpH)Iit0( zQSukn?@|!P^{~S=AYC9QFY5JibitjQb)SN)6h!NS9OnbAKBboVCQjRF)4EfDjSh?{ z`@rsUCB#ziE%WX63y%%9%g#p-Jmgmx4>Vu{?Q>sjuRN=d5Z zw8s@Kn_*Hg(utIjqscFMyPoU;@PjjlQ$3vG?&+y_AeU5(^GTG?EOtJ zcg}Phb3%$%C=TK3n*p+)AG}s1wc)*}N^-xwhx$l42Cv+7rpR7r#HbrnO~$e=v#C5F zBj1QL!rM*FZ&D0`gl!=Nyy|eybmBWw=F6W7uiRtNs)p9bU=9Pr!s+^#h7Zd*q?*G4 zNnuz9Zb{FNj|j9>r3WH+hTYg~WkW!Lz@1$Uqq%ls=zVh*X|=K#LO9nrN)LZ4BCa_J z-oFdi@pnUqzcc1VZe_^;FLpHS`6eDWfvFr(C5b9Z$6>&{$w5)N4zP4AIaOoGaD?`lT+SX2gSL-Xyc1w?5I^KcF%5XjCu9xY_-SDl zr(BH|qN0!T>+kG7!inEN9iph6$eR`I`Evz*{-UUT&EWbt*h%>X5UYr(aZh^VKNz12 zZ-fj?n3{jh>Aijhm*|>b?Qk1ZZ;epApn zv>{=zg-4~j`wQjJu1Cgs7Y;fNGhz>ab<`mC*t%>isR$*zYzo>25im!~&z@=91l}F! z2maBABl2_*(aDM$)7izZiu{WE!PKc8azskeO9!YU#ol!9po z8%D|GhiLEnB6bJkix_z>>iS^u?9hs?Hg$a&3SV)W%%u5eE_LmTQH|wZ8-B#L_ETl1 zcpRK~FN)2as5=?vS7Avsl4mz7@@t?c>mI<5?tTG=e9l1j#J#0Y3g8ST)JKL?bIV{( zv5d9I-%_=PIO3Y+YL?4wcg7FMZgH~IhLH+D*nap8dOPlaFdbi&P)qtr10^}-PTxdA zM|27gm)Lv*DH1|J59=wOzYRLKLKIoulG{%f`8^)R*$88aQtkLP5x6ul2X+_}z1)U% z%FmRSHXEV(>rOB%yHP{ZP_iKucuC&ivsv(wI+`z~*D3FSg#!5&B9s}g7a;~N*GT~x<$3x)O zWpmc^Ok>I#%&k!_@I3r;+Zdvh_KlsqDUapvI&p)Tjdq6f)J2VasMF++Qo1#fHb84P zt+9%lOb`P3Rf`R8*p<$-IV$=0LV-fZ1TqbPeBr@Ej2lp&A7ZAZtPc_V^6mJTnRD6f z+`Y<)bMgo=cCsgBhd<{RLaaP8tugJhsJaxN{N;sh4nfcB%C;h+4-@XeV|#|QK2AK^ zcmli@4GprtG7C0@w&;ci+utsXIRO{R1_M%{g|P(7v%^;e7%e}$4>Zv2A5+<4@A{Ya zefwtrp|_TB90VGD+Q@vz_KkZNIU|?=1=~T4jR<1(GTei_GU)#c7u4P~lo+UPqHQJi zmR?iMs#=@S&Tm>{rQ%F!KL;u5oKWZkF}~+4AA{ssZkLP!^q-pRV2fGF@w{D#g{VW? zrsy@fSkD91lmq9eV!*soSX7ahi#u7Htxuuom^xHh`#DQYZng9#eu4k?7s^@x5)pz* zL};M^t>*?&62l{oP5WAQ9jERcER%rZ82o7LAtXWskI79Hm!nV<}3hVJ=-zarW17qLJGlg zv9xF!{bPov2K1xcJ)TU#)-5WEx>I`R-rw5yM+Mzk&P$9Sg^eg*HUuCMnAnxJ3c9ak zj6uqV=Awgny;)qm4kAB(V9$_Z|J<3@@N0NPiCeYWiGbG5x<1_XO}RD~+oJyZ1tNY< z4l;rk^&$&4rQ!7BrCq(W~olLWa zkDywJy4UnV6x>QgGvwIW0g+zRVvjUNjV%J6{ zyc)KGoG@dgU`N{=okwpn+2~z!!)t;T83Nz^Hn78sG^447xK%3E+3`c6ZC5J5&ybWK ziX<&LA=HBamT5!e+M{v?iPiAq|CpI~$OL^EPb4pxG}i2M7=+zp`F-b4Ep`u!ho-p` z(01{O5nr^|R++`p4b0X1ij!=_vyo1&;`h1M%3g`e`Q7D^t-jNO1#r29^C`CX#-J8S zMHRaLT}35Fa!zV({hG&d5ook~QTC2Ww7eu#07?~=s^XA^I28@3R_kcqUWvn>A60wU z{vII9t|{HyNmDF$8wb##h{9eT4J0 zDA>ecZmH$wJFx9~B-+ZL#{5&XSW%#xxZFLzRi>vnp~lwgn=>P_V5VHG2v5+wziH0%!D};Z%0P7i&|$E589~CqH2Dd%$GFUjt#nzwWUz zX%yUV8+~SIO1=!w0K_^~ps0A6g2e?a6pT+;?RsDm9~+M+mQW2ENb0_+xP*V|f7^=y zTj)6_^UU4qG3M$TWQJ9;d8!h(%o6FoH;@9e~V&1l(8!58>W_$M6*Kt_S@ z@5bdXAJ$BerLG^L(Y6N`rUQe|n(xPYNbDvoh*veYCK+GK5~~Lh5HF(+0iAmxVHa@C zkalRwV{Yv)FVXbXzZy*8cC!si#-0qcXwdKh%$)?33iZRh;BY!=EXaN2eNkY9N4L z&Ad(@fqtWEz6K6u3@C=9jU@2{bAQjve2qGObS=EYKYSr`PC|Sdvgue`$EFvy`_nZR z{S2y9k0ZpKs4st3{&qfz4kuSZKM~XHASyN8IvL@OYyfz_EOuOWA9Ge{+oeOfCOW1e zaEKYoA>DbF-Tcch^q?2e*%YEl2o{z9l~?9YAu-tdUt1U<&{OU2T*@7Gl6PXx9$f=x zdPaDXA(*9@eaygdLnJY?HRa&}R9#hNuZ=|>eZ|7h5FP58rjr%dx)HVWo@Ip07w1_s z07b<7*zV6?1{qkm!!19ILuqCAid$?>Q5X7P^N!c%i&aRt&ac);j)(jYK%|W-rF|*s zPHk77ZVu(!6G@^dxdDd1+4-wjzbjhPudqJJQkDh4p76EZ*ek;-8?Wq7v zN~x|(ZlJWb_KU*s%H^-)dAG*e%`_5!B;Mc1}&SPeM?~}vU0^6HWsY+-Rob?_$ zW4W*tx*U=JjeAv_K#Lzt_sHW%TB-{iaRU6vQh`ewd41~g^|O*3D{)0mzrrOy)VC^F zM@U0-%naJAl3+4!gkLtFxL2(>Y`cW8$9e8EmdjPF%?xSLQNg90#9vt;iM|^6OB79s zXzd*wB(7#2qpxg(d9$yb22if7uxVYv0}TPiJ1OnfSdKJ#{e*Gs>GQ=S6J64V(Sx#%}C<_A{$cNj3Nn{w35kc;Tb zYno~+1miR;#-1$<*DYhPE8=Pay$2LgAhL4(q0#(@D))}e(yta-e5iz?&IJY``{qBa zucHGP#(nLmiPdyL51`&!DM=0eb6bUx!{sihks8?_Gu=13^PXEqxiO@;{B6Kr~D=0~CU` zMaJEB+HRzkJF_g(2xr$l012)_N9fRte-e}LvoJjB`R1G~ zoX2NU}8FGq?>2a8eBa0W4nRf{(=e)hcxiAlTGR-FH&aqC;}HLgpDk9%70 
z;@FFTi$gh94=tqQq{G4$)FghSBR>AtFrgrlgnK=#+!A20s&6LZuiep2WgjvvNF)xf zA(I%icQ|d4U+(}(63;1$Mo;_z^dkb&<8&z?x>VXX-vOwZo@g{whL=HW#grI!1q3Xk z{b^5S93^p6IQWyPGJ(N>zU*tOKTGWx0<$s>y7NLTP^u;lyPDSE-xlj{gX#6;JvC-A zrCo0Qjzgm9);xVX?BafzK7(kFJC<&&L>d0e zpNL<7vW@L~<3kpx%OBm6;qBGH-*#tu5aVNvKWgS0yb!?yBzfW|+LioxM88;Ju5Dvy zC#*qVs7E#xWnI`+i&j!%bVH{R@7I4I;c_^dmeii`i8_f&Bfq_D+SMSi`;keoI;(1u zRPtWZAVtuv>gUM;K2D<(Twj&Gg{La%lj6z}E;2#72+!olx8?BaK#<^uS4xZteU!tZ z;3-8=nX8{#;n-ZAHLC{rK1DJ#~JP>x3yef|wk}rOyj@iDV z$tbXRdTW8oz{I?10de_EKhpMbJhX$dO)200*B?K|nx8g0TJD8hfXi zOGNGE+8#pALh}CX$p~X=+dyh3_>cZT-mgtTOVckv)J~v)w7aHzMmPM!aJc@rp9X%z zgd(S`4!27W6&WXdAFbTjc%b)ZlTrKdq5{dFtELwYOrzkcK-$40-xOql;UqZH4yGvR z>p_Nh$GH59x{^;d6Ma7u#mvb2t2uJ^bdhfhIq@66DP;@ZTZg5aT^IlrbZ;mmLlu%n z8t%-x;=c}+y@a4l4X;3c380g!n^WC{z?+`m@cUls@R1AP$WIcQz%j1!H#53oUHr7( zQ%$5=sy&xpu(#bzCzp1NOM027o`x;D1j0r{P9BX8S?n>EVJh~lC&g=f_2Bg$;Ohi0 z$ja+3>*&5NaTz=wE#Agp6<2X!aMbAXI+Bs|e7@S(k3F8)mo^4S!@TL0ZXYIz{_IB< zeos_v{X-sOv@iz^!Q?p#L{wFRtykN5CyTnr+OgWH(m|+nu2_|YsBKKRJ&ZbbyBZ5C zxWEO=rrY%G#ZpQXOsub#ctND_CY98JEP*ERhm#Tl%0h}@kTlUxpDApj+dq$7t`1MI zVy9I@3t+gGraB8KuN_bg2{lApm$hF7`Kby+f^o2|Q=x_cRPM{^G8GW#baHJg(cmQ< z3c~7rp5KMNMl*IT-@QlQ$`VGa_9TJP80=W}KKCS`f-|EPeVb&!RIeT;51BOPkR{5~ zZV4JgHfH!B4VC!NLdyTG z>ZIR7A6-Mn(E~2V({cNRsB?U6dR43fk(BTPO=YZt`JYa#g#`;}<{UEY9>)|FR;n}e z`-3Jj2tawBVw3< zU8dan&OE>H$vJ`VmWo=zLqI0|q0bh+Fdn*kKCfGRuylTkC^X-3!;3hl9Yv`W*M5Rd zH{ikip)oPf)PDzUm|erm@wh@IAXARpkZ#{ z2nllGeGvxRczp{^3seGzl$NX>sQ} zVlA{NVIWw(w7zT|MY#L*i-7)w(x%@jdVaoMUzqN9H@`xGw+ikWAGo)c(G(@$jBhk4Wxy_ugmc}Y+E}we-lUr}6A{J&RGZ#f=^5{oI+VsGlf~t6zn&iKC z;}{~!!zinR5Z8jMzuIs^(=KIn>w0Zhy6 zx?a%ms}m7}@p=|ojsJ$*PSjVN(mwpV&&NIH{IojZmJiRF?yy6^Husyli?=+QKeco} zZ=K3_#ZX{aJj@T*j_4m+h^~!fv|B4lV56h$QET`k(IPlTH~zw8fzI_Ska}VP^|icE z(1R8qSGugB%RSbJwoo&;>EJ)%{Jx<)8ac}97Bl_WKS4(JO#nH*c{eh1`F-CTe`S9r z4@Nv9=nxNVM0=rH-SUUemL})7cQ6HaaKAL(0KbB|_O+5bdFbxV0qA{$MsA^NkkBs3 zeJt)jEGs83B9CL*re#j`Ttp`ykP*rtsJ%2{t*LWtTgwO)?}o-`yy{y(Gt72NbgkX~ zc%wRP2|zHhu&*g)Sv(71**mw zWQDQ35OG80qhb$3>jguYY?#4-&lIg1Yr9WXwHtSC`Mp^;8b$?*K^@h=OuHcdXBW~a z*rpr7gMwl_!G>AjAUn~%SC{VWI;et1S!Z%|?3G=o*alJeRLv%~JF;^!0!ZCase&+%~9j?ZHj%V#F} zKyACO&3|3M1g=P^LZVSNB?x3Gc{3VR?~2N;JI>o1o0w3nu}NnzcdJgQ>LSky$`b*g zLBnN@Ltlnd7VkcsQ=Y7VLYRXxQCl&$fuEI-U|>7Y`mzSjSSMJ4CQBuQg3djFBqL=X z-@E$*7rvZW5&pTt7;T1l>{vkPc+wtZhAq4XbM&{-EZ`C`(4&Yl=On^du15-$YAx?P zhSJMTnHiN;FjP(NfcgpR@k?kj$5-tes1L?Aw=^fYqyY+ZR@0Nh*+A731@=(JWeYCrla>Y16 zSychz?2@A*2id-Le~Z!0prQ-CWTdb_-^w(-lsl68Jihh@-5rd#*V@m6y0!1+E>b3$&YeizvIno_YP8x5 zaS4w~Y)WC)TbDd~7)?gX7(X0ccl@)0dmvi(nO&>>4B%}*-ttlOJHK-%Ch+a}w$R^C zXD|<()z@`Kvcz_=99waoCmYg5*eZr_h{qv^8V8U6KeXZJ>hFZJ8!EUm+NVOjg4Ji1 za4%b+O8PLHY2o-z9udxWfxs=35ojG|IC2_jlPKy#?EvEx65MhJ%2B za_BeVt40Qq$J8QTSNZKNCtUIAf**wZ-j_G-lWmct>FpEuQbQFVdZawuOh`T%By;(; zB?;4TYC^-idM-{@^3*ig%+<_+noP@G9RvQj#i>2~ywxuwZ8E}WqWjOC&gU4D$)F5>iwh>j2W(z$=) zdT?k_db%_kd&P_8Ih6CT`0|XVSIgaBzn@dWv!R+>l%dKwuov{G`q}XMRdnxw5Y(Fy z^k{VWD@_(8K;M(q8+0Gzt_Q^|fH{aA+I6YZ`e-IM_6ML;wT1fzen0&L$miwbOU4+y zA4IFH+CPte!(ne!G8iHRmJuFjU;!a-86eTIxg+`{FE8tNl38qJf1}}wo&Dzo9|>%+ zF{RC>rg_UN54cH<7_v;7{J&?4@4v|+1Lff$LN8hL)D?%q1R$r#+iq1IPF7r|Td+nV z?EmG{g}ZhC*G&J<_3-)r&o{Tx_us5*_>?GWZQ0XO*SXnK7HrUxvuZ`zV?s(>}RcY zuN&pSY&dw0HzXh;EF&(W;=XdK3+t_-x*+t>n06~Eijem}Mnp>oM;B&eQ_j?utU9CX za;NQ58QGcNe!qe6n?_zKF--ab6$Axw)VY}Eaprrl!3ED4M8QTH?B?5TCY^ab`j+E5 z^+X~{3$1NkTV#uNs;wW%$SG`HI|viT*t7&zJBUpHY66xFAqTLvt^Jk^<+QO&H3jGA z&iJ&eQNb+1T#_Ier@*yz5h*I@Tj>y%w;5nU^H@zePXTsGKeoT0~z|gSpJRyMe~OrFL(0Ziw$k#04%yy}9qV<4z5r zC!O;Uw7g}GdL&eBttCrta70FhXy8us;k6qOA7d@(!PBza=&3)Q=XMCH(f6)727ZV 
z7GZja{he>|9ZBFDk+f2%z;#fDE%kOSw2I49lVit1-x3KB9RE7(L&4nVT5D|qXyQcD z_zT}WKr+hr1F&H6S?=8UuF?yn&O$eCCdWcK2bAO$&I2V$q>ft_K*w8Le1|6tEFsU^ z4w~0N!@NEeBe+xS9*F*AKu#>Rd~W)g20G|BaR@5@O?FzRS5VXao2I#{w^L*4iMlRk z*--a&2Rg2Y5**^DVgR28L0ky{zZ_~KqD8Mqhk!;9QdCQ@XX#NHLa0=l`LxKR{9%z{ z@9NJ;Jqxjc+yDh`td8xerYlK}yfhz7ZP9*C)X0lCgFPG)%F+xYIVlSO%DO7Sx*q6$ zO9|{>d?!q1SJI7W+TQ(OL#{~IB#8a2(;^88c*8UbTP*i)vvJ%oIA6a9_(MsR&mNxW zrAX=CaI}oQp;nFx%dr_XG+qc;7TTVgB^PS;kO%zV-IZS25yO$ueJjRFfJ2&FVIb~( z3=VS)W<9TV8hAgj-2tG(1P76WsL^f%t%eR)yWhCtGp~=P^-UlP8xiP9X{R8WRdUnY z9hS-OiNWt9---pPolkQhPu~O~l2JwG+!(tQ6g3>Vq`^OH(1PeHY<}3*Ga5|bsh3p? zj)}C6dSusDgD(2@#*07NI$aPFTURi&>AW8{3bI?*c#wPm1_)!U?A<5RPd4T3i$Jv! zk&CGM)^^8ZaX4<*d+9f|hxiN9T*fo1?Ysauk?BTem8)aen&;G7W^8V1NCa_5&B;gC zLp>6gdGXH%X%l04My*1Wtv=i+LG&y=j6v=8lmY1BjkGJ+1IY_yxPDgJ2OL!=v6@6; zn_rQ-4n5G<0lw9h%bP!$;0{n|K7S8@zYk39SnI-h0=A&fdQPaqCT@e}7wXn;RicIt zg%?K#s~uB{8{TXX#`hfz&4scQT(n|IiGY|<>wIr8pw4F3X>kf`p%Vr{9k-`!9K1iSEH*^vct-);D8^>wR@`urj6DqBYF*b9 zW&33SMnl*iit)qimzeM;I$M0w(^Fp}pnUFDn~k=7G2TV^PZ3r=uV5ZRYw>W#gO-x! zD(i*dU96jFLKrtQN)=x2#p^af0~MEnH`<8YC_#FRNltld4|l^x*=B$zo!>HYJ35@` zsePQ(yN5sjaeA|e3gV^_?5n~ueD+U0_aSLu)f-t$yz9FK_t!9iAF14G84 zSIW@w)HXe2cyW1v zm&tjzw@X4xpDTM5UXxBjofA)A-*dEjG9fi$hr3-B=Rz)>@W_teHQ0$dAM+m!<`RO^ zM%CTJx^iZa@~}GQrC*Q!_1D6EcK5MA0gj|vtMBV$xFazX{@c_|4Tp}k3u-s=DT(S%>A-bEx z=Www$$?sKMwGDq_gS!^N0168NAs>a!_cP}WdS7LEd5_(ME+OCFX>IQUJ_mj6Poj1- z(o1e-Ow2bA4iD!+a|I0CW|8XZBE&Z-C55DHDGo|c9d(yFnnEwvCGS?)C2kG81U7z4 zjw;i1+OS$EKD#9-RwsfT!P-DT$n)N}eptzxBPAmKv4f7X!R}|Wk;$HrfS{V();7#2 z$g9uRRw1Fix!5Y}Y%v||mVoS|>_c5$oVhzDtt+Ip*?5RID_Q!@#*aN`S@_JyuSCzm ze=WDl^Xz+Hy>j&JU|NxZLqHY$O4;K1qFUEAcsyh~$Ej0=ge5j1L_I(=q0I^GpnIUB z!iYNO1jgXSt_SL~`Y1LjSz$Qb95kEr;jFoiE^U04bo`ns3*Am;kkcI2RA-oN)Y?`| z4{v4sK%lqJP0C^!&~Zom9b%Y6BcWap6?w9MOJK$joA(~Fb~ABsG!YP-a65dfO`h4Y zp#SN!GQPr-o@@%f4=@qdf;+CP#`RooJZF<^=9w%LWh91|k-M8g+TRZ3l$d|^!|+7NgICGSXIIPr~WnOKct=_0U&pfnE^l92jD{zvVnN)2&fX zQTx~+u@NzuZb;!HrMVPB*a5$q?!ydyVV8&?r?3LfM})G)<37n3k@zZ8+E(G|H0VEB z^N$~b`j@!{K+n|9IeLa)0z4lYPp=gznbfyn17> z8B=#7biLDR2FfWX)7VPzutNnDmb8pn_y+VaxKh*jsAjG+;fN6)RXb}f^_vY%&+z5x zZ$cy1rK%ws+q4eZr^^Cob>KoAb4_?a;sXJw8%ZD031j<|{-`T|PZs8dY5kgE=RE~nZ>1+8UG`}rEH{W_FNQHJY_#M?W(3!i4?%*ftqC+PW*kjL!$1 zV(WvhAOJaVq4>Qa(P?M_{7w9 zrx~t|z!If&#+5O7?X>pqCchOWGDxDaeJ!mv1OAX-jGDBr@5if81lNr}`wucg2<#ku zKisND` z(a)k1)(MvGruWh;rcv0@-nq+J^QHaUg>o>3hp6tR{^7frGtu2)9Z`B zEC8|6D*`$_D$G2VeqjY)C~mfS-s;Hac%Rr%JPVv^xnEMgkq(q$GnyKfgA3Xa5x7gW zq9iW$s7hGG>agpTi2dv@zXh-&XNokCNa%`lwH_PIvfp-rkW2Lf5gak#$)08I*Wv z!z-m)`3rw%X_w$@Mfrit!vo51@U z8QGuROTHk_DIAg93r)RPfe>jS=!(! 
z5rE{cjW2nUjlptXOwPV=IXGCIzbS7zRsEu#cn8DDF41J=JrqpST$O4$n9wVvuIwNb zmwH2_D~35fZuUcux7jBYRN_%`_6LGu0)?~fPdq*PHnX9m!j9Rxlih^lY7a_I_=N-n z2-Y?IOY*JgQV1NEz$}76@L^Cc^C!7){*mEHj$E~gL2YAldwDoht_r{@JQ=K-Mm_(l zwLGq>xrc(()GOb?2_N=av{w6jgubZ{#r}XA6k{m&W8c?3c&@3U3jYCheV{%{%fmG+ zt?7vA)uuxc9_(|Mkoc`CJOUvHgPi9`!}_N%69IjU&jq6#J_ z(+Vy-7^SP?;_={{ve)_+zPi&X90L9S|54?spT1~LIN9(Lvld50`1Zgch43-0EyOMU5co(M^El`9a zx$m#zozENl7dsb_+d-6%YU8S_NbaYCs2P6H>_1P1&OB;YM<_P=U^?GUy@2fEN}mgO zD19*ZGRshiWiX4I1@V8nUPI?anVp3*OgND6mL|0?sj8#nJnE4?z@%nCC8S)GAk+rxdwYcS2 zl-;%U#z_=TfkqbshA6+$!%7Btn4j5Cb?|kup{wg4{32U8nq`*TX6grdwsMS=EClZv zFLgAF5qi%YT>rM(5J~y*uzXA^JRYG_l2-fn@jEXEz7A6B7uFz*v{Q>tOIU-=<`o$~ z8`sP+id|H3f3WOWUHKk-^Q&%}<0b|LcUzd1a(8jSRehp9(1eD+|8##|eY-AqPWg-N zHf#db`3_l9+Gmz@P{~8@Y8#yZei~@1tNte2&r?=$znQcHFyJK;ZlET6CqfW zdv4b9b}U?F-6<3YT%7SFd=v+|IuL-w&$jj$`d} z+ce(|6=MW+T>_icSGkFX2@*Wolj7RW>^25$xs!D3g2cuP)}9+yOstP{2!}1`oY8`6 zlN;1hVCsN|h3>CtXm-c*cT4D zn(>8O64jrAs!n;q1Cr{?hev^a=-15Kx2pnvgo_VxMGf{y2PYx~?Hfi|n?k8@_S}1;ZdWy-TM~iG4*&a)2Rb!}ZtSFj5k2T-DnpFgKPs$kHS(~ zZJv%@(3js3v-|F}slw+tO>1F_wYCXt3-#m%c-%@`wea2FD@P0Yx(qpVeW&cN9{Zqn zx6&i02cK|(NsxhV#l=;zyOkj-XHJmWGTc>Lpowt)%d6BOyK((b4SYN~3%y6Y~eyGlci zq(qDI=~no=(Dg!6J601a?fexKFR+>lU5dxHkc>j<2c6r0OC+Q-0AKPnM8fo|LicU_ zcA|c6-$`(IumcyTm$E}S*-Y(qOR}Z{Rn88(cCgJ~N0)}~+Ys!zqG@IE;1L(QXNZ(8I6ox5Q!DeOsR_|>x?3d>v63+FT-$_bsq%-s z+IP%jqFo&!3&yynp$_0$fP)2hNs;>9{dNJ~P?ZzcGlJY0(VOAP>Be}EjEPmD((n(4 zwC#c4;}q%Q*+e0XBY{Op_hwRCJlU&3z^}ojjD(Xjw=TE3>umKT3|;5pPDaf#!C?}g z_GD7D`;lX@J>?AOd03aoG^i7LEO5`$HzCDchL4hnzpG!oaIICTr8%N#xFIXOXibsp zXVM(rMen8;PDA?(eW-o#_Sru!1pt9=C@Q(Z5yLU9n-;K~=E+X?_pxKAk$rr8!v*Gf zXsUIi1BBL47CMa{{A@&endP)TxQxoVgGK&A!G*V!uORm`ALi}6xotmpg zM|Ng>!zcNr-3tfLf}mu+Es#TPju|B`kXodUn|?60xw! zcyN5#=h{6~6MsF6G>78_+hpbK>w*?;x1ee7&NCzL(%9R+jSz0=!e&q{K93`wy(6!u zJ4`q#TJyTY3|oEa*D7o}#M+C!pP9|xPT1jxZ50605J-8IU*jIuN1NQ>%4JS9pWR}I z{%TTeaUI#YoA)H6t7G$ShQn*d#tX91lw|Xwb>3pc!;!~L{f&WArLrn{vTgVKetyY^ z2n4xoMge%FHpVAkto6$+s$s%OSkN&OvsU)qN=M9+Ii@Ojv~bz32F)S5(>0>$d)oUY zVFN&{9zbp`WY=Zk1ZI$4CQtkbRu;);@i}f`Z*qJ#H$jZ&QtA9I9f$h z)6n?AA5*;u4e6NT7<$WP)C^E2*8*z7--6m6}6 zDqj8|j^!EiG4Ju`<^zltG&r*4ecaTazApyZ=YHgz!F!o`MJi!&BGfsS4!5e~mRVa% zkRShe6MX)_6#4LayEkQy#8jL;3Lqbbgv-ldUrxqK5!t=Zq@_kNOU%9!1Y{@g5y6?B79fM+9V&KB_<66S< z6;Sst+2L(~(G}r^Z(R*HB$2#S)7`1O_!5j!n>2kL@WtdG#^NZ3tuImL_P(!~;*}oB zkUr6R&iZ9SS_(UYh~pB46|vnnQQGjMht9sR8+M#Zmo!8@Q!$z0_ZPxNAN#B&r#?27 z%IRF_LQS0tBs~0Y1nA*RK7e*<@SX|!_VEhJh5rv-Dsoxj0e#L>C;Pilbswss7-@B? 
zO%9y(yc}U6f`ZnGsLEXGN+^pFN_}i%Fw!!wb^WptNRt)F4>E&V+LgA45BW%C z!%BHsp*9^otE2bxs#u}}foTlJTdT)KvaXw{k-9)5;V2?K(jf6}W5D>F(1fa5FyTxj zZ~nJvie(z>?$rlIzNc|bS3+|By9KsiSXgA8ELTvxtgnG&IqVm--&!1hM^o{$s8tR<$&3I{+kaLsYBM=4|_Bp zxV8hY1{$wv7g}4?01x@J0VDdpwX*a{iR8)xsl*UwjaFG-<;hMi)%C|^?cakXb1Jwm ze?{vzynXu37So?zym(9nc`|~APID!W4h?Og9JZdU8wRvYb}H*<)}12yzvh%wf53st zq$0mYn*KPm9s<^}S#e!O02)-oT$M1eQ1R)4xC->61(#G*-23F4>lCxFPAWm#yNtGE zC@J;YQzfT{M$S3(8dr6#WV{{C=R=|B*FASOr(qp(kGo*dAEI8gHT|(qmZ@f>N%Kly zni@2@wY&T4c-ChUA?GoH{Waq2Rr`c#t1Yx{&m7Y38t`+oYiV3IRbkq2fsQmv=6AUv z>7B|%{TD{<(8U>Vu-<-f{M5&4c}bEbs{s+@=pbc$s4wrjL28_b3D7M`!L7d?a$Izt zj=y~>PoaBk?+pP#u5`OZUW}}(s|bm?G*Nihls0$TTy5PgM^fZZ4_vnbLV8HD`dsJe z3+*IZ`v9tGRZOnmv`<44zUgV~a0`=rmPt%xb~dHr(om)Yu1YZ2J`-`b?#t`tWrcF2HC%bjW(8!X6;9dU=$T{RHIL5o^6v zb-lHB$M;+cQp3v1dnN^>wNK$g>xN<(s{KfF{RoV?`FP8nw$S&iA6GFB&Q1#MJ-(*i zWV`-lstx=QJ{3Rj$AmzmVhS2WuBR4Ha6FWTAPrCv;U1#l&5l^M z1bF>AO;#y2=|1~LIO|iDVuSoo9e44@(U~^c~OQ?;OV1p z2)FS+y81G67Wng;0B*7Sxu0-{`)HE(sTKqxVaG1xUGQIBp+5I)yT8Zz&giHtu+<*y z0n^S_gZQSpej*5yR-!}JX(uj1D|n)NE%|43HA13R(T zd+oENG}q8R2M=s51OsjzJ`K6nn-Y%rqaG;yG)UdA#u%5L-9i!~2$h7fJq&2hQNQ-=gXt>Iq{leeubi7*>Ex2?jgT>IR9$|Gz_&%u1iVKmG2?!;*}2ux;IQfpvGVO$}32o(I=D*M)3htbF0k{ zF2)I!9s(2&UbFzP_-DlzK<||E+Cq|5s;`~!0e^$&&o8(Rmz2ydFdl3ghSaIQ5P5dP z2B?_$dk+^g7}(n9%HIIfgU*G*a1_5-mMT+WnK)PlMdmu!_um82nfm`u-|D;0GSb%a zxgqjoC~vyjg4a|)U`yw-5qPdfxHxr-RXR22sdg}G0mZGSV}Uy^nl@!Eh^ zYwzyxW;pi?j5c5Mw3%PJwk8+su}E9uF4KneeB9dw{tFE6rz-hwi)-0iB!$RUe8&xvT zWvPzyF#c0Rxz*a5oZ@OrfMN_nU6@~y1ExWe9SX$czUhA_vx zbO{Pz8csmWtUnpXvZ;TmC4b-V*v|+bodY^0MfudVgJ2}b_}x`dGnw1VxJS5KT%+P2 z$H>p8Y$Hml@^M&FFZ4<%w=&$AnC~l%Es)RCJu3}?Uyj?m&2(n#PsA4j2Qq$b)#0rf zF_Af~Vu!4I)EjH_A$zyQ#3o0Q3vAK8OjE)<0bqgOJLm_5;31kYYz`?!7uh)DerLyi znduznt9HMWLB~@MtCbtm#MGO{9SOV2#4SObL8)pT7O->WhZ(#e^9vyah-}Z(>-N8j zKftm8w%qCz;>wPwk0>-a0#9~3h(|L``WF-(q8glaMKEplf zjGBYF5d{c}aRE+=$sGzxZQbhdQ}Pd1y=w9fAWuwSt@j`v>J_^5mvHZdBQb=V)Gf03 z-RXqJ@biyRH3^CRYQEfnZ2^~kq=Yu`JFb368MjQB9-)9)p;h?pJUckM+N_#dgouf+ zqdBnJDa0?k2D9OgM7#a`kqPP%+nIDKYl;e{?gnt?(g}%b;&_(js+?rf!RB~BQrQ;%Wx;!GznJT4G!v5lr1=KpGKjDcqJiXQ zhor80Hey_FkR!8XTUrCTw!g^D^g2c3F`#-8pF85+z^-+s6O-&D1g(_~tWJDdxzuOQ z)*@@icHy##-MbWgkMmftd)*jYUR`<&&i8)M{OTNetFT#)@$p5M?teFq=QgSteUL~c znUJN%KVem3OKTfDch5xB*LkLmhJ z=rB0!-dVlkaQgF3{rC8VWO+nj?6>}G$L55y zlHW-RXN#?F!m1V(sC=iK)BLTC=^Ij3ZiG)1<$9*`Jz22Zl)D^}z>Wo6Tzcr4-I;sD z1lz%s$zq>tOpiNLKulkQ}8zg|7NDt@+z+jS2mA{ zXj2(aJFR$<-k+dn4ebRL#lL8U;gH3?RChVaV?P#rXV@Uz*N898)AyT*=}n5f;X^ak zzL1OJl_?il9E{CWv1t_?aPd1Rhq*@KA6#|V zKg+LETJ=L#K!ZJBU}w-fk6u^Z>@2N>S~j>F?j@biVE)cl#HgJmI38mLlt2{b;e3#a zyZ%&MFX}?mt>F$)WetO9S*N0)C>FN;PAh$<5rN0nmLmv;24GQRo^L`Z+@YfWxQXR7 zo)GDcKbWl>N1>%8)^awlmyw2;FLMLqU}fy#jtE27_?8H&U#^$n0MpO-)kY9i__Ot2 zN?SJgr*Txi>|7l5{2{IXcrbY)^sckX`@y=qMFMa*VMwJHr8!iL2VRJ< zn`QP*il8Dn06kAXR+3Gf_c~MaVdX z5%p^vFknAgG+1~Kff8;0K4?0*(b=Ze8JcK9DJmg?4-YpcvbA@agD%(1Ib~#&V9=-= z=@A>wZO#kVNeyoO5fd%c^yu1Ya_9nkjlbDako43Y?oa?=cxLlxZQc$tAf8XLvNlEU zfp&cMecQ^%JwbjSnX0Vl0Q|RhdK@=LB6Loxp_~oha2wQ-n01(5dlwOUkg$ z$rXC%!@K%Ff`8}mi{PG^;>E#??4`#YL@Y(HxUtY6u5v4b5Xw{`#FHtczx@fS!jb^- zd6z(nR&JuKN72mYLK`4m?KKEWF=zp7K2C1^8jD2i576Hd3gq(A@KzsA_MiTk_P)P0 z@N+@$h9K%Jjjt3sERO$#2TNYrGy07*sZEL0irc{W^9ouvQrS;@NK*?JV_l^ow-ZksBq%fwzFgzDGly(q0bzsC?aBg^+j{!4o6=Mf!+j<%}etfr1@M zKouMXK}09;SU>KES^~xRX<=>x6TLEk&HqT#q7|Epasky%G*K~eN}_{;1`xsBB)%o+ zA_xY}S|B|-Ysg6G7_%}>_J-sknVPJBnx)~;Y6si?@rYleqr^{7(zfKI07 zx+~lJE9V&DZ6N%FUU`mH@=^r0#4!yw|Mc*_Ja&7@H?SH=v0}33*7&XKW|Ky@?5H9Z zcB8gMw=9jzr5l`T3jSr7LfWFWsNGI#stA`om zt%0|Eh6=1tm$4sOBybYi--4??ym=Z8ap-;XSxLQCnsB0}XwRmFcb3DbOR#bzR_qKv 
zG}0ZN!~u1^M0Qm-UPhVAv>b5jw@3kIiG!~vCgX|PX8KlIdS}+T=Kb5mSSMaHG2QC)g>scaW@^Yu z(66?axB=a3rgC-42ujy(>A@p3mi1YI&8dBm4{LCCuvY(E3(inF>JGnYcal}FiWMfA zu0dExM$B|5BMEwwH)Ndu=jO}J-H`vLmwi=bLCBZy8~FgkJ&zf5n*~7yRIf~SlsShE z<*BUS`1m(Hx}CUH$)q3Uf^w!!^mO@Iz~@v~Y9&m4=F-4wB&*Tv zi50*wiK9djFa~$dVERcEZ&Hwl`#;-*& z9SdjQ8TJ@@NqrY^8ua~vogeIwlNcm|s&Ch)F#R2GZ|`4VKFxG;;e7uRDSVCC*%$H< zB#}rmcCEk+nJdUk12+pIFrIiT(;_P?NyWcqWbtaiKR58dr+sPokp0Us9!Y5l8c?g< zjD0OdaGR-N2(GY}k5K5CjTr3Bp31Y9&Z^5{oJ0Tm7(%CQFn?cFliAZTx8w|P(YK!L zmoi^%*kHXVuFKM2`z8);^2==AKAk;xVHJc``E#Gnx)@%_bYCJa%XB4Bays51i2QYq zD{N8+2v(1$QiJUWMWKXC!1+0rG-jFsp`MBnq-&q^CtBBuC2U#*oK#mXXY4VkPfr{O zN}h1+nv|w^w(eA!?PyWiz9U>px(OA%6N7Qu)p^ymM!e@L!Kjl%i{tICf?ddmp2V8` zmIoH@-T51UiPJdmXl)}sD_%RC7OtfFSk4b@C(C&5e8ma(nbi75S%QE|y%4UW?izy| zM{TXRr70tWp2sMg;m2M;i4}_p1(Cq#kQGt@81rui#C^vejUmXgA|G1l$ZD z(EGOT0}675s6Rx73pCBWql2;!Df$%8e@Kg5v+f`|Xc%9xwx+tj<-Oo!*x^1SxI)YBlrF$Nz5jSra9Ynv|c;D-1o3${DW>t8#=2v5l)hLZ(8MrUQ@~+{632 zKv`;+Wv7bc_274`%rA~MQ)#$GV|}w_{sYWF&A5OcdkCA+l3SWW3ih}(;`bkHqH`1> z-2tyosQ}4`FqNC!ZW&0A+B_N;&M`+xvDwj#@A1rZ`E=;m?FgI06$pGyxn3y~f66tL zVrTb?eczRQgC`O$%P@Ix_D@ANK zi^kz}d-tb5NQ0c-AS%A~?VDEeU&wW+nG8r!CD2~hx}RF7DN>jzBheEW1_K{8g42$; zE^7EZA%P9uw$ z4jZZsjRe*3>rPuOt@(cYl;QXY{wAcZEpgCaM6PLD+;-+;lE2nCZjzQ>vA-PStYg}s zBypIw5qWM4=ervI2?28_Tvyy(&a= z?8f?`d78PU27kF3{dPMgu@mWgDKpg)F;qytA}bk$@)Jf#BT5%9*jDLTX2gy|rQjlJ z7chlL202T@BK{>JXhH5EG9`OPtq*>|hy{qD*rUFWD=+jLHk4~m>wq2Y?#lid z=vUoDfYM1-Ri-%&YXO^Ncm-d}CwbthXc>pnU5{WIRMXOZt#TeJ0x5P;iz`06RHSve zB*r&6uvan=Mgi3b(Ac-f*}`RBncjQ7@qVOdAPp1Q9z10VXM!YYUCLo01eqX#mwM2Y z>X!%=GycUB`{Z1iUx894UJxVTB%1&+F?IB98l^#Of$SH^a8LX#OAel3SN9CL zRYP7I+abif!@agD7*pv>5OzQO2rtUB`6kY<#$bkQNR%k0Z`wwX{& z#z_XWj{vJHLk|DD*TlSj{TqmNQI%Mt28y?q75d`BvS59CueoPW(bVPe%Wa0aZ2F58?F4{;Np~v$SHyq{8&tc{H5#(i}tm3mTX;JWOyL-IUo!an9NfUX#6uD0dA#e^L;u_3kE$(ezvrPE=vuI z2_sYoYP4{*cL_&Gd}@hxn*VZDDR0+rBNxim(Bz_pnQtb4&R$P+&v8(gB@3$D)1P%plM|UXw+Zk8gdxOsEskUV|UYmZHO77OO5#Y02hu{ zrz^EgRptIrn3`k4B9-%HVc-OdO$t zaRcL7QyKi}S?lSAF4%FOToX-w>>@{EhA!9EefYZcG;VLitZP`xNGAuIc>P4UV+E32CcJU zor<0?qaAbbhRw|y?B*Z@5}YivZAi6l=)VJx2|;dyiybjwqhMOuW1$|$z#rg}R`z#e zWp|RP{k<#cZ%8Fuk4$gU>kA`WrG%$q1u)!6EgIIX!XJ%vM2bRsJQv{n&Fm#GZ~#Yr%`+ zA&HHvkw+qHYi%-jI&{36{XY+%6eJINEb%{oRR?9~ZtlFBTw+rZz!9itDF4bhb~J{c z_t?Vi%Xl5G%Z5Gtr%vvW#g&glT{mUK9Y8jPR3U>qV^BLm$5^MGu@_6J_?bt?o!l<} z9{yakE8LIs%hA0#yH(F~=gTBwpDy$HEX`q;4?Ks_^3p|iQ7ptNiNScFsR{AdvCG#- z?lvbL!Z1@kbZRdn;0nbdM04|df{uQVp^Ajw$y^N-n28tLAnq9EZvF-paw^RAz$!Dg zHg_k>h0}mbd=V)%~EB zII^?igeX`I_-sT)C>;1a&-6>r!4`7Z6$v-KHw1R*xx-i@z<5nA{<533!p_*t&j7@= z7GSGjHa2v5(N6EE5Tm43DNp|TUFk~z0%19b9V__E-sE}6sL1&H zi%l--*tu#0tT+?0{6hz&PfN_+vtBROY>YsT>H!&V6S}6*wXLLJmT^b;cr<0YU!*_>V}sxg zM+A>AAn8bKRy$syOFfbK9m^=M20qz^wFgGp3M@28%W5Eb7?n`uuG5~AICe?@(!>Oq z8xnu%IiBLt6EE^%4{D@F=$matR%F37>||pQy$WkoZ~a-g)WrO~qOk z+8_^U>&L{jfC+eNAAI<-FPPO~1Y?(6?I#~jQJgDmo8pwW+4XQ@#J0RGU~R#qDxq4J+n)|ai6%x`=gx+~kjIgP zrcAYLcXKFedf!?)G;<&`(wNAzaqo8*p4X($hDtUrkZP(eJN^l zMEYaFy!ETgl)HEZ{+~MiZPuovz>Vanry4-h1eUY_+;oqUU?QO@X8C55@SVF8??*-d z(a^nC;Ht4zvj}J?E|Y=BC4duD=JZehnRpn|-|Y)tC=@w`0SG)@{an^LB{Ts5J14Oc diff --git a/static/images/community/twitter.png b/static/images/community/twitter.png index 339ed1b01512be69d8c9f38d1f92e943790cdb8d..0e5a443c0a1404b5340894d90ad651e1694603bd 100644 GIT binary patch delta 6400 zcmZvhXEYp8+pa?fBSaS^%3zov8AR`f5rfeQ!ssPRw1kN$qt}?|y+;={dWjZY5Fyct z-g_H_&wJK6>#X&zZ~xr;+5h&s_j6rOxl)%{rT2XdeU&mCru%++7e(?~2fe@od`=P6 zK>;v7U5}<-gqD{yN=WD=6LmE#53XXVm=XN2}9Bt6gkC zf!(hj6N?Y3QBfIhTF2{5w%mtB#Mf8S!4RC-0*-Tcy!B<;3RoP1qh*OmnpM9U!QDbf z_Hh{5Uf0>TK)_Oed!(Nn2u1+}1M&R;(-5i_`huZcm1NG 
z-2c5y$z6^FWbU{0LnXvlQ2&nik;80~sD_0engn;j=5N)(2WsS)ugflcC>L)(En)0M zVpi+2sUH9a=Jajr9WMR0p>~x!^YyBXP`>qB;%luwSAj;V9*7)2o44OP7>+=K?H~)b=w+rb8&JJE095jDm_Y7&G5?+2T2q+E?OUo5!iP zyRyOvNy^DseC~-Us##7~5$!0;z|-yM{Yb=GI4%*G7jrmL+aZ@4IWrq-54|#h1%SZ* zA?6Daxguyqn}<^=glLRf7=?p-Cw^Z-4g+~C7Q?Kh&TKtxYZ>9C_$cDxEP2bQ&=v{{ zP&qHwN!{GWDA5?NN7B=jvA~RRKX;n7p&;GE>o{vhT}#;V}I8< z>}Z1#5xJ|3%{$Crn^j)!R(Cm3YG@9iZf!;7AKi+pZQf(96{_$a; z^2P4X!ZbXior0IZb%Hke)RRu96EZ3yKw&A7>&UNZV^Dd>-cvxRl2z_14!T}G245ZG zed5^zFko8tF?e)#OJ+=GWkbT+FA#}(uMT6Dwd9J3NIYyyn~%l<+nI7Iy1Qlny}k8J zoMLQYk^9)cjn}qN#1ZAHF{T8${`&jR&u0ac3lEIH9QL9;Mm!o5D|vB}dCB32Xt^n| zX6$p4TBldI77Gm`okRlKrY%kPy`O&qJ3HEj1w1jXkUp5C&V}r4%A$#+a_z~y+Ra)_ za^=^A5^ufT9qOxc0p|GYt3(IX8SAhBF2>u}z?rrc34?t$dbZvLc_J)ZOiwv2{lv0i zg!SLvU(%rg8RgU*3p`h6ll#{64CF&S-H(k(qlhxe^@_Ndw81?K3ebjTjP>zOYQL@Q z+ZoL5Sd5Iuyw{HtEyT4y&4BHSvE2kzNsqoU)C_>D8i^xu-5-A(bL{-ih*~c=QY_Kk z`<5gP=_DQRl&_8rr6-3jwPqkUG3zw`jUjCNB^6#$-Oj?=_6eCa@;G{@K0!OjPWNdW z52YHe&o|W~qS2Ray&qik1AAzoHh*0`r7-*uT;E|dI9hjqH(NC)^nTFfG$PYrBy{h& z8UjB~VCs;((5+ycvhGa4!*!MzfBJD4YZlFIXJTV>##CtJD}xr=kF{|zOrI7GVaJ^7 z$~ogf;AbVktnCSV+l8F3NOSo-u1e+s7UGIbnsXS6g|3eOBD>oM%;1J?po}8#!pu8( zu&O0L+CG)&JkiDBk2ib9#!n@q8rnZAc#`V?nx8|j8D92YWI%f5QgU~vm9E>@9KKmo z%t-UH1Yy>fm9F#=v0>&7ODF;#vIHk4x?9pxixdfc+il^6vDXzoE3Kad-wX&GXW}6_=4mUBLv(&4-?%mG z+%_&I1bbyjI$hHFO#huKY+faMzm|(C2GY5}(%@nU!+KhkB;~B2jjj!c#Mo z8bFH0&4>FFV`5uI*u3{lm^4O&^d5@_d2#m0%OSqAQVoHK4JJ!Eez@5_OaNrlOR`AYu(mQ^ z95LS!KuFvveLl~MV1&Ggjh|;V-60@80dAAe9o_rwG)`;>b*%4UL~oO95WxbH!V6t zh|Fbzkd=?RT_4%O(AGl(Ev&V~8qrFtN7?gVn2*ZTe{9OxczqB)leU@m!-DsH zUTH%=1R#TrpO+2Z)JAcu3kR%-MjK2=<;ZM5~WxsQk0E zrAFN&QNT5)lu7!J$`t^f0{$raWA#hhmnFzYh5pRI#^KePi zNZQ`|eR2US=urB^T)laxCAGX#%tHr0=x<5$_D&VgmYdT<_L;X*_J((I_#jyt{2&%- zLq%*7#Ui zS#Gl=LmJ7m|7u_9N2&l&dK9JpnQJqGL)cO(nU>W*zFZ{hGPz2r?J9?1tQSs#Z;?+= zFSGyfCbR$1T4TLxY19Vy`Lmp#z58OO=>8S?ivokyDC;C!QM_*V5kXf?ejEL$92X|K zJr`n1w6x3urBP|#pfVJOo`expyz2Cr{QHdJ2eFRJqj zX=!UWFan;S=E5gn60X+4^+3?Sedo^!JTh6)T?am8puzChDu;I&@25JyucI|tpOyuc z2LtAOPG`KzfTfjkA~5pVo`Uv;wjkHr_)|!KVnx#?>P&Je&0^NtdLRa5{>ka>R-}yK z)3&4{mM@JP%7KzKd4yNE$Oj5Sl17^|i| z?F!GCq~okYC13N#+$JLWfH43T@V;A(q8H^MBea!Q&K7`D6i+EFItilPG4tIxUYT)H zmmTE;lV!TnS+Igz%(+1bMN*aK0M?GW!VG8}=T92ru&3%!HRfqU5mC*{=Q52I&6p)4 zVHys;iskzsA4qQgrCzC|0WLKxXOWX|3o1+2WYha~lLXCPa7a8b8P6O+}0#re!c{IBD%w<}wVo--Yi1_i@YhGqp5%`JiW zun9@JVTVhZkIF;B>--C&A#ItXHXvB%R^as*t5JSXw#G}JpJ|EZ%DW0Cx^cb19Z~oN z_j8^T^ox=v7oX6u=EgjGY@e~Fb_$gavLAJPwtE!c?&`nrV)q6Cj0 zQgM)_X*rS*Ib+RVzSz&EqQbg9Ezj=ad55dW{Wl(NnlsFmSOl zkQO|WcFM7%Ix=RZ2%uL3UUCd z@>Z!&&NX6`;L6d)z7B!G7B4J)J!o5??Wj19;scGa?5|VB!zhCA@x?r+o{~ zy-N+jle5<+FAa_q;WJ>sVDV4l+oxJo0PKQVe6u7z9?ls?U54#_@(xa3y>QkR{~C>k z+lT72W)+Z!C4&4!w}6&hzJ_6dAO6>0ljeEs19l|Tk_Ia`F`;3icOAIAlFO;9gcSD@ z(aTdWUCoqEzK4{~DUu(B$ZsemWldl}{2qXV15k)T?)#jKM`ZX|&)_Iu3DLfzyZ%z~ zm$3Jc4KK+QZDk6$PY1{SXODbc!x|(~(rv53FpBnV_dH@wj<5#tp7H3HV(^#G0pjly zxseKV)bR8I46uWdJF{R(OBU~=l+8qJB5BPK{QC#Opb@m{QZfc8;zpvxU$_hn2n!^C zc}k#>diT0a3?8e;+SK4+-=Qsx>(%P9i8r#>d@5LwpXsHv;f7g2*edB|9alBaJkev~ zORN1%H`XrKSMV>LrCnqi{&~b>UcGH*WhR$}YHZ&z5I(KfsJcBdQbhMG-S&l_KIbnA zkN9j5AnkVXb8I~$w|h%JrQmWyELjuH(9*%6B5>90C?jJ&QwV@ZlRDRXKo1B4{Ua|M zoOh2yCF=qhTnTN~g&Q6oWhYang@yH%>Jvom1qJB~oUgUI$nw)&>#T|y1c&LS(?f1c zgHka1=L_d;^s~y9*O6gXVkIG!JVyA z%dpSArrp`^L_U_xs2cfq{f7h}l?H*gTh@3+IZ{6%0^ zPpI*vyAslQcX=u`7JwV<93OTxErpcv+hRUupubxV%)J`J?*VbEQM6Ar-w-Uu7LGPL zM#twk^WifNW!3(&4mU~51@+N96`kZ{HQc!G(gQ@0@Smv6jg`laXi4@=Dk7pGi$ia2 zRZY+NyPd~ldfr71;#p}e4aaI**0R<#dg<#s}Qb{`ja8 z?h|DiX3#I*ep;1ihD-PLkxg%QYD}jbNpbj-a2H$eLcO{wofTE60MV#*5#D!Jl|l76 
zzwLa#4qLx3_1AQMlc#>KPKe^w1*X~YW?}KR=tTEmqwbyXEXrlQeU77_82?Pp?u6)QIguKFCC{8dD01jWwZ4`HY+ohmgJB>(t2mqbx*8K#Ps3V)NI{Yg(c{Zuu|(IwN%NG@&Q0 zFcsD0;$)P_D`5!y>^s>^MSMh)&U93kX!kQ%;B;k&dKOZ+>Bfynr@A5hwvmf2%t=ug zxR=_otr7byiREG)1hWal)ThltR^GpIQW^T-=X`uer7Aq^hFUG*{#Q42i{$}0M78P^ zwevEEVPo^DM6;NV(z_*YJ>^%msib94Rd5Z}s#KWW_j-0A`d|_s2BI8$svXw^n)^EZ zhtPA#AB+3FXA~|z!@+4Y)*(o;q(7vZH}yPvg)IT+WjP?-77Iurbic;p z9}y&ERvx)JACQH7`xfqMrN048bDgYzZrB$?1Mt)xaX)DhwY}or+PK3D{4RrPka@as z$XZSm1sASLz$Ds%SdvC3Hk?UWF}imc?!B0Gze|j01oMFeUDLoT`kV3vNX_P9f4?xa zsM1BnQU1Z29l?XSyTF1yFVD{d_r=}oxwmV%vlPNRPx~Lxb@t-&@pX++;xr8vgbaEm z=USmexfP<71eUJbrWD@@t?_6r-7%=C9Rw9SOYw zwlZUo+(qrpC?o;18IO*0@dEY3(5D#LgR%Tny0 z>25m{G|W0~hJOnm3s@~ORs~>2u8(cyv&!np4Q%|h<< zVCIC0L?IO& zZzgp$Hqb(G30dhLN&itBZ95ho`}>D9J`U$QlM7um{^yiWB>yg8&~?!XGmPOSYA=#l z+=!Bw28-C*(-Ks5y8I8-)r%jlQt?Q&3+}m5K3Qlz_m1sx|B_6h zDhd`hhr|!=BfgRR7#A2}jOg@Vr|FDuBdb zNt=4(vp4$pS077hyP(F03{91xvLwT%g8A6Wju>*;^y8I41u%3LJZ`}d+RBTr|8@2I z7%i7VD6Fr6DTQ~`#vHPZ|EEiO>S1P2MduI^Mymw5!wI^U>QCywh!)K}W%@~b#*0&D zH8^Kn9xk?PgSs<4n!Y$Cqk>6xt)5~q20wxoA;8RDO?9U^bH`vO+ti>=hxs5QR> z6MNm^>xnN^nk&+vBA==!#}xvQpRfWRKgoUOug@`Q{T$-g8M--~0qjb_YdP&&D(p({ zE7;`40wn?7kHj8%evT0eX!vSBS}Nfcit;0&B&O2B!DO25zS)EN>uSe&-ZANW?)h`zpRfjk~Y?0j1_O@3O~#IAjE5g0>! zyVG&CLt=)!XY-ob@|)tWsCT4MMU|>i`DJ3px4%YXGHXVp)X`@%JE}hClb&>YU*Eo0 z%D6FX(EN|>|9_7EzlHw)9{~LCGlBoAmkI}61j=3r{vf9UU=G%~WpByN%f1|wy{rU*}HgrtSEI9t~KL2*n*Rt1HQDxR*Y+^FYi-15K z$dII6O8hCLB>cq6=*rmdmMxS7g)WypM<2i7X*rUl4zXmxw{EWqhuB9(+AkMRTY<6D zoYR~wr5sO8w4d``prK)+qcISpp~2A5a52$j82*px|GD@-{hvJf zgRSk+W!wtj3lb5A`m_AV{p0UvvbpRxCDvjfZ5D^WQZZp|K3UKE04HsO+T11`$oJcB zuX!T$XG{stQStRRbd7C!iIOJ%=XcLu6;jc{sneMHsYo}zvZzu|QX@#n^TEj+$G-3Q zweglqA-l@)jEBv5!?5t^659{8rm49y_67B+nBFz{%4D_+w@|P12cP>|_2~ zeY4-Z@weS1WS$hWMnWhfB!G4zm1;f8IFDzeM6`?d2cI%)x{cBBYvj8@a>+U67l{Z# z&+5UMGFb+#5vRzYa<0$6ybptgo-=Qzx&M=Q=Pj|(BSf|g0^qVy4Xv`OQGL;>LEV={ zb?<|ZVV6#*#H_u;W|`Yf8_)D0h9r)HwanGK^?@46RbA45O)d zCY7SDj%QSv0Vc?5UqHfb%yaLl{Pfc_xQj_X^fUek-Nu(rk`eY9AtgFpO`B`;Cwn?{ zp@F|0*m%{`1M;%fpKlQP4{i3`_y$D$ntr*dl=AM)4fEXl6R6pxpnBuv~JcPOquHX&6ML`EJ8$4XC2t^Tp~55kCL?S8@90$n}?9 zV&$+_`PWu;o=5Zb`?4jtdO&DnOe!Ms1*-iH>-}g@b<)gWP$&tXvrkWcW_SaBJ z8*3AV?m{KndFgd?{JmsrxW=L%?7onbTuFzX^$5XlFks03cRIcnF`K{DX<^;T$sWBu z9>kuQ7*GP0i@$^)1~c<1D1@*2zp7<1<=y0x2G*E^-f|wSGNBSXwuJ)JGmYqdGHvL+ zYu09hnP;urG7U)*{1duxb5|E0g*1Zw7Plxd>lzQ!607Qs&`I(S=me|ojfieG08^5H z*Iw&Gt6#~FpjYKbaI@mGJt)gvwIbD9$q%DuJZ}qhTpr<&TV{YpXZ(s;s;I65PAPH$ zd=6it`*JAc5KSCT)iTpznWkQhYr{I|y@&6dU2Mwm^rkAEf5os4O?TDKv`}@G18gnbNklAm9`0 z95&Z`|GWbjUY{Z?*zHze2~_e83}A0>wWjN%$>$z5?9Sekfl2vI60THMpdv2yMlkOB zeoAIGCV_rE-<;}TAfJ;5ct%iA)x#~y$&=DyIjn!z$x1rwqRJw@eBejsvT|j=Ei%1A zKSST0(}M0PoUcL4)-57NIrrsi($sqe+{tGNtN$?s3y{8Tj<`YwJ$Kmu#;QvS6d--xIfnhvR@VEtn&?KeE~(p5hca3gtZe`@wHp0K*L z>JuzrLKD?r$w!>?l+Yd$LcTp)zqJ$zI-X!HEU8ckT$=7Ja zHUHDXE9~d0HhFd}O=-t72EVl42cpW| z_DF8z+Ffmc0R|-cp1ThiOi?_vy^FPACALwz@eOd@7MPyV;qEM@=Yej|z;kKWPH%$V z^zl0+L$F35`sR|k`Alr=guJuCtOiS(!X^&*Wir^_8Brni<0_apq;~T9p9;QEy*usf z?7jfk`+gdxu;O{P=dKU_F=ZVWBWO)ltYStZ{Vy1(Y*-mQI$srLjL)xW?J5l#OtKs-yg>is)Jdb&!Zy`qNZO3eMGdAgo}Hm zrb_|O$>+b^bazmf#DoH?DFD^nQgwAe%fjt0z4Qz+zXoF;Bx{fzN7?mr8g#R=1LzHr zojNI1v>Wsr*2u?wEF2hm|ERy&6rlF}NVD~0a#S&m!hH2bK=4WR^p_)f)UU^qsk7Ni z^!_u$SF^41!TN1xoSl#FP`OqYE^LqW=$=5cZglf9^mxhZ(nRjD4h0=aj6k*DHVJeN z#7+v}vTWlbKTM^?1mrf}_Uo9kFu+Q{C+dB5vLo>N+{xL8&X;4=Vl)1o6gy(anv^^j z)V2cHk6b0V$ULs$(N@0kLyIp09iShdFxNEQ`4IWNpvMIqmDm;gf6Lx0jd?2ywSb7j z?s~W&wmYT?Z47HdK<+|2d{k?<|KJy<$2hfUDS^gnJgrgPNbW!-+C!Gq7YpfIgCjAf zV_V}{F~p|;@;B~Lj34w$n^2`o79Pu2eT=yFnPRL8Y3Bmtb^{sgGy>XPAfDyrzYX;J 
z!m3qfU*G=l?=Ysu6Z5oa%@S2B{0KP2poCL+B72IQpNiSVPgc*-+z?VP*oenhHoO|g zJq3#Q15M#k?_02mwqkD$_0Qdn{j9#6cr}e>=g>mt*-}+3PXgDjO3sNB^%@sjM}py= zY!2CjzE~_ty%lH^@2GBLpkH3g2lGsqWndBnuH*QI z3rUJ*j0w*XlY9Fng;SmFO(xmDH%i78yG2uNYO;hESFub&#sk3wnj(nMgP|9>vXhR&+pBBS9tNGP$OUxQ4JN<|<2%dae&Eko` z`}v?nQ%|FeyWL8xR12v$h5(@!Cy9bDhMe_H! z=P`Y&R2Zt<1C{+BQ^;h5^^mRk-?a{N^Hd zsKL4%;rbW&*S0{rCs-L;@MpG!jcBmkEP}OtOHwn!cfYyna$wL!EQ>>`f{XhMTZQBI zpMc<;+`!+y5<-SDkb%<|%VdT<1^AIa>vmGDM6}wIZC-ztGh~xGfDp4xbI!c z-y1S8{R?_*@)NPDXQAC{`nPS0U);xzZSSs%Xm+_3Fa;r3oj8?JU&D)%V)94EEb3^v zX$6nuzb3i!PZ~8jIjR|iMZzD7|@ zU*4l57Xa~7dxtk&{b*`miBvyhtUoFU!xbf#tq@f22^#uu16&*5nhGu9nU?*WdeFo( z!3H#t=g7ag<~)%J;qvwhP`L2N>?e(IJN|SM5iSF!I1nL7Lv(tK=QyN({QcXDRGZ}5 z{$%H8vZqdFZpt%0`F6U9-Fq8{-nV{p9yn3w=H{{$3f zFbdjB;q#8_2ip%nM(kQov8NO*cHgHuwKLk%L|D5;_WKzunhc%EfPWcIXJ4At29>_-;n-d2c!+1(R-LAwWlpE#I>qa@sg5pzyuW${E2W;-W+CrIU z?Jy`>fVP7I=j<{{`OOH`zziv2q)=@|C48`?S)0E#+~jcgm8j_>%HF&asGBcQX$V*% zNZI(j$YLtQ2it$aN%jrV%B5I3tlT?$?}2drXOCd?$q11o!i)+7->56$_p+Vl3H3y4 z=DHCZXW8_>vddTsLE#=%WKXCz>1?XNA;UpP9aAWPtGyRS}Q_Z0G5XoUE;G6wp0 z`e4{d9FZ1G+W28NqNjdy(f!lmDqHqP?s5>2pYUZK*`S!iQ%N`G zreIGlB`MCZ(Ct*tn1OO6<+U)AUbiI~e|pqTcmY+jmHX#Knn`pLJ;_>9rQ9J#O`M|t zl96y~jiL(hepnz-Kwy_#nUU}Gr}3E%^;f&TJd6*FNe$PHJVN^g1e0^H(&#bJq_uW4 zgV8b#1^Tf`vX@_ZA}kwAE&e=8@9NVWSkxKQqwmcIYov;gcGL`PC_%d7qpx|&$*L}$ zXE!>DDjnZ&5|Rj8uH)xlCfo5-(!$-Rl#toipsP-XYHk8JncQHHaye#6%EG}_c4938 zC!5BTlu>byKIZ9l%e90lhGG4VFRF5$Oa?<9~o1lJ+P~Fi|lK}T~fQ=t3#P1+Rt%0%P zDT@kaP=GU@O?IL(aUxn5Nfb+-{7{p>M1mBjRb8Sv>FMAi=EkSQZcHYAcNXO`_rlj3 zB+SHJVsO-iIB%#HUk3b-0LST}Y|@gKb85upCqhBiD=#hyRP?kr)t>r#?Xxosty^ z_xu|uOu(u7>>2>`(1!kWW^(%ve5q8wda>u(0M&ZB`I0(xU5vMW3EIb69l#AI@Iuc4 zhy=cSerT!nZW*&O_W0c?Y_N2>rdQ)Myy?){2d+AsNcKoU=D|kp^@ZR)pKO&#&$DI(P1AG`*|HX3uHJu2|gUJ(@oiE z>Q~5|UzUK+6iB`W-!k1Ez(=%pWf>5_hcP^%y(fO_HIC%SnmHdbiP)F4d>)+saa`4_ zmky*nOfZ%PIwTA~#RI`|hG>Wsl%cIPwKku&E(x<8^uKDWvAA_JQS6hmGe@42^{EHfxP3d0h}wVK zSJ=eSxO@oJrp+>1Z>y4ARN820S`NFbwv8!k|2J?aJEHAshw|j{6=Hcs6Wx>K)l3hm zl6rpI(J*a;L>X4CbH{Ovb+#FJ5RIAL zHXZg!N&+jy?cnp`xHnK)WX6#H=bXrIS4cO6D{f-Tn6X~?{k;2mU?nN0N0p&N{$b^(nb2{AR(@}w70mU;=s4Z)@NACM`! 
From d6a755b4741372a4a8ad5011a46158ce1af75eec Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Tue, 8 Feb 2022 11:35:27 +0000
Subject: [PATCH 009/138] Revise community values page

- reduce sitemap priority; the contributor site is a better version
- make title match included file
---
 content/en/community/values.md | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/content/en/community/values.md b/content/en/community/values.md
index 4ae1fe30b6d55..675e93c865b71 100644
--- a/content/en/community/values.md
+++ b/content/en/community/values.md
@@ -1,13 +1,18 @@
 ---
-title: Community
+title: Kubernetes Community Values
 layout: basic
 cid: community
-css: /css/community.css
----
-
-
+community_styles_migrated: true
+
+# this page is deprecated
+# canonical page is https://www.kubernetes.dev/community/values/
+sitemap:
+  priority: 0.1
+---
+
 {{< include "/static/community-values.md" >}}
-
-
+
+
+

From a520a7bb2d51fb22b2ebe0f4808eb2a987852b92 Mon Sep 17 00:00:00 2001
From: Tim Bannister
Date: Sun, 13 Feb 2022 13:28:56 +0000
Subject: [PATCH 010/138] Update code-of-conduct page to use new styles

---
 content/en/community/code-of-conduct.md | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/content/en/community/code-of-conduct.md b/content/en/community/code-of-conduct.md
index 5dd0cb28e8c3d..a66b0572bffd9 100644
--- a/content/en/community/code-of-conduct.md
+++ b/content/en/community/code-of-conduct.md
@@ -1,27 +1,29 @@
 ---
-title: Community
+title: Kubernetes Community Code of Conduct
 layout: basic
 cid: community
-css: /css/community.css
+community_styles_migrated: true
 ---
-
-

Kubernetes Community Code of Conduct

- +
+

Kubernetes follows the CNCF Code of Conduct. The text of the CNCF CoC is replicated below, as of commit 214585e. If you notice that this is out of date, please file an issue. +

+

If you notice a violation of the Code of Conduct at an event or meeting, in Slack, or in another communication mechanism, reach out to the Kubernetes Code of Conduct Committee. You can reach us by email at conduct@kubernetes.io. Your anonymity will be protected. +

+
-
+
{{< include "/static/cncf-code-of-conduct.md" >}}
-
From 49c8a250446e6fbb7439f591023bb263bfe7bc4c Mon Sep 17 00:00:00 2001 From: Tim Bannister Date: Tue, 8 Feb 2022 11:36:22 +0000 Subject: [PATCH 011/138] Restrict approvals for community static content The intent here is to make really sure that we only accept changes that match upstream exactly. --- content/en/community/static/OWNERS | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 content/en/community/static/OWNERS diff --git a/content/en/community/static/OWNERS b/content/en/community/static/OWNERS new file mode 100644 index 0000000000000..3db354af1468a --- /dev/null +++ b/content/en/community/static/OWNERS @@ -0,0 +1,7 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +# Disable inheritance to encourage careful review of any changes here. +options: + no_parent_owners: true +approvers: +- sig-docs-leads From 46cc50c4eff405966bd757dc7d75e16efd3f72fb Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Thomas=20G=C3=BCttler?= Date: Wed, 23 Feb 2022 13:38:20 +0100 Subject: [PATCH 012/138] Added "OR sun,mon,tue,wed,thu,fri,sat" to CronJob --- content/en/docs/concepts/workloads/controllers/cron-jobs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index 62cac0f001f8e..cafe51102bb39 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -69,7 +69,7 @@ takes you through this example in more detail). # │ │ │ ┌───────────── month (1 - 12) # │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday; # │ │ │ │ │ 7 is also Sunday on some systems) -# │ │ │ │ │ +# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat # │ │ │ │ │ # * * * * * ``` From 8dd6fa677db718a2c7451eed0333ec9a0007a6a0 Mon Sep 17 00:00:00 2001 From: Brett Wolmarans Date: Wed, 2 Mar 2022 09:21:06 -0800 Subject: [PATCH 013/138] --all-namespaces shorthand As a beginner to kubectl, I must have typed (and re-typed due to typos) --all-namespaces many hundreds of times before I stumbled across the -A shorthand notation for this. I sincerely would be grateful on behalf of all kubectl beginners if this very useful cheat could be added to this cheatsheet. I have proposed in this PR a location quite near the beginning of the cheatsheet for this so that it is very hard to miss. 
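For illustration, a minimal sketch of the shorthand this change documents (an example, not part of the committed diff; it assumes a recent kubectl and a reachable cluster):

```shell
# Equivalent invocations: -A is the short form of --all-namespaces.
kubectl get pods --all-namespaces
kubectl get pods -A

# The short form combines with other flags in the usual way:
kubectl get events -A --sort-by=.metadata.creationTimestamp
```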
--- content/en/docs/reference/kubectl/cheatsheet.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index b3c3536b31952..cb3aa1f27b3aa 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -38,6 +38,11 @@ complete -F __start_kubectl k source <(kubectl completion zsh) # setup autocomplete in zsh into the current shell echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell ``` +### A Note on --all-namespaces + +Appending --all-namespaces happens frequently enough where you should be aware of the shorthand for --all-namespaces: + +```kubectl -A``` ## Kubectl context and configuration From bf7b94c65a2e0791e4ff458210c75dcb1feabf71 Mon Sep 17 00:00:00 2001 From: Damilola Abioye <35523688+damyl4sure@users.noreply.github.com> Date: Wed, 2 Mar 2022 20:04:02 +0100 Subject: [PATCH 014/138] updated allowedTopologies values format --- content/en/docs/concepts/storage/storage-classes.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/concepts/storage/storage-classes.md b/content/en/docs/concepts/storage/storage-classes.md index 421a293737954..1c902d348938c 100644 --- a/content/en/docs/concepts/storage/storage-classes.md +++ b/content/en/docs/concepts/storage/storage-classes.md @@ -241,8 +241,8 @@ allowedTopologies: - matchLabelExpressions: - key: failure-domain.beta.kubernetes.io/zone values: - - us-central1-a - - us-central1-b + - us-central-1a + - us-central-1b ``` ## Parameters From 7122a4498ad445a6d34a5eaadac73d2269f9af3f Mon Sep 17 00:00:00 2001 From: Thomas Guettler Date: Tue, 1 Mar 2022 21:31:56 +0100 Subject: [PATCH 015/138] fix busybox image to 1.28 (issues with `nslookup`). 
Changes where done with these commands: reprec 'image: busybox(?!:)' 'image: busybox:1.28' */docs */examples reprec -- '--image=busybox(?!:)' '--image=busybox:1.28' */docs */examples Related issues: https://github.com/docker-library/busybox/issues/48 https://github.com/kubernetes/kubernetes/issues/66924 --- .../docs/concepts/scheduling-eviction/pod-overhead.md | 2 +- content/en/docs/concepts/storage/ephemeral-volumes.md | 4 ++-- content/en/docs/concepts/storage/volumes.md | 4 ++-- content/en/docs/concepts/workloads/pods/_index.md | 2 +- content/en/docs/reference/kubectl/cheatsheet.md | 10 +++++----- .../tasks/administer-cluster/declare-network-policy.md | 6 +++--- .../debug-application-cluster/debug-running-pod.md | 8 ++++---- .../en/docs/tasks/job/parallel-processing-expansion.md | 2 +- .../horizontal-pod-autoscale-walkthrough.md | 2 +- content/en/docs/tutorials/security/apparmor.md | 2 +- content/en/docs/tutorials/services/source-ip.md | 2 +- .../logging/two-files-counter-pod-agent-sidecar.yaml | 2 +- .../two-files-counter-pod-streaming-sidecar.yaml | 6 +++--- .../examples/admin/logging/two-files-counter-pod.yaml | 2 +- .../en/examples/admin/resource/limit-range-pod-1.yaml | 8 ++++---- .../en/examples/admin/resource/limit-range-pod-2.yaml | 8 ++++---- .../en/examples/admin/resource/limit-range-pod-3.yaml | 2 +- content/en/examples/application/job/cronjob.yaml | 2 +- content/en/examples/application/job/job-tmpl.yaml | 2 +- content/en/examples/debug/counter-pod.yaml | 2 +- content/en/examples/pods/init-containers.yaml | 2 +- content/en/examples/pods/inject/dependent-envars.yaml | 2 +- content/en/examples/pods/security/hello-apparmor.yaml | 2 +- .../en/examples/pods/security/security-context.yaml | 2 +- content/en/examples/pods/share-process-namespace.yaml | 2 +- .../projected-secret-downwardapi-configmap.yaml | 2 +- .../projected-secrets-nondefault-permission-mode.yaml | 2 +- .../pods/storage/projected-service-account-token.yaml | 2 +- content/en/examples/pods/storage/projected.yaml | 2 +- .../examples/service/networking/hostaliases-pod.yaml | 2 +- 30 files changed, 49 insertions(+), 49 deletions(-) diff --git a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md index eebc235084ec4..d5db85dadf761 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-overhead.md @@ -72,7 +72,7 @@ spec: runtimeClassName: kata-fc containers: - name: busybox-ctr - image: busybox + image: busybox:1.28 stdin: true tty: true resources: diff --git a/content/en/docs/concepts/storage/ephemeral-volumes.md b/content/en/docs/concepts/storage/ephemeral-volumes.md index 0ee07c7c991be..087cf37c739bb 100644 --- a/content/en/docs/concepts/storage/ephemeral-volumes.md +++ b/content/en/docs/concepts/storage/ephemeral-volumes.md @@ -107,7 +107,7 @@ metadata: spec: containers: - name: my-frontend - image: busybox + image: busybox:1.28 volumeMounts: - mountPath: "/data" name: my-csi-inline-vol @@ -158,7 +158,7 @@ metadata: spec: containers: - name: my-frontend - image: busybox + image: busybox:1.28 volumeMounts: - mountPath: "/scratch" name: scratch-volume diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 50f7bf62a26ff..69f3c6c757d13 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -251,7 +251,7 @@ metadata: spec: containers: - name: test - image: busybox + image: 
busybox:1.28 volumeMounts: - name: config-vol mountPath: /etc/config @@ -1128,7 +1128,7 @@ spec: fieldRef: apiVersion: v1 fieldPath: metadata.name - image: busybox + image: busybox:1.28 command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ] volumeMounts: - name: workdir1 diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md index fa91723c5ad08..3aca6e094b7be 100644 --- a/content/en/docs/concepts/workloads/pods/_index.md +++ b/content/en/docs/concepts/workloads/pods/_index.md @@ -180,7 +180,7 @@ spec: spec: containers: - name: hello - image: busybox + image: busybox:1.28 command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600'] restartPolicy: OnFailure # The pod template ends here diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index b3c3536b31952..a0bd2eb262841 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -96,10 +96,10 @@ kubectl apply -f https://git.io/vPieo # create resource(s) from url kubectl create deployment nginx --image=nginx # start a single instance of nginx # create a Job which prints "Hello World" -kubectl create job hello --image=busybox -- echo "Hello World" +kubectl create job hello --image=busybox:1.28 -- echo "Hello World" # create a CronJob that prints "Hello World" every minute -kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- echo "Hello World" +kubectl create cronjob hello --image=busybox:1.28 --schedule="*/1 * * * *" -- echo "Hello World" kubectl explain pods # get the documentation for pod manifests @@ -112,7 +112,7 @@ metadata: spec: containers: - name: busybox - image: busybox + image: busybox:1.28 args: - sleep - "1000000" @@ -124,7 +124,7 @@ metadata: spec: containers: - name: busybox - image: busybox + image: busybox:1.28 args: - sleep - "1000" @@ -314,7 +314,7 @@ kubectl logs my-pod -c my-container --previous # dump pod container logs (s kubectl logs -f my-pod # stream pod logs (stdout) kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case) kubectl logs -f -l name=myLabel --all-containers # stream all pods logs with label name=myLabel (stdout) -kubectl run -i --tty busybox --image=busybox -- sh # Run pod as interactive shell +kubectl run -i --tty busybox --image=busybox:1.28 -- sh # Run pod as interactive shell kubectl run nginx --image=nginx -n mynamespace # Start a single instance of nginx pod in the namespace of mynamespace kubectl run nginx --image=nginx # Run pod nginx and write its spec into a file called pod.yaml --dry-run=client -o yaml > pod.yaml diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md index 7acbaa9e7d50b..ac9715b9092ca 100644 --- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md @@ -68,7 +68,7 @@ pod/nginx-701339712-e0qfq 1/1 Running 0 35s You should be able to access the new `nginx` service from other Pods. 
To access the `nginx` Service from another Pod in the `default` namespace, start a busybox container: ```console -kubectl run busybox --rm -ti --image=busybox -- /bin/sh +kubectl run busybox --rm -ti --image=busybox:1.28 -- /bin/sh ``` In your shell, run the following command: @@ -111,7 +111,7 @@ networkpolicy.networking.k8s.io/access-nginx created When you attempt to access the `nginx` Service from a Pod without the correct labels, the request times out: ```console -kubectl run busybox --rm -ti --image=busybox -- /bin/sh +kubectl run busybox --rm -ti --image=busybox:1.28 -- /bin/sh ``` In your shell, run the command: @@ -130,7 +130,7 @@ wget: download timed out You can create a Pod with the correct labels to see that the request is allowed: ```console -kubectl run busybox --rm -ti --labels="access=true" --image=busybox -- /bin/sh +kubectl run busybox --rm -ti --labels="access=true" --image=busybox:1.28 -- /bin/sh ``` In your shell, run the command: diff --git a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md index 9653ff05ef1ff..c3922ed0a145b 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-running-pod.md @@ -110,7 +110,7 @@ specify the `-i`/`--interactive` argument, `kubectl` will automatically attach to the console of the Ephemeral Container. ```shell -kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo +kubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo ``` ``` @@ -182,7 +182,7 @@ but you need debugging utilities not included in `busybox`. You can simulate this scenario using `kubectl run`: ```shell -kubectl run myapp --image=busybox --restart=Never -- sleep 1d +kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d ``` Run this command to create a copy of `myapp` named `myapp-debug` that adds a @@ -225,7 +225,7 @@ To simulate a crashing application, use `kubectl run` to create a container that immediately exits: ``` -kubectl run --image=busybox myapp -- false +kubectl run --image=busybox:1.28 myapp -- false ``` You can see using `kubectl describe pod myapp` that this container is crashing: @@ -283,7 +283,7 @@ additional utilities. As an example, create a Pod using `kubectl run`: ``` -kubectl run myapp --image=busybox --restart=Never -- sleep 1d +kubectl run myapp --image=busybox:1.28 --restart=Never -- sleep 1d ``` Now use `kubectl debug` to make a copy and change its container image diff --git a/content/en/docs/tasks/job/parallel-processing-expansion.md b/content/en/docs/tasks/job/parallel-processing-expansion.md index 30942b3edecfa..f62515fda0505 100644 --- a/content/en/docs/tasks/job/parallel-processing-expansion.md +++ b/content/en/docs/tasks/job/parallel-processing-expansion.md @@ -201,7 +201,7 @@ spec: spec: containers: - name: c - image: busybox + image: busybox:1.28 command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"] restartPolicy: Never {% endfor %} diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 04d3e3daaaf01..3e66272e4850e 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -152,7 +152,7 @@ runs in an infinite loop, sending queries to the php-apache service. 
```shell # Run this in a separate terminal # so that the load generation continues and you can carry on with the rest of the steps -kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done" +kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done" ``` Now run: diff --git a/content/en/docs/tutorials/security/apparmor.md b/content/en/docs/tutorials/security/apparmor.md index 992841e356df1..e6e90b6ef47d2 100644 --- a/content/en/docs/tutorials/security/apparmor.md +++ b/content/en/docs/tutorials/security/apparmor.md @@ -264,7 +264,7 @@ metadata: spec: containers: - name: hello - image: busybox + image: busybox:1.28 command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] EOF pod/hello-apparmor-2 created diff --git a/content/en/docs/tutorials/services/source-ip.md b/content/en/docs/tutorials/services/source-ip.md index a5b0d5a1d2291..bb9a622c98d7f 100644 --- a/content/en/docs/tutorials/services/source-ip.md +++ b/content/en/docs/tutorials/services/source-ip.md @@ -119,7 +119,7 @@ clusterip ClusterIP 10.0.170.92 80/TCP 51s And hitting the `ClusterIP` from a pod in the same cluster: ```shell -kubectl run busybox -it --image=busybox --restart=Never --rm +kubectl run busybox -it --image=busybox:1.28 --restart=Never --rm ``` The output is similar to this: ``` diff --git a/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml index b37b616e6f7c7..ddfb8104cb946 100644 --- a/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml +++ b/content/en/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c diff --git a/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml b/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml index 87bd198cfdab7..6b7d1f120106d 100644 --- a/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml +++ b/content/en/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c @@ -22,13 +22,13 @@ spec: - name: varlog mountPath: /var/log - name: count-log-1 - image: busybox + image: busybox:1.28 args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log'] volumeMounts: - name: varlog mountPath: /var/log - name: count-log-2 - image: busybox + image: busybox:1.28 args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log'] volumeMounts: - name: varlog diff --git a/content/en/examples/admin/logging/two-files-counter-pod.yaml b/content/en/examples/admin/logging/two-files-counter-pod.yaml index 6ebeb717a1892..31bbed3cf8683 100644 --- a/content/en/examples/admin/logging/two-files-counter-pod.yaml +++ b/content/en/examples/admin/logging/two-files-counter-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c diff --git a/content/en/examples/admin/resource/limit-range-pod-1.yaml b/content/en/examples/admin/resource/limit-range-pod-1.yaml index 0457792af94c4..b9bd20d06a2c7 100644 --- a/content/en/examples/admin/resource/limit-range-pod-1.yaml +++ b/content/en/examples/admin/resource/limit-range-pod-1.yaml @@ -5,7 +5,7 @@ 
metadata: spec: containers: - name: busybox-cnt01 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] resources: @@ -16,7 +16,7 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt02 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] resources: @@ -24,7 +24,7 @@ spec: memory: "100Mi" cpu: "100m" - name: busybox-cnt03 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] resources: @@ -32,6 +32,6 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt04 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/en/examples/admin/resource/limit-range-pod-2.yaml b/content/en/examples/admin/resource/limit-range-pod-2.yaml index efac440269c6f..40da19c1aee05 100644 --- a/content/en/examples/admin/resource/limit-range-pod-2.yaml +++ b/content/en/examples/admin/resource/limit-range-pod-2.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: busybox-cnt01 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] resources: @@ -16,7 +16,7 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt02 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] resources: @@ -24,7 +24,7 @@ spec: memory: "100Mi" cpu: "100m" - name: busybox-cnt03 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] resources: @@ -32,6 +32,6 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt04 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/en/examples/admin/resource/limit-range-pod-3.yaml b/content/en/examples/admin/resource/limit-range-pod-3.yaml index 8afdb6379cf61..503200a9662fc 100644 --- a/content/en/examples/admin/resource/limit-range-pod-3.yaml +++ b/content/en/examples/admin/resource/limit-range-pod-3.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: busybox-cnt01 - image: busybox + image: busybox:1.28 resources: limits: memory: "300Mi" diff --git a/content/en/examples/application/job/cronjob.yaml b/content/en/examples/application/job/cronjob.yaml index 9f06ca7bd6758..78d0e2d314792 100644 --- a/content/en/examples/application/job/cronjob.yaml +++ b/content/en/examples/application/job/cronjob.yaml @@ -10,7 +10,7 @@ spec: spec: containers: - name: hello - image: busybox + image: busybox:1.28 imagePullPolicy: IfNotPresent command: - /bin/sh diff --git a/content/en/examples/application/job/job-tmpl.yaml b/content/en/examples/application/job/job-tmpl.yaml index 790025b38b886..d7dbbafd62bc5 100644 --- a/content/en/examples/application/job/job-tmpl.yaml +++ b/content/en/examples/application/job/job-tmpl.yaml @@ -13,6 +13,6 @@ spec: spec: containers: - name: c - image: busybox + image: busybox:1.28 command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"] restartPolicy: Never diff --git a/content/en/examples/debug/counter-pod.yaml b/content/en/examples/debug/counter-pod.yaml index f997886386258..a91b2f8915830 100644 --- a/content/en/examples/debug/counter-pod.yaml +++ b/content/en/examples/debug/counter-pod.yaml @@ -5,6 +5,6 @@ metadata: spec: containers: - name: count - image: 
busybox + image: busybox:1.28 args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] diff --git a/content/en/examples/pods/init-containers.yaml b/content/en/examples/pods/init-containers.yaml index 667b03eccd2b0..e55895d673f38 100644 --- a/content/en/examples/pods/init-containers.yaml +++ b/content/en/examples/pods/init-containers.yaml @@ -14,7 +14,7 @@ spec: # These containers are run during pod initialization initContainers: - name: install - image: busybox + image: busybox:1.28 command: - wget - "-O" diff --git a/content/en/examples/pods/inject/dependent-envars.yaml b/content/en/examples/pods/inject/dependent-envars.yaml index 2509c6f47b56d..67d07098baec6 100644 --- a/content/en/examples/pods/inject/dependent-envars.yaml +++ b/content/en/examples/pods/inject/dependent-envars.yaml @@ -10,7 +10,7 @@ spec: command: - sh - -c - image: busybox + image: busybox:1.28 env: - name: SERVICE_PORT value: "80" diff --git a/content/en/examples/pods/security/hello-apparmor.yaml b/content/en/examples/pods/security/hello-apparmor.yaml index 3e9b3b2a9c6be..000645f1c72c9 100644 --- a/content/en/examples/pods/security/hello-apparmor.yaml +++ b/content/en/examples/pods/security/hello-apparmor.yaml @@ -9,5 +9,5 @@ metadata: spec: containers: - name: hello - image: busybox + image: busybox:1.28 command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] diff --git a/content/en/examples/pods/security/security-context.yaml b/content/en/examples/pods/security/security-context.yaml index 35cb1eeebe60a..7903c39c6467c 100644 --- a/content/en/examples/pods/security/security-context.yaml +++ b/content/en/examples/pods/security/security-context.yaml @@ -12,7 +12,7 @@ spec: emptyDir: {} containers: - name: sec-ctx-demo - image: busybox + image: busybox:1.28 command: [ "sh", "-c", "sleep 1h" ] volumeMounts: - name: sec-ctx-vol diff --git a/content/en/examples/pods/share-process-namespace.yaml b/content/en/examples/pods/share-process-namespace.yaml index af812732a247a..bd48bf0ff6e18 100644 --- a/content/en/examples/pods/share-process-namespace.yaml +++ b/content/en/examples/pods/share-process-namespace.yaml @@ -8,7 +8,7 @@ spec: - name: nginx image: nginx - name: shell - image: busybox + image: busybox:1.28 securityContext: capabilities: add: diff --git a/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml b/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml index 270db99dcd76b..453dc08c0c7d9 100644 --- a/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml +++ b/content/en/examples/pods/storage/projected-secret-downwardapi-configmap.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: container-test - image: busybox + image: busybox:1.28 volumeMounts: - name: all-in-one mountPath: "/projected-volume" diff --git a/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml b/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml index f69b43161ebf6..b921fd93c5833 100644 --- a/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml +++ b/content/en/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: container-test - image: busybox + image: busybox:1.28 volumeMounts: - name: all-in-one mountPath: "/projected-volume" diff --git a/content/en/examples/pods/storage/projected-service-account-token.yaml 
b/content/en/examples/pods/storage/projected-service-account-token.yaml index 3ad06b5dc7d6e..cc307659a78ef 100644 --- a/content/en/examples/pods/storage/projected-service-account-token.yaml +++ b/content/en/examples/pods/storage/projected-service-account-token.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: container-test - image: busybox + image: busybox:1.28 volumeMounts: - name: token-vol mountPath: "/service-account" diff --git a/content/en/examples/pods/storage/projected.yaml b/content/en/examples/pods/storage/projected.yaml index 172ca0dee52de..4244048eb7558 100644 --- a/content/en/examples/pods/storage/projected.yaml +++ b/content/en/examples/pods/storage/projected.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-projected-volume - image: busybox + image: busybox:1.28 args: - sleep - "86400" diff --git a/content/en/examples/service/networking/hostaliases-pod.yaml b/content/en/examples/service/networking/hostaliases-pod.yaml index 643813b34a13d..268bffbbf5894 100644 --- a/content/en/examples/service/networking/hostaliases-pod.yaml +++ b/content/en/examples/service/networking/hostaliases-pod.yaml @@ -15,7 +15,7 @@ spec: - "bar.remote" containers: - name: cat-hosts - image: busybox + image: busybox:1.28 command: - cat args: From bbcf9e316f526aeb24232dc2543fbdfe7c3c1f72 Mon Sep 17 00:00:00 2001 From: Zach Rhoads <43684271+zr-msft@users.noreply.github.com> Date: Fri, 4 Mar 2022 16:48:29 -0600 Subject: [PATCH 016/138] Update link to Azure Gateway Ingress Controller Changed link for Azure Gateway Ingress Controller to Microsoft documentation. --- .../en/docs/concepts/services-networking/ingress-controllers.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md index 08b715ac7bc8c..5516306ffa766 100644 --- a/content/en/docs/concepts/services-networking/ingress-controllers.md +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -23,7 +23,7 @@ Kubernetes as a project supports and maintains [AWS](https://github.com/kubernet {{% thirdparty-content %}} -* [AKS Application Gateway Ingress Controller](https://azure.github.io/application-gateway-kubernetes-ingress/) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview). +* [AKS Application Gateway Ingress Controller](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) is an ingress controller that configures the [Azure Application Gateway](https://docs.microsoft.com/azure/application-gateway/overview). * [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io)-based ingress controller. * [Apache APISIX ingress controller](https://github.com/apache/apisix-ingress-controller) is an [Apache APISIX](https://github.com/apache/apisix)-based ingress controller. 
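The `busybox` to `busybox:1.28` changes earlier in this series pin an image tag across examples and manifests. The commits do not state a motivation; a plausible reading, offered here as an assumption, is that `nslookup` in newer busybox tags behaves inconsistently against cluster DNS, which would break the debugging flows those pages teach. A minimal sanity check using only the pinned image and commands already present in the modified docs:

```shell
# Sketch: run the pinned image once and exercise cluster DNS.
# "kubernetes.default" should resolve through the cluster DNS Service in a
# healthy cluster; substitute the name of any Service you want to test.
kubectl run dns-check --rm -ti --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes.default
```

Because of `--rm`, the check Pod is deleted automatically when the command exits, so it leaves nothing behind in the cluster.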
From 2cca1a2f857c90430fcf873f07476a1bc0741d98 Mon Sep 17 00:00:00 2001
From: "Jason Kim (Jun Chul Kim)" 
Date: Sun, 6 Mar 2022 09:39:02 +0900
Subject: [PATCH 017/138] Update content/en/docs/setup/best-practices/certificates.md

Co-authored-by: Qiming Teng 
---
 content/en/docs/setup/best-practices/certificates.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md
index 5b065021da44b..ec2a3bfdc7476 100644
--- a/content/en/docs/setup/best-practices/certificates.md
+++ b/content/en/docs/setup/best-practices/certificates.md
@@ -22,7 +22,8 @@ This page explains the certificates that your cluster requires.
 Kubernetes requires PKI for the following operations:
 * Client certificates for the kubelet to authenticate to the API server
-* Server certificates for the [kubelet endpoint](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates)
+* Kubelet [server certificates](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates)
+  for the API server to talk to the kubelets
 * Server certificate for the API server endpoint
 * Client certificates for administrators of the cluster to authenticate to the API server
 * Client certificates for the API server to talk to the kubelets

From da4ecd078867942651deee0e6f7741d9e684f7b0 Mon Sep 17 00:00:00 2001
From: Qiming Teng 
Date: Sun, 27 Feb 2022 22:29:16 +0800
Subject: [PATCH 018/138] [zh] Translate kubelet config v1alpha1 reference

---
 .../config-api/kubelet-config.v1alpha1.md     | 324 ++++++++++++++++++
 1 file changed, 324 insertions(+)
 create mode 100644 content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md

diff --git a/content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md b/content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md
new file mode 100644
index 0000000000000..ad7e4908f40e0
--- /dev/null
+++ b/content/zh/docs/reference/config-api/kubelet-config.v1alpha1.md
@@ -0,0 +1,324 @@
+---
+title: Kubelet 配置 (v1alpha1)
+content_type: tool-reference
+package: kubelet.config.k8s.io/v1alpha1
+auto_generated: true
+---
+
+
+
+## 资源类型
+
+- [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig)
+
+## `FormatOptions` {#FormatOptions}
+
+
+**出现在:**
+
+- [LoggingConfiguration](#LoggingConfiguration)
+
+
+FormatOptions 包含为不同类型日志格式提供的选项。
+
+
+
+
+
+
+
+
+
字段描述
json [必需]
+JSONOptions +
+ + [试验特性] json 中包含 "json" 日志格式的选项。 +
+ +## `JSONOptions` {#JSONOptions} + + +**出现在:** + +- [FormatOptions](#FormatOptions) + + +JSONOptions 包含用于 "json" 日志格式的选项。 + + + + + + + + + + + + + +
字段描述
splitStream [必需]
+bool +
+ + [试验特性] splitStream 将错误信息重定向到标准错误输出(stderr), +将提示信息重定向到标准输出(stdout),并为二者提供缓存。默认配置是将两类信息都写出到标准输出, +并且不提供缓存。 +
infoBufferSize [必需]
+k8s.io/apimachinery/pkg/api/resource.QuantityValue +
+ + [试验特性] infoBufferSize 设置使用分离数据流时信息数据流的大小。 +默认值是 0,意味着禁止缓存。 +
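The two `JSONOptions` fields above only take effect through a component logging configuration. The sketch below shows one plausible placement; nesting them under a kubelet `logging` stanza, and the `v1beta1` KubeletConfiguration schema, are assumptions based on the `LoggingConfiguration` type this page links to rather than anything stated in the table itself.

```shell
# Sketch: wiring the experimental JSONOptions fields into a kubelet
# configuration file (placement assumed, see note above).
cat <<'EOF' > kubelet-logging.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json
  options:
    json:
      splitStream: true       # errors to stderr, info to stdout
      infoBufferSize: "64Ki"  # buffer size for the split info stream
EOF
```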
+ +## `VModuleConfiguration` {#VModuleConfiguration} + + +(`[]k8s.io/component-base/config/v1alpha1.VModuleItem` 的别名) + + +**出现在:** + +- [LoggingConfiguration](#LoggingConfiguration) + + +VModuleConfiguration 是一个集合,其中包含一个个的文件名(或者文件名模式) +及对应的详细程度阈值。 + +## `CredentialProviderConfig` {#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig} + + +CredentialProviderConfig 包含有关每个 exec 凭据提供者的配置信息。 +Kubelet 从磁盘上读取这些配置信息,并根据 CredentialProvider 类型启用各个提供者。 + + + + + + + + + + + +
字段描述
apiVersion
string
kubelet.config.k8s.io/v1alpha1
kind
string
CredentialProviderConfig
providers [必需]
+[]CredentialProvider +
+ + providers 是一组凭据提供者插件,这些插件会被 kubelet 启用。 +多个提供者可以匹配到同一镜像上,这时,来自所有提供者的凭据信息都会返回给 kubelet。 +如果针对同一镜像调用了多个提供者,则结果会被组合起来。如果提供者返回的认证主键有重复, +列表中先出现的提供者所返回的值将被使用。 +
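Before the per-field reference that follows, a complete `CredentialProviderConfig` might look like the sketch below. The provider name and cache duration are illustrative values, not defaults, and the match pattern reuses one of the examples from the `matchImages` row further down.

```shell
# Sketch: one exec credential provider handling a single registry.
# "ecr-credential-provider" is a hypothetical binary that must live in the
# directory given by the kubelet's --image-credential-provider-bin-dir flag.
cat <<'EOF' > credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
- name: ecr-credential-provider
  matchImages:
  - "123456789.dkr.ecr.us-east-1.amazonaws.com"
  defaultCacheDuration: "12h"
  apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
EOF
```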
+ +## `CredentialProvider` {#kubelet-config-k8s-io-v1alpha1-CredentialProvider} + + +**出现在:** + +- [CredentialProviderConfig](#kubelet-config-k8s-io-v1alpha1-CredentialProviderConfig) + + +CredentialProvider 代表的是要被 kubelet 调用的一个 exec 插件。 +这一插件只会在所拉取的镜像与该插件所处理的镜像匹配时才会被调用(参见 matchImages)。 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
字段描述
name [必需]
+string +
+ + name 是凭据提供者的名称(必需)。此名称必须与 kubelet + 所看到的提供者可执行文件的名称匹配。可执行文件必须位于 kubelet 的 + bin 目录(通过 --image-credential-provider-bin-dir 设置)下。 +
matchImages [必需]
+[]string +
+ +

matchImages 是一个必须设置的字符串列表,用来匹配镜像以便确定是否要调用此提供者。 +如果字符串之一与 kubelet 所请求的镜像匹配,则此插件会被调用并给予提供凭证的机会。 +镜像应该包含镜像库域名和 URL 路径。

+

matchImages 中的每个条目都是一个模式字符串,其中可以包含端口号和路径。 +域名部分可以包含统配符,但端口或路径部分不可以。通配符可以用作子域名,例如 +'∗.k8s.io' 或 'k8s.∗.io',以及顶级域名,如 'k8s.∗'。

+

对类似 'app∗.k8s.io' 这类部分子域名的匹配也是支持的。 +每个通配符只能用来匹配一个子域名段,所以 '∗.io' 不会匹配 '∗.k8s.io'。

+

镜像与 matchImages 之间存在匹配时,以下条件都要满足:

+
    +
  • 二者均包含相同个数的域名部分,并且每个域名部分都对应匹配;
  • +
  • matchImages 条目中的 URL 路径部分必须是目标镜像的 URL 路径的前缀;
  • +
  • 如果 matchImages 条目中包含端口号,则端口号也必须与镜像端口号匹配。
  • +
+

matchImages 的一些示例如下:

+
    +
  • 123456789.dkr.ecr.us-east-1.amazonaws.com
  • +
  • ∗.azurecr.io
  • +
  • gcr.io
  • +
  • ∗.∗.registry.io
  • +
  • registry.io:8080/path
  • +
+
defaultCacheDuration [必需]
+meta/v1.Duration +
+ + defaultCacheDuration 是插件在内存中缓存凭据的默认时长, +在插件响应中没有给出缓存时长时,使用这里设置的值。此字段是必需的。 +
apiVersion [必需]
+string +
+ +

要求 exec 插件 CredentialProviderRequest 请求的输入版本。 + 所返回的 CredentialProviderResponse 必须使用与输入相同的编码版本。当前支持的值有:

+
    +
  • credentialprovider.kubelet.k8s.io/v1alpha1
  • +
+
args
+[]string +
+ + 在执行插件可执行文件时要传递给命令的参数。 +
env
+[]ExecEnvVar +
+ + env 定义要提供给插件进程的额外的环境变量。 +这些环境变量会与主机上的其他环境变量以及 client-go 所使用的环境变量组合起来, +一起传递给插件。 +
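Tying the rows above together: the kubelet invokes the configured binary and exchanges versioned JSON on stdin and stdout. The request shape below follows the `credentialprovider.kubelet.k8s.io/v1alpha1` apiVersion named in the table; the exact field set is an assumption for illustration, and the plugin name is hypothetical.

```shell
# Sketch: simulating the kubelet side of the exec protocol by hand.
# A conforming plugin would print a CredentialProviderResponse in the same
# apiVersion on stdout.
echo '{"kind":"CredentialProviderRequest","apiVersion":"credentialprovider.kubelet.k8s.io/v1alpha1","image":"gcr.io/foo/bar"}' \
  | ./ecr-credential-provider   # hypothetical plugin binary
```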
+ +## `ExecEnvVar` {#kubelet-config-k8s-io-v1alpha1-ExecEnvVar} + + +**出现在:** + +- [CredentialProvider](#kubelet-config-k8s-io-v1alpha1-CredentialProvider) + + +ExecEnvVar 用来在执行基于 exec 的凭据插件时设置环境变量。 + + + + + + + + + + + + + +
字段描述
name [必需]
+string +
+ + 环境变量名称。 +
value [必需]
+string +
+ + 环境变量取值。 +
+ From 6c646319441b63d8bfdf001c767d0f8896daebc9 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Mon, 7 Mar 2022 12:37:15 +0800 Subject: [PATCH 019/138] [zh] Optimize k8s extension index page There are some italics font style which doesn't render well in Chinese, we should avoid using them when possible. There are some inappropriate line wraps that are creating extra spaces between Chinese characters, this PR tries to optimizes that as well. The subsection title 'configuration' was not translated, also fixed in this PR. --- .../docs/concepts/extend-kubernetes/_index.md | 104 +++++++++--------- 1 file changed, 53 insertions(+), 51 deletions(-) diff --git a/content/zh/docs/concepts/extend-kubernetes/_index.md b/content/zh/docs/concepts/extend-kubernetes/_index.md index ce5050711f7aa..467388221b6f6 100644 --- a/content/zh/docs/concepts/extend-kubernetes/_index.md +++ b/content/zh/docs/concepts/extend-kubernetes/_index.md @@ -70,16 +70,17 @@ Customization approaches can be broadly divided into *configuration*, which only *Configuration files* and *flags* are documented in the Reference section of the online documentation, under each binary: * [kubelet](/docs/reference/command-line-tools-reference/kubelet/) +* [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/) * [kube-apiserver](/docs/reference/command-line-tools-reference/kube-apiserver/) * [kube-controller-manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) * [kube-scheduler](/docs/reference/command-line-tools-reference/kube-scheduler/). --> +## 配置 {#configuration} -## Configuration - -*配置文件*和*参数标志*的说明位于在线文档的参考章节,按可执行文件组织: +配置文件和参数标志的说明位于在线文档的参考章节,按可执行文件组织: * [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/) +* [kube-proxy](/zh/docs/reference/command-line-tools-reference/kube-proxy/) * [kube-apiserver](/zh/docs/reference/command-line-tools-reference/kube-apiserver/) * [kube-controller-manager](/zh/docs/reference/command-line-tools-reference/kube-controller-manager/) * [kube-scheduler](/zh/docs/reference/command-line-tools-reference/kube-scheduler/). @@ -94,19 +95,19 @@ Flags and configuration files may not always be changeable in a hosted Kubernete 有鉴于此,通常应该在没有其他替代方案时才应考虑更改参数标志和配置文件。 *内置的策略 API*,例如[ResourceQuota](/zh/docs/concepts/policy/resource-quotas/)、 [PodSecurityPolicies](/zh/docs/concepts/policy/pod-security-policy/)、 [NetworkPolicy](/zh/docs/concepts/services-networking/network-policies/) -和基于角色的访问控制([RBAC](/zh/docs/reference/access-authn-authz/rbac/))等等 -都是内置的 Kubernetes API。 +和基于角色的访问控制([RBAC](/zh/docs/reference/access-authn-authz/rbac/)) +等等都是内置的 Kubernetes API。 API 通常用于托管的 Kubernetes 服务和受控的 Kubernetes 安装环境中。 -这些 API 是声明式的,与 Pod 这类其他 Kubernetes 资源遵从相同的约定,所以 -新的集群配置是可复用的,并且可以当作应用程序来管理。 -此外,对于稳定版本的 API 而言,它们与其他 Kubernetes API 一样,采纳的是 -一种[预定义的支持策略](/zh/docs/reference/using-api/deprecation-policy/)。 -出于以上原因,在条件允许的情况下,基于 API 的方案应该优先于*配置文件*和*参数标志*。 +这些 API 是声明式的,与 Pod 这类其他 Kubernetes 资源遵从相同的约定, +所以新的集群配置是可复用的,并且可以当作应用程序来管理。 +此外,对于稳定版本的 API 而言,它们与其他 Kubernetes API 一样, +采纳的是一种[预定义的支持策略](/zh/docs/reference/using-api/deprecation-policy/)。 +出于以上原因,在条件允许的情况下,基于 API 的方案应该优先于配置文件和参数标志。 ## 扩展 {#extensions} @@ -124,7 +125,7 @@ install extensions and fewer will need to author new ones. 
它们调整 Kubernetes 的工作方式使之支持新的类型和新的硬件种类。 大多数集群管理员会使用一种托管的 Kubernetes 服务或者其某种发行版本。 -因此,大多数 Kubernetes 用户不需要安装扩展, +这类集群通常都预先安装了扩展。因此,大多数 Kubernetes 用户不需要安装扩展, 至于需要自己编写新的扩展的情况就更少了。 -编写客户端程序有一种特殊的 *Controller(控制器)* 模式,能够与 Kubernetes 很好地 -协同工作。控制器通常会读取某个对象的 `.spec`,或许还会执行一些操作,之后更新 -对象的 `.status`。 -*Controller(控制器)* 是 Kubernetes 的客户端。 +编写客户端程序有一种特殊的 Controller(控制器)模式,能够与 Kubernetes +很好地协同工作。控制器通常会读取某个对象的 `.spec`,或许还会执行一些操作, +之后更新对象的 `.status`。 -当 Kubernetes 充当客户端,调用某远程服务时,对应 -的远程组件称作*Webhook*。 远程服务称作*Webhook 后端*。 +Controller 是 Kubernetes 的客户端。当 Kubernetes 充当客户端, +调用某远程服务时,对应的远程组件称作 *Webhook*,远程服务称作 Webhook 后端。 与控制器模式相似,Webhook 也会在整个架构中引入新的失效点(Point of Failure)。 在 Webhook 模式中,Kubernetes 向远程服务发起网络请求。 -在 *可执行文件插件(Binary Plugin)* 模式中,Kubernetes 执行某个可执行文件(程序)。 -可执行文件插件在 kubelet (例如, -[FlexVolume 插件](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md) +在 **可执行文件插件(Binary Plugin)** 模式中,Kubernetes +执行某个可执行文件(程序)。可执行文件插件在 kubelet (例如, +[FlexVolume 插件](/zh/docs/concepts/storage/volumes/#flexvolume)) 和[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)) 和 kubectl 中使用。 @@ -223,12 +222,11 @@ If you are unsure where to start, this flowchart can help. Note that some soluti 2. API 服务器处理所有请求。API 服务器中的几种扩展点能够使用户对请求执行身份认证、 基于其内容阻止请求、编辑请求内容、处理删除操作等等。 - 这些扩展点在 [API 访问扩展](#api-access-extensions) - 节详述。 + 这些扩展点在 [API 访问扩展](#api-access-extensions)节详述。 -3. API 服务器向外提供不同类型的*资源(resources)*。 - *内置的资源类型*,如 `pods`,是由 Kubernetes 项目所定义的,无法改变。 - 你也可以添加自己定义的或者其他项目所定义的称作*自定义资源(Custom Resources)* +3. API 服务器向外提供不同类型的资源(resources)。 + 内置的资源类型,如 `pods`,是由 Kubernetes 项目所定义的,无法改变。 + 你也可以添加自己定义的或者其他项目所定义的称作自定义资源(Custom Resources) 的资源,正如[自定义资源](#user-defined-types)节所描述的那样。 自定义资源通常与 API 访问扩展点结合使用。 @@ -236,12 +234,12 @@ If you are unsure where to start, this flowchart can help. Note that some soluti 有几种方式来扩展调度行为。这些方法将在 [调度器扩展](#scheduler-extensions)节中展开。 -5. Kubernetes 中的很多行为都是通过称为控制器(Controllers)的程序来实现的,这些程序也都是 API 服务器 - 的客户端。控制器常常与自定义资源结合使用。 +5. Kubernetes 中的很多行为都是通过称为控制器(Controllers)的程序来实现的, + 这些程序也都是 API 服务器的客户端。控制器常常与自定义资源结合使用。 6. 组件 kubelet 运行在各个节点上,帮助 Pod 展现为虚拟的服务器并在集群网络中拥有自己的 IP。 - [网络插件](#network-plugins)使得 Kubernetes 能够采用 - 不同实现技术来连接 Pod 网络。 + [网络插件](#network-plugins)使得 Kubernetes 能够采用不同实现技术来连接 + Pod 网络。 7. 组件 kubelet 也会为容器增加或解除存储卷的挂载。 通过[存储插件](#storage-plugins),可以支持新的存储类型。 @@ -320,9 +318,8 @@ Kubernetes has several built-in authentication methods that it supports. 
It can 这些步骤中都存在扩展点。 -Kubernetes 提供若干内置的身份认证方法。 -它也可以运行在某中身份认证代理的后面,并且可以将来自鉴权头部的令牌发送到 -某个远程服务(Webhook)来执行验证操作。 +Kubernetes 提供若干内置的身份认证方法。它也可以运行在某种身份认证代理的后面, +并且可以将来自鉴权头部的令牌发送到某个远程服务(Webhook)来执行验证操作。 所有这些方法都在[身份认证文档](/zh/docs/reference/access-authn-authz/authentication/) 中有详细论述。 @@ -345,12 +342,12 @@ Kubernetes 提供若干种内置的认证方法,以及 ### 鉴权 {#authorization} -[鉴权](/zh/docs/reference/access-authn-authz/webhook/)操作负责确定特定的用户 -是否可以读、写 API 资源或对其执行其他操作。 +[鉴权](/zh/docs/reference/access-authn-authz/authorization/) +操作负责确定特定的用户是否可以读、写 API 资源或对其执行其他操作。 此操作仅在整个资源集合的层面进行。 换言之,它不会基于对象的特定字段作出不同的判决。 如果内置的鉴权选项无法满足你的需要,你可以使用 @@ -371,10 +368,10 @@ After a request is authorized, if it is a write operation, it also goes through [准入控制](/zh/docs/reference/access-authn-authz/admission-controllers/)处理步骤。 除了内置的处理步骤,还存在一些扩展点: -* [Image Policy webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) +* [镜像策略 Webhook](/zh/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) 能够限制容器中可以运行哪些镜像。 * 为了执行任意的准入控制,可以使用一种通用的 - [Admission webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) + [准入 Webhook](/zh/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) 机制。这类 Webhook 可以拒绝对象创建或更新请求。 +从 Kubernetes v1.23 开始,FlexVolume 被弃用。 +在 Kubernetes 中编写卷驱动的推荐方式是使用树外(Out-of-tree)CSI 驱动。 +详细信息可参阅 [Kubernetes Volume Plugin FAQ for Storage Vendors](https://github.com/kubernetes/community/blob/master/sig-storage/volume-plugin-faq.md#kubernetes-volume-plugin-faq-for-storage-vendors)。 From 77123a0e911d779ecf8a7b76b81c6c6d93f63306 Mon Sep 17 00:00:00 2001 From: Andrew Foster Date: Wed, 9 Mar 2022 22:55:23 +1100 Subject: [PATCH 022/138] Remove kompose up and down command doc --- .../translate-compose-kubernetes.md | 50 ---- .../translate-compose-kubernetes.md | 168 +------------- .../translate-compose-kubernetes.md | 219 ------------------ 3 files changed, 2 insertions(+), 435 deletions(-) diff --git a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index 205628b525609..a463982422656 100644 --- a/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/en/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -208,7 +208,6 @@ you need is an existing `docker-compose.yml` file. - CLI - [`kompose convert`](#kompose-convert) - Documentation - - [Build and Push Docker Images](#build-and-push-docker-images) - [Alternative Conversions](#alternative-conversions) - [Labels](#labels) - [Restart](#restart) @@ -326,55 +325,6 @@ INFO OpenShift file "foo-buildconfig.yaml" created If you are manually pushing the OpenShift artifacts using ``oc create -f``, you need to ensure that you push the imagestream artifact before the buildconfig artifact, to workaround this OpenShift issue: https://github.com/openshift/origin/issues/4518 . {{< /note >}} - - -## Build and Push Docker Images - -Kompose supports both building and pushing Docker images. 
When using the `build` key within your Docker Compose file, your image will: - -- Automatically be built with Docker using the `image` key specified within your file -- Be pushed to the correct Docker repository using local credentials (located at `.docker/config`) - -Using an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml): - -```yaml -version: "2" - -services: - foo: - build: "./build" - image: docker.io/foo/bar -``` - -Using `kompose up` with a `build` key: - -```none -kompose up -INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar' -INFO Building image 'docker.io/foo/bar' from directory 'build' -INFO Image 'docker.io/foo/bar' from directory 'build' built successfully -INFO Pushing image 'foo/bar:latest' to registry 'docker.io' -INFO Attempting authentication credentials 'https://index.docker.io/v1/ -INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io' -INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. - -INFO Deploying application in "default" namespace -INFO Successfully created Service: foo -INFO Successfully created Deployment: foo - -Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details. -``` - -In order to disable the functionality, or choose to use BuildConfig generation (with OpenShift) `--build (local|build-config|none)` can be passed. - -```sh -# Disable building/pushing Docker images -kompose up --build none - -# Generate Build Config artifacts for OpenShift -kompose up --provider openshift --build build-config -``` - ## Alternative Conversions The default `kompose` transformation will generate Kubernetes [Deployments](/docs/concepts/workloads/controllers/deployment/) and [Services](/docs/concepts/services-networking/service/), in yaml format. You have alternative option to generate json with `-j`. Also, you can alternatively generate [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/) objects, [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), or [Helm](https://github.com/helm/helm) charts. diff --git a/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index f856847e857b8..0ade44801bd82 100644 --- a/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/fr/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -121,22 +121,7 @@ En quelques étapes, nous vous emmenons de Docker Compose à Kubernetes. Tous do kompose.service.type: LoadBalancer ``` -2. Lancez la commande `kompose up` pour déployer directement sur Kubernetes, ou passez plutôt à l'étape suivante pour générer un fichier à utiliser avec `kubectl`. - - ```bash - $ kompose up - We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. - If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. - - INFO Successfully created Service: redis - INFO Successfully created Service: web - INFO Successfully created Deployment: redis - INFO Successfully created Deployment: web - - Your application has been deployed to Kubernetes. 
You can run 'kubectl get deployment,svc,pods,pvc' for details. - ``` - -3. Pour convertir le fichier `docker-compose.yml` en fichiers que vous pouvez utiliser avec `kubectl`, lancez `kompose convert` et ensuite `kubectl apply -f `. +2. Pour convertir le fichier `docker-compose.yml` en fichiers que vous pouvez utiliser avec `kubectl`, lancez `kompose convert` et ensuite `kubectl apply -f `. ```bash $ kompose convert @@ -160,7 +145,7 @@ En quelques étapes, nous vous emmenons de Docker Compose à Kubernetes. Tous do Vos déploiements fonctionnent sur Kubernetes. -4. Accédez à votre application. +3. Accédez à votre application. Si vous utilisez déjà `minikube` pour votre processus de développement : @@ -201,10 +186,7 @@ En quelques étapes, nous vous emmenons de Docker Compose à Kubernetes. Tous do - CLI - [`kompose convert`](#kompose-convert) - - [`kompose up`](#kompose-up) - - [`kompose down`](#kompose-down) - Documentation - - [Construire et pousser des images de docker](#build-and-push-docker-images) - [Conversions alternatives](#alternative-conversions) - [Etiquettes](#labels) - [Redémarrage](#restart) @@ -301,152 +283,6 @@ INFO OpenShift file "foo-buildconfig.yaml" created Si vous poussez manuellement les artefacts OpenShift en utilisant ``oc create -f``, vous devez vous assurer que vous poussez l'artefact imagestream avant l'artefact buildconfig, pour contourner ce problème OpenShift : https://github.com/openshift/origin/issues/4518 . {{< /note >}} -## `kompose up` - -Kompose propose un moyen simple de déployer votre application "composée" sur Kubernetes ou OpenShift via `kompose up`. - - -### Kubernetes -```sh -$ kompose --file ./examples/docker-guestbook.yml up -We are going to create Kubernetes deployments and services for your Dockerized application. -If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. - -INFO Successfully created service: redis-master -INFO Successfully created service: redis-slave -INFO Successfully created service: frontend -INFO Successfully created deployment: redis-master -INFO Successfully created deployment: redis-slave -INFO Successfully created deployment: frontend - -Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details. - -$ kubectl get deployment,svc,pods -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -deployment.extensions/frontend 1 1 1 1 4m -deployment.extensions/redis-master 1 1 1 1 4m -deployment.extensions/redis-slave 1 1 1 1 4m - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/frontend ClusterIP 10.0.174.12 80/TCP 4m -service/kubernetes ClusterIP 10.0.0.1 443/TCP 13d -service/redis-master ClusterIP 10.0.202.43 6379/TCP 4m -service/redis-slave ClusterIP 10.0.1.85 6379/TCP 4m - -NAME READY STATUS RESTARTS AGE -pod/frontend-2768218532-cs5t5 1/1 Running 0 4m -pod/redis-master-1432129712-63jn8 1/1 Running 0 4m -pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m -``` - -**Note**: - -- Vous devez avoir un cluster Kubernetes en cours d'exécution avec kubectl pré-configuré. -- Seuls les déploiements et les services sont générés et déployés dans Kubernetes. Si vous avez besoin d'autres types de ressources, utilisez les commandes `kompose convert` et `kubectl apply -f` à la place. - -### OpenShift -```sh -$ kompose --file ./examples/docker-guestbook.yml --provider openshift up -We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application. 
-If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. - -INFO Successfully created service: redis-slave -INFO Successfully created service: frontend -INFO Successfully created service: redis-master -INFO Successfully created deployment: redis-slave -INFO Successfully created ImageStream: redis-slave -INFO Successfully created deployment: frontend -INFO Successfully created ImageStream: frontend -INFO Successfully created deployment: redis-master -INFO Successfully created ImageStream: redis-master - -Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details. - -$ oc get dc,svc,is -NAME REVISION DESIRED CURRENT TRIGGERED BY -dc/frontend 0 1 0 config,image(frontend:v4) -dc/redis-master 0 1 0 config,image(redis-master:e2e) -dc/redis-slave 0 1 0 config,image(redis-slave:v1) -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -svc/frontend 172.30.46.64 80/TCP 8s -svc/redis-master 172.30.144.56 6379/TCP 8s -svc/redis-slave 172.30.75.245 6379/TCP 8s -NAME DOCKER REPO TAGS UPDATED -is/frontend 172.30.12.200:5000/fff/frontend -is/redis-master 172.30.12.200:5000/fff/redis-master -is/redis-slave 172.30.12.200:5000/fff/redis-slave v1 -``` - -**Note**: - -- Vous devez avoir un cluster OpenShift en cours d'exécution avec `oc` pré-configuré (`oc login`) - -## `kompose down` - -Une fois que vous avez déployé l'application "composée" sur Kubernetes, `$ kompose down` vous -facilitera la suppression de l'application en supprimant ses déploiements et services. Si vous avez besoin de supprimer d'autres ressources, utilisez la commande 'kubectl'. - -```sh -$ kompose --file docker-guestbook.yml down -INFO Successfully deleted service: redis-master -INFO Successfully deleted deployment: redis-master -INFO Successfully deleted service: redis-slave -INFO Successfully deleted deployment: redis-slave -INFO Successfully deleted service: frontend -INFO Successfully deleted deployment: frontend -``` - -**Note**: - -- Vous devez avoir un cluster Kubernetes en cours d'exécution avec kubectl pré-configuré. - -## Construire et pousser des images de docker - -Kompose permet de construire et de pousser des images Docker. Lorsque vous utilisez la clé `build` dans votre fichier Docker Compose, votre image sera : - - - Automatiquement construite avec le Docker en utilisant la clé "image" spécifiée dans votre fichier - - Être poussé vers le bon dépôt Docker en utilisant les identifiants locaux (situés dans `.docker/config`) - -Utilisation d'un [exemple de fichier Docker Compose](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml): - -```yaml -version: "2" - -services: - foo: - build: "./build" - image: docker.io/foo/bar -``` - -En utilisant `kompose up` avec une clé `build` : - -```none -$ kompose up -INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar' -INFO Building image 'docker.io/foo/bar' from directory 'build' -INFO Image 'docker.io/foo/bar' from directory 'build' built successfully -INFO Pushing image 'foo/bar:latest' to registry 'docker.io' -INFO Attempting authentication credentials 'https://index.docker.io/v1/ -INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io' -INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl apply -f' commands instead. 
- -INFO Deploying application in "default" namespace -INFO Successfully created Service: foo -INFO Successfully created Deployment: foo - -Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details. -``` - -Afin de désactiver cette fonctionnalité, ou de choisir d'utiliser la génération de BuildConfig (avec OpenShift) `--build (local|build-config|none)` peut être passé. - -```sh -# Désactiver la construction/poussée d'images Docker -$ kompose up --build none - -# Générer des artefacts de Build Config pour OpenShift -$ kompose up --provider openshift --build build-config -``` - ## Autres conversions La transformation par défaut `komposer` va générer des [Déploiements](/docs/concepts/workloads/controllers/deployment/) et [Services](/docs/concepts/services-networking/service/) de Kubernetes, au format yaml. Vous avez une autre option pour générer json avec `-j`. Vous pouvez aussi générer des objets de [Replication Controllers](/docs/concepts/workloads/controllers/replicationcontroller/), [Daemon Sets](/docs/concepts/workloads/controllers/daemonset/), ou [Helm](https://github.com/helm/helm) charts. diff --git a/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md b/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md index e0cc2fec773ae..6a96b492a62da 100644 --- a/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md +++ b/content/zh/docs/tasks/configure-pod-container/translate-compose-kubernetes.md @@ -285,10 +285,7 @@ you need is an existing `docker-compose.yml` file. - CLI - [`kompose convert`](#kompose-convert) - - [`kompose up`](#kompose-up) - - [`kompose down`](#kompose-down) - 文档 - - [构建和推送 Docker 镜像](#build-and-push-docker-images) - [其他转换方式](#其他转换方式) - [标签](#labels) - [重启](#restart) @@ -447,219 +441,6 @@ If you are manually pushing the Openshift artifacts using ``oc create -f``, you imagestream 工件,以解决 Openshift 的这个问题:https://github.com/openshift/origin/issues/4518 。 {{< /note >}} -## `kompose up` - - -Kompose 支持通过 `kompose up` 直接将你的"复合的(composed)" 应用程序 -部署到 Kubernetes 或 OpenShift。 - - -### Kubernetes `kompose up` 示例 - -```shell -kompose --file ./examples/docker-guestbook.yml up -``` - -```none -We are going to create Kubernetes deployments and services for your Dockerized application. -If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. - -INFO Successfully created service: redis-master -INFO Successfully created service: redis-slave -INFO Successfully created service: frontend -INFO Successfully created deployment: redis-master -INFO Successfully created deployment: redis-slave -INFO Successfully created deployment: frontend - -Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods' for details. 
-``` - -```shell -kubectl get deployment,svc,pods -``` - -``` -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -deployment.extensions/frontend 1 1 1 1 4m -deployment.extensions/redis-master 1 1 1 1 4m -deployment.extensions/redis-slave 1 1 1 1 4m - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/frontend ClusterIP 10.0.174.12 80/TCP 4m -service/kubernetes ClusterIP 10.0.0.1 443/TCP 13d -service/redis-master ClusterIP 10.0.202.43 6379/TCP 4m -service/redis-slave ClusterIP 10.0.1.85 6379/TCP 4m - -NAME READY STATUS RESTARTS AGE -pod/frontend-2768218532-cs5t5 1/1 Running 0 4m -pod/redis-master-1432129712-63jn8 1/1 Running 0 4m -pod/redis-slave-2504961300-nve7b 1/1 Running 0 4m -``` - - -{{< note >}} - -- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。 -- 此操作仅生成 Deployment 和 Service 对象并将其部署到 Kubernetes。 - 如果需要部署其他不同类型的资源,请使用 `kompose convert` 和 `kubectl create -f` 命令。 -{{< /note >}} - - -### OpenShift `kompose up` 示例 - -```shell -kompose --file ./examples/docker-guestbook.yml --provider openshift up -``` - -```none -We are going to create OpenShift DeploymentConfigs and Services for your Dockerized application. -If you need different kind of resources, use the 'kompose convert' and 'oc create -f' commands instead. - -INFO Successfully created service: redis-slave -INFO Successfully created service: frontend -INFO Successfully created service: redis-master -INFO Successfully created deployment: redis-slave -INFO Successfully created ImageStream: redis-slave -INFO Successfully created deployment: frontend -INFO Successfully created ImageStream: frontend -INFO Successfully created deployment: redis-master -INFO Successfully created ImageStream: redis-master - -Your application has been deployed to OpenShift. You can run 'oc get dc,svc,is' for details. 
-``` - -```shell -oc get dc,svc,is -``` - -```none -NAME REVISION DESIRED CURRENT TRIGGERED BY -dc/frontend 0 1 0 config,image(frontend:v4) -dc/redis-master 0 1 0 config,image(redis-master:e2e) -dc/redis-slave 0 1 0 config,image(redis-slave:v1) -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -svc/frontend 172.30.46.64 80/TCP 8s -svc/redis-master 172.30.144.56 6379/TCP 8s -svc/redis-slave 172.30.75.245 6379/TCP 8s -NAME DOCKER REPO TAGS UPDATED -is/frontend 172.30.12.200:5000/fff/frontend -is/redis-master 172.30.12.200:5000/fff/redis-master -is/redis-slave 172.30.12.200:5000/fff/redis-slave v1 -``` - -{{< note >}} - -你必须有一个运行正常的 OpenShift 集群,该集群具有预先配置的 `oc` 上下文 (`oc login`)。 -{{< /note >}} - -## `kompose down` - - -你一旦将"复合(composed)" 应用部署到 Kubernetes,`kompose down` -命令将能帮你通过删除 Deployment 和 Service 对象来删除应用。 -如果需要删除其他资源,请使用 'kubectl' 命令。 - -```shell -kompose --file docker-guestbook.yml down -``` - -``` -INFO Successfully deleted service: redis-master -INFO Successfully deleted deployment: redis-master -INFO Successfully deleted service: redis-slave -INFO Successfully deleted deployment: redis-slave -INFO Successfully deleted service: frontend -INFO Successfully deleted deployment: frontend -``` - -{{< note >}} - -- 你必须有一个运行正常的 Kubernetes 集群,该集群具有预先配置的 kubectl 上下文。 -{{< /note >}} - - -## 构建和推送 Docker 镜像 {#build-and-push-docker-images} - -Kompose 支持构建和推送 Docker 镜像。如果 Docker Compose 文件中使用了 `build` -关键字,你的镜像将会: - -- 使用文档中指定的 `image` 键自动构建 Docker 镜像 -- 使用本地凭据推送到正确的 Docker 仓库 - -使用 [Docker Compose 文件示例](https://raw.githubusercontent.com/kubernetes/kompose/master/examples/buildconfig/docker-compose.yml) - -```yaml -version: "2" - -services: - foo: - build: "./build" - image: docker.io/foo/bar -``` - - -使用带有 `build` 键的 `kompose up` 命令: - -```shell -kompose up -``` - -```none -INFO Build key detected. Attempting to build and push image 'docker.io/foo/bar' -INFO Building image 'docker.io/foo/bar' from directory 'build' -INFO Image 'docker.io/foo/bar' from directory 'build' built successfully -INFO Pushing image 'foo/bar:latest' to registry 'docker.io' -INFO Attempting authentication credentials 'https://index.docker.io/v1/ -INFO Successfully pushed image 'foo/bar:latest' to registry 'docker.io' -INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead. - -INFO Deploying application in "default" namespace -INFO Successfully created Service: foo -INFO Successfully created Deployment: foo - -Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details. -``` - - -要想禁用该功能,或者使用 BuildConfig 中的版本(在 OpenShift 中), -可以通过传递 `--build (local|build-config|none)` 参数来实现。 - -```shell -# 禁止构造和推送 Docker 镜像 -kompose up --build none - -# 为 OpenShift 生成 Build Config 工件 -kompose up --provider openshift --build build-config -``` - @@ -22,14 +17,10 @@ This page provides an overview of NodeLocal DNSCache feature in Kubernetes. 
--> 本页概述了 Kubernetes 中的 NodeLocal DNSCache 功能。 - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - -NodeLocal DNSCache 通过在集群节点上作为 DaemonSet 运行 dns 缓存代理来提高集群 DNS 性能。 -在当今的体系结构中,处于 ClusterFirst DNS 模式的 Pod 可以连接到 kube-dns serviceIP 进行 DNS 查询。 +NodeLocal DNSCache 通过在集群节点上作为 DaemonSet 运行 DNS 缓存代理来提高集群 DNS 性能。 +在当今的体系结构中,运行在 ClusterFirst DNS 模式下的 Pod 可以连接到 kube-dns `serviceIP` 进行 DNS 查询。 通过 kube-proxy 添加的 iptables 规则将其转换为 kube-dns/CoreDNS 端点。 -借助这种新架构,Pods 将可以访问在同一节点上运行的 dns 缓存代理,从而避免了 iptables DNAT 规则和连接跟踪。 -本地缓存代理将查询 kube-dns 服务以获取集群主机名的缓存缺失(默认为 cluster.local 后缀)。 +借助这种新架构,Pods 将可以访问在同一节点上运行的 DNS 缓存代理,从而避免 iptables DNAT 规则和连接跟踪。 +本地缓存代理将查询 kube-dns 服务以获取集群主机名的缓存缺失(默认为 "`cluster.local`" 后缀)。 -* 使用当前的 DNS 体系结构,如果没有本地 kube-dns/CoreDNS 实例,则具有最高 DNS QPS 的 Pod 可能必须延伸到另一个节点。 -在这种脚本下,拥有本地缓存将有助于改善延迟。 +* 使用当前的 DNS 体系结构,如果没有本地 kube-dns/CoreDNS 实例,则具有最高 DNS QPS + 的 Pod 可能必须延伸到另一个节点。 + 在这种场景下,拥有本地缓存将有助于改善延迟。 -* 跳过 iptables DNAT 和连接跟踪将有助于减少 [conntrack 竞争](https://github.com/kubernetes/kubernetes/issues/56903)并避免 UDP DNS 条目填满 conntrack 表。 +* 跳过 iptables DNAT 和连接跟踪将有助于减少 + [conntrack 竞争](https://github.com/kubernetes/kubernetes/issues/56903) + 并避免 UDP DNS 条目填满 conntrack 表。 -* 从本地缓存代理到 kube-dns 服务的连接可以升级到 TCP 。 -TCP conntrack 条目将在连接关闭时被删除,相反 UDP 条目必须超时([默认](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt) `nf_conntrack_udp_timeout` 是 30 秒) +* 从本地缓存代理到 kube-dns 服务的连接可以升级为 TCP 。 + TCP conntrack 条目将在连接关闭时被删除,相反 UDP 条目必须超时 + ([默认](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt) + `nf_conntrack_udp_timeout` 是 30 秒)。 -* 将 DNS 查询从 UDP 升级到 TCP 将减少归因于丢弃的 UDP 数据包和 DNS 超时的尾部等待时间,通常长达 30 秒(3 次重试+ 10 秒超时)。 +* 将 DNS 查询从 UDP 升级到 TCP 将减少由于被丢弃的 UDP 包和 DNS 超时而带来的尾部等待时间; + 这类延时通常长达 30 秒(3 次重试 + 10 秒超时)。 + 由于 nodelocal 缓存监听 UDP DNS 查询,应用不需要变更。 -* 在节点级别对 dns 请求的度量和可见性。 +* 在节点级别对 DNS 请求的度量和可见性。 -启用 NodeLocal DNSCache 之后,这是 DNS 查询所遵循的路径: - +启用 NodeLocal DNSCache 之后,DNS 查询所遵循的路径如下: -{{< figure src="/images/docs/nodelocaldns.svg" alt="NodeLocal DNSCache 流" title="Nodelocal DNSCache 流" caption="此图显示了 NodeLocal DNSCache 如何处理 DNS 查询。" >}} +{{< figure src="/images/docs/nodelocaldns.svg" alt="NodeLocal DNSCache 流" title="Nodelocal DNSCache 流" caption="此图显示了 NodeLocal DNSCache 如何处理 DNS 查询。" class="diagram-medium" >}} {{< note >}} NodeLocal DNSCache 的本地侦听 IP 地址可以是任何地址,只要该地址不和你的集群里现有的 IP 地址发生冲突。 -推荐使用本地范围内的地址,例如,IPv4 链路本地区段 169.254.0.0/16 内的地址, -或者 IPv6 唯一本地地址区段 fd00::/8 内的地址。 +推荐使用本地范围内的地址,例如,IPv4 链路本地区段 '169.254.0.0/16' 内的地址, +或者 IPv6 唯一本地地址区段 'fd00::/8' 内的地址。 {{< /note >}} * 如果使用 IPv6,在使用 IP:Port 格式的时候需要把 CoreDNS 配置文件里的所有 IPv6 地址用方括号包起来。 - 如果你使用上述的示例清单,需要把 [配置行 L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70) + 如果你使用上述的示例清单,需要把 + [配置行 L70](https://github.com/kubernetes/kubernetes/blob/b2ecd1b3a3192fbbe2b9e348e095326f51dc43dd/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml#L70) 修改为 `health [__PILLAR__LOCAL__DNS__]:8080`。 * 把清单里的变量更改为正确的值: - * kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}` - * domain=`` + ``` + kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}` + domain= + localdns= + ``` - * localdns=`` - - `` 的默认值是 "cluster.local"。 `` 是 NodeLocal DNSCache 选择的本地侦听 IP 地址。 + `` 的默认值是 "`cluster.local`"。`` 是 + NodeLocal DNSCache 选择的本地侦听 IP 地址。 - * 如果 kube-proxy 运行在 IPTABLES 模式: + * 如果 kube-proxy 运行在 IPTABLES 模式: - ``` bash - sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; 
s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml - ``` + ``` bash + sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml + ``` - node-local-dns Pods 会设置 `__PILLAR__CLUSTER__DNS__` 和 `__PILLAR__UPSTREAM__SERVERS__`。 - 在此模式下, node-local-dns Pods 会同时侦听 kube-dns 服务的 IP 地址和 `` 的地址, - 以便 Pods 可以使用其中任何一个 IP 地址来查询 DNS 记录。 - - * 如果 kube-proxy 运行在 IPVS 模式: + * 如果 kube-proxy 运行在 IPVS 模式: - ``` bash - sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml - ``` + ``` bash + sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml + ``` - 在此模式下,node-local-dns Pods 只会侦听 `` 的地址。 - node-local-dns 接口不能绑定 kube-dns 的集群 IP 地址,因为 IPVS 负载均衡 - 使用的接口已经占用了该地址。 - node-local-dns Pods 会设置 `__PILLAR__UPSTREAM__SERVERS__`。 + 在此模式下,node-local-dns Pods 只会侦听 `` 的地址。 + node-local-dns 接口不能绑定 kube-dns 的集群 IP 地址,因为 IPVS 负载均衡 + 使用的接口已经占用了该地址。 + node-local-dns Pods 会设置 `__PILLAR__UPSTREAM__SERVERS__`。 * 运行 `kubectl create -f nodelocaldns.yaml` -* 如果 kube-proxy 运行在 IPVS 模式,需要修改 kubelet 的 `--cluster-dns` 参数为 NodeLocal DNSCache 正在侦听的 `` 地址。 - 否则,不需要修改 `--cluster-dns` 参数,因为 NodeLocal DNSCache 会同时侦听 kube-dns 服务的 IP 地址和 `` 的地址。 +* 如果 kube-proxy 运行在 IPVS 模式,需要修改 kubelet 的 `--cluster-dns` 参数 + NodeLocal DNSCache 正在侦听的 `` 地址。 + 否则,不需要修改 `--cluster-dns` 参数,因为 NodeLocal DNSCache 会同时侦听 + kube-dns 服务的 IP 地址和 `` 的地址。 启用后,node-local-dns Pods 将在每个集群节点上的 kube-system 名字空间中运行。 -此 Pod 在缓存模式下运行 [CoreDNS](https://github.com/coredns/coredns) ,因此每个节点都可以使用不同插件公开的所有 CoreDNS 指标。 +此 Pod 在缓存模式下运行 [CoreDNS](https://github.com/coredns/coredns) , +因此每个节点都可以使用不同插件公开的所有 CoreDNS 指标。 + +如果要禁用该功能,你可以使用 `kubectl delete -f ` 来删除 DaemonSet。 +你还应该回滚你对 kubelet 配置所做的所有改动。 + + +## StubDomains 和上游服务器配置 + + +`node-local-dns` Pod 能够自动读取 `kube-system` 名字空间中 `kube-dns` ConfigMap +中保存的 StubDomains 和上游服务器信息。ConfigMap 中的内容需要遵从 +[此示例](/zh/docs/tasks/administer-cluster/dns-custom-nameservers/#example-1) +中所给的格式。 +`node-local-dns` ConfigMap 也可被直接修改,使用 Corefile 格式设置 stubDomain 配置。 +某些云厂商可能不允许直接修改 `node-local-dns` ConfigMap 的内容。 +在这种情况下,可以更新 `kube-dns` ConfigMap。 + + +## 设置内存限制 + + +`node-local-dns` Pod 使用内存来保存缓存项并处理查询。 +由于它们并不监视 Kubernetes 对象变化,集群规模或者 Service/Endpoints +的数量都不会直接影响内存用量。内存用量会受到 DNS 查询模式的影响。 +根据 [CoreDNS 文档](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md), + +> The default cache size is 10000 entries, which uses about 30 MB when completely filled. 
+> (默认的缓存大小是 10000 个表项,当完全填充时会使用约 30 MB 内存) + + +这一数值是(缓存完全被填充时)每个服务器块的内存用量。 +通过设置小一点的缓存大小可以降低内存用量。 + +并发查询的数量会影响内存需求,因为用来处理查询请求而创建的 Go 协程都需要一定量的内存。 +你可以在 forward 插件中使用 `max_concurrent` 选项设置并发查询数量上限。 + + +如果一个 `node-local-dns` Pod 尝试使用的内存超出可提供的内存量 +(因为系统资源总量的,或者所配置的[资源约束](/zh/docs/concepts/configuration/manage-resources-containers/))的原因, +操作系统可能会关闭这一 Pod 的容器。 +发生这种情况时,被终止的("OOMKilled")容器不会清理其启动期间所添加的定制包过滤规则。 +该 `node-local-dns` 容器应该会被重启(因其作为 DaemonSet 的一部分被管理), +但因上述原因可能每次容器失败时都会导致 DNS 有一小段时间不可用: +the packet filtering rules direct DNS queries to a local Pod that is unhealthy +(包过滤器规则将 DNS 查询转发到本地某个不健康的 Pod)。 + + +通过不带限制地运行 `node-local-dns` Pod 并度量其内存用量峰值,你可以为其确定一个合适的内存限制值。 +你也可以安装并使用一个运行在 “Recommender Mode(建议者模式)” 的 +[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler), +并查看该组件输出的建议信息。 -如果要禁用该功能,你可以使用 `kubectl delete -f ` 来删除 DaemonSet。你还应该恢复你对 kubelet 配置所做的所有改动。 From da8b6d730f6a1845484c482e64ac7930603f80af Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Fri, 11 Mar 2022 19:55:17 +0800 Subject: [PATCH 028/138] [zh] Update coredns page --- .../docs/tasks/administer-cluster/coredns.md | 119 ++++++++---------- 1 file changed, 55 insertions(+), 64 deletions(-) diff --git a/content/zh/docs/tasks/administer-cluster/coredns.md b/content/zh/docs/tasks/administer-cluster/coredns.md index 22ada210261e0..24773adbb9a0f 100644 --- a/content/zh/docs/tasks/administer-cluster/coredns.md +++ b/content/zh/docs/tasks/administer-cluster/coredns.md @@ -28,8 +28,10 @@ This page describes the CoreDNS upgrade process and how to install CoreDNS inste ## 关于 CoreDNS @@ -37,11 +39,12 @@ Like Kubernetes, the CoreDNS project is hosted by the {{< glossary_tooltip text= 与 Kubernetes 一样,CoreDNS 项目由 {{< glossary_tooltip text="CNCF" term_id="cncf" >}} 托管。 -通过在现有的集群中替换 kube-dns,可以在集群中使用 CoreDNS 代替 kube-dns 部署, -或者使用 kubeadm 等工具来为你部署和升级集群。 +通过替换现有集群部署中的 kube-dns,或者使用 kubeadm 等工具来为你部署和升级集群, +可以在你的集群中使用 CoreDNS 而非 kube-dns, - ## 迁移到 CoreDNS ### 使用 kubeadm 升级现有集群 -在 Kubernetes 1.10 及更高版本中,当你使用 `kubeadm` 升级使用 `kube-dns` 的集群时,你还可以迁移到 CoreDNS。 -在本例中 `kubeadm` 将生成 CoreDNS 配置("Corefile")基于 `kube-dns` ConfigMap, -保存存根域和上游名称服务器的配置。 +在 Kubernetes 1.21 版本中,kubeadm 移除了对将 `kube-dns` 作为 DNS 应用的支持。 +对于 `kubeadm` v{{< skew currentVersion >}},所支持的唯一的集群 DNS 应用是 CoreDNS。 -如果你正在从 kube-dns 迁移到 CoreDNS,请确保在升级期间将 `CoreDNS` 特性门设置为 `true`。 -例如,`v1.11.0` 升级应该是这样的: - -``` -kubeadm upgrade apply v1.11.0 --feature-gates=CoreDNS=true -``` - - -在 Kubernetes 版本 1.13 和更高版本中,`CoreDNS`特性门已经删除,CoreDNS 在默认情况下使用。 - - -在 1.11 之前的版本中,核心文件将被升级过程中创建的文件覆盖。 -**如果已对其进行自定义,则应保存现有的 ConfigMap。** -在新的 ConfigMap 启动并运行后,你可以重新应用自定义。 - - -如果你在 Kubernetes 1.11 及更高版本中运行 CoreDNS,则在升级期间,将保留现有的 Corefile。 - - -在 kubernetes 1.21 中,kubeadm 移除了对 `kube-dns` 的支持。 +当你使用 `kubeadm` 升级使用 `kube-dns` 的集群时,你还可以执行到 CoreDNS 的迁移。 +在这种场景中,`kubeadm` 将基于 `kube-dns` ConfigMap 生成 CoreDNS 配置("Corefile"), +保存存根域和上游名称服务器的配置。 ## 升级 CoreDNS -从 v1.9 起,Kubernetes 提供了 CoreDNS。 -你可以在[此处](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md) -查看 Kubernetes 随附的 CoreDNS 版本以及对 CoreDNS 所做的更改。 +你可以在 [CoreDNS version in Kubernetes](https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md) +页面查看 kubeadm 为不同版本 Kubernetes 所安装的 CoreDNS 版本。 -如果你只想升级 CoreDNS 或使用自己的自定义镜像,则可以手动升级 CoreDNS。 +如果你只想升级 CoreDNS 或使用自己的定制镜像,也可以手动升级 CoreDNS。 参看[指南和演练](https://github.com/coredns/deployment/blob/master/kubernetes/Upgrading_CoreDNS.md) 文档了解如何平滑升级。 +在升级你的集群过程中,请确保现有 CoreDNS 的配置("Corefile")被保留下来。 + +如果使用 `kubeadm` 工具来升级集群,则 `kubeadm` 
可以自动处理保留现有 CoreDNS +配置这一事项。 ## CoreDNS 调优 当资源利用方面有问题时,优化 CoreDNS 的配置可能是有用的。 -有关详细信息,请参阅[有关扩缩 CoreDNS 的文档](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md)。 +有关详细信息,请参阅有关[扩缩 CoreDNS 的文档](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md)。 ## {{% heading "whatsnext" %}} -你可以通过修改 `Corefile` 来配置 [CoreDNS](https://coredns.io),以支持比 kube-dns 更多的用例。 -请参考 [CoreDNS 网站](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/) +kube-dns does by modifying the CoreDNS configuration ("Corefile"). +For more information, see the [documentation](https://coredns.io/plugins/kubernetes/) +for the `kubernetes` CoreDNS plugin, or read the +[Custom DNS Entries for Kubernetes](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/) +in the CoreDNS blog. +--> +你可以通过修改 CoreDNS 的配置("Corefile")来配置 [CoreDNS](https://coredns.io), +以支持比 kube-dns 更多的用例。 +请参考 `kubernetes` CoreDNS 插件的[文档](https://coredns.io/plugins/kubernetes/) +或者 CoreDNS 博客上的博文 +[Custom DNS Entries for Kubernetes](https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/), 以了解更多信息。 From 07fb0c1a6de5170fd2f456b081841c2c72881a04 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Sun, 27 Feb 2022 17:58:29 +0800 Subject: [PATCH 029/138] [zh] Translate API server config v1alpha1 reference --- .../config-api/apiserver-config.v1alpha1.md | 447 ++++++++++++++++++ 1 file changed, 447 insertions(+) create mode 100644 content/zh/docs/reference/config-api/apiserver-config.v1alpha1.md diff --git a/content/zh/docs/reference/config-api/apiserver-config.v1alpha1.md b/content/zh/docs/reference/config-api/apiserver-config.v1alpha1.md new file mode 100644 index 0000000000000..b399f304e324e --- /dev/null +++ b/content/zh/docs/reference/config-api/apiserver-config.v1alpha1.md @@ -0,0 +1,447 @@ +--- +title: kube-apiserver 配置 (v1alpha1) +content_type: tool-reference +package: apiserver.k8s.io/v1alpha1 +auto_generated: true +--- + + +

包 v1alpha1 包含 API 的 v1alpha1 版本。

+ + +## 资源类型 + +- [AdmissionConfiguration](#apiserver-k8s-io-v1alpha1-AdmissionConfiguration) +- [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration) +- [TracingConfiguration](#apiserver-k8s-io-v1alpha1-TracingConfiguration) + +## `AdmissionConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionConfiguration} + +

+AdmissionConfiguration 为准入控制器提供版本化的配置信息。 +

+ + + + + + + + + + + + +
字段描述
apiVersion
string
apiserver.k8s.io/v1alpha1
kind
string
AdmissionConfiguration
plugins
+[]AdmissionPluginConfiguration +
+

+ plugins 允许用户为每个准入控制插件指定设置。 +

+
+ +## `EgressSelectorConfiguration` {#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration} + +

+EgressSelectorConfiguration 为 Egress 选择算符客户端提供版本化的配置选项。 +

+ + + + + + + + + + + + +
字段描述
apiVersion
string
apiserver.k8s.io/v1alpha1
kind
string
EgressSelectorConfiguration
egressSelections [必需]
+[]EgressSelection +
+

+ connectionServices 包含一组 Egress 选择算符客户端配置选项。 +

+
+ +## `TracingConfiguration` {#apiserver-k8s-io-v1alpha1-TracingConfiguration} + +

+TracingConfiguration 为跟踪客户端提供版本化的配置信息。 +

+ + + + + + + + + + + + + + + +
字段描述
apiVersion
string
apiserver.k8s.io/v1alpha1
kind
string
TracingConfiguration
endpoint
+string +
+

+在控制面节点上运行的采集器的端点。
+API 服务器在向采集器发送数据时将 egressType 设置为 ControlPlane。
+这里的语法定义在 https://github.com/grpc/grpc/blob/master/doc/naming.md。
+默认值为 otlpgrpc 的默认值,即 localhost:4317。
+这一连接是不安全的,且不支持 TLS。

+
samplingRatePerMillion
+int32 +
+

+ samplingRatePerMillion 设置每一百万个数据点中要采样的样本个数。默认值为 0。 +

+
+ +## `AdmissionPluginConfiguration` {#apiserver-k8s-io-v1alpha1-AdmissionPluginConfiguration} + + +**出现在:** + +- [AdmissionConfiguration](#apiserver-k8s-io-v1alpha1-AdmissionConfiguration) + +

+AdmissionPluginConfiguration 为某个插件提供配置信息。 +

+ + + + + + + + + + + + + + + +
字段描述
name [必需]
+string +
+

+ name 是准入控制器的名称。此名称必须与所注册的准入插件名称匹配。 +

+
path
+string +
+

+ path 为指向包含插件配置数据的配置文件的路径。 +

+
configuration
+k8s.io/apimachinery/pkg/runtime.Unknown +
+

+ configuration 是一个嵌入的配置对象,用作插件的配置数据来源。 + 如果设置了此字段,则使用此字段而不是指向配置文件的路径。 +

+
+ +## `Connection` {#apiserver-k8s-io-v1alpha1-Connection} + + +**出现在:** + +- [EgressSelection](#apiserver-k8s-io-v1alpha1-EgressSelection) + +

+Connection 提供某个 Egress 选择客户端的配置信息。 +

+ + + + + + + + + + + + +
字段描述
proxyProtocol [必需]
+ProtocolType +
+

+ proxyProtocol 是客户端连接到 konnectivity 服务器所使用的协议。 +

+
transport
+Transport +
+

+ transport 定义的是传输层的配置。我们使用这个配置来联系 konnectivity 服务器。 + 当 proxyProtocol 是 HTTPConnect 或 GRPC 时需要设置此字段。 +

+
+ +## `EgressSelection` {#apiserver-k8s-io-v1alpha1-EgressSelection} + + +**出现在:** + +- [EgressSelectorConfiguration](#apiserver-k8s-io-v1alpha1-EgressSelectorConfiguration) + +

+EgressSelection 为某个 Egress 选择客户端提供配置信息。 +

+ + + + + + + + + + + + +
字段描述
name [必需]
+string +
+

+ name 是 Egress 选择器的名称。当前支持的取值有 "controlplane"、
+ "master"、"etcd" 和 "cluster"。
+ "master" Egress 选择器已被弃用,推荐使用 "controlplane"。

+
connection [必需]
+Connection +
+

+ connection 是用来配置 Egress 选择器的配置信息。 +

+
+ +## `ProtocolType` {#apiserver-k8s-io-v1alpha1-ProtocolType} + + +(`string` 类型的别名) + +**出现在:** + +- [Connection](#apiserver-k8s-io-v1alpha1-Connection) + +

+ProtocolType 是 connection.protocolType 的合法值集合。 +

+ +## `TCPTransport` {#apiserver-k8s-io-v1alpha1-TCPTransport} + + +**出现在:** + +- [Transport](#apiserver-k8s-io-v1alpha1-Transport) + +

+TCPTransport 提供使用 TCP 连接 konnectivity 服务器时需要的信息。 +

+ + + + + + + + + + + + +
字段描述
url [必需]
+string +
+

+ url 是要连接的 konnectivity 服务器的位置。例如 "https://127.0.0.1:8131"。 +

+
tlsConfig
+TLSConfig +
+

+ tlsConfig 是使用 TLS 来连接 konnectivity 服务器时需要的信息。 +

+
+ +## `TLSConfig` {#apiserver-k8s-io-v1alpha1-TLSConfig} + + +**出现在:** + +- [TCPTransport](#apiserver-k8s-io-v1alpha1-TCPTransport) + + +

+TLSConfig 为连接 konnectivity 服务器提供身份认证信息。仅用于 TCPTransport。 +

+ + + + + + + + + + + + + + + +
字段描述
caBundle
+string +
+

+ caBundle 是指向用来确定与 konnectivity 服务器间信任关系的 CA 证书包的文件位置。
+ 当 tcpTransport.url 前缀为 "http://" 时必须不设置,或者设置为空。
+ 如果 tcpTransport.url 前缀为 "https://" 并且此字段未设置,则默认使用系统的信任根。

+
clientKey
+string +
+

+ clientKey 是与 konnectivity 服务器进行 mtls 握手时使用的客户端秘钥文件位置。 + 如果 `tcp.url` 前缀为 http://,必须不指定或者为空; + 如果 `tcp.url` 前缀为 https://,必须设置。 +

+
clientCert
+string +
+

+ clientCert 是与 konnectivity 服务器进行 mtls 握手时使用的客户端证书文件位置。 + 如果 `tcp.url` 前缀为 http://,必须不指定或者为空; + 如果 `tcp.url` 前缀为 https://,必须设置。 +

+
+ +## `Transport` {#apiserver-k8s-io-v1alpha1-Transport} + + +**出现在:** + +- [Connection](#apiserver-k8s-io-v1alpha1-Connection) + + +

+Transport 定义联系 konnectivity 服务器时要使用的传输层配置。 +

+ + + + + + + + + + + + +
字段描述
tcp
+TCPTransport +
+

+ tcp 包含通过 TCP 与 konnectivity 服务器通信时使用的 TCP 配置。 + 目前使用 TCP 传输时不支持 GRPC 的 proxyProtocol。 + tcpuds 二者至少设置一个。 +

+
uds
+UDSTransport +
+

+ uds 包含通过 UDS 与 konnectivity 服务器通信时使用的 UDS 配置。 + tcpuds 二者至少设置一个。 +

+
+ +## `UDSTransport` {#apiserver-k8s-io-v1alpha1-UDSTransport} + + +**出现在:** + +- [Transport](#apiserver-k8s-io-v1alpha1-Transport) + +

+UDSTransport 设置通过 UDS 连接 konnectivity 服务器时需要的信息。 +

+ + + + + + + + + + +
字段描述
udsName [必需]
+string +
+

+ udsName 是与 konnectivity 服务器连接时使用的 UNIX 域套接字名称。 + 字段取值不要求包含 unix:// 前缀。 + (例如:/etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket) +

+
+ From 66a07b0baea09665b00b5e85638fe0de764839ce Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Mon, 14 Mar 2022 20:06:29 +0800 Subject: [PATCH 030/138] Fix links in resource metrics pipeline page --- .../resource-metrics-pipeline.md | 34 ++++++++++++++----- 1 file changed, 25 insertions(+), 9 deletions(-) diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md index 14afc52c24764..06c9fe70a241b 100644 --- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md +++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md @@ -13,9 +13,13 @@ This API makes information available about resource usage for node and pod, incl If you deploy the Metrics API into your cluster, clients of the Kubernetes API can then query for this information, and you can use Kubernetes' access control mechanisms to manage permissions to do so. -The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and [VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA) use data from the metrics API to adjust workload replicas and resources to meet customer demand. +The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and +[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA) +use data from the metrics API to adjust workload replicas and resources to meet customer demand. -You can also view the resource metrics using the [`kubectl top`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top) command. +You can also view the resource metrics using the +[`kubectl top`](/docs/reference/generated/kubectl/kubectl-commands#top) +command. {{< note >}} The Metrics API, and the metrics pipeline that it enables, only offers the minimum @@ -78,15 +82,19 @@ The architecture components, from right to left in the figure, consist of the fo The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to K8s autoscaler components. Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier reading: + ```shell kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '.' ``` Here is the same API call using `curl`: + ```shell curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes/minikube ``` -Sample reply: + +Sample response: + ```json { "kind": "NodeMetrics", @@ -104,16 +112,21 @@ Sample reply: } } ``` + Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the `kube-system` namespace and piped through `jq` for easier reading: ```shell kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube" | jq '.' ``` + Here is the same API call using `curl`: + ```shell curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube ``` -Sample reply: + +Sample response: + ```json { "kind": "PodMetrics", @@ -153,7 +166,8 @@ CPU is reported as the average core usage measured in cpu units. One cpu, in Kub This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). 
The time window used to calculate CPU is shown under window field in Metrics API.
-To learn more about how Kubernetes allocates and measures CPU resources, see [meaning of CPU](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-cpu).
+To learn more about how Kubernetes allocates and measures CPU resources, see
+[meaning of CPU](/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu).
 
 ### Memory
 
@@ -163,7 +177,8 @@ In an ideal world, the "working set" is the amount of memory in-use that cannot
 
 The Kubernetes model for a container's working set expects that the container runtime counts anonymous memory associated with the container in question. The working set metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim pages.
 
-To learn more about how Kubernetes allocates and measures memory resources, see [meaning of memory](/docs/concepts/configuration/manage-compute-resources-container/#meaning-of-memory).
+To learn more about how Kubernetes allocates and measures memory resources, see
+[meaning of memory](/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).
 
 ## Metrics Server
 
@@ -177,7 +192,6 @@ The metrics-server calls the [kubelet](/docs/reference/command-line-tools-refere
 * Metrics resource endpoint `/metrics/resource` in version v0.6.0+ or
 * Summary API endpoint `/stats/summary` in older versions
 
-
 To learn more about the metrics-server, see the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server).
 
 You can also check out the following:
@@ -196,14 +210,16 @@ for consumers to read.
 
 Here is an example of a Summary API request for a `minikube` node:
 
-
 ```shell
 kubectl get --raw "/api/v1/nodes/minikube/proxy/stats/summary"
 ```
+
 Here is the same API call using `curl`:
+
 ```shell
 curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary
 ```
+
 {{< note >}}
 The summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint beginning with metrics-server 0.6.x.
-{{< /note >}}
\ No newline at end of file
+{{< /note >}}

From 88d8ec551c22733d01e24e4285d5b8045cb5c3d7 Mon Sep 17 00:00:00 2001
From: Qiming Teng 
Date: Mon, 14 Mar 2022 20:15:54 +0800
Subject: [PATCH 031/138] Wrap long lines for ease of change tracking

---
 .../resource-metrics-pipeline.md              | 98 +++++++++++++------
 1 file changed, 70 insertions(+), 28 deletions(-)

diff --git a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
index 06c9fe70a241b..c2818940e2b6c 100644
--- a/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
+++ b/content/en/docs/tasks/debug-application-cluster/resource-metrics-pipeline.md
@@ -8,10 +8,11 @@ content_type: concept
 
 
 
-For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and similar use cases.
-This API makes information available about resource usage for node and pod, including metrics for CPU and memory.
-If you deploy the Metrics API into your cluster, clients of the Kubernetes API can then query for this information, and
-you can use Kubernetes' access control mechanisms to manage permissions to do so.
+For Kubernetes, the _Metrics API_ offers a basic set of metrics to support automatic scaling and
+similar use cases. This API makes information available about resource usage for node and pod,
+including metrics for CPU and memory. 
If you deploy the Metrics API into your cluster, clients of +the Kubernetes API can then query for this information, and you can use Kubernetes' access control +mechanisms to manage permissions to do so. The [HorizontalPodAutoscaler](/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) and [VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA) @@ -63,25 +64,38 @@ Figure 1. Resource Metrics Pipeline The architecture components, from right to left in the figure, consist of the following: -* [cAdvisor](https://github.com/google/cadvisor): Daemon for collecting, aggregating and exposing container metrics included in Kubelet. -* [kubelet](/docs/concepts/overview/components/#kubelet): Node agent for managing container resources. Resource metrics are accessible using the `/metrics/resource` and `/stats` kubelet API endpoints. -* [Summary API](#summary-api-source): API provided by the kubelet for discovering and retrieving per-node summarized stats available through the `/stats` endpoint. -* [metrics-server](#metrics-server): Cluster addon component that collects and aggregates resource metrics pulled from each kubelet. The API server serves Metrics API for use by HPA, VPA, and by the `kubectl top` command. Metrics Server is a reference implementation of the Metrics API. -* [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for workload autoscaling. To make this work in your cluster, you need an API extension server that provides the Metrics API. +* [cAdvisor](https://github.com/google/cadvisor): Daemon for collecting, aggregating and exposing + container metrics included in Kubelet. +* [kubelet](/docs/concepts/overview/components/#kubelet): Node agent for managing container + resources. Resource metrics are accessible using the `/metrics/resource` and `/stats` kubelet + API endpoints. +* [Summary API](#summary-api-source): API provided by the kubelet for discovering and retrieving + per-node summarized stats available through the `/stats` endpoint. +* [metrics-server](#metrics-server): Cluster addon component that collects and aggregates resource + metrics pulled from each kubelet. The API server serves Metrics API for use by HPA, VPA, and by + the `kubectl top` command. Metrics Server is a reference implementation of the Metrics API. +* [Metrics API](#metrics-api): Kubernetes API supporting access to CPU and memory used for + workload autoscaling. To make this work in your cluster, you need an API extension server that + provides the Metrics API. {{< note >}} cAdvisor supports reading metrics from cgroups, which works with typical container runtimes on Linux. - If you use a container runtime that uses another resource isolation mechanism, for example virtualization, then that container runtime must support [CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md) in order for metrics to be available to the kubelet. + If you use a container runtime that uses another resource isolation mechanism, for example + virtualization, then that container runtime must support + [CRI Container Metrics](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-node/cri-container-stats.md) + in order for metrics to be available to the kubelet. {{< /note >}} - ## Metrics API -The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage for the nodes and pods in your cluster. 
Its primary role is to feed resource usage metrics to K8s autoscaler components. +The metrics-server implements the Metrics API. This API allows you to access CPU and memory usage +for the nodes and pods in your cluster. Its primary role is to feed resource usage metrics to K8s +autoscaler components. -Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier reading: +Here is an example of the Metrics API request for a `minikube` node piped through `jq` for easier +reading: ```shell kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '.' @@ -113,7 +127,8 @@ Sample response: } ``` -Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the `kube-system` namespace and piped through `jq` for easier reading: +Here is an example of the Metrics API request for a `kube-scheduler-minikube` pod contained in the +`kube-system` namespace and piped through `jq` for easier reading: ```shell kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-scheduler-minikube" | jq '.' @@ -151,20 +166,31 @@ Sample response: } ``` -The Metrics API is defined in the [k8s.io/metrics](https://github.com/kubernetes/metrics) repository. You must enable the [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) and register an [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) for the `metrics.k8s.io` API. +The Metrics API is defined in the [k8s.io/metrics](https://github.com/kubernetes/metrics) +repository. You must enable the [API aggregation layer](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) +and register an [APIService](/docs/reference/kubernetes-api/cluster-resources/api-service-v1/) +for the `metrics.k8s.io` API. -To learn more about the Metrics API, see [resource metrics API design](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md), the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) and the [resource metrics API](https://github.com/kubernetes/metrics#resource-metrics-api). +To learn more about the Metrics API, see [resource metrics API design](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md), +the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server) and the +[resource metrics API](https://github.com/kubernetes/metrics#resource-metrics-api). -{{< note >}} You must deploy the metrics-server or alternative adapter that serves the Metrics API to be able to access it. {{< /note >}} +{{< note >}} +You must deploy the metrics-server or alternative adapter that serves the Metrics API to be able +to access it. +{{< /note >}} ## Measuring resource usage ### CPU -CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers, and 1 hyper-thread on bare-metal Intel processors. +CPU is reported as the average core usage measured in cpu units. One cpu, in Kubernetes, is +equivalent to 1 vCPU/Core for cloud providers, and 1 hyper-thread on bare-metal Intel processors. -This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in both Linux and Windows kernels). The time window used to calculate CPU is shown under window field in Metrics API. 
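+
+As a minimal illustration (reusing the `minikube` node name and the `jq` filter from the
+examples above; substitute a node from your own cluster), you can read just the CPU figure
+and its window:
+
+```shell
+kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes/minikube" | jq '{cpu: .usage.cpu, window: .window}'
+```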
+This value is derived by taking a rate over a cumulative CPU counter provided by the kernel (in +both Linux and Windows kernels). The time window used to calculate CPU is shown under window field +in Metrics API. To learn more about how Kubernetes allocates and measures CPU resources, see [meaning of CPU](/docs/concepts/configuration/manage-resources-container/#meaning-of-cpu). @@ -173,26 +199,39 @@ To learn more about how Kubernetes allocates and measures CPU resources, see Memory is reported as the working set, measured in bytes, at the instant the metric was collected. -In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under memory pressure. However, calculation of the working set varies by host OS, and generally makes heavy use of heuristics to produce an estimate. +In an ideal world, the "working set" is the amount of memory in-use that cannot be freed under +memory pressure. However, calculation of the working set varies by host OS, and generally makes +heavy use of heuristics to produce an estimate. -The Kubernetes model for a container's working set expects that the container runtime counts anonymous memory associated with the container in question. The working set metric typically also includes some cached (file-backed) memory, because the host OS cannot always reclaim pages. +The Kubernetes model for a container's working set expects that the container runtime counts +anonymous memory associated with the container in question. The working set metric typically also +includes some cached (file-backed) memory, because the host OS cannot always reclaim pages. To learn more about how Kubernetes allocates and measures memory resources, see [meaning of memory](/docs/concepts/configuration/manage-resources-container/#meaning-of-memory). ## Metrics Server -The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes API server through the Metrics API for use by the HPA and VPA. You can also view these metrics using the `kubectl top` command. +The metrics-server fetches resource metrics from the kubelets and exposes them in the Kubernetes +API server through the Metrics API for use by the HPA and VPA. You can also view these metrics +using the `kubectl top` command. -The metrics-server uses the Kubernetes API to track nodes and pods in your cluster. The metrics-server queries each node over HTTP to fetch metrics. The metrics-server also builds an internal view of pod metadata, and keeps a cache of pod health. That cached pod health information is available via the extension API that the metrics-server makes available. +The metrics-server uses the Kubernetes API to track nodes and pods in your cluster. The +metrics-server queries each node over HTTP to fetch metrics. The metrics-server also builds an +internal view of pod metadata, and keeps a cache of pod health. That cached pod health information +is available via the extension API that the metrics-server makes available. -For example with an HPA query, the metrics-server needs to identify which pods fulfill the label selectors in the deployment. +For example with an HPA query, the metrics-server needs to identify which pods fulfill the label +selectors in the deployment. + +The metrics-server calls the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API +to collect metrics from each node. 
Depending on the metrics-server version it uses: -The metrics-server calls the [kubelet](/docs/reference/command-line-tools-reference/kubelet/) API to collect metrics from each node. Depending on the metrics-server version it uses: * Metrics resource endpoint `/metrics/resource` in version v0.6.0+ or * Summary API endpoint `/stats/summary` in older versions -To learn more about the metrics-server, see the [metrics-server repository](https://github.com/kubernetes-sigs/metrics-server). +To learn more about the metrics-server, see the +[metrics-server repository](https://github.com/kubernetes-sigs/metrics-server). You can also check out the following: @@ -204,7 +243,8 @@ You can also check out the following: ### Summary API source -The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at the node, volume, pod and container level, and emits this information in +The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) gathers stats at the node, +volume, pod and container level, and emits this information in the [Summary API](https://github.com/kubernetes/kubernetes/blob/7d309e0104fedb57280b261e5677d919cb2a0e2d/staging/src/k8s.io/kubelet/pkg/apis/stats/v1alpha1/types.go) for consumers to read. @@ -221,5 +261,7 @@ curl http://localhost:8080/api/v1/nodes/minikube/proxy/stats/summary ``` {{< note >}} -The summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint beginning with metrics-server 0.6.x. +The summary API `/stats/summary` endpoint will be replaced by the `/metrics/resource` endpoint +beginning with metrics-server 0.6.x. {{< /note >}} + From f1575f40c22841adad4c1482d5c48ebc36212707 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Mon, 14 Mar 2022 20:20:04 +0800 Subject: [PATCH 032/138] Fix link in change PV reclaim policy page --- .../administer-cluster/change-pv-reclaim-policy.md | 10 +--------- 1 file changed, 1 insertion(+), 9 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index 408438e1deddb..48ec32d80c28d 100644 --- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -7,14 +7,10 @@ content_type: task This page shows how to change the reclaim policy of a Kubernetes PersistentVolume. - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Why change reclaim policy of a PersistentVolume @@ -81,8 +77,6 @@ kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\" `default/claim3` has reclaim policy `Retain`. It will not be automatically deleted when a user deletes claim `default/claim3`. - - ## {{% heading "whatsnext" %}} * Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). @@ -91,8 +85,6 @@ kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\" ### References {#reference} * {{< api-reference page="config-and-storage-resources/persistent-volume-v1" >}} - * Pay attention to the `.spec.persistentVolumeReclaimPolicy` [field](https://kubernetes.io/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) of PersistentVolume. + * Pay attention to the `.spec.persistentVolumeReclaimPolicy` [field](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) of PersistentVolume. 
* {{< api-reference page="config-and-storage-resources/persistent-volume-claim-v1" >}} - - From e9a8cc6edaba82a73aeadec13064cc0a58aec2e3 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Mon, 14 Mar 2022 20:22:22 +0800 Subject: [PATCH 033/138] Reformat the indentation of lists and sample outputs --- .../change-pv-reclaim-policy.md | 71 ++++++++++--------- 1 file changed, 39 insertions(+), 32 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md index 48ec32d80c28d..457fbd6332b43 100644 --- a/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md +++ b/content/en/docs/tasks/administer-cluster/change-pv-reclaim-policy.md @@ -29,53 +29,58 @@ Released phase, where all of its data can be manually recovered. 1. List the PersistentVolumes in your cluster: - ```shell - kubectl get pv - ``` + ```shell + kubectl get pv + ``` - The output is similar to this: + The output is similar to this: - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s - pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s - pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s + ```none + NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE + pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s + pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s + pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s + ``` - This list also includes the name of the claims that are bound to each volume + This list also includes the name of the claims that are bound to each volume for easier identification of dynamically provisioned volumes. 1. Choose one of your PersistentVolumes and change its reclaim policy: - ```shell - kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' - ``` + ```shell + kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' + ``` - where `` is the name of your chosen PersistentVolume. + where `` is the name of your chosen PersistentVolume. - {{< note >}} - On Windows, you must _double_ quote any JSONPath template that contains spaces (not single quote as shown above for bash). This in turn means that you must use a single quote or escaped double quote around any literals in the template. For example: + {{< note >}} + On Windows, you must _double_ quote any JSONPath template that contains spaces (not single + quote as shown above for bash). This in turn means that you must use a single quote or escaped + double quote around any literals in the template. For example: -```cmd -kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}" -``` - - {{< /note >}} + ```cmd + kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}" + ``` + {{< /note >}} 1. 
Verify that your chosen PersistentVolume has the right policy: - ```shell - kubectl get pv - ``` + ```shell + kubectl get pv + ``` - The output is similar to this: + The output is similar to this: - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s - pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s - pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s + ```none + NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE + pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 40s + pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 36s + pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 33s + ``` - In the preceding output, you can see that the volume bound to claim - `default/claim3` has reclaim policy `Retain`. It will not be automatically - deleted when a user deletes claim `default/claim3`. + In the preceding output, you can see that the volume bound to claim + `default/claim3` has reclaim policy `Retain`. It will not be automatically + deleted when a user deletes claim `default/claim3`. ## {{% heading "whatsnext" %}} @@ -85,6 +90,8 @@ kubectl patch pv -p "{\"spec\":{\"persistentVolumeReclaimPolicy\" ### References {#reference} * {{< api-reference page="config-and-storage-resources/persistent-volume-v1" >}} - * Pay attention to the `.spec.persistentVolumeReclaimPolicy` [field](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) of PersistentVolume. + * Pay attention to the `.spec.persistentVolumeReclaimPolicy` + [field](/docs/reference/kubernetes-api/config-and-storage-resources/persistent-volume-v1/#PersistentVolumeSpec) + of PersistentVolume. * {{< api-reference page="config-and-storage-resources/persistent-volume-claim-v1" >}} From 2d0d647d0b60a59e58591e7edd8bbbb0141d6964 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Sun, 13 Mar 2022 13:45:28 +0800 Subject: [PATCH 034/138] [zh] Update configure pod configmap This PR resyncs the configure-pod-configmap page, and translates the comments in the reference example YAML files. 
--- .../configure-pod-configmap.md | 186 +++++++++++------- .../examples/pods/pod-configmap-volume.yaml | 3 +- .../pod-single-configmap-env-variable.yaml | 6 +- 3 files changed, 115 insertions(+), 80 deletions(-) diff --git a/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md index da40f40f5e9cc..39c8efac1b8ac 100644 --- a/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/zh/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -17,8 +17,16 @@ card: +很多应用在其初始化或运行期间要依赖一些配置信息。大多数时候, +存在要调整配置参数所设置的数值的需求。 +ConfigMap 是 Kubernetes 用来向应用 Pod 中注入配置数据的方法。 + ConfigMap 允许你将配置文件与镜像文件分离,以使容器化的应用程序具有可移植性。 本页提供了一系列使用示例,这些示例演示了如何创建 ConfigMap 以及配置 Pod 使用存储在 ConfigMap 中的数据。 @@ -35,8 +43,8 @@ You can use either `kubectl create configmap` or a ConfigMap generator in `kusto --> ## 创建 ConfigMap -你可以使用 `kubectl create configmap` 或者在 `kustomization.yaml` 中的 ConfigMap 生成器 -来创建 ConfigMap。注意,`kubectl` 从 1.14 版本开始支持 `kustomization.yaml`。 +你可以使用 `kubectl create configmap` 或者在 `kustomization.yaml` 中的 ConfigMap +生成器来创建 ConfigMap。注意,`kubectl` 从 1.14 版本开始支持 `kustomization.yaml`。 ### 使用 kubectl create configmap 创建 ConfigMap -你可以使用 `kubectl create configmap` 命令基于 -[目录](#create-configmaps-from-directories)、[文件](#create-configmaps-from-files) -或者[字面值](#create-configmaps-from-literal-values)来创建 ConfigMap: +你可以使用 `kubectl create configmap` +命令基于[目录](#create-configmaps-from-directories)、 +[文件](#create-configmaps-from-files)或者[字面值](#create-configmaps-from-literal-values)来创建 +ConfigMap: ```shell -kubectl create configmap +kubectl create configmap <映射名称> <数据源> ``` -其中,\ 是要设置的 ConfigMap 名称,\ 是要从中提取数据的目录、 +其中,`<映射名称>` 是为 ConfigMap 指定的名称,`<数据源>` 是要从中提取数据的目录、 文件或者字面值。 ConfigMap 对象的名称必须是合法的 [DNS 子域名](/zh/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). 
@@ -66,8 +75,8 @@ ConfigMap 对象的名称必须是合法的 -在你基于文件来创建 ConfigMap 时,\ 中的键名默认取自 -文件的基本名,而对应的值则默认为文件的内容。 +在你基于文件来创建 ConfigMap 时,`<数据源>` 中的键名默认取自文件的基本名, +而对应的值则默认为文件的内容。 #### 基于目录创建 ConfigMap {#create-configmaps-from-directories} + 你可以使用 `kubectl create configmap` 基于同一目录中的多个文件创建 ConfigMap。 -当你基于目录来创建 ConfigMap 时,kubectl 识别目录下基本名可以作为合法键名的 -文件,并将这些文件打包到新的 ConfigMap 中。普通文件之外的所有目录项都会被 -忽略(例如,子目录、符号链接、设备、管道等等)。 +当你基于目录来创建 ConfigMap 时,kubectl 识别目录下基本名可以作为合法键名的文件, +并将这些文件打包到新的 ConfigMap 中。普通文件之外的所有目录项都会被忽略 +(例如:子目录、符号链接、设备、管道等等)。 例如: @@ -256,7 +266,7 @@ kubectl create configmap game-config-2 --from-file=configure-pod-container/confi -描述上面创建的 `game-config-2` configmap +描述上面创建的 `game-config-2` ConfigMap: ```shell kubectl describe configmaps game-config-2 @@ -324,22 +334,20 @@ enemies=aliens lives=3 allowed="true" --> - -Env 文件包含环境变量列表。 -其中适用以下语法规则: +Env 文件包含环境变量列表。其中适用以下语法规则: - Env 文件中的每一行必须为 VAR=VAL 格式。 - 以#开头的行(即注释)将被忽略。 - 空行将被忽略。 - 引号不会被特殊处理(即它们将成为 ConfigMap 值的一部分)。 -将示例文件下载到 `configure-pod-container/configmap/` 目录 +将示例文件下载到 `configure-pod-container/configmap/` 目录: ```shell wget https://kubernetes.io/examples/configmap/game-env-file.properties -O configure-pod-container/configmap/game-env-file.properties ``` -env 文件 `game-env-file.properties` 如下所示: +Env 文件 `game-env-file.properties` 如下所示: ```shell cat configure-pod-container/configmap/game-env-file.properties @@ -410,10 +418,10 @@ kubectl create configmap config-multi-env-files \ ``` --> ```shell -# 将样本文件下载到 `configure-pod-container/configmap/` 目录 +# 将示例文件下载到 `configure-pod-container/configmap/` 目录 wget https://k8s.io/examples/configmap/ui-env-file.properties -O configure-pod-container/configmap/ui-env-file.properties -# 创建 configmap +# 创建 ConfigMap kubectl create configmap config-multi-env-files \ --from-env-file=configure-pod-container/configmap/game-env-file.properties \ --from-env-file=configure-pod-container/configmap/ui-env-file.properties @@ -432,6 +440,7 @@ kubectl get configmap config-multi-env-files -o yaml where the output is similar to this: --> 输出类似以下内容: + ```yaml apiVersion: v1 kind: ConfigMap @@ -459,13 +468,13 @@ You can define a key other than the file name to use in the `data` section of yo 而不是按默认行为使用文件名: ```shell -kubectl create configmap game-config-3 --from-file== +kubectl create configmap game-config-3 --from-file=<我的键名>=<文件路径> ``` -`` 是你要在 ConfigMap 中使用的键名,`` 是你想要键表示数据源文件的位置。 +`<我的键名>` 是你要在 ConfigMap 中使用的键名,`<文件路径>` 是你想要键所表示的数据源文件的位置。 例如: @@ -479,7 +488,7 @@ would produce the following ConfigMap: --> 将产生以下 ConfigMap: -``` +```shell kubectl get configmaps game-config-3 -o yaml ``` @@ -513,7 +522,9 @@ data: You can use `kubectl create configmap` with the `--from-literal` argument to define a literal value from the command line: --> #### 根据字面值创建 ConfigMap {#create-configmaps-from-literal-values} -你可以将 `kubectl create configmap` 与 `--from-literal` 参数一起使用,从命令行定义文字值: + +你可以将 `kubectl create configmap` 与 `--from-literal` 参数一起使用, +通过命令行定义文字值: ```shell kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm @@ -551,7 +562,6 @@ data: #### 基于文件生成 ConfigMap -例如,要从 `configure-pod-container/configmap/kubectl/game.properties` 文件生成一个 ConfigMap: +例如,要基于 `configure-pod-container/configmap/kubectl/game.properties` +文件生成一个 ConfigMap: ```shell # 创建包含 ConfigMapGenerator 的 kustomization.yaml 文件 @@ -585,11 +596,12 @@ EOF -使用 kustomization 目录创建 ConfigMap 对象: +应用(Apply)kustomization 目录创建 ConfigMap 对象: ```shell kubectl apply -k . 
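+# 说明:上述命令会读取当前目录下 kustomization.yaml 中的 configMapGenerator 配置,
+# 生成并创建名称带有基于内容哈希后缀的 ConfigMap(见下面的输出)。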
``` + ``` configmap/game-config-4-m9dm2f92bt created ``` @@ -597,7 +609,7 @@ configmap/game-config-4-m9dm2f92bt created -你可以检查 ConfigMap 是这样创建的: +你可以检查 ConfigMap 被创建如下: ```shell kubectl get configmap @@ -645,7 +657,7 @@ with the key `game-special-key` --> #### 定义从文件生成 ConfigMap 时要使用的键 -在 ConfigMap 生成器,你可以定义一个非文件名的键名。 +在 ConfigMap 生成器中,你可以定义一个非文件名的键名。 例如,从 `configure-pod-container/configmap/game.properties` 文件生成 ConfigMap, 但使用 `game-special-key` 作为键名: @@ -662,11 +674,12 @@ EOF -使用 Kustomization 目录创建 ConfigMap 对象。 +应用 Kustomization 目录创建 ConfigMap 对象。 ```shell kubectl apply -k . ``` + ``` configmap/game-config-5-m67dt67794 created ``` @@ -677,7 +690,7 @@ configmap/game-config-5-m67dt67794 created To generate a ConfigMap from literals `special.type=charm` and `special.how=very`, you can specify the ConfigMap generator in `kusotmization.yaml` as --> -#### 从字面值生成 ConfigMap +#### 基于字面值生成 ConfigMap 要基于字符串 `special.type=charm` 和 `special.how=very` 生成 ConfigMap, 可以在 `kusotmization.yaml` 中配置 ConfigMap 生成器: @@ -701,6 +714,7 @@ Apply the kustomization directory to create the ConfigMap object. ```shell kubectl apply -k . ``` + ``` configmap/special-config-2-c92b5mmcf2 created ``` @@ -726,7 +740,7 @@ configmap/special-config-2-c92b5mmcf2 created -2. 将 ConfigMap 中定义的 `special.how` 值分配给 Pod 规范中的 `SPECIAL_LEVEL_KEY` 环境变量。 +2. 将 ConfigMap 中定义的 `special.how` 赋值给 Pod 规约中的 `SPECIAL_LEVEL_KEY` 环境变量。 {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}} @@ -760,7 +774,7 @@ configmap/special-config-2-c92b5mmcf2 created -* 在 Pod 规范中定义环境变量。 +* 在 Pod 规约中定义环境变量。 {{< codenew file="pods/pod-multiple-configmap-env-variable.yaml" >}} @@ -803,7 +817,8 @@ Kubernetes v1.6 和更高版本支持此功能。 -* 使用 `envFrom` 将所有 ConfigMap 的数据定义为容器环境变量,ConfigMap 中的键成为 Pod 中的环境变量名称。 +* 使用 `envFrom` 将所有 ConfigMap 的数据定义为容器环境变量,ConfigMap + 中的键成为 Pod 中的环境变量名称。 {{< codenew file="pods/pod-configmap-envFrom.yaml" >}} @@ -825,12 +840,13 @@ Kubernetes v1.6 和更高版本支持此功能。 -你可以使用 `$(VAR_NAME)` Kubernetes 替换语法在容器的 `command` 和 `args` 部分中使用 ConfigMap 定义的环境变量。 +你可以使用 `$(VAR_NAME)` Kubernetes 替换语法在容器的 `command` 和 `args` +属性中使用 ConfigMap 定义的环境变量。 -例如,以下 Pod 规范 +例如,以下 Pod 规约 {{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}} @@ -866,7 +882,7 @@ As explained in [Create ConfigMaps from files](#create-configmaps-from-files), w -本节中的示例引用了一个名为 special-config 的 ConfigMap,如下所示: +本节中的示例引用了一个名为 'special-config' 的 ConfigMap,如下所示: {{< codenew file="configmap/configmap-multikeys.yaml" >}} @@ -886,10 +902,11 @@ Add the ConfigMap name under the `volumes` section of the Pod specification. This adds the ConfigMap data to the directory specified as `volumeMounts.mountPath` (in this case, `/etc/config`). The `command` section references the `special.level` item stored in the ConfigMap. --> -### 使用存储在 ConfigMap 中的数据填充数据卷 +### 使用存储在 ConfigMap 中的数据填充卷 在 Pod 规约的 `volumes` 部分下添加 ConfigMap 名称。 -这会将 ConfigMap 数据添加到指定为 `volumeMounts.mountPath` 的目录(在本例中为 `/etc/config`)。 +这会将 ConfigMap 数据添加到 `volumeMounts.mountPath` 所指定的目录 +(在本例中为 `/etc/config`)。 `command` 部分引用存储在 ConfigMap 中的 `special.level`。 {{< codenew file="pods/pod-configmap-volume.yaml" >}} @@ -914,14 +931,14 @@ SPECIAL_TYPE If there are some files in the `/etc/config/` directory, they will be deleted. 
--> {{< caution >}} -如果在 `/etc/config/` 目录中有一些文件,它们将被删除。 +如果在 `/etc/config/` 目录中有一些文件,这些文件将被删除。 {{< /caution >}} {{< note >}} -文本数据会使用 UTF-8 字符编码的形式展现为文件。如果使用其他字符编码, +文本数据会展现为 UTF-8 字符编码的文件。如果使用其他字符编码, 可以使用 `binaryData`。 {{< /note >}} @@ -931,10 +948,11 @@ Text data is exposed as files using the UTF-8 character encoding. To use some ot Use the `path` field to specify the desired file path for specific ConfigMap items. In this case, the `SPECIAL_LEVEL` item will be mounted in the `config-volume` volume at `/etc/config/keys`. --> -### 将 ConfigMap 数据添加到数据卷中的特定路径 +### 将 ConfigMap 数据添加到卷中的特定路径 使用 `path` 字段为特定的 ConfigMap 项目指定预期的文件路径。 -在这里,ConfigMap中,键值 `SPECIAL_LEVEL` 的内容将挂载在 `config-volume` 数据卷中 `/etc/config/keys` 文件下。 +在这里,ConfigMap 中键 `SPECIAL_LEVEL` 的内容将挂载在 `config-volume` +卷中 `/etc/config/keys` 文件中。 {{< codenew file="pods/pod-configmap-volume-specific-key.yaml" >}} @@ -950,11 +968,12 @@ kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-volume-speci -当 pod 运行时,命令 `cat /etc/config/keys` 产生以下输出: +当 Pod 运行时,命令 `cat /etc/config/keys` 产生以下输出: ``` very ``` + @@ -968,33 +987,49 @@ Like before, all previous files in the `/etc/config/` directory will be deleted. You can project keys to specific paths and specific permissions on a per-file basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) user guide explains the syntax. --> -### 映射键以指定路径和文件权限 +### 映射键到指定路径并设置文件访问权限 -你可以通过指定键名到特定目录的投射关系,也可以逐个文件地设定访问权限。 +你可以将指定键名投射到特定目录,也可以逐个文件地设定访问权限。 [Secret 用户指南](/zh/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) -中对这一语法提供了解释。 +中为这一语法提供了解释。 + + +### 可选的引用 {#optional-references} + +ConfigMap 引用可以被标记为 “optional(可选的)”。如果所引用的 ConfigMap 不存在, +则所挂载的卷将会是空的。如果所引用的 ConfigMap 确实存在,但是所引用的主键不存在, +则在挂载点下对应的路径也会不存在。 -### 挂载的 ConfigMap 将自动更新 +### 挂载的 ConfigMap 将自动更新 {#mounted-configmaps-are-updated-automatically} + + +当某个已被挂载的 ConfigMap 被更新,所投射的内容最终也会被更新。 +对于 Pod 已经启动之后所引用的、可选的 ConfigMap 才出现的情形, +这一动态更新现象也是适用的。 -更新已经在数据卷中使用的 ConfigMap 时,已映射的键最终也会被更新。 -`kubelet` 在每次定期同步时都会检查已挂载的 ConfigMap 是否是最新的。 +`kubelet` 在每次周期性同步时都会检查已挂载的 ConfigMap 是否是最新的。 但是,它使用其本地的基于 TTL 的缓存来获取 ConfigMap 的当前值。 因此,从更新 ConfigMap 到将新键映射到 Pod 的总延迟可能与 -kubelet 同步周期 + ConfigMap 在 kubelet 中缓存的 TTL 一样长。 +kubelet 同步周期(默认 1 分钟) + ConfigMap 在 kubelet 中缓存的 TTL +(默认 1 分钟)一样长。 +你可以通过更新 Pod 的某个注解来触发立即更新。 {{< note >}} ConfigMap 应该引用属性文件,而不是替换它们。可以将 ConfigMap 理解为类似于 Linux -`/etc` 目录及其内容的东西。例如,如果你从 ConfigMap 创建 +`/etc` 目录及其内容的东西。例如,如果你基于 ConfigMap 创建 [Kubernetes 卷](/zh/docs/concepts/storage/volumes/),则 ConfigMap -中的每个数据项都由该数据卷中的单个文件表示。 +中的每个数据项都由该数据卷中的某个独立的文件表示。 {{< /note >}} -### 限制 +### 限制 {#restrictions} -- 在 Pod 规范中引用之前,必须先创建一个 ConfigMap(除非将 ConfigMap 标记为"可选")。 - 如果引用的 ConfigMap 不存在,则 Pod 将不会启动。同样,引用 ConfigMap 中不存在的键也会阻止 Pod 启动。 +- 在 Pod 规约中引用某个 ConfigMap 之前,必须先创建它(除非将 ConfigMap 标记为 + “optional(可选)”)。如果引用的 ConfigMap 不存在,则 Pod 将不会启动。 + 同样,引用 ConfigMap 中不存在的主键也会令 Pod 无法启动。 -- 如果你使用 `envFrom` 基于 ConfigMap 定义环境变量,那么无效的键将被忽略。 - 可以启动 Pod,但无效名称将记录在事件日志中(`InvalidVariableNames`)。 - 日志消息列出了每个跳过的键。例如: +- 如果你使用 `envFrom` 来基于 ConfigMap 定义环境变量,那么无效的键将被忽略。 + Pod 可以被启动,但无效名称将被记录在事件日志中(`InvalidVariableNames`)。 + 日志消息列出了每个被跳过的键。例如: ```shell kubectl get events @@ -1086,13 +1122,13 @@ data: -- ConfigMap 位于特定的{{< glossary_tooltip term_id="namespace" text="名字空间" >}} - 中。每个 ConfigMap 只能被同一名字空间中的 Pod 引用. +- ConfigMap 位于确定的{{< glossary_tooltip term_id="namespace" text="名字空间" >}}中。 + 每个 ConfigMap 只能被同一名字空间中的 Pod 引用. 
-- 你不能将 ConfigMap 用于 {{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}}, +- 你不能将 ConfigMap 用于{{< glossary_tooltip text="静态 Pod" term_id="static-pod" >}}, 因为 Kubernetes 不支持这种用法。 ## {{% heading "whatsnext" %}} @@ -1101,5 +1137,5 @@ data: * Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/). --> * 浏览[使用 ConfigMap 配置 Redis](/zh/docs/tutorials/configuration/configure-redis-using-configmap/) - 真实实例 + 真实实例。 diff --git a/content/zh/examples/pods/pod-configmap-volume.yaml b/content/zh/examples/pods/pod-configmap-volume.yaml index 23b0f7718e157..5203d4c66b78f 100644 --- a/content/zh/examples/pods/pod-configmap-volume.yaml +++ b/content/zh/examples/pods/pod-configmap-volume.yaml @@ -13,7 +13,6 @@ spec: volumes: - name: config-volume configMap: - # Provide the name of the ConfigMap containing the files you want - # to add to the container + # 提供包含要添加到容器中的文件的 ConfigMap 的名称 name: special-config restartPolicy: Never diff --git a/content/zh/examples/pods/pod-single-configmap-env-variable.yaml b/content/zh/examples/pods/pod-single-configmap-env-variable.yaml index c86123afd76ce..5777c814f9005 100644 --- a/content/zh/examples/pods/pod-single-configmap-env-variable.yaml +++ b/content/zh/examples/pods/pod-single-configmap-env-variable.yaml @@ -8,12 +8,12 @@ spec: image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] env: - # Define the environment variable + # 定义环境变量 - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: - # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY + # ConfigMap 包含你要赋给 SPECIAL_LEVEL_KEY 的值 name: special-config - # Specify the key associated with the value + # 指定与取值相关的键名 key: special.how restartPolicy: Never From 5bffffb3cb0a655e12cddadc7774048b7944c0f0 Mon Sep 17 00:00:00 2001 From: Song Shukun Date: Wed, 16 Mar 2022 23:13:42 +0900 Subject: [PATCH 035/138] [zh] Update connect-applications-service.md --- .../connect-applications-service.md | 171 ++++++++---------- 1 file changed, 76 insertions(+), 95 deletions(-) diff --git a/content/zh/docs/concepts/services-networking/connect-applications-service.md b/content/zh/docs/concepts/services-networking/connect-applications-service.md index e75422a41bc9a..d24745a3f0af1 100644 --- a/content/zh/docs/concepts/services-networking/connect-applications-service.md +++ b/content/zh/docs/concepts/services-networking/connect-applications-service.md @@ -9,30 +9,25 @@ weight: 30 -## Kubernetes 连接容器模型 +## Kubernetes 连接容器的模型 既然有了一个持续运行、可复制的应用,我们就能够将它暴露到网络上。 -在讨论 Kubernetes 网络连接的方式之前,非常值得与 Docker 中 “正常” 方式的网络进行对比。 - -默认情况下,Docker 使用私有主机网络连接,只能与同在一台机器上的容器进行通信。 -为了实现容器的跨节点通信,必须在机器自己的 IP 上为这些容器分配端口,为容器进行端口转发或者代理。 -多个开发人员或是提供容器的团队之间协调端口的分配很难做到规模化,那些难以控制的集群级别的问题,都会交由用户自己去处理。 Kubernetes 假设 Pod 可与其它 Pod 通信,不管它们在哪个主机上。 -Kubernetes 给 Pod 分配属于自己的集群私有 IP 地址,所以没必要在 Pod 或映射到的容器的端口和主机端口之间显式地创建连接。 -这表明了在 Pod 内的容器都能够连接到本地的每个端口,集群中的所有 Pod 不需要通过 NAT 转换就能够互相看到。 -文档的剩余部分详述如何在一个网络模型之上运行可靠的服务。 +Kubernetes 给每一个 Pod 分配一个集群私有 IP 地址,所以没必要在 +Pod 与 Pod 之间创建连接或将容器的端口映射到主机端口。 +这意味着同一个 Pod 内的所有容器能通过 localhost 上的端口互相连通,集群中的所有 Pod +也不需要通过 NAT 转换就能够互相看到。 +本文档的剩余部分详述如何在上述网络模型之上运行可靠的服务。 -该指南使用一个简单的 Nginx server 来演示并证明谈到的概念。 +本指南使用一个简单的 Nginx 服务器来演示概念验证原型。 @@ -45,7 +40,7 @@ Create an nginx Pod, and note that it has a container port specification: ## 在集群中暴露 Pod 我们在之前的示例中已经做过,然而让我们以网络连接的视角再重做一遍。 -创建一个 Nginx Pod,并且注意,它有一个容器端口的规范: +创建一个 Nginx Pod,注意其中包含一个容器端口的规约: {{< codenew file="service/networking/run-my-nginx.yaml" >}} @@ -77,16 +72,17 @@ kubectl get pods -l run=my-nginx 
-o yaml | grep podIP ``` -应该能够通过 ssh 登录到集群中的任何一个节点上,使用 curl 也能调通所有 IP 地址。 +你应该能够通过 ssh 登录到集群中的任何一个节点上,并使用诸如 `curl` 之类的工具向这两个 IP 地址发出查询请求。 需要注意的是,容器不会使用该节点上的 80 端口,也不会使用任何特定的 NAT 规则去路由流量到 Pod 上。 -这意味着可以在同一个节点上运行多个 Pod,使用相同的容器端口,并且可以从集群中任何其他的 Pod 或节点上使用 IP 的方式访问到它们。 -像 Docker 一样,端口能够被发布到主机节点的接口上,但是出于网络模型的原因应该从根本上减少这种用法。 +这意味着可以在同一个节点上运行多个 Nginx Pod,使用相同的 `containerPort`,并且可以从集群中任何其他的 +Pod 或节点上使用 IP 的方式访问到它们。 +如果你想的话,你依然可以将宿主节点的某个端口的流量转发到 Pod 中,但是出于网络模型的原因,你不必这么做。 -如果对此好奇,可以获取更多关于 [如何实现网络模型](/zh/docs/concepts/cluster-administration/networking/#how-to-achieve-this) 的内容。 +如果对此好奇,请参考 [Kubernetes 网络模型](/zh/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)。 ## 创建 Service -我们有 Pod 在一个扁平的、集群范围的地址空间中运行 Nginx 服务,可以直接连接到这些 Pod,但如果某个节点死掉了会发生什么呢? +我们有一组在一个扁平的、集群范围的地址空间中运行 Nginx 服务的 Pod。 +理论上,你可以直接连接到这些 Pod,但如果某个节点死掉了会发生什么呢? Pod 会终止,Deployment 将创建新的 Pod,且使用不同的 IP。这正是 Service 要解决的问题。 -Kubernetes Service 从逻辑上定义了运行在集群中的一组 Pod,这些 Pod 提供了相同的功能。 +Kubernetes Service 是集群中提供相同功能的一组 Pod 的抽象表达。 当每个 Service 创建时,会被分配一个唯一的 IP 地址(也称为 clusterIP)。 -这个 IP 地址与一个 Service 的生命周期绑定在一起,当 Service 存在的时候它也不会改变。 +这个 IP 地址与 Service 的生命周期绑定在一起,只要 Service 存在,它就不会改变。 可以配置 Pod 使它与 Service 进行通信,Pod 知道与 Service 通信将被自动地负载均衡到该 Service 中的某些 Pod 上。 可以使用 `kubectl expose` 命令为 2个 Nginx 副本创建一个 Service: @@ -120,7 +117,7 @@ service/my-nginx exposed This is equivalent to `kubectl apply -f` the following yaml: --> -这等价于使用 `kubectl create -f` 命令创建,对应如下的 yaml 文件: +这等价于使用 `kubectl create -f` 命令及如下的 yaml 文件创建: {{< codenew file="service/networking/nginx-svc.yaml" >}} @@ -134,11 +131,11 @@ View [Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/ API object to see the list of supported fields in service definition. Check your Service: --> -上述规约将创建一个 Service,对应具有标签 `run: my-nginx` 的 Pod,目标 TCP 端口 80, -并且在一个抽象的 Service 端口(`targetPort`:容器接收流量的端口;`port`:抽象的 Service -端口,可以使任何其它 Pod 访问该 Service 的端口)上暴露。 -查看 [Service API 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) -了解 Service 定义支持的字段列表。 +上述规约将创建一个 Service,该 Service 会将所有具有标签 `run: my-nginx` 的 Pod 的 TCP +80 端口暴露到一个抽象的 Service 端口上(`targetPort`:容器接收流量的端口;`port`:可任意取值的抽象的 Service +端口,其他 Pod 通过该端口访问 Service)。 +查看 [Service](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#service-v1-core) +API 对象以了解 Service 所能接受的字段列表。 查看你的 Service 资源: ```shell @@ -158,7 +155,7 @@ matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the Pods created in the first step: --> -正如前面所提到的,一个 Service 由一组 backend Pod 组成。这些 Pod 通过 `endpoints` 暴露出来。 +正如前面所提到的,一个 Service 由一组 Pod 提供支撑。这些 Pod 通过 `endpoints` 暴露出来。 Service Selector 将持续评估,结果被 POST 到一个名称为 `my-nginx` 的 Endpoint 对象上。 当 Pod 终止后,它会自动从 Endpoint 中移除,新的能够匹配上 Service Selector 的 Pod 将自动地被添加到 Endpoint 中。 检查该 Endpoint,注意到 IP 地址与在第一步创建的 Pod 是相同的。 @@ -194,7 +191,7 @@ never hits the wire. If you're curious about how this works you can read more about the [service proxy](/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies). --> -现在,能够从集群中任意节点上使用 curl 命令请求 Nginx Service `:` 。 +现在,你应该能够从集群中任意节点上使用 curl 命令向 `:` 发送请求以访问 Nginx Service。 注意 Service IP 完全是虚拟的,它从来没有走过网络,如果对它如何工作的原理感到好奇, 可以进一步阅读[服务代理](/zh/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) 的内容。 @@ -204,12 +201,12 @@ about the [service proxy](/docs/concepts/services-networking/service/#virtual-ip Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. 
The former works out of the box while the latter requires the -[CoreDNS cluster addon](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns). +[CoreDNS cluster addon](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns). --> ## 访问 Service -Kubernetes支持两种查找服务的主要模式: 环境变量和DNS。 前者开箱即用,而后者则需要[CoreDNS集群插件] -[CoreDNS 集群插件](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/coredns). +Kubernetes支持两种查找服务的主要模式: 环境变量和 DNS。前者开箱即用,而后者则需要 +[CoreDNS 集群插件](https://releases.k8s.io/{{< param "fullversion" >}}/cluster/addons/dns/coredns). ### 环境变量 -当 Pod 在 Node 上运行时,kubelet 会为每个活跃的 Service 添加一组环境变量。 -这会有一个顺序的问题。想了解为何,检查正在运行的 Nginx Pod 的环境变量(Pod 名称将不会相同): +当 Pod 在节点上运行时,kubelet 会针对每个活跃的 Service 为 Pod 添加一组环境变量。 +这就引入了一个顺序的问题。为解释这个问题,让我们先检查正在运行的 Nginx Pod +的环境变量(你的环境中的 Pod 名称将会与下面示例命令中的不同): ```shell kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE @@ -254,10 +252,11 @@ replicas. This will give you scheduler-level Service spreading of your Pods variables: --> -注意,还没有谈及到 Service。这是因为创建副本先于 Service。 -这样做的另一个缺点是,调度器可能在同一个机器上放置所有 Pod,如果该机器宕机则所有的 Service 都会挂掉。 -正确的做法是,我们杀掉 2 个 Pod,等待 Deployment 去创建它们。 -这次 Service 会 *先于* 副本存在。这将实现调度器级别的 Service,能够使 Pod 分散创建(假定所有的 Node 都具有同样的容量),以及正确的环境变量: +能看到环境变量中并没有你创建的 Service 相关的值。这是因为副本的创建先于 Service。 +这样做的另一个缺点是,调度器可能会将所有 Pod 部署到同一台机器上,如果该机器宕机则整个 Service 都会离线。 +要改正的话,我们可以先终止这 2 个 Pod,然后等待 Deployment 去重新创建它们。 +这次 Service 会*先于*副本存在。这将实现调度器级别的 Pod 按 Service +分布(假定所有的节点都具有同样的容量),并提供正确的环境变量: ```shell kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2; @@ -274,7 +273,7 @@ my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 You may notice that the pods have different names, since they are killed and recreated. --> -可能注意到,Pod 具有不同的名称,因为它们被杀掉后并被重新创建。 +你可能注意到,Pod 具有不同的名称,这是因为它们是被重新创建的。 ```shell kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE @@ -293,8 +292,8 @@ KUBERNETES_SERVICE_PORT_HTTPS=443 Kubernetes offers a DNS cluster addon Service that automatically assigns dns names to other Services. You can check if it's running on your cluster: --> -Kubernetes 提供了一个 DNS 插件 Service,它使用 skydns 自动为其它 Service 指派 DNS 名字。 -如果它在集群中处于运行状态,可以通过如下命令来检查: +Kubernetes 提供了一个自动为其它 Service 分配 DNS 名字的 DNS 插件 Service。 +你可以通过如下命令检查它是否在工作: ```shell kubectl get services kube-dns --namespace=kube-system @@ -305,18 +304,15 @@ kube-dns ClusterIP 10.0.0.10 53/UDP,53/TCP 8m ``` -如果没有在运行,可以[启用它](https://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/kube-dns/README.md#how-do-i-configure-it)。 -本段剩余的内容,将假设已经有一个 Service,它具有一个长久存在的 IP(my-nginx), -一个为该 IP 指派名称的 DNS 服务器。 这里我们使用 CoreDNS 集群插件(应用名为 `kube-dns`), -所以可以通过标准做法,使在集群中的任何 Pod 都能与该 Service 通信(例如:`gethostbyname()`)。 +本段剩余的内容假设你已经有一个拥有持久 IP 地址的 Service(my-nginx),以及一个为其 +IP 分配名称的 DNS 服务器。 这里我们使用 CoreDNS 集群插件(应用名为 `kube-dns`), +所以在集群中的任何 Pod 中,你都可以使用标准方法(例如:`gethostbyname()`)与该 Service 通信。 如果 CoreDNS 没有在运行,你可以参照 -[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) 或者 -[安装 CoreDNS](/zh/docs/tasks/administer-cluster/coredns/#installing-coredns) 来启用它。 +[CoreDNS README](https://github.com/coredns/deployment/tree/master/kubernetes) +或者[安装 CoreDNS](/zh/docs/tasks/administer-cluster/coredns/#installing-coredns) 来启用它。 让我们运行另一个 curl 应用来进行测试: ```shell @@ -351,21 +347,21 @@ Till now we have only accessed the nginx server from within the cluster. 
Before * An nginx server configured to use the certificates * A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods -You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short: +You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short: --> ## 保护 Service {#securing-the-service} 到现在为止,我们只在集群内部访问了 Nginx 服务器。在将 Service 暴露到因特网之前,我们希望确保通信信道是安全的。 -为实现这一目的,可能需要: +为实现这一目的,需要: -* 用于 HTTPS 的自签名证书(除非已经有了一个识别身份的证书) +* 用于 HTTPS 的自签名证书(除非已经有了一个身份证书) * 使用证书配置的 Nginx 服务器 -* 使证书可以访问 Pod 的 [Secret](/zh/docs/concepts/configuration/secret/) +* 使 Pod 可以访问证书的 [Secret](/zh/docs/concepts/configuration/secret/) -你可以从 [Nginx https 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/) -获取所有上述内容。你需要安装 go 和 make 工具。如果你不想安装这些软件,可以按照 -后文所述的手动执行步骤执行操作。简要过程如下: +你可以从 +[Nginx https 示例](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/)获取所有上述内容。 +你需要安装 go 和 make 工具。如果你不想安装这些软件,可以按照后文所述的手动执行步骤执行操作。简要过程如下: ```shell make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt @@ -385,19 +381,6 @@ nginxsecret kubernetes.io/tls 2 1m 以下是 configmap: ```shell @@ -420,9 +403,9 @@ Following are the manual steps to follow in case you run into problems running m 以下是你在运行 make 时遇到问题时要遵循的手动步骤(例如,在 Windows 上): ```shell -# Create a public private key pair +# 创建公钥和相对应的私钥 openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx" -# Convert the keys to base64 encoding +# 对密钥实施 base64 编码 cat /d/tmp/nginx.crt | base64 cat /d/tmp/nginx.key | base64 ``` @@ -447,7 +430,7 @@ data: -现在使用文件创建 Secrets: +现在使用文件创建 Secret: ```shell kubectl apply -f nginxsecrets.yaml @@ -462,7 +445,7 @@ nginxsecret kubernetes.io/tls 2 1m -现在修改 nginx 副本,启动一个使用在秘钥中的证书的 HTTPS 服务器和 Service,暴露端口(80 和 443): +现在修改 nginx 副本以启动一个使用 Secret 中的证书的 HTTPS 服务器以及相应的用于暴露其端口(80 和 443)的 Service: {{< codenew file="service/networking/nginx-secure-app.yaml" >}} @@ -470,7 +453,7 @@ Now modify your nginx replicas to start an https server using the certificate in Noteworthy points about the nginx-secure-app manifest: - It contains both Deployment and Service specification in the same file. -- The [nginx server](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/default.conf) +- The [nginx server](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/default.conf) serves HTTP traffic on port 80 and HTTPS traffic on 443, and nginx Service exposes both ports. - Each container has access to the keys through a volume mounted at `/etc/nginx/ssl`. 
@@ -478,10 +461,10 @@ Noteworthy points about the nginx-secure-app manifest: --> 关于 nginx-secure-app 清单,值得注意的几点如下: -- 它在相同的文件中包含了 Deployment 和 Service 的规约 -- [nginx 服务器](https://github.com/kubernetes/kubernetes/tree/{{< param "githubbranch" >}}/staging/https-nginx/default.conf) - 处理 80 端口上的 HTTP 流量,以及 443 端口上的 HTTPS 流量,Nginx Service 暴露了这两个端口。 -- 每个容器访问挂载在 /etc/nginx/ssl 卷上的秘钥。这需要在 Nginx 服务器启动之前安装好。 +- 它将 Deployment 和 Service 的规约放在了同一个文件中。 +- [Nginx 服务器](https://github.com/kubernetes/examples/tree/master/staging/https-nginx/default.conf)通过 + 80 端口处理 HTTP 流量,通过 443 端口处理 HTTPS 流量,而 Nginx Service 则暴露了这两个端口。 +- 每个容器能通过挂载在 `/etc/nginx/ssl` 的卷访问秘钥。卷和密钥需要在 Nginx 服务器启动*之前*配置好。 ```shell kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml @@ -508,7 +491,7 @@ Let's test this from a pod (the same secret is being reused for simplicity, the 注意最后一步我们是如何提供 `-k` 参数执行 curl 命令的,这是因为在证书生成时, 我们不知道任何关于运行 nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。 通过创建 Service,我们连接了在证书中的 CName 与在 Service 查询时被 Pod 使用的实际 DNS 名字。 -让我们从一个 Pod 来测试(为了简化使用同一个秘钥,Pod 仅需要使用 nginx.crt 去访问 Service): +让我们从一个 Pod 来测试(为了方便,这里使用同一个 Secret,Pod 仅需要使用 nginx.crt 去访问 Service): {{< codenew file="service/networking/curlpod.yaml" >}} @@ -538,10 +521,10 @@ node has a public IP. --> ## 暴露 Service -对我们应用的某些部分,可能希望将 Service 暴露在一个外部 IP 地址上。 +对应用的某些部分,你可能希望将 Service 暴露在一个外部 IP 地址上。 Kubernetes 支持两种实现方式:NodePort 和 LoadBalancer。 -在上一段创建的 Service 使用了 `NodePort`,因此 Nginx https 副本已经就绪, -如果使用一个公网 IP,能够处理 Internet 上的流量。 +在上一段创建的 Service 使用了 `NodePort`,因此,如果你的节点有一个公网 +IP,那么 Nginx HTTPS 副本已经能够处理因特网上的流量。 ```shell kubectl get svc my-nginx -o yaml | grep nodePort -C 5 @@ -579,18 +562,18 @@ kubectl get nodes -o yaml | grep ExternalIP -C 1 type: ExternalIP allocatable: ... -$ curl https://: -k +$ curl https://: -k ...

<h1>Welcome to nginx!</h1>
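# Aside (hypothetical helper, assumes the my-nginx Service above): the node
# port used in the curl call can also be read directly via jsonpath,
# instead of grepping the Service YAML:
#   kubectl get svc my-nginx -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}'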

``` -让我们重新创建一个 Service,使用一个云负载均衡器,只需要将 `my-nginx` Service 的 `Type` -由 `NodePort` 改成 `LoadBalancer`。 +让我们重新创建一个 Service 以使用云负载均衡器。 +将 `my-nginx` Service 的 `Type` 由 `NodePort` 改成 `LoadBalancer`: ```shell kubectl edit svc my-nginx @@ -616,17 +599,15 @@ output, in fact, so you'll need to do `kubectl describe service my-nginx` to see it. You'll see something like this: --> -在 `EXTERNAL-IP` 列指定的 IP 地址是在公网上可用的。`CLUSTER-IP` 只在集群/私有云网络中可用。 +在 `EXTERNAL-IP` 列中的 IP 地址能在公网上被访问到。`CLUSTER-IP` 只能从集群/私有云网络中访问。 -注意,在 AWS 上类型 `LoadBalancer` 创建一个 ELB,它使用主机名(比较长),而不是 IP。 -它太长以至于不能适配标准 `kubectl get svc` 的输出,事实上需要通过执行 `kubectl describe service my-nginx` 命令来查看它。 +注意,在 AWS 上,类型 `LoadBalancer` 的服务会创建一个 ELB,且 ELB 使用主机名(比较长),而不是 IP。 +ELB 的主机名太长以至于不能适配标准 `kubectl get svc` 的输出,所以需要通过执行 +`kubectl describe service my-nginx` 命令来查看它。 可以看到类似如下内容: ```shell kubectl describe service my-nginx -``` - -``` ... LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com ... From a0cdf50ac68dbbed37197231e3d1bb0eba493bc2 Mon Sep 17 00:00:00 2001 From: Guilhem Lettron Date: Thu, 17 Mar 2022 16:01:40 +0100 Subject: [PATCH 036/138] fix: YAML indentation and highlighting --- .../update-api-object-kubectl-patch.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md index e6e33efe753ef..7b38703c741d0 100644 --- a/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md +++ b/content/en/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md @@ -189,11 +189,11 @@ kubectl get deployment patch-demo --output yaml The output shows that the PodSpec in the Deployment has only one Toleration: -```shell +```yaml tolerations: - - effect: NoSchedule - key: disktype - value: ssd +- effect: NoSchedule + key: disktype + value: ssd ``` Notice that the `tolerations` list in the PodSpec was replaced, not merged. 
This is because From 0e4113ad5140d659f95e3654c0e27a8c85567f60 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Fri, 18 Mar 2022 17:22:00 +0800 Subject: [PATCH 037/138] [zh] Delete kube-scheduler-config.v1beta1.md --- .../kube-scheduler-config.v1beta1.md | 2156 ----------------- 1 file changed, 2156 deletions(-) delete mode 100644 content/zh/docs/reference/config-api/kube-scheduler-config.v1beta1.md diff --git a/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta1.md b/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta1.md deleted file mode 100644 index ac32e65674cb5..0000000000000 --- a/content/zh/docs/reference/config-api/kube-scheduler-config.v1beta1.md +++ /dev/null @@ -1,2156 +0,0 @@ ---- -title: kube-scheduler Configuration (v1beta1) -content_type: tool-reference -package: kubescheduler.config.k8s.io/v1 -auto_generated: true ---- - - -## Resource Types - - -- [Policy](#kubescheduler-config-k8s-io-v1-Policy) -- [DefaultPreemptionArgs](#kubescheduler-config-k8s-io-v1beta1-DefaultPreemptionArgs) -- [InterPodAffinityArgs](#kubescheduler-config-k8s-io-v1beta1-InterPodAffinityArgs) -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration) -- [NodeAffinityArgs](#kubescheduler-config-k8s-io-v1beta1-NodeAffinityArgs) -- [NodeLabelArgs](#kubescheduler-config-k8s-io-v1beta1-NodeLabelArgs) -- [NodeResourcesFitArgs](#kubescheduler-config-k8s-io-v1beta1-NodeResourcesFitArgs) -- [NodeResourcesLeastAllocatedArgs](#kubescheduler-config-k8s-io-v1beta1-NodeResourcesLeastAllocatedArgs) -- [NodeResourcesMostAllocatedArgs](#kubescheduler-config-k8s-io-v1beta1-NodeResourcesMostAllocatedArgs) -- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta1-PodTopologySpreadArgs) -- [RequestedToCapacityRatioArgs](#kubescheduler-config-k8s-io-v1beta1-RequestedToCapacityRatioArgs) -- [ServiceAffinityArgs](#kubescheduler-config-k8s-io-v1beta1-ServiceAffinityArgs) -- [VolumeBindingArgs](#kubescheduler-config-k8s-io-v1beta1-VolumeBindingArgs) - - - - -## `Policy` {#kubescheduler-config-k8s-io-v1-Policy} - - - - - -Policy describes a struct for a policy resource used in api. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1
kind
string
Policy
predicates [Required]
-[]PredicatePolicy -
- Holds the information to configure the fit predicate functions
priorities [Required]
-[]PriorityPolicy -
- Holds the information to configure the priority functions
extenders [Required]
-[]LegacyExtender -
- Holds the information to communicate with the extender(s)
hardPodAffinitySymmetricWeight [Required]
-int32 -
- RequiredDuringScheduling affinity is not symmetric, but there is an implicit PreferredDuringScheduling affinity rule -corresponding to every RequiredDuringScheduling affinity rule. -HardPodAffinitySymmetricWeight represents the weight of implicit PreferredDuringScheduling affinity rule, in the range 1-100.
alwaysCheckAllPredicates [Required]
-bool -
- When AlwaysCheckAllPredicates is set to true, scheduler checks all -the configured predicates even after one or more of them fails. -When the flag is set to false, scheduler skips checking the rest -of the predicates after it finds one predicate that failed.
- - - -## `ExtenderManagedResource` {#kubescheduler-config-k8s-io-v1-ExtenderManagedResource} - - - - -**Appears in:** - -- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender) - -- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender) - - -ExtenderManagedResource describes the arguments of extended resources -managed by an extender. - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Name is the extended resource name.
ignoredByScheduler [Required]
-bool -
- IgnoredByScheduler indicates whether kube-scheduler should ignore this -resource when applying predicates.
- - - -## `ExtenderTLSConfig` {#kubescheduler-config-k8s-io-v1-ExtenderTLSConfig} - - - - -**Appears in:** - -- [Extender](#kubescheduler-config-k8s-io-v1beta1-Extender) - -- [LegacyExtender](#kubescheduler-config-k8s-io-v1-LegacyExtender) - - -ExtenderTLSConfig contains settings to enable TLS with extender - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
insecure [Required]
-bool -
- Server should be accessed without verifying the TLS certificate. For testing only.
serverName [Required]
-string -
- ServerName is passed to the server for SNI and is used in the client to check server -certificates against. If ServerName is empty, the hostname used to contact the -server is used.
certFile [Required]
-string -
- Server requires TLS client certificate authentication
keyFile [Required]
-string -
- Server requires TLS client certificate authentication
caFile [Required]
-string -
- Trusted root certificates for server
certData [Required]
-[]byte -
- CertData holds PEM-encoded bytes (typically read from a client certificate file). -CertData takes precedence over CertFile
keyData [Required]
-[]byte -
- KeyData holds PEM-encoded bytes (typically read from a client certificate key file). -KeyData takes precedence over KeyFile
caData [Required]
-[]byte -
- CAData holds PEM-encoded bytes (typically read from a root certificates bundle). -CAData takes precedence over CAFile
- - - -## `LabelPreference` {#kubescheduler-config-k8s-io-v1-LabelPreference} - - - - -**Appears in:** - -- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument) - - -LabelPreference holds the parameters that are used to configure the corresponding priority function - - - - - - - - - - - - - - - - - - -
FieldDescription
label [Required]
-string -
- Used to identify node "groups"
presence [Required]
-bool -
- This is a boolean flag -If true, higher priority is given to nodes that have the label -If false, higher priority is given to nodes that do not have the label
- - - -## `LabelsPresence` {#kubescheduler-config-k8s-io-v1-LabelsPresence} - - - - -**Appears in:** - -- [PredicateArgument](#kubescheduler-config-k8s-io-v1-PredicateArgument) - - -LabelsPresence holds the parameters that are used to configure the corresponding predicate in scheduler policy configuration. - - - - - - - - - - - - - - - - - - -
FieldDescription
labels [Required]
-[]string -
- The list of labels that identify node "groups" -All of the labels should be either present (or absent) for the node to be considered a fit for hosting the pod
presence [Required]
-bool -
- The boolean flag that indicates whether the labels should be present or absent from the node
- - - -## `LegacyExtender` {#kubescheduler-config-k8s-io-v1-LegacyExtender} - - - - -**Appears in:** - -- [Policy](#kubescheduler-config-k8s-io-v1-Policy) - - -LegacyExtender holds the parameters used to communicate with the extender. If a verb is unspecified/empty, -it is assumed that the extender chose not to provide that extension. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
urlPrefix [Required]
-string -
- URLPrefix at which the extender is available
filterVerb [Required]
-string -
- Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender.
preemptVerb [Required]
-string -
- Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to extender.
prioritizeVerb [Required]
-string -
- Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender.
weight [Required]
-int64 -
- The numeric multiplier for the node scores that the prioritize call generates. -The weight should be a positive integer
bindVerb [Required]
-string -
- Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender. -If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender -can implement this function.
enableHttps [Required]
-bool -
- EnableHTTPS specifies whether https should be used to communicate with the extender
tlsConfig [Required]
-ExtenderTLSConfig -
- TLSConfig specifies the transport layer security config
httpTimeout [Required]
-time.Duration -
- HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. Prioritize -timeout is ignored, k8s/other extenders priorities are used to select the node.
nodeCacheCapable [Required]
-bool -
- NodeCacheCapable specifies that the extender is capable of caching node information, -so the scheduler should only send minimal information about the eligible nodes -assuming that the extender already cached full details of all nodes in the cluster
managedResources
-[]ExtenderManagedResource -
- ManagedResources is a list of extended resources that are managed by -this extender. -- A pod will be sent to the extender on the Filter, Prioritize and Bind - (if the extender is the binder) phases iff the pod requests at least - one of the extended resources in this list. If empty or unspecified, - all pods will be sent to this extender. -- If IgnoredByScheduler is set to true for a resource, kube-scheduler - will skip checking the resource in predicates.
ignorable [Required]
-bool -
- Ignorable specifies if the extender is ignorable, i.e. scheduling should not -fail when the extender returns an error or is not reachable.
- - - -## `PredicateArgument` {#kubescheduler-config-k8s-io-v1-PredicateArgument} - - - - -**Appears in:** - -- [PredicatePolicy](#kubescheduler-config-k8s-io-v1-PredicatePolicy) - - -PredicateArgument represents the arguments to configure predicate functions in scheduler policy configuration. -Only one of its members may be specified - - - - - - - - - - - - - - - - - - -
FieldDescription
serviceAffinity [Required]
-ServiceAffinity -
- The predicate that provides affinity for pods belonging to a service -It uses a label to identify nodes that belong to the same "group"
labelsPresence [Required]
-LabelsPresence -
- The predicate that checks whether a particular node has a certain label -defined or not, regardless of value
- - - -## `PredicatePolicy` {#kubescheduler-config-k8s-io-v1-PredicatePolicy} - - - - -**Appears in:** - -- [Policy](#kubescheduler-config-k8s-io-v1-Policy) - - -PredicatePolicy describes a struct of a predicate policy. - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Identifier of the predicate policy -For a custom predicate, the name can be user-defined -For the Kubernetes provided predicates, the name is the identifier of the pre-defined predicate
argument [Required]
-PredicateArgument -
- Holds the parameters to configure the given predicate
- - - -## `PriorityArgument` {#kubescheduler-config-k8s-io-v1-PriorityArgument} - - - - -**Appears in:** - -- [PriorityPolicy](#kubescheduler-config-k8s-io-v1-PriorityPolicy) - - -PriorityArgument represents the arguments to configure priority functions in scheduler policy configuration. -Only one of its members may be specified - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
serviceAntiAffinity [Required]
-ServiceAntiAffinity -
- The priority function that ensures a good spread (anti-affinity) for pods belonging to a service -It uses a label to identify nodes that belong to the same "group"
labelPreference [Required]
-LabelPreference -
- The priority function that checks whether a particular node has a certain label -defined or not, regardless of value
requestedToCapacityRatioArguments [Required]
-RequestedToCapacityRatioArguments -
- The RequestedToCapacityRatio priority function is parametrized with function shape.
- - - -## `PriorityPolicy` {#kubescheduler-config-k8s-io-v1-PriorityPolicy} - - - - -**Appears in:** - -- [Policy](#kubescheduler-config-k8s-io-v1-Policy) - - -PriorityPolicy describes a struct of a priority policy. - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Identifier of the priority policy -For a custom priority, the name can be user-defined -For the Kubernetes provided priority functions, the name is the identifier of the pre-defined priority function
weight [Required]
-int64 -
- The numeric multiplier for the node scores that the priority function generates -The weight should be non-zero and can be a positive or a negative integer
argument [Required]
-PriorityArgument -
- Holds the parameters to configure the given priority function
- - - -## `RequestedToCapacityRatioArguments` {#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments} - - - - -**Appears in:** - -- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument) - - -RequestedToCapacityRatioArguments holds arguments specific to RequestedToCapacityRatio priority function. - - - - - - - - - - - - - - - - - - -
FieldDescription
shape [Required]
-[]UtilizationShapePoint -
- Array of point defining priority function shape.
resources [Required]
-[]ResourceSpec -
- No description provided. -
- - - -## `ResourceSpec` {#kubescheduler-config-k8s-io-v1-ResourceSpec} - - - - -**Appears in:** - -- [RequestedToCapacityRatioArguments](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments) - - -ResourceSpec represents single resource and weight for bin packing of priority RequestedToCapacityRatioArguments. - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Name of the resource to be managed by RequestedToCapacityRatio function.
weight [Required]
-int64 -
- Weight of the resource.
- - - -## `ServiceAffinity` {#kubescheduler-config-k8s-io-v1-ServiceAffinity} - - - - -**Appears in:** - -- [PredicateArgument](#kubescheduler-config-k8s-io-v1-PredicateArgument) - - -ServiceAffinity holds the parameters that are used to configure the corresponding predicate in scheduler policy configuration. - - - - - - - - - - - - - -
FieldDescription
labels [Required]
-[]string -
- The list of labels that identify node "groups" -All of the labels should match for the node to be considered a fit for hosting the pod
- - - -## `ServiceAntiAffinity` {#kubescheduler-config-k8s-io-v1-ServiceAntiAffinity} - - - - -**Appears in:** - -- [PriorityArgument](#kubescheduler-config-k8s-io-v1-PriorityArgument) - - -ServiceAntiAffinity holds the parameters that are used to configure the corresponding priority function - - - - - - - - - - - - - -
FieldDescription
label [Required]
-string -
- Used to identify node "groups"
- - - -## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1-UtilizationShapePoint} - - - - -**Appears in:** - -- [RequestedToCapacityRatioArguments](#kubescheduler-config-k8s-io-v1-RequestedToCapacityRatioArguments) - - -UtilizationShapePoint represents single point of priority function shape. - - - - - - - - - - - - - - - - - - -
FieldDescription
utilization [Required]
-int32 -
- Utilization (x axis). Valid values are 0 to 100. Fully utilized node maps to 100.
score [Required]
-int32 -
- Score assigned to given utilization (y axis). Valid values are 0 to 10.
- - - - - -## `ClientConnectionConfiguration` {#ClientConnectionConfiguration} - - - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration) - - -ClientConnectionConfiguration contains details for constructing a client. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
kubeconfig [Required]
-string -
- kubeconfig is the path to a KubeConfig file.
acceptContentTypes [Required]
-string -
- acceptContentTypes defines the Accept header sent by clients when connecting to a server, overriding the -default value of 'application/json'. This field will control all connections to the server used by a particular -client.
contentType [Required]
-string -
- contentType is the content type used when sending data to the server from this client.
qps [Required]
-float32 -
- qps controls the number of queries per second allowed for this connection.
burst [Required]
-int32 -
- burst allows extra queries to accumulate when a client is exceeding its rate.
- -## `DebuggingConfiguration` {#DebuggingConfiguration} - - - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration) - - -DebuggingConfiguration holds configuration for Debugging related features. - - - - - - - - - - - - - - - - - - -
FieldDescription
enableProfiling [Required]
-bool -
- enableProfiling enables profiling via web interface host:port/debug/pprof/
enableContentionProfiling [Required]
-bool -
- enableContentionProfiling enables lock contention profiling, if -enableProfiling is true.
- -## `LeaderElectionConfiguration` {#LeaderElectionConfiguration} - - - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration) - - -LeaderElectionConfiguration defines the configuration of leader election -clients for components that can run with leader election enabled. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
leaderElect [Required]
-bool -
- leaderElect enables a leader election client to gain leadership -before executing the main loop. Enable this when running replicated -components for high availability.
leaseDuration [Required]
-meta/v1.Duration -
- leaseDuration is the duration that non-leader candidates will wait -after observing a leadership renewal until attempting to acquire -leadership of a led but unrenewed leader slot. This is effectively the -maximum duration that a leader can be stopped before it is replaced -by another candidate. This is only applicable if leader election is -enabled.
renewDeadline [Required]
-meta/v1.Duration -
- renewDeadline is the interval between attempts by the acting master to -renew a leadership slot before it stops leading. This must be less -than or equal to the lease duration. This is only applicable if leader -election is enabled.
retryPeriod [Required]
-meta/v1.Duration -
- retryPeriod is the duration the clients should wait between attempting -acquisition and renewal of a leadership. This is only applicable if -leader election is enabled.
resourceLock [Required]
-string -
- resourceLock indicates the resource object type that will be used to lock -during leader election cycles.
resourceName [Required]
-string -
- resourceName indicates the name of resource object that will be used to lock -during leader election cycles.
resourceNamespace [Required]
-string -
- resourceName indicates the namespace of resource object that will be used to lock -during leader election cycles.
- -## `LoggingConfiguration` {#LoggingConfiguration} - - - - -**Appears in:** - -- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) - - -LoggingConfiguration contains logging options -Refer [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information. - - - - - - - - - - - - - - - - - - -
FieldDescription
format [Required]
-string -
- Format Flag specifies the structure of log messages. -default value of format is `text`
sanitization [Required]
-bool -
- [Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). -Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production.`)
- - - - -## `DefaultPreemptionArgs` {#kubescheduler-config-k8s-io-v1beta1-DefaultPreemptionArgs} - - - - - -DefaultPreemptionArgs holds arguments used to configure the -DefaultPreemption plugin. - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
DefaultPreemptionArgs
minCandidateNodesPercentage [Required]
-int32 -
- MinCandidateNodesPercentage is the minimum number of candidates to -shortlist when dry running preemption as a percentage of number of nodes. -Must be in the range [0, 100]. Defaults to 10% of the cluster size if -unspecified.
minCandidateNodesAbsolute [Required]
-int32 -
- MinCandidateNodesAbsolute is the absolute minimum number of candidates to -shortlist. The likely number of candidates enumerated for dry running -preemption is given by the formula: -numCandidates = max(numNodes ∗ minCandidateNodesPercentage, minCandidateNodesAbsolute) -We say "likely" because there are other factors such as PDB violations -that play a role in the number of candidates shortlisted. Must be at least -0 nodes. Defaults to 100 nodes if unspecified.
- - - -## `InterPodAffinityArgs` {#kubescheduler-config-k8s-io-v1beta1-InterPodAffinityArgs} - - - - - -InterPodAffinityArgs holds arguments used to configure the InterPodAffinity plugin. - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
InterPodAffinityArgs
hardPodAffinityWeight [Required]
-int32 -
- HardPodAffinityWeight is the scoring weight for existing pods with a -matching hard affinity to the incoming pod.
- - - -## `KubeSchedulerConfiguration` {#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration} - - - - - -KubeSchedulerConfiguration configures a scheduler - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
KubeSchedulerConfiguration
parallelism [Required]
-int32 -
- Parallelism defines the amount of parallelism in algorithms for scheduling a Pods. Must be greater than 0. Defaults to 16
leaderElection [Required]
-LeaderElectionConfiguration -
- LeaderElection defines the configuration of leader election client.
clientConnection [Required]
-ClientConnectionConfiguration -
- ClientConnection specifies the kubeconfig file and client connection -settings for the proxy server to use when communicating with the apiserver.
healthzBindAddress [Required]
-string -
- HealthzBindAddress is the IP address and port for the health check server to serve on, -defaulting to 0.0.0.0:10251
metricsBindAddress [Required]
-string -
- MetricsBindAddress is the IP address and port for the metrics server to -serve on, defaulting to 0.0.0.0:10251.
DebuggingConfiguration [Required]
-DebuggingConfiguration -
(Members of DebuggingConfiguration are embedded into this type.) - DebuggingConfiguration holds configuration for Debugging related features -TODO: We might wanna make this a substruct like Debugging componentbaseconfigv1alpha1.DebuggingConfiguration
percentageOfNodesToScore [Required]
-int32 -
- PercentageOfNodesToScore is the percentage of all nodes that once found feasible -for running a pod, the scheduler stops its search for more feasible nodes in -the cluster. This helps improve scheduler's performance. Scheduler always tries to find -at least "minFeasibleNodesToFind" feasible nodes no matter what the value of this flag is. -Example: if the cluster size is 500 nodes and the value of this flag is 30, -then scheduler stops finding further feasible nodes once it finds 150 feasible ones. -When the value is 0, default percentage (5%--50% based on the size of the cluster) of the -nodes will be scored.
podInitialBackoffSeconds [Required]
-int64 -
- PodInitialBackoffSeconds is the initial backoff for unschedulable pods. -If specified, it must be greater than 0. If this value is null, the default value (1s) -will be used.
podMaxBackoffSeconds [Required]
-int64 -
- PodMaxBackoffSeconds is the max backoff for unschedulable pods. -If specified, it must be greater than podInitialBackoffSeconds. If this value is null, -the default value (10s) will be used.
profiles [Required]
-[]KubeSchedulerProfile -
- Profiles are scheduling profiles that kube-scheduler supports. Pods can -choose to be scheduled under a particular profile by setting its associated -scheduler name. Pods that don't specify any scheduler name are scheduled -with the "default-scheduler" profile, if present here.
extenders [Required]
-[]Extender -
- Extenders are the list of scheduler extenders, each holding the values of how to communicate -with the extender. These extenders are shared by all scheduler profiles.
- - - -## `NodeAffinityArgs` {#kubescheduler-config-k8s-io-v1beta1-NodeAffinityArgs} - - - - - -NodeAffinityArgs holds arguments to configure the NodeAffinity plugin. - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
NodeAffinityArgs
addedAffinity
-core/v1.NodeAffinity -
- AddedAffinity is applied to all Pods additionally to the NodeAffinity -specified in the PodSpec. That is, Nodes need to satisfy AddedAffinity -AND .spec.NodeAffinity. AddedAffinity is empty by default (all Nodes -match). -When AddedAffinity is used, some Pods with affinity requirements that match -a specific Node (such as Daemonset Pods) might remain unschedulable.
- - - -## `NodeLabelArgs` {#kubescheduler-config-k8s-io-v1beta1-NodeLabelArgs} - - - - - -NodeLabelArgs holds arguments used to configure the NodeLabel plugin. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
NodeLabelArgs
presentLabels [Required]
-[]string -
- PresentLabels should be present for the node to be considered a fit for hosting the pod
absentLabels [Required]
-[]string -
- AbsentLabels should be absent for the node to be considered a fit for hosting the pod
presentLabelsPreference [Required]
-[]string -
- Nodes that have labels in the list will get a higher score.
absentLabelsPreference [Required]
-[]string -
- Nodes that don't have labels in the list will get a higher score.
- - - -## `NodeResourcesFitArgs` {#kubescheduler-config-k8s-io-v1beta1-NodeResourcesFitArgs} - - - - - -NodeResourcesFitArgs holds arguments used to configure the NodeResourcesFit plugin. - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
NodeResourcesFitArgs
ignoredResources [Required]
-[]string -
- IgnoredResources is the list of resources that NodeResources fit filter -should ignore.
ignoredResourceGroups [Required]
-[]string -
- IgnoredResourceGroups defines the list of resource groups that NodeResources fit filter should ignore. -e.g. if group is ["example.com"], it will ignore all resource names that begin -with "example.com", such as "example.com/aaa" and "example.com/bbb". -A resource group name can't contain '/'.
- - - -## `NodeResourcesLeastAllocatedArgs` {#kubescheduler-config-k8s-io-v1beta1-NodeResourcesLeastAllocatedArgs} - - - - - -NodeResourcesLeastAllocatedArgs holds arguments used to configure NodeResourcesLeastAllocated plugin. - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
NodeResourcesLeastAllocatedArgs
resources [Required]
-[]ResourceSpec -
- Resources to be managed, if no resource is provided, default resource set with both -the weight of "cpu" and "memory" set to "1" will be applied. -Resource with "0" weight will not accountable for the final score.
- - - -## `NodeResourcesMostAllocatedArgs` {#kubescheduler-config-k8s-io-v1beta1-NodeResourcesMostAllocatedArgs} - - - - - -NodeResourcesMostAllocatedArgs holds arguments used to configure NodeResourcesMostAllocated plugin. - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
NodeResourcesMostAllocatedArgs
resources [Required]
-[]ResourceSpec -
- Resources to be managed, if no resource is provided, default resource set with both -the weight of "cpu" and "memory" set to "1" will be applied. -Resource with "0" weight will not accountable for the final score.
- - - -## `PodTopologySpreadArgs` {#kubescheduler-config-k8s-io-v1beta1-PodTopologySpreadArgs} - - - - - -PodTopologySpreadArgs holds arguments used to configure the PodTopologySpread plugin. - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
PodTopologySpreadArgs
defaultConstraints
-[]core/v1.TopologySpreadConstraint -
- DefaultConstraints defines topology spread constraints to be applied to -Pods that don't define any in `pod.spec.topologySpreadConstraints`. -`.defaultConstraints[∗].labelSelectors` must be empty, as they are -deduced from the Pod's membership to Services, ReplicationControllers, -ReplicaSets or StatefulSets. -When not empty, .defaultingType must be "List".
defaultingType
-PodTopologySpreadConstraintsDefaulting -
- DefaultingType determines how .defaultConstraints are deduced. Can be one -of "System" or "List". - -- "System": Use kubernetes defined constraints that spread Pods among - Nodes and Zones. -- "List": Use constraints defined in .defaultConstraints. - -Defaults to "List" if feature gate DefaultPodTopologySpread is disabled -and to "System" if enabled.
- - - -## `RequestedToCapacityRatioArgs` {#kubescheduler-config-k8s-io-v1beta1-RequestedToCapacityRatioArgs} - - - - - -RequestedToCapacityRatioArgs holds arguments used to configure RequestedToCapacityRatio plugin. - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
RequestedToCapacityRatioArgs
shape [Required]
-[]UtilizationShapePoint -
- Points defining priority function shape
resources [Required]
-[]ResourceSpec -
- Resources to be managed
- - - -## `ServiceAffinityArgs` {#kubescheduler-config-k8s-io-v1beta1-ServiceAffinityArgs} - - - - - -ServiceAffinityArgs holds arguments used to configure the ServiceAffinity plugin. - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
ServiceAffinityArgs
affinityLabels [Required]
-[]string -
- AffinityLabels are homogeneous for pods that are scheduled to a node. -(i.e. it returns true IFF this pod can be added to this node such that all other pods in -the same service are running on nodes with the exact same values for Labels).
antiAffinityLabelsPreference [Required]
-[]string -
- AntiAffinityLabelsPreference are the labels to consider for service anti affinity scoring.
- - - -## `VolumeBindingArgs` {#kubescheduler-config-k8s-io-v1beta1-VolumeBindingArgs} - - - - - -VolumeBindingArgs holds arguments used to configure the VolumeBinding plugin. - - - - - - - - - - - - - - - - - -
FieldDescription
apiVersion
string
kubescheduler.config.k8s.io/v1beta1
kind
string
VolumeBindingArgs
bindTimeoutSeconds [Required]
-int64 -
- BindTimeoutSeconds is the timeout in seconds in volume binding operation. -Value must be non-negative integer. The value zero indicates no waiting. -If this value is nil, the default value (600) will be used.
- - - -## `Extender` {#kubescheduler-config-k8s-io-v1beta1-Extender} - - - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration) - - -Extender holds the parameters used to communicate with the extender. If a verb is unspecified/empty, -it is assumed that the extender chose not to provide that extension. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
urlPrefix [Required]
-string -
- URLPrefix at which the extender is available
filterVerb [Required]
-string -
- Verb for the filter call, empty if not supported. This verb is appended to the URLPrefix when issuing the filter call to extender.
preemptVerb [Required]
-string -
- Verb for the preempt call, empty if not supported. This verb is appended to the URLPrefix when issuing the preempt call to extender.
prioritizeVerb [Required]
-string -
- Verb for the prioritize call, empty if not supported. This verb is appended to the URLPrefix when issuing the prioritize call to extender.
weight [Required]
-int64 -
- The numeric multiplier for the node scores that the prioritize call generates. -The weight should be a positive integer
bindVerb [Required]
-string -
- Verb for the bind call, empty if not supported. This verb is appended to the URLPrefix when issuing the bind call to extender. -If this method is implemented by the extender, it is the extender's responsibility to bind the pod to apiserver. Only one extender -can implement this function.
enableHTTPS [Required]
-bool -
- EnableHTTPS specifies whether https should be used to communicate with the extender
tlsConfig [Required]
-ExtenderTLSConfig -
- TLSConfig specifies the transport layer security config
httpTimeout [Required]
-meta/v1.Duration -
- HTTPTimeout specifies the timeout duration for a call to the extender. Filter timeout fails the scheduling of the pod. Prioritize -timeout is ignored, k8s/other extenders priorities are used to select the node.
nodeCacheCapable [Required]
-bool -
- NodeCacheCapable specifies that the extender is capable of caching node information, -so the scheduler should only send minimal information about the eligible nodes -assuming that the extender already cached full details of all nodes in the cluster
managedResources
-[]ExtenderManagedResource -
- ManagedResources is a list of extended resources that are managed by -this extender. -- A pod will be sent to the extender on the Filter, Prioritize and Bind - (if the extender is the binder) phases iff the pod requests at least - one of the extended resources in this list. If empty or unspecified, - all pods will be sent to this extender. -- If IgnoredByScheduler is set to true for a resource, kube-scheduler - will skip checking the resource in predicates.
ignorable [Required]
-bool -
- Ignorable specifies if the extender is ignorable, i.e. scheduling should not -fail when the extender returns an error or is not reachable.
- - - -## `KubeSchedulerProfile` {#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerProfile} - - - - -**Appears in:** - -- [KubeSchedulerConfiguration](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerConfiguration) - - -KubeSchedulerProfile is a scheduling profile. - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
schedulerName [Required]
-string -
- SchedulerName is the name of the scheduler associated to this profile. -If SchedulerName matches with the pod's "spec.schedulerName", then the pod -is scheduled with this profile.
plugins [Required]
-Plugins -
- Plugins specify the set of plugins that should be enabled or disabled. -Enabled plugins are the ones that should be enabled in addition to the -default plugins. Disabled plugins are any of the default plugins that -should be disabled. -When no enabled or disabled plugin is specified for an extension point, -default plugins for that extension point will be used if there is any. -If a QueueSort plugin is specified, the same QueueSort Plugin and -PluginConfig must be specified for all profiles.
pluginConfig [Required]
-[]PluginConfig -
- PluginConfig is an optional set of custom plugin arguments for each plugin. -Omitting config args for a plugin is equivalent to using the default config -for that plugin.
- - - -## `Plugin` {#kubescheduler-config-k8s-io-v1beta1-Plugin} - - - - -**Appears in:** - -- [PluginSet](#kubescheduler-config-k8s-io-v1beta1-PluginSet) - - -Plugin specifies a plugin name and its weight when applicable. Weight is used only for Score plugins. - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Name defines the name of plugin
weight [Required]
-int32 -
- Weight defines the weight of plugin, only used for Score plugins.
- - - -## `PluginConfig` {#kubescheduler-config-k8s-io-v1beta1-PluginConfig} - - - - -**Appears in:** - -- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerProfile) - - -PluginConfig specifies arguments that should be passed to a plugin at the time of initialization. -A plugin that is invoked at multiple extension points is initialized once. Args can have arbitrary structure. -It is up to the plugin to process these Args. - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Name defines the name of plugin being configured
args [Required]
-k8s.io/apimachinery/pkg/runtime.RawExtension -
- Args defines the arguments passed to the plugins at the time of initialization. Args can have arbitrary structure.
- - - -## `PluginSet` {#kubescheduler-config-k8s-io-v1beta1-PluginSet} - - - - -**Appears in:** - -- [Plugins](#kubescheduler-config-k8s-io-v1beta1-Plugins) - - -PluginSet specifies enabled and disabled plugins for an extension point. -If an array is empty, missing, or nil, default plugins at that extension point will be used. - - - - - - - - - - - - - - - - - - -
FieldDescription
enabled [Required]
-[]Plugin -
- Enabled specifies plugins that should be enabled in addition to default plugins. -These are called after default plugins and in the same order specified here.
disabled [Required]
-[]Plugin -
- Disabled specifies default plugins that should be disabled. -When all default plugins need to be disabled, an array containing only one "∗" should be provided.
- - - -## `Plugins` {#kubescheduler-config-k8s-io-v1beta1-Plugins} - - - - -**Appears in:** - -- [KubeSchedulerProfile](#kubescheduler-config-k8s-io-v1beta1-KubeSchedulerProfile) - - -Plugins include multiple extension points. When specified, the list of plugins for -a particular extension point are the only ones enabled. If an extension point is -omitted from the config, then the default set of plugins is used for that extension point. -Enabled plugins are called in the order specified here, after default plugins. If they need to -be invoked before default plugins, default plugins must be disabled and re-enabled here in desired order. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
FieldDescription
queueSort [Required]
-PluginSet -
- QueueSort is a list of plugins that should be invoked when sorting pods in the scheduling queue.
preFilter [Required]
-PluginSet -
- PreFilter is a list of plugins that should be invoked at "PreFilter" extension point of the scheduling framework.
filter [Required]
-PluginSet -
- Filter is a list of plugins that should be invoked when filtering out nodes that cannot run the Pod.
postFilter [Required]
-PluginSet -
- PostFilter is a list of plugins that are invoked after filtering phase, no matter whether filtering succeeds or not.
preScore [Required]
-PluginSet -
- PreScore is a list of plugins that are invoked before scoring.
score [Required]
-PluginSet -
- Score is a list of plugins that should be invoked when ranking nodes that have passed the filtering phase.
reserve [Required]
-PluginSet -
- Reserve is a list of plugins invoked when reserving/unreserving resources -after a node is assigned to run the pod.
permit [Required]
-PluginSet -
- Permit is a list of plugins that control binding of a Pod. These plugins can prevent or delay binding of a Pod.
preBind [Required]
-PluginSet -
- PreBind is a list of plugins that should be invoked before a pod is bound.
bind [Required]
-PluginSet -
- Bind is a list of plugins that should be invoked at "Bind" extension point of the scheduling framework. -The scheduler call these plugins in order. Scheduler skips the rest of these plugins as soon as one returns success.
postBind [Required]
-PluginSet -
- PostBind is a list of plugins that should be invoked after a pod is successfully bound.
- - - -## `PodTopologySpreadConstraintsDefaulting` {#kubescheduler-config-k8s-io-v1beta1-PodTopologySpreadConstraintsDefaulting} - -(Alias of `string`) - - -**Appears in:** - -- [PodTopologySpreadArgs](#kubescheduler-config-k8s-io-v1beta1-PodTopologySpreadArgs) - - -PodTopologySpreadConstraintsDefaulting defines how to set default constraints -for the PodTopologySpread plugin. - - - - - -## `ResourceSpec` {#kubescheduler-config-k8s-io-v1beta1-ResourceSpec} - - - - -**Appears in:** - -- [NodeResourcesLeastAllocatedArgs](#kubescheduler-config-k8s-io-v1beta1-NodeResourcesLeastAllocatedArgs) - -- [NodeResourcesMostAllocatedArgs](#kubescheduler-config-k8s-io-v1beta1-NodeResourcesMostAllocatedArgs) - -- [RequestedToCapacityRatioArgs](#kubescheduler-config-k8s-io-v1beta1-RequestedToCapacityRatioArgs) - - -ResourceSpec represents single resource and weight for bin packing of priority RequestedToCapacityRatioArguments. - - - - - - - - - - - - - - - - - - -
FieldDescription
name [Required]
-string -
- Name of the resource to be managed by RequestedToCapacityRatio function.
weight [Required]
-int64 -
- Weight of the resource.
- - - -## `UtilizationShapePoint` {#kubescheduler-config-k8s-io-v1beta1-UtilizationShapePoint} - - - - -**Appears in:** - -- [RequestedToCapacityRatioArgs](#kubescheduler-config-k8s-io-v1beta1-RequestedToCapacityRatioArgs) - - -UtilizationShapePoint represents single point of priority function shape. - - - - - - - - - - - - - - - - - - -
FieldDescription
utilization [Required]
-int32 -
- Utilization (x axis). Valid values are 0 to 100. Fully utilized node maps to 100.
score [Required]
-int32 -
- Score assigned to given utilization (y axis). Valid values are 0 to 10.
- - From 2f72e09addc8cd56d7faaae6f59c457ae122ab70 Mon Sep 17 00:00:00 2001 From: PriyanshuAhlawat Date: Fri, 18 Mar 2022 17:04:39 +0530 Subject: [PATCH 038/138] Update update-daemon-set.md --- content/en/docs/tasks/manage-daemon/update-daemon-set.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md index d36f487794701..e435f392fe49e 100644 --- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md @@ -34,11 +34,11 @@ DaemonSet has two update strategy types: To enable the rolling update feature of a DaemonSet, you must set its `.spec.updateStrategy.type` to `RollingUpdate`. -You may want to set -[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) +You may want to set +[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec) (default to 1), -[`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) -(default to 0) and +[`.spec.minReadySeconds`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec) +(default to 0) and [`.spec.updateStrategy.rollingUpdate.maxSurge`](/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec) (a beta feature and defaults to 0) as well. From c8019f35d5f07849d9058c98d6cc413eedbe8564 Mon Sep 17 00:00:00 2001 From: Arhell Date: Sat, 19 Mar 2022 00:23:12 +0200 Subject: [PATCH 039/138] [ja] fix download link --- content/ja/docs/home/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/home/_index.md b/content/ja/docs/home/_index.md index 44419b48904a4..097c6f6fce7f2 100644 --- a/content/ja/docs/home/_index.md +++ b/content/ja/docs/home/_index.md @@ -59,7 +59,7 @@ cards: title: "K8sリリースノート" description: "もしKubernetesをインストールする、また最新バージョンにアップグレードする場合、最新のリリースノートを参照してください。" button: "Kubernetesをダウンロードする" - button_path: "/docs/setup/release/notes" + button_path: "/releases/download" - name: about title: ドキュメントについて description: このWebサイトには、Kubernetesの最新バージョンと過去4世代のドキュメントが含まれています。 From 00f74d9ae42a9844022768186300a6ed9507119e Mon Sep 17 00:00:00 2001 From: Jeremy Puchta Date: Sun, 20 Mar 2022 16:36:50 +0100 Subject: [PATCH 040/138] Remove trailing whitespaces from cli commands --- content/en/docs/tasks/tools/install-kubectl-macos.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/tasks/tools/install-kubectl-macos.md b/content/en/docs/tasks/tools/install-kubectl-macos.md index fb5ec2a306a12..9861aca1568da 100644 --- a/content/en/docs/tasks/tools/install-kubectl-macos.md +++ b/content/en/docs/tasks/tools/install-kubectl-macos.md @@ -114,7 +114,7 @@ The following methods exist for installing kubectl on macOS: Or use this for detailed view of version: ```cmd - kubectl version --client --output=yaml + kubectl version --client --output=yaml ``` ### Install with Homebrew on macOS @@ -124,7 +124,7 @@ If you are on macOS and using [Homebrew](https://brew.sh/) package manager, you 1. 
Run the installation command: ```bash - brew install kubectl + brew install kubectl ``` or From c67ac232ed5a5157725f72a12a73e55c07935bee Mon Sep 17 00:00:00 2001 From: Song Shukun Date: Sun, 20 Mar 2022 19:41:19 +0900 Subject: [PATCH 041/138] [zh] Update labels-annotations-taints related reference pages --- .../reference/labels-annotations-taints.md | 519 -------- .../labels-annotations-taints/_index.md | 1066 +++++++++++++++++ .../audit-annotations.md | 106 ++ 3 files changed, 1172 insertions(+), 519 deletions(-) delete mode 100644 content/zh/docs/reference/labels-annotations-taints.md create mode 100644 content/zh/docs/reference/labels-annotations-taints/_index.md create mode 100644 content/zh/docs/reference/labels-annotations-taints/audit-annotations.md diff --git a/content/zh/docs/reference/labels-annotations-taints.md b/content/zh/docs/reference/labels-annotations-taints.md deleted file mode 100644 index e4b1fdfe70012..0000000000000 --- a/content/zh/docs/reference/labels-annotations-taints.md +++ /dev/null @@ -1,519 +0,0 @@ ---- -title: 常见的标签、注解和污点 -content_type: concept -weight: 20 ---- - - - - - -Kubernetes 预留命名空间 kubernetes.io 用于所有的标签和注解。 - -本文档有两个作用,一是作为可用值的参考,二是作为赋值的协调点。 - - - -## kubernetes.io/arch - -示例:`kubernetes.io/arch=amd64` - -用于:Node - - -Kubelet 用 Go 定义的 `runtime.GOARCH` 生成该标签的键值。在混合使用 arm 和 x86 节点的场景中,此键值可以带来极大便利。 - -## kubernetes.io/os - -示例:`kubernetes.io/os=linux` - -用于:Node - - -Kubelet 用 Go 定义的 `runtime.GOOS` 生成该标签的键值。在混合使用异构操作系统场景下(例如:混合使用 Linux 和 Windows 节点),此键值可以带来极大便利。 - -## kubernetes.io/metadata.name - -示例:`kubernetes.io/metadata.name=mynamespace` - -用于:Namespaces - - -当 `NamespaceDefaultLabelName` [特性门控](/zh/docs/reference/command-line-tools-reference/feature-gates/) -被启用时,Kubernetes API 服务器会在所有命名空间上设置此标签。标签值被设置为命名空间的名称。 - -如果你想使用标签 {{< glossary_tooltip text="选择器" term_id="selector" >}} 来指向特定的命名空间,这很有用。 - -## beta.kubernetes.io/arch (deprecated) - - -此标签已被弃用,取而代之的是 `kubernetes.io/arch`. - -## beta.kubernetes.io/os (deprecated) - - -此标签已被弃用,取而代之的是 `kubernetes.io/os`. - -## kubernetes.io/hostname {#kubernetesiohostname} - -示例:`kubernetes.io/hostname=ip-172-20-114-199.ec2.internal` - -用于:Node - - -Kubelet 用主机名生成此标签。需要注意的是主机名可修改,这是把“实际的”主机名通过参数 `--hostname-override` 传给 `kubelet` 实现的。 - -此标签也可用做拓扑层次的一个部分。更多信息参见[topology.kubernetes.io/zone](#topologykubernetesiozone)。 - -## controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost} - -示例:`controller.kubernetes.io/pod-deletion-cost=10` - -用于:Pod - - -该注解用于设置 [Pod 删除开销](/zh/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost), -允许用户影响 ReplicaSet 的缩减顺序。该注解解析为 `int32` 类型。 - -## beta.kubernetes.io/instance-type (deprecated) - -{{< note >}} - -从 v1.17 起,此标签被弃用,取而代之的是 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type). -{{< /note >}} - - -## node.kubernetes.io/instance-type {#nodekubernetesioinstance-type} - -示例:`node.kubernetes.io/instance-type=m3.medium` - -用于:Node - - -Kubelet 用 `cloudprovider` 定义的实例类型生成此标签。 -所以只有用到 `cloudprovider` 的场合,才会设置此标签。 -此标签非常有用,特别是在你希望把特定工作负载打到特定实例类型的时候,但更常见的调度方法是基于 Kubernetes 调度器来执行基于资源的调度。 -你应该聚焦于使用基于属性的调度方式,而尽量不要依赖实例类型(例如:应该申请一个 GPU,而不是 `g2.2xlarge`)。 - -## failure-domain.beta.kubernetes.io/region (deprecated) {#failure-domainbetakubernetesioregion} - -参见 [topology.kubernetes.io/region](#topologykubernetesioregion). - -{{< note >}} - -从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/region](#topologykubernetesioregion). 
-{{< /note >}} - -## failure-domain.beta.kubernetes.io/zone (deprecated) {#failure-domainbetakubernetesiozone} - -参见 [topology.kubernetes.io/zone](#topologykubernetesiozone). - -{{< note >}} - -从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/zone](#topologykubernetesiozone). -{{< /note >}} - -## statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name} - -示例:`statefulset.kubernetes.io/pod-name=mystatefulset-7` - - -当 StatefulSet 控制器为 StatefulSet 创建 Pod 时,控制平面会在该 Pod 上设置此标签。 -标签的值是正在创建的 Pod 的名称。 - -更多细节请参见 StatefulSet 文章中的 [Pod 名称标签](/zh/docs/concepts/workloads/controllers/statefulset/#pod-name-label)。 - -## topology.kubernetes.io/region {#topologykubernetesioregion} - -示例 - -`topology.kubernetes.io/region=us-east-1` - -参见 [topology.kubernetes.io/zone](#topologykubernetesiozone). - -## topology.kubernetes.io/zone {#topologykubernetesiozone} - -示例: - -`topology.kubernetes.io/zone=us-east-1c` - -用于:Node, PersistentVolume - - -Node 场景:`kubelet` 或外部的 `cloud-controller-manager` 用 `cloudprovider` 提供的信息生成此标签。 -所以只有在用到 `cloudprovider` 的场景下,此标签才会被设置。 -但如果此标签在你的拓扑中有意义,你也可以考虑在 node 上设置它。 - -PersistentVolume 场景:拓扑自感知的卷制备程序将在 `PersistentVolumes` 上自动设置节点亲和性限制。 - - -一个可用区(zone)表示一个逻辑故障域。Kubernetes 集群通常会跨越多个可用区以提高可用性。 -虽然可用区的确切定义留给基础设施来决定,但可用区常见的属性包括:可用区内的网络延迟非常低,可用区内的网络通讯没成本,独立于其他可用区的故障域。 -例如,一个可用区中的节点可以共享交换机,但不同可用区则不会。 - - -一个地区(region)表示一个更大的域,由一个到多个可用区组成。对于 Kubernetes 来说,跨越多个地区的集群很罕见。 -虽然可用区和地区的确切定义留给基础设施来决定,但地区的常见属性包括:地区间比地区内更高的网络延迟,地区间网络流量更高的成本,独立于其他可用区或是地区的故障域。例如,一个地区内的节点可以共享电力基础设施(例如 UPS 或发电机),但不同地区内的节点显然不会。 - - -Kubernetes 对可用区和地区的结构做出一些假设: -1)地区和可用区是层次化的:可用区是地区的严格子集,任何可用区都不能再 2 个地区中出现。 -2)可用区名字在地区中独一无二:例如地区 "africa-east-1" 可由可用区 "africa-east-1a" 和 "africa-east-1b" 构成。 - - -你可以安全的假定拓扑类的标签是固定不变的。即使标签严格来说是可变的,但使用者依然可以假定一个节点只有通过销毁、重建的方式,才能在可用区间移动。 - - -Kubernetes 能以多种方式使用这些信息。 -例如,调度器自动地尝试将 ReplicaSet 中的 Pod 打散在单可用区集群的不同节点上(以减少节点故障的影响,参见[kubernetes.io/hostname](#kubernetesiohostname))。 -在多可用区的集群中,这类打散分布的行为也会应用到可用区(以减少可用区故障的影响)。 -做到这一点靠的是 _SelectorSpreadPriority_。 - - -_SelectorSpreadPriority_ 是一种最大能力分配方法(best effort)。如果集群中的可用区是异构的(例如:不同数量的节点,不同类型的节点,或不同的 Pod 资源需求),这种分配方法可以防止平均分配 Pod 到可用区。如果需要,你可以用同构的可用区(相同数量和类型的节点)来减少潜在的不平衡分布。 - - -调度器(通过 _VolumeZonePredicate_ 的预测)也会保障声明了某卷的 Pod 只能分配到该卷相同的可用区。 -卷不支持跨可用区挂载。 - - -如果 `PersistentVolumeLabel` 不支持给 PersistentVolume 自动打标签,你可以考虑手动加标签(或增加 `PersistentVolumeLabel` 支持)。 -有了 `PersistentVolumeLabel`,调度器可以防止 Pod 挂载不同可用区中的卷。 -如果你的基础架构没有此限制,那你根本就没有必要给卷增加 zone 标签。 - -## node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} - -示例: `node.kubernetes.io/windows-build=10.0.17763` - -用于:Node - - -当 kubelet 运行于 Microsoft Windows,它给节点自动打标签,以记录 Windows Server 的版本。 - -标签值的格式为 "主版本.次版本.构建号" - -## service.kubernetes.io/headless {#servicekubernetesioheadless} - -示例:`service.kubernetes.io/headless=""` - -用于:Service - - -在无头(headless)服务的场景下,控制平面为 Endpoint 对象添加此标签。 - -## kubernetes.io/service-name {#kubernetesioservice-name} - -示例:`kubernetes.io/service-name="nginx"` - -用于:Service - - -Kubernetes 用此标签区分多个服务。当前仅用于 `ELB`(Elastic Load Balancer)。 - -## endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} - -示例:`endpointslice.kubernetes.io/managed-by="controller"` - -用于:EndpointSlices - - -此标签用来指向管理 EndpointSlice 的控制器或实体。 -此标签的目的是用集群中不同的控制器或实体来管理不同的 EndpointSlice。 - -## endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror} - -示例:`endpointslice.kubernetes.io/skip-mirror="true"` - -用于:Endpoints - - -此标签在 Endpoints 资源上设为 `"true"` 指示 EndpointSliceMirroring 控制器不要镜像此 EndpointSlices 资源。 - -## 
service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name} - -示例:`service.kubernetes.io/service-proxy-name="foo-bar"` - -用于:Service - - -kube-proxy 把此标签用于客户代理,将服务控制委托给客户代理。 - -## experimental.windows.kubernetes.io/isolation-type - -示例:`experimental.windows.kubernetes.io/isolation-type: "hyperv"` - -用于:Pod - - -此注解用于运行 Hyper-V 隔离的 Windows 容器。 -要使用 Hyper-V 隔离特性,并创建 Hyper-V 隔离容器,kubelet 应该用特性门控 HyperVContainer=true 来启动,并且 Pod 应该包含注解 `experimental.windows.kubernetes.io/isolation-type=hyperv`。 - -{{< note >}} -你只能在单容器 Pod 上设置此注解。 -{{< /note >}} - -## ingressclass.kubernetes.io/is-default-class - -示例:`ingressclass.kubernetes.io/is-default-class: "true"` - -用于:IngressClass - - -当唯一的 IngressClass 资源将此注解的值设为 "true",没有指定类型的新 Ingress 资源将使用此默认类型。 - -## kubernetes.io/ingress.class (deprecated) - -{{< note >}} - -从 v1.18 开始,此注解被弃用,取而代之的是 `spec.ingressClassName`。 -{{< /note >}} - -## storageclass.kubernetes.io/is-default-class - -示例:`storageclass.kubernetes.io/is-default-class=true` - -用于:StorageClass - - -当单个的 StorageClass 资源将这个注解设置为 `"true"` 时,新的持久卷申领(PVC) -资源若未指定类别,将被设定为此默认类别。 - -## alpha.kubernetes.io/provided-node-ip - -示例:`alpha.kubernetes.io/provided-node-ip: "10.0.0.1"` - -用于:Node - - -kubectl 在 Node 上设置此注解,表示它的 IPv4 地址。 - -当 kubectl 由外部的云供应商启动时,在 Node 上设置此注解,表示由命令行标记(`--node-ip`)设置的 IP 地址。 -cloud-controller-manager 向云供应商验证此 IP 是否有效。 - -## batch.kubernetes.io/job-completion-index - -示例:`batch.kubernetes.io/job-completion-index: "3"` - -用于:Pod - - -kube-controller-manager 中的 Job 控制器给创建使用索引 -[完成模式](/zh/docs/concepts/workloads/controllers/job/#completion-mode) -的 Pod 设置此注解。 - -## kubectl.kubernetes.io/default-container - -示例:`kubectl.kubernetes.io/default-container: "front-end-app"` - - -注解的值是此 Pod 的默认容器名称。 -例如,`kubectl logs` 或 `kubectl exec` 没有 `-c` 或 `--container` 参数时,将使用这个默认的容器。 - -## endpoints.kubernetes.io/over-capacity - -示例:`endpoints.kubernetes.io/over-capacity:warning` - -用于:Endpoints - - -在 Kubernetes 集群 v1.21(或更高版本)中,如果 Endpoint 超过 1000 个,Endpoint 控制器 -就会向其添加这个注解。该注解表示 Endpoint 资源已超过容量。 - -**以下列出的污点只能用于 Node** - -## node.kubernetes.io/not-ready - -示例:`node.kubernetes.io/not-ready:NoExecute` - - -节点控制器通过健康监控来检测节点是否就绪,并据此添加/删除此污点。 - -## node.kubernetes.io/unreachable - -示例:`node.kubernetes.io/unreachable:NoExecute` - - -如果 [NodeCondition](/docs/concepts/architecture/nodes/#condition) 的 `Ready` 键值为 `Unknown`,节点控制器将添加污点到 node。 - -## node.kubernetes.io/unschedulable - -示例:`node.kubernetes.io/unschedulable:NoSchedule` - - -当初始化节点时,添加此污点,来避免竟态的发生。 - -## node.kubernetes.io/memory-pressure - -示例:`node.kubernetes.io/memory-pressure:NoSchedule` - - -kubelet 依据节点上观测到的 `memory.available` 和 `allocatableMemory.available` 来检测内存压力。 -用观测值对比 kubelet 设置的阈值,以判断节点状态和污点是否可以被添加/移除。 - -## node.kubernetes.io/disk-pressure - -示例:`node.kubernetes.io/disk-pressure:NoSchedule` - - -kubelet 依据节点上观测到的 `imagefs.available`、`imagefs.inodesFree`、`nodefs.available` 和 `nodefs.inodesFree`(仅 Linux) 来判断磁盘压力。 -用观测值对比 kubelet 设置的阈值,以确定节点状态和污点是否可以被添加/移除。 - -## node.kubernetes.io/network-unavailable - -示例:`node.kubernetes.io/network-unavailable:NoSchedule` - - -它初始由 kubectl 设置,云供应商用它来指示对额外网络配置的需求。 -仅当云中的路由器配置妥当后,云供应商才会移除此污点。 - -## node.kubernetes.io/pid-pressure - -示例:`node.kubernetes.io/pid-pressure:NoSchedule` - - -kubelet 检查 `/proc/sys/kernel/pid_max` 尺寸的 D 值(D-value),以及节点上 Kubernetes 消耗掉的 PID,以获取可用的 PID 数量,此数量可通过指标 `pid.available` 得到。 -然后用此指标对比 kubelet 设置的阈值,以确定节点状态和污点是否可以被添加/移除。 - -## node.cloudprovider.kubernetes.io/uninitialized - -示例:`node.cloudprovider.kubernetes.io/uninitialized:NoSchedule` - - -当 
kubelet 由外部云供应商启动时,在节点上设置此污点以标记节点不可用,直到一个 cloud-controller-manager 控制器初始化此节点之后,才会移除此污点。 - -## node.cloudprovider.kubernetes.io/shutdown - -示例:`node.cloudprovider.kubernetes.io/shutdown:NoSchedule` - - -如果一个云供应商的节点被指定为关机状态,节点被打上污点 `node.cloudprovider.kubernetes.io/shutdown`,污点的影响为 `NoSchedule`。 diff --git a/content/zh/docs/reference/labels-annotations-taints/_index.md b/content/zh/docs/reference/labels-annotations-taints/_index.md new file mode 100644 index 0000000000000..3281cf10be5f0 --- /dev/null +++ b/content/zh/docs/reference/labels-annotations-taints/_index.md @@ -0,0 +1,1066 @@ +--- +title: 常见的标签、注解和污点 +content_type: concept +weight: 20 +no_list: true +--- + + + + + +Kubernetes 保留命名空间 kubernetes.io 下的所有的标签和注解。 + +本文档有两个作用,一是作为可用值的参考,二是作为赋值的协调点。 + + + + +## 用于 API 对象的标签、注解和污点 + +### kubernetes.io/arch + +示例:`kubernetes.io/arch=amd64` + +用于:Node + +Kubelet 用 Go 定义的 `runtime.GOARCH` 生成该标签的键值。 +在混合使用 ARM 和 x86 节点的场景中,此键值可以带来极大便利。 + + +### kubernetes.io/os + +示例:`kubernetes.io/os=linux` + +用于:Node + +Kubelet 用 Go 定义的 `runtime.GOOS` 生成该标签的键值。 +在混合使用异构操作系统场景下(例如:混合使用 Linux 和 Windows 节点),此键值可以带来极大便利。 + + +### kubernetes.io/metadata.name + +示例:`kubernetes.io/metadata.name=mynamespace` + +用于:Namespace + +Kubernetes API 服务器({{< glossary_tooltip text="控制平面" term_id="control-plane" >}}的一部分) +会在所有命名空间上设置此标签。标签值被设置为命名空间的名称。你无法更改此标签值。 + +如果你想使用标签{{< glossary_tooltip text="选择器" term_id="selector" >}}来指向特定的命名空间, +此标签很有用。 + + +### beta.kubernetes.io/arch (已弃用) + +此标签已被弃用,取而代之的是 `kubernetes.io/arch`. + +### beta.kubernetes.io/os (已弃用) + +此标签已被弃用,取而代之的是 `kubernetes.io/os`. + + +### kubernetes.io/hostname {#kubernetesiohostname} + +示例:`kubernetes.io/hostname=ip-172-20-114-199.ec2.internal` + +用于:Node + +Kubelet 用主机名生成此标签的取值。 +注意可以通过传入参数 `--hostname-override` 给 `kubelet` 来修改此“实际”主机名。 + +此标签也可用做拓扑层次的一个部分。 +更多信息参见 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 + + +### kubernetes.io/change-cause {#change-cause} + +示例:`kubernetes.io/change-cause=kubectl edit --record deployment foo` + +用于:所有对象 + +此注解是对改动原因的最好的推测。 + +当在可能修改一个对象的 `kubectl` 命令中加入 `--record` 时,会生成此注解。 + + +### kubernetes.io/description {#description} + +示例:`kubernetes.io/description: "Description of K8s object."` + +用于:所有对象 + +此注解用于描述给定对象的具体行为 + + +### kubernetes.io/enforce-mountable-secrets {#enforce-mountable-secrets} + +示例:`kubernetes.io/enforce-mountable-secrets: "true"` + +用于:ServiceAccount + +此注解只在值为 **true** 时生效。 +此注解表示以此服务账号运行的 Pod 只能引用此服务账号的 `secrets` 字段中所写的 Secret API 对象。 + + +### controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost} + +示例:`controller.kubernetes.io/pod-deletion-cost=10` + +用于:Pod + +该注解用于设置 [Pod 删除开销](/zh/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost), +允许用户影响 ReplicaSet 的缩减顺序。该注解解析为 `int32` 类型。 + + +### beta.kubernetes.io/instance-type (已弃用) + +{{< note >}} +从 v1.17 起,此标签被弃用,取而代之的是 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type)。 +{{< /note >}} + + +### node.kubernetes.io/instance-type {#nodekubernetesioinstance-type} + +示例:`node.kubernetes.io/instance-type=m3.medium` + +用于:Node + +Kubelet 用 `cloudprovider` 定义的实例类型生成此标签的取值。 +所以只有用到 `cloudprovider` 的场合,才会设置此标签。 +在你希望把特定工作负载调度到特定实例类型的时候此标签很有用,但更常见的调度方法是基于 +Kubernetes 调度器来执行基于资源的调度。 +你应该聚焦于使用基于属性的调度方式,而不是基于实例类型(例如:应该申请一个 GPU,而不是 `g2.2xlarge`)。 + + +### failure-domain.beta.kubernetes.io/region (已弃用) {#failure-domainbetakubernetesioregion} + +参见 [topology.kubernetes.io/region](#topologykubernetesioregion). 
+ +{{< note >}} +从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/region](#topologykubernetesioregion)。 +{{< /note >}} + + +### failure-domain.beta.kubernetes.io/zone (已弃用) {#failure-domainbetakubernetesiozone} + +参见 [topology.kubernetes.io/zone](#topologykubernetesiozone). + +{{< note >}} +从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 +{{< /note >}} + + +### statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name} + +示例:`statefulset.kubernetes.io/pod-name=mystatefulset-7` + +当 StatefulSet 控制器为 StatefulSet 创建 Pod 时,控制平面会在该 Pod 上设置此标签。 +标签的值是正在创建的 Pod 的名称。 + +更多细节请参见 StatefulSet 文章中的 [Pod 名称标签](/zh/docs/concepts/workloads/controllers/statefulset/#pod-name-label)。 + + +### topology.kubernetes.io/region {#topologykubernetesioregion} + +示例:`topology.kubernetes.io/region=us-east-1` + +参见 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 + + +### topology.kubernetes.io/zone {#topologykubernetesiozone} + +示例:`topology.kubernetes.io/zone=us-east-1c` + +用于:Node、PersistentVolume + + +Node 场景:`kubelet` 或外部的 `cloud-controller-manager` 用 `cloudprovider` 提供的信息生成此标签。 +所以只有在用到 `cloudprovider` 的场景下,此标签才会被设置。 +但如果此标签在你的拓扑中有意义,你也可以考虑在 Node 上设置它。 + +PersistentVolume 场景:拓扑自感知的卷制备程序将在 `PersistentVolumes` 上自动设置节点亲和性限制。 + + +一个可用区(Zone)表示一个逻辑故障域。Kubernetes 集群通常会跨越多个可用区以提高可用性。 +虽然可用区的确切定义留给基础设施来决定,但可用区常见的属性包括: +可用区内的网络延迟非常低,可用区内的网络通讯无成本,以及故障独立性。 +例如,一个可用区中的节点可以共享交换机,但不同可用区则不应该。 + + +一个地区(Region)表示一个更大的域,由一个或多个可用区组成。对于 Kubernetes 来说,跨越多个地区的集群很罕见。 +虽然可用区和地区的确切定义留给基础设施来决定,但地区的常见属性包括: +相比于地区内通信地区间的网络延迟更高,地区间网络流量成本更高,以及故障独立性。 +例如,一个地区内的节点也许会共享电力基础设施(例如 UPS 或发电机),但不同地区内的节点显然不会。 + + +Kubernetes 对可用区和地区的结构做出一些假设: +1)地区和可用区是层次化的:可用区是地区的严格子集,任何可用区都不能在 2 个地区中出现。 +2)可用区名字在地区中独一无二:例如地区 "africa-east-1" 可由可用区 "africa-east-1a" 和 "africa-east-1b" 构成。 + + +你可以安全地假定拓扑类的标签是固定不变的。 +即使标签严格来说是可变的,使用者依然可以假定一个节点只有通过销毁、重建的方式,才能在可用区间移动。 + + +Kubernetes 能以多种方式使用这些信息。 +例如,调度器自动地尝试将 ReplicaSet 中的 Pod +打散在单可用区集群的不同节点上(以减少节点故障的影响,参见[kubernetes.io/hostname](#kubernetesiohostname))。 +在多可用区的集群中,这类打散分布的行为也会应用到可用区(以减少可用区故障的影响)。 +做到这一点靠的是 _SelectorSpreadPriority_。 + + +_SelectorSpreadPriority_ 是一种尽力而为(best effort)的分配方法。 +如果集群中的可用区是异构的(例如:节点数量不同、节点类型不同或者 Pod +的资源需求不同),这种分配方法可以防止平均分配 Pod 到可用区。 +如果需要,你可以用同构的可用区(相同数量和类型的节点)来减少潜在的不平衡分布。 + + +调度器会(通过 _VolumeZonePredicate_ 断言)保障申领了某卷的 Pod 只能分配到该卷相同的可用区。 +卷不支持跨可用区挂载。 + + +如果 `PersistentVolumeLabel` 不支持给你的 PersistentVolume 自动打标签,你可以考虑手动加标签(或增加 +`PersistentVolumeLabel` 支持)。 +有了 `PersistentVolumeLabel`,调度器可以防止 Pod 挂载不同可用区中的卷。 +如果你的基础架构没有此限制,那你根本就没有必要给卷增加 zone 标签。 + + +### volume.beta.kubernetes.io/storage-provisioner (已弃用) + +示例:`volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath` + +用于:PersistentVolumeClaim + +该注解已被弃用。 + + +### volume.kubernetes.io/storage-provisioner + +用于:PersistentVolumeClaim + +该注解会被加到动态制备的 PVC 上。 + + +### node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} + +示例: `node.kubernetes.io/windows-build=10.0.17763` + +用于:Node + +当 kubelet 运行于 Microsoft Windows 时,它给节点自动打标签,以记录 Windows Server 的版本。 + +标签值的格式为 "主版本.次版本.构建号"。 + + +### service.kubernetes.io/headless {#servicekubernetesioheadless} + +示例:`service.kubernetes.io/headless=""` + +用于:Service + +在无头(headless)服务的场景下,控制平面为 Endpoints 对象添加此标签。 + + +### kubernetes.io/service-name {#kubernetesioservice-name} + +示例:`kubernetes.io/service-name="nginx"` + +用于:Service + +Kubernetes 用此标签区分多个服务。当前仅用于 `ELB`(Elastic Load Balancer)。 + + +### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} + 
+示例:`endpointslice.kubernetes.io/managed-by="controller"` + +用于:EndpointSlice + +此标签用来标示管理 EndpointSlice 的控制器或实体。 +此标签的目的是允许集群中使用不同控制器或实体来管理不同的 EndpointSlice。 + + +### endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror} + +示例:`endpointslice.kubernetes.io/skip-mirror="true"` + +用于:Endpoints + +此标签在 Endpoints 资源上设为 `"true"` 时,指示 EndpointSliceMirroring 控制器不要使用 EndpointSlices 镜像此资源。 + + +### service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name} + +示例:`service.kubernetes.io/service-proxy-name="foo-bar"` + +用于:Service + +此标签被 kube-proxy 用于自定义代理,将服务控制委托给自定义代理。 + + +### experimental.windows.kubernetes.io/isolation-type (已弃用) {#experimental-windows-kubernetes-io-isolation-type} + +示例:`experimental.windows.kubernetes.io/isolation-type: "hyperv"` + +用于:Pod + +此注解用于运行 Hyper-V 隔离的 Windows 容器。 +要使用 Hyper-V 隔离特性,并创建 Hyper-V 隔离的容器,kubelet 应启用特性门控 HyperVContainer=true,并且 +Pod 应该包含注解 `experimental.windows.kubernetes.io/isolation-type=hyperv`。 + + +{{< note >}} +你只能在单容器 Pod 上设置此注解。 +从 v1.20 开始,此注解被弃用。实验性的 Hyper-V 支持于 1.21 中被移除。 +{{< /note >}} + + +### ingressclass.kubernetes.io/is-default-class + +示例:`ingressclass.kubernetes.io/is-default-class: "true"` + +用于:IngressClass + +当仅有一个 IngressClass 资源将此注解的值设为 `"true"`,没有指定类的新 Ingress 资源将使用此默认类。 + + +### kubernetes.io/ingress.class (已弃用) + +{{< note >}} +从 v1.18 开始,此注解被弃用,取而代之的是 `spec.ingressClassName`。 +{{< /note >}} + + +### storageclass.kubernetes.io/is-default-class + +示例:`storageclass.kubernetes.io/is-default-class=true` + +用于:StorageClass + +当仅有一个 StorageClass 资源将这个注解设置为 `"true"` 时,没有指定类的新 +PersistentVolumeClaim 资源将被设定为此默认类。 + + +### alpha.kubernetes.io/provided-node-ip + +示例:`alpha.kubernetes.io/provided-node-ip: "10.0.0.1"` + +用于:Node + +kubelet 在 Node 上设置此注解,标示它所配置的 IPv4 地址。 + +如果 kubelet 启动时配置了“external”云驱动,它会在 Node +上设置此注解以标示通过命令行参数(`--node-ip`)设置的 IP 地址。 +该 IP 地址由 cloud-controller-manager 向云驱动验证有效性。 + + +### batch.kubernetes.io/job-completion-index + +示例:`batch.kubernetes.io/job-completion-index: "3"` + +用于:Pod + +kube-controller-manager 中的 Job +控制器给使用索引(Indexed)[完成模式](/zh/docs/concepts/workloads/controllers/job/#completion-mode)创建的 +Pod 设置此注解。 + + +### kubectl.kubernetes.io/default-container + +示例:`kubectl.kubernetes.io/default-container: "front-end-app"` + +此注解的值是 Pod 的默认容器名称。 +例如,`kubectl logs` 或 `kubectl exec` 没有传入 `-c` 或 `--container` 参数时,将使用这个默认的容器。 + + +### endpoints.kubernetes.io/over-capacity + +示例:`endpoints.kubernetes.io/over-capacity:warning` + +用于:Endpoints + +在 v1.22(或更高版本)的 Kubernetes 集群中,如果 Endpoints +资源中的端点超过了 1000 个,Endpoints 控制器就会向其添加这个注解。 +该注解表示此 Endpoints 资源已超过容量,而其端点数已被截断至 1000。 + + +### batch.kubernetes.io/job-tracking + +示例:`batch.kubernetes.io/job-tracking: ""` + +用于:Job + +Job 资源中若包含了此注解,则代表控制平面正[使用 Finalizer 追踪 Job 的状态](/zh/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers)。 +你**不该**手动添加或移除此注解。 + + +### scheduler.alpha.kubernetes.io/preferAvoidPods (已弃用) {#scheduleralphakubernetesio-preferavoidpods} + +用于:Node + +此注解要求启用 [NodePreferAvoidPods 调度插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)。 +该插件已于 Kubernetes 1.22 起弃用。 +请转而使用[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 + + +**以下列出的污点只能用于 Node** + +### node.kubernetes.io/not-ready + +示例:`node.kubernetes.io/not-ready:NoExecute` + +节点控制器通过健康监控来检测节点是否就绪,并据此添加/删除此污点。 + + +### node.kubernetes.io/unreachable + +示例:`node.kubernetes.io/unreachable:NoExecute` + +如果[节点状况](/zh/docs/concepts/architecture/nodes/#condition)的 +`Ready` 键值为 `Unknown`,节点控制器会为节点添加此污点。 + + +### 
node.kubernetes.io/unschedulable

示例:`node.kubernetes.io/unschedulable:NoSchedule`

此污点会在节点初始化时被添加,以避免竞态的发生。


### node.kubernetes.io/memory-pressure

示例:`node.kubernetes.io/memory-pressure:NoSchedule`

kubelet 依据节点上观测到的 `memory.available` 和 `allocatableMemory.available` 来检测内存压力。
用观测值对比 kubelet 设置的阈值,以判断是否需要添加/移除节点状况和污点。


### node.kubernetes.io/disk-pressure

示例:`node.kubernetes.io/disk-pressure:NoSchedule`

kubelet 依据节点上观测到的 `imagefs.available`、`imagefs.inodesFree`、`nodefs.available` 和
`nodefs.inodesFree`(仅 Linux)来判断磁盘压力。
用观测值对比 kubelet 设置的阈值,以判断是否需要添加/移除节点状况和污点。


### node.kubernetes.io/network-unavailable

示例:`node.kubernetes.io/network-unavailable:NoSchedule`

此污点初始由 kubelet 设置,云驱动用它来指示对额外网络配置的需求。
仅当云中的路由配置妥当后,云驱动才会移除此污点。


### node.kubernetes.io/pid-pressure

示例:`node.kubernetes.io/pid-pressure:NoSchedule`

kubelet 检查 `/proc/sys/kernel/pid_max` 尺寸的 D 值(D-value),以及节点上
Kubernetes 消耗掉的 PID 以获取可用的 PID 数量,即指标 `pid.available` 所指代的值。
然后用此指标对比 kubelet 设置的阈值,以确定节点状态和污点是否可以被添加/移除。


### node.cloudprovider.kubernetes.io/uninitialized

示例:`node.cloudprovider.kubernetes.io/uninitialized:NoSchedule`

如果 kubelet 启动时设置了“external”云驱动,将在节点上设置此污点以标记节点不可用,直到
cloud-controller-manager 中的某个控制器初始化此节点之后,才会移除此污点。


### node.cloudprovider.kubernetes.io/shutdown

示例:`node.cloudprovider.kubernetes.io/shutdown:NoSchedule`

如果 Node 处于云驱动所指定的关机状态,Node 将被打上污点
`node.cloudprovider.kubernetes.io/shutdown`,污点的效果为 `NoSchedule`。


### pod-security.kubernetes.io/enforce

示例:`pod-security.kubernetes.io/enforce: baseline`

用于:Namespace

此标签的值**必须**是 `privileged`、`baseline`、`restricted` 之一,对应
[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。
具体而言,被此标签标记的命名空间下,任何创建不满足安全要求的 Pod 的请求都会被 _禁止_。

更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。


### pod-security.kubernetes.io/enforce-version

示例:`pod-security.kubernetes.io/enforce-version: {{< skew latestVersion >}}`

用于:Namespace

此标签的值**必须**是 `latest` 或一个以 `v<MAJOR>.<MINOR>` 格式表示的有效的 Kubernetes 版本号。
此标签决定了验证 Pod 时所使用的 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。

更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。


### pod-security.kubernetes.io/audit

示例:`pod-security.kubernetes.io/audit: baseline`

用于:Namespace

此标签的值**必须**是 `privileged`、`baseline`、`restricted` 之一,对应
[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。
具体而言,此标签不会阻止不满足安全性要求的 Pod 的创建,但会在那些 Pod 中添加审计(Audit)注解。

更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。


### pod-security.kubernetes.io/audit-version

示例:`pod-security.kubernetes.io/audit-version: {{< skew latestVersion >}}`

用于:Namespace

此标签的值**必须**是 `latest` 或一个以 `v<MAJOR>.<MINOR>` 格式表示的有效的 Kubernetes 版本号。
此标签决定了验证 Pod 时所使用的 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。

更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。


### pod-security.kubernetes.io/warn

示例:`pod-security.kubernetes.io/warn: baseline`

用于:Namespace

此标签的值**必须**是 `privileged`、`baseline`、`restricted` 之一,对应
[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。
具体而言,此标签不会阻止不满足安全性要求的 Pod 的创建,但会返回给用户一个警告。
注意在创建或更新包含 Pod 模板的对象(例如 Deployment、Job、StatefulSet 等)时,也会显示该警告。

更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。


### pod-security.kubernetes.io/warn-version

示例:`pod-security.kubernetes.io/warn-version: {{< skew latestVersion >}}`
用于:Namespace

此标签的值**必须**是 `latest` 或一个以 `v<MAJOR>.<MINOR>` 格式表示的有效的 Kubernetes 版本号。
此标签决定了验证 Pod 时所使用的 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。
注意在创建或更新包含 Pod 模板的对象(例如 Deployment、Job、StatefulSet 等)时,也会显示该警告。

更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。


### seccomp.security.alpha.kubernetes.io/pod (已弃用) {#seccomp-security-alpha-kubernetes-io-pod}

此注解已于 Kubernetes v1.19 起被弃用,且将于 v1.25 失效。
要为 Pod 设定具体的安全设置,请在 Pod 规约中加入 `securityContext` 字段。
Pod 的 [`.spec.securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
字段定义了 Pod 级别的安全属性。
当你[为 Pod 设置安全性上下文](/zh/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod)时,
你设定的配置会被应用到该 Pod 的所有容器中。


### container.seccomp.security.alpha.kubernetes.io/[NAME] {#container-seccomp-security-alpha-kubernetes-io}

此注解已于 Kubernetes v1.19 起被弃用,且将于 v1.25 失效。
[使用 seccomp 限制容器的系统调用](/zh/docs/tutorials/security/seccomp/)教程会指导你完成对
Pod 或其中的一个容器应用 seccomp 配置文件的全部流程。
该教程涵盖了 Kubernetes 所支持的配置 seccomp 的机制,此机制基于 Pod 的 `.spec.securityContext`。


## 用于审计的注解

- [`pod-security.kubernetes.io/exempt`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt)
- [`pod-security.kubernetes.io/enforce-policy`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy)
- [`pod-security.kubernetes.io/audit-violations`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations)

更多细节请参阅[审计注解](/zh/docs/reference/labels-annotations-taints/audit-annotations/)。
\ No newline at end of file
diff --git a/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md b/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md
new file mode 100644
index 0000000000000..8fe6e438de162
--- /dev/null
+++ b/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md
@@ -0,0 +1,106 @@
+---
+title: 审计注解
+weight: 1
+---

此页面是 kubernetes.io 命名空间中的审计注解的参考文档。
这些注解会被应用到 `audit.k8s.io` API 组中的 `Event` 对象中。

{{< note >}}
下列注解并未用在 Kubernetes API 中。
当你在集群中[启用审计](/zh/docs/tasks/debug-application-cluster/audit/)时,审计事件的数据将通过
`audit.k8s.io` API 组中的 `Event` 对象来记录。
注解会被应用到审计事件中。审计事件与
[Event API](/docs/reference/kubernetes-api/cluster-resources/event-v1/)(`events.k8s.io` API 组)中的对象不同。
{{< /note >}}

## pod-security.kubernetes.io/exempt

示例:`pod-security.kubernetes.io/exempt: namespace`

此注解的值**必须**是 `user`、`namespace`、`runtimeClass` 之一,对应
[Pod 安全性豁免](/zh/docs/concepts/security/pod-security-admission/#exemptions)维度。
此注解标示了 Pod 安全性豁免的维度。

## pod-security.kubernetes.io/enforce-policy

示例:`pod-security.kubernetes.io/enforce-policy: restricted:latest`

此注解的值**必须**是 `privileged:<VERSION>`、`baseline:<VERSION>`、`restricted:<VERSION>`
之一,对应 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。
`<VERSION>` **必须**是 `latest` 或一个以 `v<MAJOR>.<MINOR>` 格式表示的有效的 Kubernetes 版本号。
此注解标示了 Pod 安全性准入过程中执行批准或拒绝的级别。

更多信息请查阅 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)。

## pod-security.kubernetes.io/audit-violations

示例:`pod-security.kubernetes.io/audit-violations: would violate
PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container
"example" must set securityContext.allowPrivilegeEscalation=false), ...`

此注解详细描述了一次审计策略的违背信息,其中包含了所触犯的
[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)级别以及具体的策略。

更多信息请查阅 [Pod
安全性标准](/zh/docs/concepts/security/pod-security-standards)。 \ No newline at end of file From 88f20c03b567c502a56e904b84ee006c8f1a4818 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Mon, 21 Mar 2022 10:58:23 +0800 Subject: [PATCH 042/138] [zh] Update namespace Signed-off-by: xin.li --- content/zh/docs/reference/glossary/namespace.md | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/content/zh/docs/reference/glossary/namespace.md b/content/zh/docs/reference/glossary/namespace.md index 27e2e5018fcbc..6ef350412c910 100644 --- a/content/zh/docs/reference/glossary/namespace.md +++ b/content/zh/docs/reference/glossary/namespace.md @@ -27,16 +27,19 @@ tags: --> -名字空间是 Kubernetes 为了在同一物理集群上支持多个虚拟集群而使用的一种抽象。 +名字空间是 Kubernetes 用来支持隔离单个 {{< glossary_tooltip text="集群" term_id="cluster" >}}中的资源组的一种抽象。 -名字空间用来组织集群中对象,并为集群资源划分提供了一种方法。同一名字空间内的资源名称必须唯一,但跨名字空间时不作要求。 +名字空间用来组织集群中对象,并为集群资源划分提供了一种方法。 +同一名字空间内的资源名称必须唯一,但跨名字空间时不作要求。 +基于名字空间的作用域限定仅适用于名字空间作用域的对象(例如 Deployment、Services 等), +而不适用于集群作用域的对象(例如 StorageClass、Node、PersistentVolume 等)。 在一些文档里名字空间也称为命名空间。 From ded133d876b8ca3a03a8e6613722afc7d1f01000 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Mon, 21 Mar 2022 11:11:19 +0800 Subject: [PATCH 043/138] [zh] Update pod-disruption-budget Signed-off-by: xin.li --- .../reference/glossary/pod-disruption-budget.md | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/content/zh/docs/reference/glossary/pod-disruption-budget.md b/content/zh/docs/reference/glossary/pod-disruption-budget.md index a900db749d021..444bec87e35c5 100644 --- a/content/zh/docs/reference/glossary/pod-disruption-budget.md +++ b/content/zh/docs/reference/glossary/pod-disruption-budget.md @@ -35,6 +35,15 @@ tags: --> - [Pod Disruption Budget](/zh/docs/concepts/workloads/pods/disruptions/) 使应用所有者能够为多实例应用创建一个对象,来确保一定数量的具有指定标签的 Pod 在任何时候都不会被主动驱逐。 PDB 无法防止非主动的中断,但是会计入预算(budget)。 \ No newline at end of file + [Pod 干扰预算(Pod Disruption Budget,PDB)](/zh/docs/concepts/workloads/pods/disruptions/) + 使应用所有者能够为多实例应用创建一个对象,来确保一定数量的具有指定标签的 Pod 在任何时候都不会被主动驱逐。 + +PDB 无法防止非主动的中断,但是会计入预算(budget)。 From 2543983f4e7a324b0d4c222cf41b5494b471162a Mon Sep 17 00:00:00 2001 From: cpanato Date: Mon, 21 Mar 2022 16:59:56 +0100 Subject: [PATCH 044/138] update patch calendar for April cycle Signed-off-by: cpanato --- content/en/releases/patch-releases.md | 5 ++++- data/releases/schedule.yaml | 27 ++++++++++++++++++--------- 2 files changed, 22 insertions(+), 10 deletions(-) diff --git a/content/en/releases/patch-releases.md b/content/en/releases/patch-releases.md index e956012593694..3358da7171368 100644 --- a/content/en/releases/patch-releases.md +++ b/content/en/releases/patch-releases.md @@ -78,10 +78,10 @@ releases may also occur in between these. | Monthly Patch Release | Cherry Pick Deadline | Target date | | --------------------- | -------------------- | ----------- | -| March 2022 | 2022-03-11 | 2022-03-16 | | April 2022 | 2022-04-08 | 2022-04-13 | | May 2022 | 2022-05-13 | 2022-05-18 | | June 2022 | 2022-06-10 | 2022-06-15 | +| July 2022 | 2022-07-08 | 2022-07-13 | ## Detailed Release History for Active Branches @@ -93,6 +93,7 @@ End of Life for **1.23** is **2023-02-28**. 
| Patch Release | Cherry Pick Deadline | Target Date | Note | |---------------|----------------------|-------------|------| +| 1.23.6 | 2022-04-08 | 2022-04-13 | | | 1.23.5 | 2022-03-11 | 2022-03-16 | | | 1.23.4 | 2022-02-11 | 2022-02-16 | | | 1.23.3 | 2022-01-24 | 2022-01-25 | [Out-of-Band Release](https://groups.google.com/u/2/a/kubernetes.io/g/dev/c/Xl1sm-CItaY) | @@ -107,6 +108,7 @@ End of Life for **1.22** is **2022-10-28** | Patch Release | Cherry Pick Deadline | Target Date | Note | |---------------|----------------------|-------------|------| +| 1.22.9 | 2022-04-08 | 2022-04-13 | | | 1.22.8 | 2022-03-11 | 2022-03-16 | | | 1.22.7 | 2022-02-11 | 2022-02-16 | | | 1.22.6 | 2022-01-14 | 2022-01-19 | | @@ -124,6 +126,7 @@ End of Life for **1.21** is **2022-06-28** | Patch Release | Cherry Pick Deadline | Target Date | Note | | ------------- | -------------------- | ----------- | ---------------------------------------------------------------------- | +| 1.21.12 | 2022-04-08 | 2022-04-13 | | | 1.21.11 | 2022-03-11 | 2022-03-16 | | | 1.21.10 | 2022-02-11 | 2022-02-16 | | | 1.21.9 | 2022-01-14 | 2022-01-19 | | diff --git a/data/releases/schedule.yaml b/data/releases/schedule.yaml index 94f3d0d821a84..72c0d01b893ab 100644 --- a/data/releases/schedule.yaml +++ b/data/releases/schedule.yaml @@ -1,11 +1,14 @@ schedules: - release: 1.23 releaseDate: 2021-12-07 - next: 1.23.5 - cherryPickDeadline: 2022-03-11 - targetDate: 2022-03-16 + next: 1.23.6 + cherryPickDeadline: 2022-04-08 + targetDate: 2022-04-13 endOfLifeDate: 2023-02-28 previousPatches: + - release: 1.23.5 + cherryPickDeadline: 2022-03-11 + targetDate: 2022-03-16 - release: 1.23.4 cherryPickDeadline: 2022-02-11 targetDate: 2022-02-16 @@ -21,11 +24,14 @@ schedules: targetDate: 2021-12-16 - release: 1.22 releaseDate: 2021-08-04 - next: 1.22.8 - cherryPickDeadline: 2022-03-11 - targetDate: 2022-03-16 + next: 1.22.9 + cherryPickDeadline: 2022-04-08 + targetDate: 2022-04-13 endOfLifeDate: 2022-10-28 previousPatches: + - release: 1.22.8 + cherryPickDeadline: 2022-03-11 + targetDate: 2022-03-16 - release: 1.22.7 cherryPickDeadline: 2022-02-11 targetDate: 2022-02-16 @@ -49,11 +55,14 @@ schedules: targetDate: 2021-08-19 - release: 1.21 releaseDate: 2021-04-08 - next: 1.21.11 - cherryPickDeadline: 2022-03-11 - targetDate: 2022-03-16 + next: 1.21.12 + cherryPickDeadline: 2022-04-08 + targetDate: 2022-04-13 endOfLifeDate: 2022-06-28 previousPatches: + - release: 1.21.11 + cherryPickDeadline: 2022-03-11 + targetDate: 2022-03-16 - release: 1.21.10 cherryPickDeadline: 2022-02-11 targetDate: 2022-02-16 From a578683f1304aa8f3bc7d33d458f105406d16f30 Mon Sep 17 00:00:00 2001 From: Marcelo Dellacroce Mansur Date: Mon, 21 Mar 2022 15:43:07 -0300 Subject: [PATCH 045/138] docs(pt-br): translate dockershim removal faq to pt-br --- .../2022-02-17-updated-dockershim-faq.md | 208 ++++++++++++++++++ 1 file changed, 208 insertions(+) create mode 100644 content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md diff --git a/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md new file mode 100644 index 0000000000000..1fdc271cf68a0 --- /dev/null +++ b/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md @@ -0,0 +1,208 @@ +--- +layout: blog +title: "Atualizado: Perguntas frequentes (FAQ) sobre a remoção do Dockershim" +date: 2022-02-17 +slug: dockershim-faq +aliases: [ '/dockershim' ] +--- + +**Esta é uma atualização do artigo original [FAQ sobre a depreciação do 
Dockershim](/blog/2020/12/02/dockershim-faq/), +publicado no final de 2020.** + +Este documento aborda algumas perguntas frequentes sobre a +descontinuação e remoção do _dockershim_, que foi +[anunciado](/blog/2020/12/08/kubernetes-1-20-release-announcement/) +como parte do lançamento do Kubernetes v1.20. Para obter mais detalhes sobre +o que isso significa, confira a postagem do blog +[Não entre em pânico: Kubernetes e Docker](/pt-br/blog/2020/12/02/dont-panic-kubernetes-and-docker/). + +Além disso, você pode ler [verifique se a remoção do dockershim afeta você](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) +para determinar qual impacto a remoção do _dockershim_ teria para você +ou para sua organização. + +Como o lançamento do Kubernetes 1.24 se tornou iminente, estamos trabalhando bastante para tentar fazer uma transição suave. + +- Escrevemos uma postagem no blog detalhando nosso [compromisso e os próximos passos](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/). +- Acreditamos que não há grandes obstáculos para a migração para [outros agentes de execução de contêiner](/docs/setup/production-environment/container-runtimes/#container-runtimes). +- Há também um guia [Migrando do dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/) disponível. +- Também criamos uma página para listar + [artigos sobre a remoção do dockershim e sobre o uso de agentes de execução compatíveis com CRI](/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/). Essa lista inclui alguns dos documentos já mencionados e também + abrange fontes externas selecionadas (incluindo guias de fornecedores). + +### Por que o _dockershim_ está sendo removido do Kubernetes? + +As primeiras versões do Kubernetes funcionavam apenas com um ambiente de execução de contêiner específico: +Docker Engine. Mais tarde, o Kubernetes adicionou suporte para trabalhar com outros agentes de execução de contêiner. +O padrão CRI (_Container Runtime Interface_ ou Interface de Agente de Execução de Containers) foi [criado](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) para +habilitar a interoperabilidade entre orquestradores (como Kubernetes) e diferentes agentes +de execução de contêiner. +O Docker Engine não implementa essa interface (CRI), então o projeto Kubernetes criou um +código especial para ajudar na transição, e tornou esse código _dockershim_ parte do projeto +Kubernetes. + +O código _dockershim_ sempre foi destinado a ser uma solução temporária (daí o nome: _shim_). +Você pode ler mais sobre a discussão e o planejamento da comunidade na +[Proposta de remoção do Dockershim para aprimoramento do Kubernetes][drkep]. +Na verdade, manter o _dockershim_ se tornou um fardo pesado para os mantenedores do Kubernetes. + +Além disso, recursos que são amplamente incompatíveis com o _dockershim_, como +_cgroups v2_ e _namespaces_ de usuário estão sendo implementados nos agentes de execução de CRI +mais recentes. A remoção do suporte para o _dockershim_ permitirá um maior +desenvolvimento nessas áreas. + +[drkep]: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim + +### Ainda posso usar o Docker Engine no Kubernetes 1.23? + +Sim, a única coisa que mudou na versão 1.20 é a presença de um aviso no log de inicialização +do [kubelet] se estiver usando o Docker Engine como agente de execução de contêiner. +Você verá este aviso em todas as versões até 1.23. 
A remoção do _dockershim_ ocorre no Kubernetes 1.24. + +[kubelet]: /docs/reference/command-line-tools-reference/kubelet/ + +### Quando o _dockershim_ será removido? + +Dado o impacto dessa mudança, estamos definindo um cronograma de depreciação mais longo. +A remoção do _dockershim_ está agendada para o Kubernetes v1.24, consulte a +[Proposta de remoção do Dockershim para aprimoramento do Kubernetes][drkep]. +O projeto Kubernetes trabalhará em estreita colaboração com fornecedores e outros ecossistemas para garantir +uma transição suave e avaliará os acontecimentos à medida que a situação for evoluindo. + +### Ainda posso usar o Docker Engine como meu agente de execução do contêiner? + +Primeiro, se você usa o Docker em seu próprio PC para desenvolver ou testar contêineres: nada muda. +Você ainda pode usar o Docker localmente, independentemente dos agentes de execução de contêiner que +você usa em seus Clusters Kubernetes. Os contêineres tornam esse tipo de interoperabilidade possível. + +Mirantis e Docker [comprometeram-se][mirantis] a manter um adaptador substituto para o +Docker Engine, e a manter este adaptador mesmo após o _dockershim_ ser removido +do Kubernetes. O adaptador substituto é chamado [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd). + +[mirantis]: https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/ + +### Minhas imagens de contêiner existentes ainda funcionarão? + +Sim, as imagens produzidas a partir do `docker build` funcionarão com todas as implementações do CRI. +Todas as suas imagens existentes ainda funcionarão exatamente da mesma forma. + +#### E as imagens privadas? + +Sim. Todos os agentes de execução de CRI são compatíveis com as mesmas configurações de segredos usadas no +Kubernetes, seja por meio do PodSpec ou ServiceAccount. + +### Docker e contêineres são a mesma coisa? + +Docker popularizou o padrão de contêineres Linux e tem sido fundamental no +desenvolvimento desta tecnologia. No entanto, os contêineres já existiam +no Linux há muito tempo. O ecossistema de contêineres cresceu para ser muito +mais abrangente do que apenas Docker. Padrões como o OCI e o CRI ajudaram muitas +ferramentas a crescer e prosperar no nosso ecossistema, alguns substituindo +aspectos do Docker, enquanto outros aprimoram funcionalidades já existentes. + +### Existem exemplos de pessoas que usam outros agentes de execução de contêineres em produção hoje? + +Todos os artefatos produzidos pelo projeto Kubernetes (binários Kubernetes) são validados +a cada lançamento de versão. + +Além disso, o projeto [kind] vem usando containerd há algum tempo e tem +visto uma melhoria na estabilidade para seu caso de uso. Kind e containerd são executados +várias vezes todos os dias para validar quaisquer alterações na base de código do Kubernetes. +Outros projetos relacionados seguem um padrão semelhante, demonstrando a estabilidade e +usabilidade de outros agentes de execução de contêiner. Como exemplo, o OpenShift 4.x utiliza +o agente de execução [CRI-O] em produção desde junho de 2019. + +Para outros exemplos e referências, dê uma olhada em projetos adeptos do containerd e +CRI-O, dois agentes de execução de contêineres sob o controle da _Cloud Native Computing Foundation_ +([CNCF]). 
+ +- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md) +- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md) + +[CRI-O]: https://cri-o.io/ +[kind]: https://kind.sigs.k8s.io/ +[CNCF]: https://cncf.io + +### As pessoas continuam referenciando OCI, o que é isso? + +OCI significa _[Open Container Initiative]_ (ou Iniciativa Open Source de Contêineres), que padronizou muitas das +interfaces entre ferramentas e tecnologias de contêiner. Eles mantêm uma +especificação padrão para imagens de contêiner (OCI image-spec) e para +contêineres em execução (OCI runtime-spec). Eles também mantêm uma implementação real +da especificação do agente de execução na forma de [runc], que é o agente de execução padrão +para ambos [containerd] e [CRI-O]. O CRI baseia-se nessas especificações de baixo nível para +fornecer um padrão de ponta a ponta para gerenciar contêineres. + +[Open Container Initiative]: https://opencontainers.org/about/overview/ +[runc]: https://github.com/opencontainers/runc +[containerd]: https://containerd.io/ + +### Qual implementação de CRI devo usar? + +Essa é uma pergunta complexa e depende de muitos fatores. Se você estiver +trabalhando com Docker, mudar para containerd deve ser uma troca relativamente fácil e +terá um desempenho estritamente melhor e menos sobrecarga. No entanto, nós encorajamos você a +explorar todas as opções do [cenário CNCF], pois outro agente de execução de contêiner +pode funcionar ainda melhor para o seu ambiente. + +[cenário CNCF]: https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category + +### O que devo ficar atento ao mudar a minha implementação de CRI utilizada? + +Embora o código de conteinerização base seja o mesmo entre o Docker e a maioria +CRIs (incluindo containerd), existem algumas poucas diferenças. Alguns +pontos a se considerar ao migrar são: + +- Configuração de _log_ +- Limitações de recursos de agentes de execução +- Scripts de provisionamento que chamam o docker ou usam o docker por meio de seu soquete de controle +- Plugins kubectl que exigem CLI do docker ou o soquete de controle +- Ferramentas do projeto Kubernetes que requerem acesso direto ao Docker Engine + (por exemplo: a ferramenta depreciada `kube-imagepuller`) +- Configuração de funcionalidades como `registry-mirrors` e _registries_ inseguros +- Outros scripts de suporte ou _daemons_ que esperam que o Docker Engine esteja disponível e seja executado + fora do Kubernetes (por exemplo, agentes de monitoramento ou segurança) +- GPUs ou hardware especial e como eles se integram ao seu agente de execução e ao Kubernetes + +Se você usa solicitações ou limites de recursos do Kubernetes ou usa DaemonSets para coleta de logs +em arquivos, eles continuarão a funcionar da mesma forma. Mas se você personalizou +sua configuração `dockerd`, você precisará adaptá-la para seu novo agente de execução de +contêiner assim que possível. + +Outro aspecto a ser observado é que ferramentas para manutenção do sistema ou execuções dentro de um +contêiner no momento da criação de imagens podem não funcionar mais. Para o primeiro, a ferramenta +[`crictl`][cr] pode ser utilizada como um substituto natural (veja +[migrando do docker cli para o crictl](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl)) +e para o último, você pode usar novas opções de construções de contêiner, como [img], [buildah], +[kaniko], ou [buildkit-cli-for-kubectl] que não requerem Docker. 
[cr]: https://github.com/kubernetes-sigs/cri-tools
[img]: https://github.com/genuinetools/img
[buildah]: https://github.com/containers/buildah
[kaniko]: https://github.com/GoogleContainerTools/kaniko
[buildkit-cli-for-kubectl]: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl

Para containerd, você pode começar com sua [documentação] para ver quais opções de configuração
estão disponíveis à medida que você vá realizando a migração.

[documentação]: https://github.com/containerd/cri/blob/master/docs/registry.md

Para obter instruções sobre como usar containerd e CRI-O com Kubernetes, consulte a
documentação do Kubernetes em [Agentes de execução de contêineres].

[Agentes de execução de contêineres]: /docs/setup/production-environment/container-runtimes/

### E se eu tiver mais perguntas?

Se você usa uma distribuição do Kubernetes com suporte do fornecedor, pode perguntar a eles sobre
planos de atualização para seus produtos. Para perguntas de usuário final, poste-as
no nosso fórum da comunidade de usuários: https://discuss.kubernetes.io/.

Você também pode conferir a excelente postagem do blog
[Espere, o Docker está depreciado no Kubernetes agora?][dep], uma discussão técnica mais aprofundada
sobre as mudanças.

[dep]: https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m

### Posso ganhar um abraço?

Sim, ainda estamos dando abraços se solicitado. 🤗🤗🤗

From 2176c33e62fef86b07604ada6c0b54c65d01edfb Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20G=C3=BCttler?=
Date: Mon, 21 Mar 2022 21:15:38 +0100
Subject: [PATCH 046/138] second yaml snippet was not aligned to first one

This snippet had less indentation than the previous one. Now the second
snippet is aligned with the first one.
---
 .../access-application-cluster/ingress-minikube.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
index ca723f73cf881..251bebbaeff4e 100644
--- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
+++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
@@ -240,13 +240,13 @@ The following manifest defines an Ingress that sends traffic to your Service via
    following lines at the end:

    ```yaml
-  - path: /v2
-    pathType: Prefix
-    backend:
-      service:
-        name: web2
-        port:
-          number: 8080
+          - path: /v2
+            pathType: Prefix
+            backend:
+              service:
+                name: web2
+                port:
+                  number: 8080
    ```

 1. Apply the changes:

From 35bdfad7af1cdabc493f349cfda29dbeee0ee7bf Mon Sep 17 00:00:00 2001
From: Marcelo Dellacroce Mansur
Date: Mon, 21 Mar 2022 21:16:53 -0300
Subject: [PATCH 047/138] docs(pt-br): typo correction
---
 content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md
index 1fdc271cf68a0..526ec6e344173 100644
--- a/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md
+++ b/content/pt-br/blog/_posts/2022-02-17-updated-dockershim-faq.md
@@ -148,7 +148,7 @@ ### O que devo ficar atento ao mudar a minha implementação de CRI utilizada?
-Embora o código de conteinerização base seja o mesmo entre o Docker e a maioria +Embora o código de conteinerização base seja o mesmo entre o Docker e a maioria dos CRIs (incluindo containerd), existem algumas poucas diferenças. Alguns pontos a se considerar ao migrar são: From bdb898ba52df7bd2b94550d018ed1c70afeddc17 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Mon, 21 Mar 2022 13:44:54 +0800 Subject: [PATCH 048/138] [zh] Update security Signed-off-by: xin.li --- .../reference/issues-security/security.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/content/zh/docs/reference/issues-security/security.md b/content/zh/docs/reference/issues-security/security.md index ba6280825960d..3124098682040 100644 --- a/content/zh/docs/reference/issues-security/security.md +++ b/content/zh/docs/reference/issues-security/security.md @@ -34,11 +34,6 @@ Join the [kubernetes-security-announce](https://groups.google.com/forum/#!forum/ --> 加入 [kubernetes-security-announce](https://groups.google.com/forum/#!forum/kubernetes-security-announce) 组,以获取关于安全性和主要 API 公告的电子邮件。 - -你也可以使用[此链接](https://groups.google.com/forum/feed/kubernetes-security-announce/msgs/rss_v2_0.xml?num=50) 订阅上述的 RSS 反馈。 - @@ -57,14 +52,18 @@ To make a report, please email the private [security@kubernetes.io](mailto:secur 详细信息电子邮件到[security@kubernetes.io](mailto:security@kubernetes.io)列表。 -你还可以通过电子邮件向私有 [security@kubernetes.io](mailto:security@kubernetes.io) 列表发送电子邮件,邮件中应该包含[所有 Kubernetes 错误报告](https://git.k8s.io/kubernetes/.github/ISSUE_TEMPLATE/bug-report.md)所需的详细信息。 +你还可以通过电子邮件向私有 [security@kubernetes.io](mailto:security@kubernetes.io) +列表发送电子邮件,邮件中应该包含 +[所有 Kubernetes 错误报告](https://github.com/kubernetes/kubernetes/blob/master/.github/ISSUE_TEMPLATE/bug-report.yaml) +所需的详细信息。 -你可以使用[产品安全团队成员](https://git.k8s.io/security/README.md#product-security-committee-psc) -的 GPG 密钥加密你的电子邮件到此列表。使用 GPG 加密不需要公开。 +你可以使用[安全响应委员会成员](https://git.k8s.io/security/README.md#product-security-committee-psc)的 +GPG 密钥加密你的发往邮件列表的邮件。揭示问题时不需要使用 GPG 来加密。 -每个报告在 3 个工作日内由产品安全团队成员确认和分析。这将启动[安全发布过程](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#disclosures)。 +每个报告在 3 个工作日内由安全响应委员会成员确认和分析。这将启动[安全发布过程](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#disclosures)。 -与产品安全团队共享的任何漏洞信息都保留在 Kubernetes 项目中,除非有必要修复该问题,否则不会传播到其他项目。 +与安全响应委员会共享的任何漏洞信息都保留在 Kubernetes 项目中,除非有必要修复该问题,否则不会传播到其他项目。 -公开披露日期由 Kubernetes 产品安全团队和 bug 提交者协商。我们倾向于在用户缓解措施可用时尽快完全披露该 bug。 +公开披露日期由 Kubernetes 安全响应委员会和 bug 提交者协商。 +我们倾向于在能够为用户提供缓解措施之后尽快完全披露该 bug。 + + +`import "k8s.io/apimachinery/pkg/apis/meta/v1"` + + +`ListMeta` 描述了合成资源必须具有的元数据,包括列表和各种状态对象。 +一个资源仅能有 `{ObjectMeta, ListMeta}` 中的一个。 + +
+ + + +- **continue** (string) + + 如果用户对返回的条目数量设置了限制,则 `continue` 可能被设置,表示服务器有更多可用的数据。 + 该值是不透明的,可用于向提供此列表服务的端点发出另一个请求,以检索下一组可用的对象。 + 如果服务器配置已更改或时间已过去几分钟,则可能无法继续提供一致的列表。 + 除非你在错误消息中收到此令牌(token),否则使用此 `continue` 值时返回的 `resourceVersion` + 字段应该和第一个响应中的值是相同的。 + + + +- **remainingItemCount** (int64) + + `remainingItemCount` 是列表中未包含在此列表响应中的后续项目的数量。 + 如果列表请求包含标签或字段选择器,则剩余项目的数量是未知的,并且在序列化期间该字段将保持未设置和省略。 + 如果列表是完整的(因为它没有分块或者这是最后一个块),那么就没有剩余的项目,并且在序列化过程中该字段将保持未设置和省略。 + 早于 v1.15 的服务器不设置此字段。`remainingItemCount` 的预期用途是*估计*集合的大小。 + 客户端不应依赖于设置准确的 `remainingItemCount`。 + + + +- **resourceVersion** (string) + + 标识该对象的服务器内部版本的字符串,客户端可以用该字段来确定对象何时被更改。 + 该值对客户端是不透明的,并且应该原样传回给服务器。该值由系统填充,只读。 + 更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency。 + + + +- **selfLink** (string) + + selfLink 表示此对象的 URL,由系统填充,只读。 + + 已弃用。 Kubernetes 将在 1.20 版本中停止传播该字段,并计划在 1.21 版本中删除该字段。 + + + + + From 4651a6e810fdef3ddfaacd1bd5f6171586da23e7 Mon Sep 17 00:00:00 2001 From: liuzhilin12 <84970609+liuzhilin12@users.noreply.github.com> Date: Tue, 22 Mar 2022 10:37:44 +0800 Subject: [PATCH 050/138] Translate en/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md (#32271) * Translate Reference/glossary/pod-disruption * translate reference/glossary/pod-disruption page * Translate Reference/glossary/pod-disruption * Translate Reference/glossary/pod-disruption * Translate Reference/glossary/pod-disruption * translate en/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md * Delete pod-disruption.md * Translate en/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md * Translate en/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md * Translate en/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md --- .../typed-local-object-reference.md | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md b/content/zh/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md new file mode 100644 index 0000000000000..fead44fe99c0f --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/typed-local-object-reference.md @@ -0,0 +1,79 @@ +--- +api_metadata: + apiVersion: "" + import: "k8s.io/api/core/v1" + kind: "TypedLocalObjectReference" +content_type: "api_reference" +description: "TypedLocalObjectReference 包含足够的信息,可以让你在同一个名称空间中定位指定类型的引用对象。" +title: "TypedLocalObjectReference" +weight: 13 +auto_generated: true +--- + + + + + + +`import "k8s.io/api/core/v1"` + + + +TypedLocalObjectReference 包含足够的信息,可以让你在同一个名称空间中定位特定类型的引用对象。 + + +
+ +- **kind** (string), 必需 + + Kind 是被引用的资源的类型 + +- **name** (string), 必需 + + Name 是被引用的资源的名称 + +- **apiGroup** (string) + + APIGroup 是被引用资源的组。如果不指定 APIGroup,则指定的 Kind 必须在核心 API 组中。对于任何其它第三方类型,都需要 APIGroup。 + + + + + From f1ae390104fd00089782aa87284411917fbbfbee Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Mon, 21 Mar 2022 10:18:23 +0800 Subject: [PATCH 051/138] [zh] Update kubectl.md Signed-off-by: xin.li --- content/zh/docs/reference/glossary/kubectl.md | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/content/zh/docs/reference/glossary/kubectl.md b/content/zh/docs/reference/glossary/kubectl.md index 2c8bde22eb846..c32cd0812dd35 100644 --- a/content/zh/docs/reference/glossary/kubectl.md +++ b/content/zh/docs/reference/glossary/kubectl.md @@ -4,9 +4,10 @@ id: kubectl date: 2018-04-12 full_link: /docs/user-guide/kubectl-overview/ short_description: > - kubectl 是用来和 Kubernetes API 服务器进行通信的命令行工具。 + kubectl 是用来和 Kubernetes 集群进行通信的命令行工具。 aka: +- kubectl tags: - tool - fundamental @@ -19,9 +20,10 @@ id: kubectl date: 2018-04-12 full_link: /docs/user-guide/kubectl-overview/ short_description: > - A command line tool for communicating with a Kubernetes API server. + A command line tool for communicating with a Kubernetes cluster. aka: +- kubectl tags: - tool - fundamental @@ -29,15 +31,17 @@ tags: --> - kubectl 是用来和 {{< glossary_tooltip text="Kubernetes API" term_id="kubernetes-api" >}} 服务器进行通信的命令行工具。 - +kubectl 是使用 Kubernetes API 与 Kubernetes +集群的{{}}进行通信的命令行工具。 -您可以使用 kubectl 创建、检查、更新和删除 Kubernetes 对象。 +你可以使用 `kubectl` 创建、检视、更新和删除 Kubernetes 对象。 From 61849547f2f6a2ade1908165030d43e18d4b454c Mon Sep 17 00:00:00 2001 From: Arhell Date: Tue, 22 Mar 2022 10:38:22 +0200 Subject: [PATCH 052/138] [ja] update audit.md --- content/ja/docs/tasks/debug-application-cluster/audit.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ja/docs/tasks/debug-application-cluster/audit.md b/content/ja/docs/tasks/debug-application-cluster/audit.md index b020b9c0b21e4..c38014b39e023 100644 --- a/content/ja/docs/tasks/debug-application-cluster/audit.md +++ b/content/ja/docs/tasks/debug-application-cluster/audit.md @@ -149,6 +149,7 @@ volumeMounts: 最後に`hostPath`を設定します: ```yaml ... 
+volumes: - name: audit hostPath: path: /etc/kubernetes/audit-policy.yaml @@ -158,7 +159,6 @@ volumeMounts: hostPath: path: /var/log/audit.log type: FileOrCreate - ``` ### Webhookバックエンド From 64fcfda8f16c7e98c560ec73feff2f43a5b72208 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Tue, 22 Mar 2022 18:22:05 +0800 Subject: [PATCH 053/138] [zh] Update audit-annotions.md Signed-off-by: xin.li --- content/zh/docs/reference/labels-annotations-taints/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh/docs/reference/labels-annotations-taints/_index.md b/content/zh/docs/reference/labels-annotations-taints/_index.md index 3281cf10be5f0..b3585743e8084 100644 --- a/content/zh/docs/reference/labels-annotations-taints/_index.md +++ b/content/zh/docs/reference/labels-annotations-taints/_index.md @@ -1033,7 +1033,7 @@ Pod 的 [`.spec.securityContext`](/docs/reference/kubernetes-api/workload-resour 你设定的配置会被应用到该 Pod 的所有容器中。 -### container.seccomp.security.alpha.kubernetes.io/[NAME] {#container-seccomp-security-alpha-kubernetes-io} +### container.seccomp.security.alpha.kubernetes.io/[NAME](已弃用){#container-seccomp-security-alpha-kubernetes-io} 此注解已于 Kubernetes v1.19 起被弃用,且将于 v1.25 失效。 [使用 seccomp 限制容器的系统调用](/zh/docs/tutorials/security/seccomp/)教程会指导你完成对 From 941151f36206e5d857d7bba6a37b305dad3dc6fb Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Tue, 22 Mar 2022 18:28:05 +0800 Subject: [PATCH 054/138] [zh] Update jsonpath Signed-off-by: xin.li --- content/zh/docs/reference/kubectl/jsonpath.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/content/zh/docs/reference/kubectl/jsonpath.md b/content/zh/docs/reference/kubectl/jsonpath.md index 9d9c82b9744ef..82385b700966b 100644 --- a/content/zh/docs/reference/kubectl/jsonpath.md +++ b/content/zh/docs/reference/kubectl/jsonpath.md @@ -1,13 +1,11 @@ --- title: JSONPath 支持 content_type: concept -weight: 25 --- From 09ae2ba0f63c30e139d91e9a3edd797b28d38722 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Tue, 22 Mar 2022 18:23:10 +0800 Subject: [PATCH 055/138] Clarify topology aware hints needs explicit enabling --- .../concepts/services-networking/topology-aware-hints.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/content/en/docs/concepts/services-networking/topology-aware-hints.md b/content/en/docs/concepts/services-networking/topology-aware-hints.md index 4cc4f4aa5e9af..cd02b4015ca17 100644 --- a/content/en/docs/concepts/services-networking/topology-aware-hints.md +++ b/content/en/docs/concepts/services-networking/topology-aware-hints.md @@ -19,6 +19,12 @@ those network endpoints can be routed closer to where it originated. For example, you can route traffic within a locality to reduce costs, or to improve network performance. +{{< note >}} +The "topology-aware hints" feature is at Beta stage and it is **NOT** enabled +by default. To try out this feature, you have to enable the `TopologyAwareHints` +[feature gate](/docs/reference/command-line-tools-reference/feature-gates/). 
+{{< /note >}} + ## Motivation From dea3143e8fa10d62d0671d2d3c3508ff363fed09 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Tue, 22 Mar 2022 22:11:12 +0800 Subject: [PATCH 056/138] [zh] Update labels-annotations-taints/_index.md Signed-off-by: xin.li --- content/zh/docs/reference/labels-annotations-taints/_index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh/docs/reference/labels-annotations-taints/_index.md b/content/zh/docs/reference/labels-annotations-taints/_index.md index 3281cf10be5f0..b3585743e8084 100644 --- a/content/zh/docs/reference/labels-annotations-taints/_index.md +++ b/content/zh/docs/reference/labels-annotations-taints/_index.md @@ -1033,7 +1033,7 @@ Pod 的 [`.spec.securityContext`](/docs/reference/kubernetes-api/workload-resour 你设定的配置会被应用到该 Pod 的所有容器中。 -### container.seccomp.security.alpha.kubernetes.io/[NAME] {#container-seccomp-security-alpha-kubernetes-io} +### container.seccomp.security.alpha.kubernetes.io/[NAME](已弃用){#container-seccomp-security-alpha-kubernetes-io} 此注解已于 Kubernetes v1.19 起被弃用,且将于 v1.25 失效。 [使用 seccomp 限制容器的系统调用](/zh/docs/tutorials/security/seccomp/)教程会指导你完成对 From bde860ddcde42d793065c3e99c800ff9e2585ca0 Mon Sep 17 00:00:00 2001 From: Abigail McCarthy <20771501+a-mccarthy@users.noreply.github.com> Date: Tue, 22 Mar 2022 10:25:23 -0400 Subject: [PATCH 057/138] Update content/en/docs/concepts/containers/images.md Co-authored-by: Tim Bannister --- content/en/docs/concepts/containers/images.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 70f2e959cec81..c07880a4580d2 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -278,7 +278,9 @@ Kubernetes supports specifying container image registry keys on a Pod. #### Creating a Secret with a Docker config -Using a Docker registry as an example, run the following command, substituting the appropriate uppercase values: +You need to know the username, registry password and client email address for authenticating +to the registry, as well as its hostname. +Run the following command, substituting the appropriate uppercase values: ```shell kubectl create secret docker-registry --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL From 7add16b378794ff8f9c49f833bd55cbb31ca86b0 Mon Sep 17 00:00:00 2001 From: Kirill Roskolii Date: Wed, 23 Mar 2022 10:13:58 +1300 Subject: [PATCH 058/138] Add proxy-url example to kubectl documentation (#32245) * Add proxy-url example * Fix typo Co-authored-by: Jihoon Seo <46767780+jihoon-seo@users.noreply.github.com> Co-authored-by: Jihoon Seo <46767780+jihoon-seo@users.noreply.github.com> --- .../organize-cluster-access-kubeconfig.md | 21 +++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md index b27fcdee6153d..713592cf989ec 100644 --- a/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md +++ b/content/en/docs/concepts/configuration/organize-cluster-access-kubeconfig.md @@ -148,7 +148,28 @@ File references on the command line are relative to the current working director In `$HOME/.kube/config`, relative paths are stored relatively, and absolute paths are stored absolutely. 
+## Proxy +You can configure `kubectl` to use proxy by setting `proxy-url` in the kubeconfig file, like: + +```yaml +apiVersion: v1 +kind: Config + +proxy-url: https://proxy.host:3128 + +clusters: +- cluster: + name: development + +users: +- name: developer + +contexts: +- context: + name: development + +``` ## {{% heading "whatsnext" %}} From 451eb18dcf05802a604e6b640051d8ca398e2c42 Mon Sep 17 00:00:00 2001 From: Priyanshu Ahlawat <84102724+PriyanshuAhlawat@users.noreply.github.com> Date: Wed, 23 Mar 2022 06:37:59 +0530 Subject: [PATCH 059/138] Update kubelet-integration.md (#32228) * Update kubelet-integration.md * Update kubelet-integration.md --- .../tools/kubeadm/kubelet-integration.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md index 59477d944fa63..46252999197c0 100644 --- a/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md +++ b/content/en/docs/setup/production-environment/tools/kubeadm/kubelet-integration.md @@ -208,6 +208,7 @@ The DEB and RPM packages shipped with the Kubernetes releases are: | Package name | Description | |--------------|-------------| | `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. | -| `kubelet` | Installs the kubelet binary in `/usr/bin` and CNI binaries in `/opt/cni/bin`. | +| `kubelet` | Installs the `/usr/bin/kubelet` binary. | | `kubectl` | Installs the `/usr/bin/kubectl` binary. | | `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-sigs/cri-tools). | +| `kubernetes-cni` | Installs the `/opt/cni/bin` binaries from the [plugins git repository](https://github.com/containernetworking/plugins). | From 3355f284cf4263212de2fc15f1d262c030d40f87 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Tue, 22 Mar 2022 22:20:38 +0800 Subject: [PATCH 060/138] [zh] Modify short_description about namespace Signed-off-by: xin.li --- content/en/docs/reference/glossary/namespace.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/glossary/namespace.md b/content/en/docs/reference/glossary/namespace.md index 33a97d90a17b7..02381c4ee615e 100644 --- a/content/en/docs/reference/glossary/namespace.md +++ b/content/en/docs/reference/glossary/namespace.md @@ -4,7 +4,7 @@ id: namespace date: 2018-04-12 full_link: /docs/concepts/overview/working-with-objects/namespaces short_description: > - An abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster. + An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster. aka: tags: From cb4183dfb088de05dd0b2bfb91e030242c40f5d9 Mon Sep 17 00:00:00 2001 From: twilight0620 <69295632+twilight0620@users.noreply.github.com> Date: Wed, 23 Mar 2022 10:55:59 +0800 Subject: [PATCH 061/138] [zh] Resynchronize Chinese content about policies (#32272) * [zh] chinese content is different from english content about policies. 
* [zh] review comment modify * [zh] review comment modify --- .../zh/docs/reference/scheduling/policies.md | 211 ++---------------- 1 file changed, 22 insertions(+), 189 deletions(-) diff --git a/content/zh/docs/reference/scheduling/policies.md b/content/zh/docs/reference/scheduling/policies.md index 41f115be080b6..dbc6a958606be 100644 --- a/content/zh/docs/reference/scheduling/policies.md +++ b/content/zh/docs/reference/scheduling/policies.md @@ -1,209 +1,42 @@ --- title: 调度策略 content_type: concept -weight: 10 +sitemap: + priority: 0.2 # Scheduling priorities are deprecated --- - - - -{{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}} -根据调度策略指定的*断言(predicates)*和*优先级(priorities)* -分别对节点进行[过滤和打分](/zh/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation)。 - - -你可以通过执行 `kube-scheduler --policy-config-file ` 或 -`kube-scheduler --policy-configmap ` -设置并使用[调度策略](/zh/docs/reference/config-api/kube-scheduler-policy-config.v1/)。 - - - - - -## 断言 {#predicates} - - - -以下*断言*实现了过滤接口: - - -- `PodFitsHostPorts`:检查 Pod 请求的端口(网络协议类型)在节点上是否可用。 - - -- `PodFitsHost`:检查 Pod 是否通过主机名指定了 Node。 - - -- `PodFitsResources`:检查节点的空闲资源(例如,CPU和内存)是否满足 Pod 的要求。 - - -- `MatchNodeSelector`:检查 Pod 的节点{{< glossary_tooltip text="选择算符" term_id="selector" >}} - 和节点的 {{< glossary_tooltip text="标签" term_id="label" >}} 是否匹配。 - - -- `NoVolumeZoneConflict`:给定该存储的故障区域限制, - 评估 Pod 请求的{{< glossary_tooltip text="卷" term_id="volume" >}}在节点上是否可用。 - - -- `NoDiskConflict`:根据 Pod 请求的卷是否在节点上已经挂载,评估 Pod 和节点是否匹配。 - - -- `MaxCSIVolumeCount`:决定附加 {{< glossary_tooltip text="CSI" term_id="csi" >}} 卷的数量,判断是否超过配置的限制。 - - -- `PodToleratesNodeTaints`:检查 Pod 的{{< glossary_tooltip text="容忍" term_id="toleration" >}} - 是否能容忍节点的{{< glossary_tooltip text="污点" term_id="taint" >}}。 - - -- `CheckVolumeBinding`:基于 Pod 的卷请求,评估 Pod 是否适合节点,这里的卷包括绑定的和未绑定的 - {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}} 都适用。 - - - -## 优先级 {#priorities} - - -以下*优先级*实现了打分接口: - - -- `SelectorSpreadPriority`:属于同一 {{< glossary_tooltip text="Service" term_id="service" >}}、 - {{< glossary_tooltip term_id="statefulset" >}} 或 - {{< glossary_tooltip term_id="replica-set" >}} 的 Pod,跨主机部署。 - - -- `InterPodAffinityPriority`:实现了 [Pod 间亲和性与反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)的优先级。 - -- `LeastRequestedPriority`:偏向最少请求资源的节点。 - 换句话说,节点上的 Pod 越多,使用的资源就越多,此策略给出的排名就越低。 +In Kubernetes versions before v1.23, a scheduling policy can be used to specify the *predicates* and *priorities* process. For example, you can set a scheduling policy by +running `kube-scheduler --policy-config-file ` or `kube-scheduler --policy-configmap `. 
- -- `MostRequestedPriority`:支持最多请求资源的节点。 - 该策略将 Pod 调度到整体工作负载所需的最少的一组节点上。 - - -- `RequestedToCapacityRatioPriority`:使用默认的打分方法模型,创建基于 ResourceAllocationPriority 的 requestedToCapacity。 - - -- `BalancedResourceAllocation`:偏向平衡资源使用的节点。 - - -- `NodePreferAvoidPodsPriority`:根据节点的注解 `scheduler.alpha.kubernetes.io/preferAvoidPods` 对节点进行优先级排序。 - 你可以使用它来暗示两个不同的 Pod 不应在同一节点上运行。 - - -- `NodeAffinityPriority`:根据节点亲和中 PreferredDuringSchedulingIgnoredDuringExecution 字段对节点进行优先级排序。 - 你可以在[将 Pod 分配给节点](/zh/docs/concepts/scheduling-eviction/assign-pod-node/)中了解更多。 - - -- `TaintTolerationPriority`:根据节点上无法忍受的污点数量,给所有节点进行优先级排序。 - 此策略会根据排序结果调整节点的等级。 - - -- `ImageLocalityPriority`:偏向已在本地缓存 Pod 所需容器镜像的节点。 - - -- `ServiceSpreadingPriority`:对于给定的 Service,此策略旨在确保该 Service 关联的 Pod 在不同的节点上运行。 - 它偏向把 Pod 调度到没有该服务的节点。 - 整体来看,Service 对于单个节点故障变得更具弹性。 - - -- `EqualPriority`:给予所有节点相等的权重。 - - -- `EvenPodsSpreadPriority`:实现了 [Pod 拓扑扩展约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/)的优先级排序。 +在 Kubernetes v1.23 版本之前,可以使用调度策略来指定 *predicates* 和 *priorities* 进程。 +例如,可以通过运行 `kube-scheduler --policy-config-file ` 或者 + `kube-scheduler --policy-configmap ` 设置调度策略。 +但是从 Kubernetes v1.23 版本开始,不再支持这种调度策略。 +同样地也不支持相关的 `policy-config-file`、 `policy-configmap`、 `policy-configmap-namespace` 以及 `use-legacy-policy-config` 标志。 +你可以通过使用 [调度配置](/zh/docs/reference/scheduling/config/)来实现类似的行为。 ## {{% heading "whatsnext" %}} - -* 了解[调度](/zh/docs/concepts/scheduling-eviction/kube-scheduler/) -* 了解 [kube-scheduler 配置](/zh/docs/reference/scheduling/config/) -* 阅读 [kube-scheduler 配置参考 (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta2) -* 阅读 [kube-scheduler 策略参考 (v1)](/zh/docs/reference/config-api/kube-scheduler-policy-config.v1/) + +* 了解 [调度](/zh/docs/concepts/scheduling-eviction/kube-scheduler/)。 +* 了解 [kube-scheduler 配置](/zh/docs/reference/scheduling/config/)。 +* 阅读 [kube-scheduler 配置参考(v1beta3)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta3/)。 + From d62ce868ad3a5af77ba77b0a0311445249c1c218 Mon Sep 17 00:00:00 2001 From: twilight0620 <69295632+twilight0620@users.noreply.github.com> Date: Wed, 23 Mar 2022 10:57:58 +0800 Subject: [PATCH 062/138] [zh] Translate delete-options page into Chinese (#32274) * [zh] Translate delete-options page into Chinese * [zh] review comment modify --- .../common-definitions/delete-options.md | 139 ++++++++++++++++++ 1 file changed, 139 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/delete-options.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/delete-options.md b/content/zh/docs/reference/kubernetes-api/common-definitions/delete-options.md new file mode 100644 index 0000000000000..a4bd2e0405121 --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/delete-options.md @@ -0,0 +1,139 @@ +--- +api_metadata: + apiVersion: "" + import: "k8s.io/apimachinery/pkg/apis/meta/v1" + kind: "DeleteOptions" +content_type: "api_reference" +description: "删除 API 对象时可能会提供删除选项。" +title: "删除选项" +weight: 1 +auto_generated: true +--- + + + +`import "k8s.io/apimachinery/pkg/apis/meta/v1"` + + +删除 API 对象时可能会提供 DeleteOptions。 + +
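+
+As a rough illustration (the Deployment name `example`, the `default` namespace, and the
+proxy port below are placeholders, not part of the API definition), a client might pass
+DeleteOptions in the body of a DELETE request like this:
+
+```shell
+kubectl proxy --port=8080 &
+curl -X DELETE "http://localhost:8080/apis/apps/v1/namespaces/default/deployments/example" \
+  -H "Content-Type: application/json" \
+  -d '{"kind": "DeleteOptions", "apiVersion": "v1", "propagationPolicy": "Foreground"}'
+```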
+ + + +- **apiVersion** (string) + + `APIVersion` 定义对象表示的版本化模式。 + 服务器应将已识别的模式转换为最新的内部值,并可能拒绝无法识别的值。 + 更多信息:https ://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + + +- **dryRun** ([]string) + + 该值如果存在,则表示不应保留修改。 + 无效或无法识别的 `dryRun` 指令将导致错误响应并且不会进一步处理请求。有效值为: + + - `All`:处理所有试运行阶段(Dry Run Stages) + + + +- **gracePeriodSeconds** (int64) + + 表示对象被删除之前的持续时间(以秒为单位)。 + 值必须是非负整数。零值表示立即删除。如果此值为 `nil`,则将使用指定类型的默认宽限期。如果未指定,则为每个对象的默认值。 + + + +- **kind** (string) + + `kind` 是一个字符串值,表示此对象代表的 REST 资源。 + 服务器可以从客户端提交请求的端点推断出此值。此值无法更新,是驼峰的格式。 + 更多信息:https ://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 。 + + + +- **orphanDependents** (boolean) + + 已弃用:该字段将在 1.7 中弃用,请使用 `propagationPolicy` 字段。 + 该字段表示依赖对象是否应该是孤儿。如果为 true/false,对象的 finalizers 列表中会被添加上或者移除掉 “orphan” 终结器(Finalizer)。 + 可以设置此字段或者设置 `propagationPolicy` 字段,但不能同时设置以上两个字段。 + + + +- **preconditions** (Preconditions) + + 先决条件必须在执行删除之前完成。如果无法满足这些条件,将返回 409(冲突)状态。 + + + *执行操作(更新、删除等)之前必须满足先决条件。* + + - **preconditions.resourceVersion** (string) + + 指定目标资源版本(resourceVersion)。 + + - **preconditions.uid** (string) + + 指定目标 UID. + + + +- **propagationPolicy** (string) + + 表示是否以及如何执行垃圾收集。可以设置此字段或 `orphanDependents` 字段,但不能同时设置二者。 + 默认策略由 `metadata.finalizers` 中现有终结器(Finalizer)集合和特定资源的默认策略决定。 + 可接受的值为: `Orphan` - 令依赖对象成为孤儿对象;`Background` - 允许垃圾收集器在后台删除依赖项;`Foreground` - 一个级联策略,前台删除所有依赖项。 + + + + + From 3147d5cea646773a0ffe11950244452709ef1c68 Mon Sep 17 00:00:00 2001 From: twilight0620 <69295632+twilight0620@users.noreply.github.com> Date: Wed, 23 Mar 2022 11:05:59 +0800 Subject: [PATCH 063/138] [zh] Translate 'label-selector' into Chinese (#32202) * Translate label-selector into Chinese * [zh] Translate label-selector into Chinese --- .../common-definitions/_index.md | 5 + .../common-definitions/label-selector.md | 103 ++++++++++++++++++ 2 files changed, 108 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/_index.md create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/_index.md b/content/zh/docs/reference/kubernetes-api/common-definitions/_index.md new file mode 100644 index 0000000000000..372df75c5863f --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/_index.md @@ -0,0 +1,5 @@ +--- +title: "公共定义" +weight: 9 +--- + diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md b/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md new file mode 100644 index 0000000000000..2930691758fc6 --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md @@ -0,0 +1,103 @@ + + +--- +api_metadata: + apiVersion: "" + import: "k8s.io/apimachinery/pkg/apis/meta/v1" + kind: "LabelSelector" +content_type: "api_reference" +description: "标签选择器是对一组资源的标签查询。" +title: "标签选择器" +weight: 2 +auto_generated: true +--- + +`import "k8s.io/apimachinery/pkg/apis/meta/v1"` + + + +标签选择器是对一组资源的标签查询。 + +`matchLabels` 和 `matchExpressions` 的结果按逻辑与的关系组合。一个 `empty` 标签选择器匹配所有对象。一个 `null` 标签选择器不匹配任何对象。 + +
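+
+As a quick sketch (the label keys and values here are made up for illustration), a
+selector combining both forms matches only objects that satisfy every clause:
+
+```yaml
+selector:
+  matchLabels:
+    app: web
+  matchExpressions:
+    - {key: tier, operator: In, values: [frontend]}
+    - {key: environment, operator: NotIn, values: [dev]}
+```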
+ + + +- **matchExpressions** ([]LabelSelectorRequirement) + + `matchExpressions` 是 `LabelSelectorRequirement` 的列表,这些需求结果按逻辑与的关系来计算。 + + + *标签选择器要求是包含值、键和关联键和值的运算符的选择器。* + + + + - **matchExpressions.key** (string), 必填 + + *补丁策略: 按照键 `key` 合并* + + `key` 是选择器应用的标签键. + + + + - **matchExpressions.operator** (string),必填 + + operator 表示键与一组值的关系。有效的运算符包括 `In`、`NotIn`、`Exists` 和 `DoesNotExist`。 + + + + - **matchExpressions.values** ([]string) + + `values` 是一个字符串值数组。如果运算符为 `In` 或 `NotIn`,则 `values` 数组必须为非空。 + + 如果运算符是 `Exists` 或 `DoesNotExist`,则 `values` 数组必须为空。 + + 该数组在战略性补丁(Strategic Merge Patch)期间被替换。 + + + + - **matchLabels** (map[string]string) + + `matchLabels` 是 {`key`,`value`} 键值对的映射。 + + `matchLabels` 映射中的单个 {`key`,`value`} 键值对相当于 `matchExpressions` 的一个元素,其键字段为 `key`,运算符为 `In`,`values` 数组仅包含 `value`。 + + 所表达的需求最终要按逻辑与的关系组合。 + + + + + + From dcd6f9972490129efcbd1cdd83f7a72e3b4e6358 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Thu, 10 Mar 2022 12:54:17 +0800 Subject: [PATCH 064/138] [zh] Rework securing cluster page This page was translated poorly. Many texts were not properly translated. --- .../administer-cluster/securing-a-cluster.md | 178 +++++++++--------- 1 file changed, 91 insertions(+), 87 deletions(-) diff --git a/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md b/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md index 5731d530b338d..151c9f212a2e1 100644 --- a/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md @@ -34,7 +34,8 @@ they are allowed to perform is the first line of defense. --> ## 控制对 Kubernetes API 的访问 -因为 Kubernetes 是完全通过 API 驱动的,所以,控制和限制谁可以通过 API 访问集群,以及允许这些访问者执行什么样的 API 动作,就成为了安全控制的第一道防线。 +因为 Kubernetes 是完全通过 API 驱动的,所以,控制和限制谁可以通过 API 访问集群, +以及允许这些访问者执行什么样的 API 动作,就成为了安全控制的第一道防线。 ### 为所有 API 交互使用传输层安全 (TLS) -Kubernetes 期望集群中所有的 API 通信在默认情况下都使用 TLS 加密,大多数安装方法也允许创建所需的证书并且分发到集群组件中。请注意,某些组件和安装方法可能使用 HTTP 来访问本地端口, 管理员应该熟悉每个组件的设置,以识别潜在的不安全的流量。 +Kubernetes 期望集群中所有的 API 通信在默认情况下都使用 TLS 加密, +大多数安装方法也允许创建所需的证书并且分发到集群组件中。 +请注意,某些组件和安装方法可能使用 HTTP 来访问本地端口, +管理员应该熟悉每个组件的设置,以识别可能不安全的流量。 与身份验证一样,简单而广泛的角色可能适合于较小的集群,但是随着更多的用户与集群交互, -可能需要将团队划分成有更多角色限制的单独的命名空间。 +可能需要将团队划分到有更多角色限制的、单独的名字空间中去。 -就鉴权而言,理解怎么样更新一个对象可能导致在其它地方的发生什么样的行为是非常重要的。 -例如,用户可能不能直接创建 Pod,但允许他们通过创建一个 Deployment 来创建这些 Pod, +就鉴权而言,很重要的一点是理解对象上的更新操作如何导致在其它地方发生对应行为。 +例如,用户可能不能直接创建 Pod,但允许他们通过创建 Deployment 来创建这些 Pod, 这将让他们间接创建这些 Pod。 同样地,从 API 删除一个节点将导致调度到这些节点上的 Pod 被中止,并在其他节点上重新创建。 -原生的角色设计代表了灵活性和常见用例之间的平衡,但有限制的角色应该仔细审查, -以防止意外升级。如果外包角色不满足你的需求,则可以为用例指定特定的角色。 +原生的角色设计代表了灵活性和常见用例之间的平衡,但须限制的角色应该被仔细审查, +以防止意外的权限升级。如果内置的角色无法满足你的需求,则可以根据使用场景需要创建特定的角色。 如果你希望获取更多信息,请参阅[鉴权参考](/zh/docs/reference/access-authn-authz/authorization/)。 @@ -136,12 +140,12 @@ Consult the [Kubelet authentication/authorization reference](/docs/admin/kubelet --> ## 控制对 Kubelet 的访问 -Kubelet 公开 HTTPS 端点,这些端点授予节点和容器强大的控制权。 +Kubelet 公开 HTTPS 端点,这些端点提供了对节点和容器的强大的控制能力。 默认情况下,Kubelet 允许对此 API 进行未经身份验证的访问。 -生产级别的集群应启用 Kubelet 身份验证和授权。 +生产级别的集群应启用 Kubelet 身份认证和授权。 -如果你希望获取更多信息,请参考 +进一步的信息,请参考 [Kubelet 身份验证/授权参考](/zh/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)。 ## 控制运行时负载或用户的能力 -Kubernetes 中的授权故意设置为了高层级,它侧重于对资源的粗粒度行为。 -更强大的控制是以通过用例限制这些对象如何作用于集群、自身和其他资源上的**策略**存在的。 +Kubernetes 中的授权故意设计成较高抽象级别,侧重于对资源的粗粒度行为。 +更强大的控制是 **策略** 的形式呈现的,根据使用场景限制这些对象如何作用于集群、自身和其他资源。 ### 限制集群上的资源使用 -[资源配额](/zh/docs/concepts/policy/resource-quotas/) -限制了授予命名空间的资源的数量或容量。 -这通常用于限制命名空间可以分配的 CPU、内存或持久磁盘的数量,但也可以控制 -每个命名空间中有多少个 Pod、服务或卷的存在。 +[资源配额(Resource 
Quota)](/zh/docs/concepts/policy/resource-quotas/)限制了赋予命名空间的资源的数量或容量。 +资源配额通常用于限制名字空间可以分配的 CPU、内存或持久磁盘的数量, +但也可以控制每个名字空间中存在多少个 Pod、Service 或 Volume。 -[限制范围](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)限制 -上述某些资源的最大值或者最小值,以防止用户使用类似内存这样的通用保留资源时请求 -不合理的过高或过低的值,或者在没有指定的情况下提供默认限制。 +[限制范围(Limit Range)](/zh/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) +限制上述某些资源的最大值或者最小值,以防止用户使用类似内存这样的通用保留资源时请求不合理的过高或过低的值, +或者在没有指定的情况下提供默认限制。 -一般来说,大多数应用程序需要限制对主机资源的访问, -他们可以在不能访问主机信息的情况下成功以根进程(UID 0)运行。 +一般来说,大多数应用程序需要对主机资源的有限制的访问, +这样它们可以在不访问主机信息的情况下,成功地以 root 账号(UID 0)运行。 但是,考虑到与 root 用户相关的特权,在编写应用程序容器时,你应该使用非 root 用户运行。 -类似地,希望阻止客户端应用程序逃避其容器的管理员,应该使用限制性的 pod 安全策略。 +类似地,希望阻止客户端应用程序从其容器中逃逸的管理员,应该使用限制性较强的 Pod 安全策略。 ### 限制网络访问 -基于命名空间的[网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) -允许应用程序作者限制其它命名空间中的哪些 Pod 可以访问它们命名空间内的 Pod 和端口。 +基于名字空间的[网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) +允许应用程序作者限制其它名字空间中的哪些 Pod 可以访问自身名字空间内的 Pod 和端口。 现在已经有许多支持网络策略的 -[Kubernetes 网络供应商](/zh/docs/concepts/cluster-administration/networking/)。 +[Kubernetes 网络驱动](/zh/docs/concepts/cluster-administration/networking/)。 -对于可以控制用户的应用程序是否在集群之外可见的许多集群,配额和限制范围也可用于 -控制用户是否可以请求节点端口或负载均衡服务。 +配额(Quota)和限制范围(Limit Range)也可用于控制用户是否可以请求节点端口或负载均衡服务。 +在很多集群上,节点端口和负载均衡服务也可控制用户的应用程序是否在集群之外可见。 -在插件或者环境基础上控制网络规则可以增加额外的保护措施,比如节点防火墙、物理分离 -群集节点以防止串扰、或者高级的网络策略。 +此外也可能存在一些基于插件或基于环境的网络规则,能够提供额外的保护能力。 +例如各节点上的防火墙、物理隔离群集节点以防止串扰或者高级的网络策略等。 -### 限制云 metadata API 访问 +### 限制云元数据 API 访问 -云平台(AWS, Azure, GCE 等)经常讲 metadate 本地服务暴露给实例。 -默认情况下,这些 API 可由运行在实例上的 Pod 访问,并且可以包含 -该云节点的凭据或配置数据(如 kubelet 凭据)。 -这些凭据可以用于在集群内升级或在同一账户下升级到其他云服务。 +云平台(AWS、Azure、GCE 等)经常将元数据服务暴露给本地实例。 +默认情况下,这些 API 可由运行在实例上的 Pod 访问,且其中可能包含该云节点的凭据或配置数据 +(如 kubelet 凭据)。 +这些凭据可以用于在集群内提升权限或获得权限访问同一账户的其他云服务。 -在云平台上运行 Kubernetes 时,限制对实例凭据的权限,使用 +在云平台上运行 Kubernetes 时,需要限制对实例凭据的权限,使用 [网络策略](/zh/docs/tasks/administer-cluster/declare-network-policy/) -限制对 metadata API 的 pod 访问,并避免使用配置数据来传递机密。 +限制 Pod 对元数据 API 的访问,并避免使用配置数据来传递机密信息。 -### 控制 Pod 可以访问哪些节点 +### 控制 Pod 可以访问的节点 -默认情况下,对哪些节点可以运行 pod 没有任何限制。 +默认情况下,对 Pod 可以运行在哪些节点上是没有任何限制的。 Kubernetes 给最终用户提供了 -[一组丰富的策略用于控制 pod 放在节点上的位置](/zh/docs/concepts/scheduling-eviction/assign-pod-node/), +一组丰富的策略用于[控制 Pod 所放置的节点位置](/zh/docs/concepts/scheduling-eviction/assign-pod-node/), 以及[基于污点的 Pod 放置和驱逐](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 -对于许多集群,可以约定由作者采用或者强制通过工具使用这些策略来分离工作负载。 +对于许多集群,使用这些策略来分离工作负载可以作为一种约定,要求作者遵守或者通过工具强制。 -对于管理员,Beta 阶段的准入插件 `PodNodeSelector` 可用于强制命名空间中的 Pod -使用默认或需要使用特定的节点选择器。 -如果最终用户无法改变命名空间,这可以强烈地限制所有的 pod 在特定工作负载的位置。 +对于管理员,Beta 阶段的准入插件 `PodNodeSelector` 可用于强制某名字空间中的 Pod +使用默认的或特定的节点选择算符。 +如果最终用户无法改变名字空间,这一机制可以有效地限制特定工作负载中所有 Pod 的放置位置。 ## 保护集群组件免受破坏 -本节描述保护集群免受破坏的一些常见模式。 +本节描述保护集群免受破坏的一些常用模式。 ### 限制访问 etcd -对于 API 来说,拥有 etcd 后端的写访问权限,相当于获得了整个集群的 root 权限, -并且可以使用写访问权限来相当快速地升级。 -从 API 服务器访问它们的 etcd 服务器,管理员应该使用广受信任的凭证, -如通过 TLS 客户端证书的相互认证。 -通常,我们建议将 etcd 服务器隔离到只有API服务器可以访问的防火墙后面。 +拥有对 API 的 etcd 后端的写访问权限相当于获得了整个集群的 root 权限, +读访问权限也可能被利用,实现相当快速的权限提升。 +对于从 API 服务器访问其 etcd 服务器,管理员应该总是使用比较强的凭证,如通过 TLS +客户端证书来实现双向认证。 +通常,我们建议将 etcd 服务器隔离到只有 API 服务器可以访问的防火墙后面。 {{< caution >}} -允许集群中其它组件拥有读或写全空间的权限去访问 etcd 实例,相当于授予群集管理员访问的权限。 -对于非主控组件,强烈推荐使用单独的 etcd 实例,或者使用 etcd 的访问控制列表 -去限制只能读或者写空间的一个子集。 +允许集群中其它组件对整个主键空间(keyspace)拥有读或写权限去访问 etcd 实例, +相当于授予这些组件群集管理员的访问权限。 +对于非主控组件,强烈推荐使用不同的 etcd 实例,或者使用 etcd 的访问控制列表 +来限制这些组件只能读或写主键空间的一个子集。 {{< /caution >}} -### 开启审计日志 +### 启用审计日志 [审计日志](/zh/docs/tasks/debug-application-cluster/audit/)是 Beta 特性, 负责记录 API 
操作以便在发生破坏时进行事后分析。 @@ -351,7 +355,7 @@ do not use. ### 限制使用 alpha 和 beta 特性 Kubernetes 的 alpha 和 beta 特性还在努力开发中,可能存在导致安全漏洞的缺陷或错误。 -要始终评估 alpha 和 beta 特性可能为你的安全态势带来的风险。 +要始终评估 alpha 和 beta 特性可能给你的安全态势带来的风险。 当你怀疑存在风险时,可以禁用那些不需要使用的特性。 -### 频繁回收基础设施证书 +### 经常轮换基础设施证书 -一个 Secret 或凭据的寿命越短,攻击者就越难使用该凭据。 -在证书上设置短生命周期并实现自动回收,是控制安全的一个好方法。 -因此,使用身份验证提供程序时,应该要求可以控制发布令牌的可用时间,并尽可能使用短寿命。 -如果在外部集成中使用服务帐户令牌,则应该频繁地回收这些令牌。 -例如,一旦引导阶段完成,就应该撤销用于设置节点的引导令牌,或者取消它的授权。 +一项机密信息或凭据的生命期越短,攻击者就越难使用该凭据。 +在证书上设置较短的生命期并实现自动轮换是控制安全的一个好方法。 +使用身份验证提供程序时,应该使用那些可以控制所发布令牌的合法时长的提供程序, +并尽可能设置较短的生命期。 +如果在外部集成场景中使用服务帐户令牌,则应该经常性地轮换这些令牌。 +例如,一旦引导阶段完成,就应该撤销用于配置节点的引导令牌,或者取消它的授权。 ### 在启用第三方集成之前,请先审查它们 -许多集成到 Kubernetes 的第三方都可以改变你集群的安全配置。 +许多集成到 Kubernetes 的第三方软件或服务都可能改变你的集群的安全配置。 启用集成时,在授予访问权限之前,你应该始终检查扩展所请求的权限。 -例如,许多安全集成可以请求访问来查看集群上的所有 Secret, -从而有效地使该组件成为集群管理。 -当有疑问时,如果可能的话,将集成限制在单个命名空间中运行。 +例如,许多安全性集成中可能要求查看集群上的所有 Secret 的访问权限, +本质上该组件便成为了集群的管理员。 +当有疑问时,如果可能的话,将要集成的组件限制在某指定名字空间中运行。 -如果组件创建的 Pod 能够在命名空间中做一些类似 `kube-system` 命名空间中的事情, -那么它也可能是出乎意料的强大。 -因为这些 Pod 可以访问服务账户的 Secret,或者,如果这些服务帐户被授予访问许可的 -[Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/)的权限,它们能以高权限运行。 +如果执行 Pod 创建操作的组件能够在 `kube-system` 这类名字空间中创建 Pod, +则这类组件也可能获得意外的权限,因为这些 Pod 可以访问服务账户的 Secret, +或者,如果对应服务帐户被授权访问宽松的 [Pod 安全策略](/zh/docs/concepts/policy/pod-security-policy/), +它们就能以较高的权限运行。 ### 接收安全更新和报告漏洞的警报 -加入 [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) -组,能够获取有关安全公告的邮件。有关如何报告漏洞的更多信息,请参见 -[安全报告](/zh/docs/reference/issues-security/security/)页面。 - - - +请加入 [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce) +组,这样你就能够收到有关安全公告的邮件。有关如何报告漏洞的更多信息, +请参见[安全报告](/zh/docs/reference/issues-security/security/)页面。 From 72b2603b93ac7548698039404df1261ba0b2b6a6 Mon Sep 17 00:00:00 2001 From: Jihoon Seo Date: Wed, 23 Mar 2022 16:50:57 +0900 Subject: [PATCH 065/138] [zh] Fix rendering of 'label-selector' page --- .../common-definitions/label-selector.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md b/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md index 2930691758fc6..ffa372d2df5eb 100644 --- a/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/label-selector.md @@ -1,25 +1,25 @@ - - +auto_generated: true --- + + `import "k8s.io/apimachinery/pkg/apis/meta/v1"` From 3acb359995c9bd04a17760cda76ff8f517db1a94 Mon Sep 17 00:00:00 2001 From: Hyunsuk Shin Date: Wed, 23 Mar 2022 16:53:06 +0900 Subject: [PATCH 066/138] =?UTF-8?q?=EC=98=A4=ED=83=80=20=EC=88=98=EC=A0=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../ko/docs/tasks/run-application/horizontal-pod-autoscale.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md index 3e0856e511fc6..ef322e4b6d757 100644 --- a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -51,7 +51,7 @@ Horizontal Pod Autoscaling을 활용하는 쿠버네티스는 Horizontal Pod Autoscaling을 간헐적으로(intermittently) 실행되는 -컨트롤 루프 형태로 구현했다(지숙적인 프로세스가 아니다). +컨트롤 루프 형태로 구현했다(지속적인 프로세스가 아니다). 
실행 주기는 [`kube-controller-manager`](/docs/reference/command-line-tools-reference/kube-controller-manager/)의 `--horizontal-pod-autoscaler-sync-period` 파라미터에 의해 설정된다(기본 주기는 15초이다). From f4ddeb7247cef8de723c458f9c1075010781eb9d Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Wed, 23 Mar 2022 16:22:46 +0800 Subject: [PATCH 067/138] [zh] Update namespace.md Signed-off-by: xin.li --- content/zh/docs/reference/glossary/namespace.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh/docs/reference/glossary/namespace.md b/content/zh/docs/reference/glossary/namespace.md index 6ef350412c910..5934d748fe79b 100644 --- a/content/zh/docs/reference/glossary/namespace.md +++ b/content/zh/docs/reference/glossary/namespace.md @@ -4,7 +4,7 @@ id: namespace date: 2018-04-12 full_link: /zh/docs/concepts/overview/working-with-objects/namespaces/ short_description: > - 名字空间是 Kubernetes 为了在同一物理集群上支持多个虚拟集群而使用的一种抽象。 + 名字空间是 Kubernetes 用来支持隔离单个集群中的资源组的一种抽象。 aka: tags: @@ -18,7 +18,7 @@ id: namespace date: 2018-04-12 full_link: /docs/concepts/overview/working-with-objects/namespaces/ short_description: > - An abstraction used by Kubernetes to support multiple virtual clusters on the same physical cluster. + An abstraction used by Kubernetes to support isolation of groups of resources within a single cluster. aka: tags: From f5ec6b03db0925dc2b9cb9bbdc8135dcbeacbe0e Mon Sep 17 00:00:00 2001 From: "paul.zhang" Date: Wed, 23 Mar 2022 17:47:59 +0800 Subject: [PATCH 068/138] [zh] Translate docs/reference/labels-annotations-taints to Chinese (#32255) * Translate docs/reference/labels-annotations-taints to zh language #32244 * Translate docs/reference/labels-annotations-taints to zh language #32244 * Translate docs/reference/labels-annotations-taints to zh language #32244 * Translate docs/reference/labels-annotations-taints to zh language #32244 * Translate docs/reference/labels-annotations-taints to zh language #32244 --- .../labels-annotations-taints/_index.md | 562 +++++++++--------- .../audit-annotations.md | 54 +- 2 files changed, 293 insertions(+), 323 deletions(-) diff --git a/content/zh/docs/reference/labels-annotations-taints/_index.md b/content/zh/docs/reference/labels-annotations-taints/_index.md index b3585743e8084..1fd14e8b6dbaa 100644 --- a/content/zh/docs/reference/labels-annotations-taints/_index.md +++ b/content/zh/docs/reference/labels-annotations-taints/_index.md @@ -1,31 +1,28 @@ --- -title: 常见的标签、注解和污点 +title: 众所周知的标签、注解和污点 content_type: concept weight: 20 no_list: true --- - - - -Kubernetes 保留命名空间 kubernetes.io 下的所有的标签和注解。 +Kubernetes 将所有标签和注解保留在 kubernetes.io Namespace中。 -本文档有两个作用,一是作为可用值的参考,二是作为赋值的协调点。 +本文档既可作为值的参考,也可作为分配值的协调点。 - -## 用于 API 对象的标签、注解和污点 +## API 对象上使用的标签、注解和污点 -### kubernetes.io/arch +### kubernetes.io/arch {#kubernetes-io-arch} -示例:`kubernetes.io/arch=amd64` +例子:`kubernetes.io/arch=amd64` 用于:Node -Kubelet 用 Go 定义的 `runtime.GOARCH` 生成该标签的键值。 -在混合使用 ARM 和 x86 节点的场景中,此键值可以带来极大便利。 - +Kubelet 使用 Go 定义的 `runtime.GOARCH` 填充它。 如果你混合使用 ARM 和 X86 节点,这会很方便。 -### kubernetes.io/os +### kubernetes.io/os {#kubernetes-io-os} -示例:`kubernetes.io/os=linux` +例子:`kubernetes.io/os=linux` 用于:Node -Kubelet 用 Go 定义的 `runtime.GOOS` 生成该标签的键值。 -在混合使用异构操作系统场景下(例如:混合使用 Linux 和 Windows 节点),此键值可以带来极大便利。 - +Kubelet 使用 Go 定义的 `runtime.GOOS` 填充它。如果你在集群中混合使用操作系统(例如:混合 Linux 和 Windows 节点),这会很方便。 -### kubernetes.io/metadata.name +### kubernetes.io/metadata.name {#kubernetes-io-metadata-name} -示例:`kubernetes.io/metadata.name=mynamespace` +例子:`kubernetes.io/metadata.name=mynamespace` 用于:Namespace 
-Kubernetes API 服务器({{< glossary_tooltip text="控制平面" term_id="control-plane" >}}的一部分) -会在所有命名空间上设置此标签。标签值被设置为命名空间的名称。你无法更改此标签值。 - -如果你想使用标签{{< glossary_tooltip text="选择器" term_id="selector" >}}来指向特定的命名空间, -此标签很有用。 +Kubernetes API 服务器({{}} 的一部分)在所有 Namespace 上设置此标签。 +标签值被设置 Namespace 的名称。你无法更改此标签的值。 +如果你想使用标签{{}}定位特定 Namespace,这很有用。 -### beta.kubernetes.io/arch (已弃用) +### beta.kubernetes.io/arch (已弃用) {#beta-kubernetes-io-arch} -此标签已被弃用,取而代之的是 `kubernetes.io/arch`. +此标签已被弃用。请改用`kubernetes.io/arch`。 -### beta.kubernetes.io/os (已弃用) +### beta.kubernetes.io/os (已弃用) {#beta-kubernetes-io-os} -此标签已被弃用,取而代之的是 `kubernetes.io/os`. +此标签已被弃用。请改用`kubernetes.io/os`。 ### kubernetes.io/hostname {#kubernetesiohostname} -示例:`kubernetes.io/hostname=ip-172-20-114-199.ec2.internal` +例子:`kubernetes.io/hostname=ip-172-20-114-199.ec2.internal` 用于:Node -Kubelet 用主机名生成此标签的取值。 -注意可以通过传入参数 `--hostname-override` 给 `kubelet` 来修改此“实际”主机名。 +Kubelet 使用主机名填充此标签。请注意,可以通过将 `--hostname-override` 标志传递给 `kubelet` 来替代“实际”主机名。 -此标签也可用做拓扑层次的一个部分。 -更多信息参见 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 +此标签也用作拓扑层次结构的一部分。 有关详细信息,请参阅 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 ### kubernetes.io/change-cause {#change-cause} -示例:`kubernetes.io/change-cause=kubectl edit --record deployment foo` +例子:`kubernetes.io/change-cause=kubectl edit --record deployment foo` 用于:所有对象 -此注解是对改动原因的最好的推测。 +此注解是对某些事物发生变更的原因的最佳猜测。 -当在可能修改一个对象的 `kubectl` 命令中加入 `--record` 时,会生成此注解。 +将 `--record` 添加到可能会更改对象的 `kubectl` 命令时会填充它。 ### kubernetes.io/description {#description} -示例:`kubernetes.io/description: "Description of K8s object."` +例子:`kubernetes.io/description: "Description of K8s object."` 用于:所有对象 -此注解用于描述给定对象的具体行为 +此注解用于描述给定对象的特定行为。 ### kubernetes.io/enforce-mountable-secrets {#enforce-mountable-secrets} -示例:`kubernetes.io/enforce-mountable-secrets: "true"` +例子:`kubernetes.io/enforce-mountable-secrets: "true"` 用于:ServiceAccount -此注解只在值为 **true** 时生效。 -此注解表示以此服务账号运行的 Pod 只能引用此服务账号的 `secrets` 字段中所写的 Secret API 对象。 +此注解的值必须为 **true** 才能生效。此注解表示作为此服务帐户运行的 Pod 只能引用在服务帐户的 `secrets` 字段中指定的 Secret API 对象。 ### controller.kubernetes.io/pod-deletion-cost {#pod-deletion-cost} -示例:`controller.kubernetes.io/pod-deletion-cost=10` +例子:`controller.kubernetes.io/pod-deletion-cost=10` 用于:Pod -该注解用于设置 [Pod 删除开销](/zh/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost), -允许用户影响 ReplicaSet 的缩减顺序。该注解解析为 `int32` 类型。 +该注解用于设置 [Pod 删除成本](/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost) 允许用户影响 ReplicaSet 缩减顺序。注解解析为 `int32` 类型。 - -### beta.kubernetes.io/instance-type (已弃用) - -{{< note >}} -从 v1.17 起,此标签被弃用,取而代之的是 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type)。 -{{< /note >}} +{{< note >}} 从 v1.17 开始,此标签已弃用,取而代之的是 [node.kubernetes.io/instance-type](#nodekubernetesioinstance-type)。 {{< /note >}} ### node.kubernetes.io/instance-type {#nodekubernetesioinstance-type} -示例:`node.kubernetes.io/instance-type=m3.medium` +例子:`node.kubernetes.io/instance-type=m3.medium` 用于:Node -Kubelet 用 `cloudprovider` 定义的实例类型生成此标签的取值。 -所以只有用到 `cloudprovider` 的场合,才会设置此标签。 -在你希望把特定工作负载调度到特定实例类型的时候此标签很有用,但更常见的调度方法是基于 -Kubernetes 调度器来执行基于资源的调度。 -你应该聚焦于使用基于属性的调度方式,而不是基于实例类型(例如:应该申请一个 GPU,而不是 `g2.2xlarge`)。 +Kubelet 使用 `cloudprovider` 定义的实例类型填充它。 +仅当你使用 `cloudprovider` 时才会设置此项。如果你希望将某些工作负载定位到某些实例类型,则此设置非常方便,但通常你希望依靠 Kubernetes 调度程序来执行基于资源的调度。 +你应该基于属性而不是实例类型来调度(例如:需要 GPU,而不是需要 `g2.2xlarge`)。 ### failure-domain.beta.kubernetes.io/region (已弃用) {#failure-domainbetakubernetesioregion} -参见 
[topology.kubernetes.io/region](#topologykubernetesioregion). +请参阅 [topology.kubernetes.io/region](#topologykubernetesioregion)。 -{{< note >}} -从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/region](#topologykubernetesioregion)。 -{{< /note >}} + +{{< note >}} 从 v1.17 开始,此标签已弃用,取而代之的是 [topology.kubernetes.io/region](#topologykubernetesioregion)。 {{}} ### failure-domain.beta.kubernetes.io/zone (已弃用) {#failure-domainbetakubernetesiozone} -参见 [topology.kubernetes.io/zone](#topologykubernetesiozone). +请参阅 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 -{{< note >}} -从 v1.17 开始,此标签被弃用,取而代之的是 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 -{{< /note >}} + +{{< note >}} 从 v1.17 开始,此标签已弃用,取而代之的是 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 {{}} ### statefulset.kubernetes.io/pod-name {#statefulsetkubernetesiopod-name} -示例:`statefulset.kubernetes.io/pod-name=mystatefulset-7` +例子:`statefulset.kubernetes.io/pod-name=mystatefulset-7` -当 StatefulSet 控制器为 StatefulSet 创建 Pod 时,控制平面会在该 Pod 上设置此标签。 -标签的值是正在创建的 Pod 的名称。 +当 StatefulSet 控制器为 StatefulSet 创建 Pod 时,控制平面会在该 Pod 上设置此标签。标签的值是正在创建的 Pod 的名称。 -更多细节请参见 StatefulSet 文章中的 [Pod 名称标签](/zh/docs/concepts/workloads/controllers/statefulset/#pod-name-label)。 +有关详细信息,请参阅 StatefulSet 主题中的 [Pod 名称标签](/docs/concepts/workloads/controllers/statefulset/#pod-name-label)。 ### topology.kubernetes.io/region {#topologykubernetesioregion} -示例:`topology.kubernetes.io/region=us-east-1` +例子:`topology.kubernetes.io/region=us-east-1` -参见 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 +请参阅 [topology.kubernetes.io/zone](#topologykubernetesiozone)。 -### topology.kubernetes.io/zone {#topologykubernetesiozone} - -示例:`topology.kubernetes.io/zone=us-east-1c` - -用于:Node、PersistentVolume +Used on: Node、PersistentVolume - -Node 场景:`kubelet` 或外部的 `cloud-controller-manager` 用 `cloudprovider` 提供的信息生成此标签。 -所以只有在用到 `cloudprovider` 的场景下,此标签才会被设置。 -但如果此标签在你的拓扑中有意义,你也可以考虑在 Node 上设置它。 - -PersistentVolume 场景:拓扑自感知的卷制备程序将在 `PersistentVolumes` 上自动设置节点亲和性限制。 - -一个可用区(Zone)表示一个逻辑故障域。Kubernetes 集群通常会跨越多个可用区以提高可用性。 -虽然可用区的确切定义留给基础设施来决定,但可用区常见的属性包括: -可用区内的网络延迟非常低,可用区内的网络通讯无成本,以及故障独立性。 -例如,一个可用区中的节点可以共享交换机,但不同可用区则不应该。 - -一个地区(Region)表示一个更大的域,由一个或多个可用区组成。对于 Kubernetes 来说,跨越多个地区的集群很罕见。 -虽然可用区和地区的确切定义留给基础设施来决定,但地区的常见属性包括: -相比于地区内通信地区间的网络延迟更高,地区间网络流量成本更高,以及故障独立性。 -例如,一个地区内的节点也许会共享电力基础设施(例如 UPS 或发电机),但不同地区内的节点显然不会。 +### topology.kubernetes.io/zone {#topologykubernetesiozone} + +例子:`topology.kubernetes.io/zone=us-east-1c` + +用于:Node、PersistentVolume + +在 Node 上:`kubelet` 或外部 `cloud-controller-manager` 使用 `cloudprovider` 提供的信息填充它。仅当你使用 `cloudprovider` 时才会设置此项。 +但是,如果它在你的拓扑中有意义,你应该考虑在 Node 上设置它。 - -Kubernetes 对可用区和地区的结构做出一些假设: -1)地区和可用区是层次化的:可用区是地区的严格子集,任何可用区都不能在 2 个地区中出现。 -2)可用区名字在地区中独一无二:例如地区 "africa-east-1" 可由可用区 "africa-east-1a" 和 "africa-east-1b" 构成。 +Kubernetes 对 Zone 和 Region 的结构做了一些假设: + +1. 
Zone 和 Region 是分层的: Zone 是 Region 的严格子集,没有 Zone 可以在两个 Region 中; - -你可以安全地假定拓扑类的标签是固定不变的。 -即使标签严格来说是可变的,使用者依然可以假定一个节点只有通过销毁、重建的方式,才能在可用区间移动。 +你可以大胆假设拓扑标签不会改变。尽管严格地讲标签是可变的,但节点的用户可以假设给定 +节点只能通过销毁和重新创建才能完成 Zone 间移动。 - -Kubernetes 能以多种方式使用这些信息。 -例如,调度器自动地尝试将 ReplicaSet 中的 Pod -打散在单可用区集群的不同节点上(以减少节点故障的影响,参见[kubernetes.io/hostname](#kubernetesiohostname))。 -在多可用区的集群中,这类打散分布的行为也会应用到可用区(以减少可用区故障的影响)。 -做到这一点靠的是 _SelectorSpreadPriority_。 +Kubernetes 可以通过多种方式使用这些信息。例如,调度程序会自动尝试将 ReplicaSet 中的 Pod +分布在单 Zone 集群中的多个节点上(以便减少节点故障的影响,请参阅 [kubernetes.io/hostname](#kubernetesiohostname))。 +对于多 Zone 集群,这种分布行为也适用于 Zone(以减少 Zone 故障的影响)。 +Zone 级别的 Pod 分布是通过 _SelectorSpreadPriority_ 实现的。 - -_SelectorSpreadPriority_ 是一种尽力而为(best effort)的分配方法。 -如果集群中的可用区是异构的(例如:节点数量不同、节点类型不同或者 Pod -的资源需求不同),这种分配方法可以防止平均分配 Pod 到可用区。 -如果需要,你可以用同构的可用区(相同数量和类型的节点)来减少潜在的不平衡分布。 +_SelectorSpreadPriority_ 是一个尽力而为的放置机制。如果集群中的 Zone 是异构的 +(例如:节点数量不同、节点类型不同或 Pod 资源需求有别等),这种放置机制可能会让你的 +Pod 无法实现跨 Zone 均匀分布。 +如果需要,你可以使用同质 Zone(节点数量和类型均相同)来减少不均匀分布的可能性。 - -调度器会(通过 _VolumeZonePredicate_ 断言)保障申领了某卷的 Pod 只能分配到该卷相同的可用区。 -卷不支持跨可用区挂载。 +调度程序还将(通过 _VolumeZonePredicate_ 条件)确保申领给定卷的 Pod 仅被放置在与该卷相同的 Zone 中。 +卷不能跨 Zone 挂接。 - -如果 `PersistentVolumeLabel` 不支持给你的 PersistentVolume 自动打标签,你可以考虑手动加标签(或增加 -`PersistentVolumeLabel` 支持)。 -有了 `PersistentVolumeLabel`,调度器可以防止 Pod 挂载不同可用区中的卷。 -如果你的基础架构没有此限制,那你根本就没有必要给卷增加 zone 标签。 +你应该考虑手动添加标签(或添加对 `PersistentVolumeLabel` 的支持)。 +基于 `PersistentVolumeLabel` ,调度程序可以防止 Pod 挂载来自其他 Zone 的卷。如果你的基础架构没有此限制,则不需要将 Zone 标签添加到卷上。 -### volume.beta.kubernetes.io/storage-provisioner (已弃用) +### volume.beta.kubernetes.io/storage-provisioner (已弃用) {#volume-beta-kubernetes-io-storage-provisioner} -示例:`volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath` +例子:`volume.beta.kubernetes.io/storage-provisioner: k8s.io/minikube-hostpath` 用于:PersistentVolumeClaim -该注解已被弃用。 +此注解已被弃用。 -### volume.kubernetes.io/storage-provisioner +### volume.kubernetes.io/storage-provisioner {#volume-kubernetes-io-storage-provisioner} 用于:PersistentVolumeClaim -该注解会被加到动态制备的 PVC 上。 +此注解将被添加到根据需要动态制备的 PVC 上。 ### node.kubernetes.io/windows-build {#nodekubernetesiowindows-build} -示例: `node.kubernetes.io/windows-build=10.0.17763` +例子:`node.kubernetes.io/windows-build=10.0.17763` 用于:Node -当 kubelet 运行于 Microsoft Windows 时,它给节点自动打标签,以记录 Windows Server 的版本。 +当 kubelet 在 Microsoft Windows 上运行时,它会自动标记其所在节点以记录所使用的 Windows Server 的版本。 -标签值的格式为 "主版本.次版本.构建号"。 +标签的值采用 “MajorVersion.MinorVersion.BuildNumber” 格式。 ### service.kubernetes.io/headless {#servicekubernetesioheadless} -示例:`service.kubernetes.io/headless=""` +例子:`service.kubernetes.io/headless=""` 用于:Service -在无头(headless)服务的场景下,控制平面为 Endpoints 对象添加此标签。 +当拥有的 Service 是无头类型时,控制平面将此标签添加到 Endpoints 对象。 ### kubernetes.io/service-name {#kubernetesioservice-name} -示例:`kubernetes.io/service-name="nginx"` +例子:`kubernetes.io/service-name="nginx"` 用于:Service -Kubernetes 用此标签区分多个服务。当前仅用于 `ELB`(Elastic Load Balancer)。 +Kubernetes 使用这个标签来区分多个服务。目前仅用于 `ELB` (弹性负载均衡器)。 ### endpointslice.kubernetes.io/managed-by {#endpointslicekubernetesiomanaged-by} -示例:`endpointslice.kubernetes.io/managed-by="controller"` +例子:`endpointslice.kubernetes.io/managed-by="controller"` 用于:EndpointSlice -此标签用来标示管理 EndpointSlice 的控制器或实体。 -此标签的目的是允许集群中使用不同控制器或实体来管理不同的 EndpointSlice。 +用于标示管理 EndpointSlice 的控制器或实体。该标签旨在使不同的 EndpointSlice +对象能够由同一集群内的不同控制器或实体管理。 ### endpointslice.kubernetes.io/skip-mirror {#endpointslicekubernetesioskip-mirror} -示例:`endpointslice.kubernetes.io/skip-mirror="true"` +例子:`endpointslice.kubernetes.io/skip-mirror="true"` 
用于:Endpoints -此标签在 Endpoints 资源上设为 `"true"` 时,指示 EndpointSliceMirroring 控制器不要使用 EndpointSlices 镜像此资源。 +可以在 Endpoints 资源上将此标签设置为 `"true"`,以指示 EndpointSliceMirroring +控制器不应使用 EndpointSlice 镜像此 Endpoints 资源。 ### service.kubernetes.io/service-proxy-name {#servicekubernetesioservice-proxy-name} -示例:`service.kubernetes.io/service-proxy-name="foo-bar"` +例子:`service.kubernetes.io/service-proxy-name="foo-bar"` 用于:Service -此标签被 kube-proxy 用于自定义代理,将服务控制委托给自定义代理。 +kube-proxy 自定义代理会使用这个标签,它将服务控制委托给自定义代理。 ### experimental.windows.kubernetes.io/isolation-type (已弃用) {#experimental-windows-kubernetes-io-isolation-type} -示例:`experimental.windows.kubernetes.io/isolation-type: "hyperv"` +例子:`experimental.windows.kubernetes.io/isolation-type: "hyperv"` 用于:Pod -此注解用于运行 Hyper-V 隔离的 Windows 容器。 -要使用 Hyper-V 隔离特性,并创建 Hyper-V 隔离的容器,kubelet 应启用特性门控 HyperVContainer=true,并且 -Pod 应该包含注解 `experimental.windows.kubernetes.io/isolation-type=hyperv`。 +注解用于运行具有 Hyper-V 隔离的 Windows 容器。要使用 Hyper-V 隔离功能并创建 Hyper-V +隔离容器,kubelet 启动时应该需要设置特性门控 HyperVContainer=true。 {{< note >}} -你只能在单容器 Pod 上设置此注解。 -从 v1.20 开始,此注解被弃用。实验性的 Hyper-V 支持于 1.21 中被移除。 -{{< /note >}} +你只能在具有单个容器的 Pod 上设置此注解。 +从 v1.20 开始,此注解已弃用。1.21 中删除了实验性 Hyper-V 支持。 +{{}} -### ingressclass.kubernetes.io/is-default-class +### ingressclass.kubernetes.io/is-default-class {#ingressclass-kubernetes-io-is-default-class} -示例:`ingressclass.kubernetes.io/is-default-class: "true"` +例子:`ingressclass.kubernetes.io/is-default-class: "true"` 用于:IngressClass -当仅有一个 IngressClass 资源将此注解的值设为 `"true"`,没有指定类的新 Ingress 资源将使用此默认类。 +当单个 IngressClass 资源将此注解设置为 `"true"`时,新的未指定 Ingress 类的 Ingress +资源将被设置为此默认类。 -### kubernetes.io/ingress.class (已弃用) +### kubernetes.io/ingress.class (已弃用) {#kubernetes-io-ingress-class} -{{< note >}} -从 v1.18 开始,此注解被弃用,取而代之的是 `spec.ingressClassName`。 -{{< /note >}} +{{< note >}} +从 v1.18 开始,不推荐使用此注解以鼓励使用 `spec.ingressClassName`。 +{{}} -### storageclass.kubernetes.io/is-default-class +### storageclass.kubernetes.io/is-default-class {#storageclass-kubernetes-io-is-default-class} -示例:`storageclass.kubernetes.io/is-default-class=true` +例子:`storageclass.kubernetes.io/is-default-class=true` 用于:StorageClass -当仅有一个 StorageClass 资源将这个注解设置为 `"true"` 时,没有指定类的新 -PersistentVolumeClaim 资源将被设定为此默认类。 +当单个 StorageClass 资源将此注解设置为 `"true"` 时,新的未指定存储类的 PersistentVolumeClaim +资源将被设置为此默认类。 -### alpha.kubernetes.io/provided-node-ip +### alpha.kubernetes.io/provided-node-ip {#alpha-kubernetes-io-provided-node-ip} -示例:`alpha.kubernetes.io/provided-node-ip: "10.0.0.1"` +例子:`alpha.kubernetes.io/provided-node-ip: "10.0.0.1"` 用于:Node -kubelet 在 Node 上设置此注解,标示它所配置的 IPv4 地址。 +kubelet 可以在 Node 上设置此注解来表示其配置的 IPv4 地址。 -如果 kubelet 启动时配置了“external”云驱动,它会在 Node -上设置此注解以标示通过命令行参数(`--node-ip`)设置的 IP 地址。 -该 IP 地址由 cloud-controller-manager 向云驱动验证有效性。 +当使用“外部”云驱动启动时,kubelet 会在 Node 上设置此注解以表示从命令行标志 ( `--node-ip` ) 设置的 IP 地址。 +云控制器管理器通过云驱动验证此 IP 是否有效。 -### batch.kubernetes.io/job-completion-index +### batch.kubernetes.io/job-completion-index {#batch-kubernetes-io-job-completion-index} -示例:`batch.kubernetes.io/job-completion-index: "3"` +例子:`batch.kubernetes.io/job-completion-index: "3"` 用于:Pod -kube-controller-manager 中的 Job -控制器给使用索引(Indexed)[完成模式](/zh/docs/concepts/workloads/controllers/job/#completion-mode)创建的 -Pod 设置此注解。 +kube-controller-manager 中的 Job 控制器为使用 Indexed +[完成模式](/zh/docs/concepts/workloads/controllers/job/#completion-mode)创建的 Pod +设置此注解。 -### kubectl.kubernetes.io/default-container +### kubectl.kubernetes.io/default-container {#kubectl-kubernetes-io-default-container} 
-示例:`kubectl.kubernetes.io/default-container: "front-end-app"` +例子:`kubectl.kubernetes.io/default-container: "front-end-app"` -此注解的值是 Pod 的默认容器名称。 -例如,`kubectl logs` 或 `kubectl exec` 没有传入 `-c` 或 `--container` 参数时,将使用这个默认的容器。 +此注解的值是此 Pod 的默认容器名称。例如,未指定 `-c` 或 `--container` 标志时执行 +`kubectl logs` 或 `kubectl exec` 命令将使用此默认容器。 -### endpoints.kubernetes.io/over-capacity +### endpoints.kubernetes.io/over-capacity {#endpoints-kubernetes-io-over-capacity} -示例:`endpoints.kubernetes.io/over-capacity:warning` +例子:`endpoints.kubernetes.io/over-capacity:truncated` 用于:Endpoints -在 v1.22(或更高版本)的 Kubernetes 集群中,如果 Endpoints -资源中的端点超过了 1000 个,Endpoints 控制器就会向其添加这个注解。 -该注解表示此 Endpoints 资源已超过容量,而其端点数已被截断至 1000。 +在 Kubernetes 集群 v1.22(或更高版本)中,如果 Endpoints 资源超过 1000 个,Endpoints +控制器会将此注解添加到 Endpoints 资源。 +注解表示 Endpoints 资源已超出容量,并且已将 Endpoints 数截断为 1000。 -### batch.kubernetes.io/job-tracking +### batch.kubernetes.io/job-tracking {#batch-kubernetes-io-job-tracking} -示例:`batch.kubernetes.io/job-tracking: ""` +例子:`batch.kubernetes.io/job-tracking: ""` 用于:Job -Job 资源中若包含了此注解,则代表控制平面正[使用 Finalizer 追踪 Job 的状态](/zh/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers)。 -你**不该**手动添加或移除此注解。 +Job 上存在此注解表明控制平面正在[使用 Finalizer 追踪 Job](/zh/docs/concepts/workloads/controllers/job/#job-tracking-with-finalizers)。 +你 **不** 可以手动添加或删除此注解。 -### scheduler.alpha.kubernetes.io/preferAvoidPods (已弃用) {#scheduleralphakubernetesio-preferavoidpods} +### scheduler.alpha.kubernetes.io/preferAvoidPods (deprecated) {#scheduleralphakubernetesio-preferavoidpods} 用于:Node -此注解要求启用 [NodePreferAvoidPods 调度插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)。 -该插件已于 Kubernetes 1.22 起弃用。 -请转而使用[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 +此注解需要启用 [NodePreferAvoidPods 调度插件](/zh/docs/reference/scheduling/config/#scheduling-plugins)。 +该插件自 Kubernetes 1.22 起已被弃用。 +请改用[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)。 - -**以下列出的污点只能用于 Node** - -### node.kubernetes.io/not-ready - -示例:`node.kubernetes.io/not-ready:NoExecute` - -节点控制器通过健康监控来检测节点是否就绪,并据此添加/删除此污点。 - -### node.kubernetes.io/unreachable +### node.kubernetes.io/not-ready {#node-kubernetes-io-not-ready} + +例子:`node.kubernetes.io/not-ready:NoExecute` -示例:`node.kubernetes.io/unreachable:NoExecute` +Node 控制器通过监控 Node 的健康状况来检测 Node 是否准备就绪,并相应地添加或删除此污点。 -如果[节点状况](/zh/docs/concepts/architecture/nodes/#condition)的 -`Ready` 键值为 `Unknown`,节点控制器会为节点添加此污点。 +### node.kubernetes.io/unreachable {#node-kubernetes-io-unreachable} + +例子:`node.kubernetes.io/unreachable:NoExecute` + +Node 控制器将此污点添加到对应[节点状况](/zh/docs/concepts/architecture/nodes/#condition) `Ready` +为 `Unknown` 的 Node 上。 -### node.kubernetes.io/unschedulable +### node.kubernetes.io/unschedulable {#node-kubernetes-io-unschedulable} -示例:`node.kubernetes.io/unschedulable:NoSchedule` +例子:`node.kubernetes.io/unschedulable:NoSchedule` -此污点会在节点初始化时被添加,以避免竟态的发生。 +在初始化 Node 期间,为避免竞争条件,此污点将被添加到 Node 上。 -### node.kubernetes.io/memory-pressure +### node.kubernetes.io/memory-pressure {#node-kubernetes-io-memory-pressure} -示例:`node.kubernetes.io/memory-pressure:NoSchedule` +例子:`node.kubernetes.io/memory-pressure:NoSchedule` -kubelet 依据节点上观测到的 `memory.available` 和 `allocatableMemory.available` 来检测内存压力。 -用观测值对比 kubelet 设置的阈值,以判断是否需要添加/移除节点状况和污点。 +kubelet 根据在 Node 上观察到的 `memory.available` 和 `allocatableMemory.available` 检测内存压力。 +然后将观察到的值与可以在 kubelet 上设置的相应阈值进行比较,以确定是否应添加/删除 Node 状况和污点。 -### node.kubernetes.io/disk-pressure +### node.kubernetes.io/disk-pressure {#node-kubernetes-io-disk-pressure} 
-示例:`node.kubernetes.io/disk-pressure:NoSchedule` +例子:`node.kubernetes.io/disk-pressure:NoSchedule` -kubelet 依据节点上观测到的 `imagefs.available`、`imagefs.inodesFree`、`nodefs.available` 和 -`nodefs.inodesFree`(仅 Linux) 来判断磁盘压力。 -用观测值对比 kubelet 设置的阈值,以判断是否需要添加/移除节点状况和污点。 +kubelet 根据在 Node 上观察到的 `imagefs.available`、`imagefs.inodesFree`、`nodefs.available` 和 `nodefs.inodesFree`(仅限 Linux )检测磁盘压力。 +然后将观察到的值与可以在 kubelet 上设置的相应阈值进行比较,以确定是否应添加/删除 Node 状况和污点。 -### node.kubernetes.io/network-unavailable +### node.kubernetes.io/network-unavailable {#node-kubernetes-io-network-unavailable} -示例:`node.kubernetes.io/network-unavailable:NoSchedule` +例子:`node.kubernetes.io/network-unavailable:NoSchedule` -此污点初始由 kubelet 设置,云驱动用它来指示对额外网络配置的需求。 -仅当云中的路由配置妥当后,云驱动才会移除此污点。 +当使用的云驱动指示需要额外的网络配置时,此注解最初由 kubelet 设置。 +只有云上的路由被正确地配置了,此污点才会被云驱动移除 -### node.kubernetes.io/pid-pressure +### node.kubernetes.io/pid-pressure {#node-kubernetes-io-pid-pressure} -示例:`node.kubernetes.io/pid-pressure:NoSchedule` +例子:`node.kubernetes.io/pid-pressure:NoSchedule` -kubelet 检查 `/proc/sys/kernel/pid_max` 尺寸的 D 值(D-value),以及节点上 -Kubernetes 消耗掉的 PID 以获取可用的 PID 数量,即指标 `pid.available` 所指代的值。 -然后用此指标对比 kubelet 设置的阈值,以确定节点状态和污点是否可以被添加/移除。 +kubelet 检查 `/proc/sys/kernel/pid_max` 大小的 D 值和 Kubernetes 在 Node 上消耗的 PID, +以获取可用 PID 数量,并将其作为 `pid.available` 指标值。 +然后该指标与在 kubelet 上设置的相应阈值进行比较,以确定是否应该添加/删除 Node 状况和污点。 -### node.cloudprovider.kubernetes.io/uninitialized +### node.cloudprovider.kubernetes.io/uninitialized {#node-cloudprovider-kubernetes-io-shutdown} -示例:`node.cloudprovider.kubernetes.io/uninitialized:NoSchedule` +例子:`node.cloudprovider.kubernetes.io/uninitialized:NoSchedule` -如果 kubelet 启动时设置了“external”云驱动,将在节点上设置此污点以标记节点不可用,直到 -cloud-controller-manager 中的某个控制器初始化此节点之后,才会移除此污点。 +在使用“外部”云驱动启动 kubelet 时,在 Node 上设置此污点以将其标记为不可用,直到来自 +cloud-controller-manager 的控制器初始化此 Node,然后移除污点。 -### node.cloudprovider.kubernetes.io/shutdown +### node.cloudprovider.kubernetes.io/shutdown {#node-cloudprovider-kubernetes-io-shutdown} -示例:`node.cloudprovider.kubernetes.io/shutdown:NoSchedule` +例子:`node.cloudprovider.kubernetes.io/shutdown:NoSchedule` -如果 Node 处于云驱动所指定的关机状态,Node 将被打上污点 -`node.cloudprovider.kubernetes.io/shutdown`,污点的效果为 `NoSchedule`。 +如果 Node 处于云驱动所指定的关闭状态,则 Node 会相应地被设置污点,对应的污点和效果为 +`node.cloudprovider.kubernetes.io/shutdown` 和 `NoSchedule`。 -### pod-security.kubernetes.io/enforce +### pod-security.kubernetes.io/enforce {#pod-security-kubernetes-io-enforce} -示例:`pod-security.kubernetes.io/enforce: baseline` +例子:`pod-security.kubernetes.io/enforce: baseline` 用于:Namespace -此标签的值**必须**是 `privileged`、`baseline`、`restricted` 之一,对应 -[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。 -具体而言,被此标签标记的命名空间下,任何创建不满足安全要求的 Pod 的请求都会被都会被 _禁止_。 +值**必须**是 `privileged`、`baseline` 或 `restricted` 之一,它们对应于 +[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards) 级别。 +特别地,`enforce` 标签 **禁止** 在带标签的 Namespace 中创建任何不符合指示级别要求的 Pod。 -更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。 +请请参阅[在名字空间级别实施 Pod 安全性](/zh/docs/concepts/security/pod-security-admission)了解更多信息。 -### pod-security.kubernetes.io/enforce-version +### pod-security.kubernetes.io/enforce-version {#pod-security-kubernetes-io-enforce-version} -示例:`pod-security.kubernetes.io/enforce-version: {{< skew latestVersion >}}` +例子:`pod-security.kubernetes.io/enforce-version: {{< skew latestVersion >}}` 用于:Namespace -此标签的值**必须**是 `latest` 或一个以 `v.` 格式表示的有效的 Kubernets 版本号。 -此标签决定了验证 Pod 时所使用的 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。 +值**必须**是 
`latest` 或格式为 `v.` 的有效 Kubernetes 版本。 +此注解决定了在验证提交的 Pod 时要应用的 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。 -更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。 +请参阅[在名字空间级别实施 Pod 安全性](/zh/docs/concepts/security/pod-security-admission)了解更多信息。 -### pod-security.kubernetes.io/audit +### pod-security.kubernetes.io/audit {#pod-security-kubernetes-io-audit} -示例:`pod-security.kubernetes.io/audit: baseline` +例子:`pod-security.kubernetes.io/audit: baseline` 用于:Namespace -此标签的值**必须**是 `privileged`、`baseline`、`restricted` 之一,对应 -[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。 -具体而言,此标签不会阻止不满足安全性要求的 Pod 的创建,但会在那些 Pod 中添加审计(Audit)注解。 +值**必须**是与 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards) 级别相对应的 +`privileged`、`baseline` 或 `restricted` 之一。 +具体来说,`audit` 标签不会阻止在带标签的 Namespace 中创建不符合指示级别要求的 Pod, +但会向该 Pod 添加审计注解。 -更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。 +请参阅[在名字空间级别实施 Pod 安全性](/zh/docs/concepts/security/pod-security-admission)了解更多信息。 -### pod-security.kubernetes.io/audit-version +### pod-security.kubernetes.io/audit-version {#pod-security-kubernetes-io-audit-version} -示例:`pod-security.kubernetes.io/audit-version: {{< skew latestVersion >}}` +例子:`pod-security.kubernetes.io/audit-version: {{< skew latestVersion >}}` 用于:Namespace -此标签的值**必须**是 `latest` 或一个以 `v.` 格式表示的有效的 Kubernets 版本号。 -此标签决定了验证 Pod 时所使用的 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。 +值**必须**是 `latest` 或格式为 `v.` 的有效 Kubernetes 版本。 +此注解决定了在验证提交的 Pod 时要应用的 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。 -更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。 +请参阅[在名字空间级别实施 Pod 安全性](/zh/docs/concepts/security/pod-security-admission)了解更多信息。 -### pod-security.kubernetes.io/warn +### pod-security.kubernetes.io/warn {#pod-security-kubernetes-io-warn} -示例:`pod-security.kubernetes.io/warn: baseline` +例子:`pod-security.kubernetes.io/warn: baseline` 用于:Namespace -此标签的值**必须**是 `privileged`、`baseline`、`restricted` 之一,对应 -[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。 -具体而言,此标签不会阻止不满足安全性要求的 Pod 的创建,但会返回给用户一个警告。 -注意在创建或更新包含 Pod 模板的对象(例如 Deployment、Job、StatefulSet 等)时,也会显示该警告。 +值**必须**是与 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards)级别相对应的 +`privileged`、`baseline` 或 `restricted` 之一。特别地, +`warn` 标签不会阻止在带标签的 Namespace 中创建不符合指示级别概述要求的 Pod,但会在这样做后向用户返回警告。 +请注意,在创建或更新包含 Pod 模板的对象时也会显示警告,例如 Deployment、Jobs、StatefulSets 等。 -更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。 +请参阅[在名字空间级别实施 Pod 安全性](/zh/docs/concepts/security/pod-security-admission)了解更多信息。 -### pod-security.kubernetes.io/warn-version +### pod-security.kubernetes.io/warn-version {#pod-security-kubernetes-io-warn-version} -示例:`pod-security.kubernetes.io/warn-version: {{< skew latestVersion >}}` +例子:`pod-security.kubernetes.io/warn-version: {{< skew latestVersion >}}` 用于:Namespace -此标签的值**必须**是 `latest` 或一个以 `v.` 格式表示的有效的 Kubernets 版本号。 -此标签决定了验证 Pod 时所使用的 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。 -注意在创建或更新包含 Pod 模板的对象(例如 Deployment、Job、StatefulSet 等)时,也会显示该警告。 +值**必须**是 `latest` 或格式为 `v.` 的有效 Kubernetes 版本。 +此注解决定了在验证提交的 Pod 时要应用的 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards)策略的版本。 +请注意,在创建或更新包含 Pod 模板的对象时也会显示警告, +例如 Deployment、Jobs、StatefulSets 等。 -更多信息请查阅[执行命名空间级别的 Pod 安全性设置](/zh/docs/concepts/security/pod-security-admission)。 +请参阅[在名字空间级别实施 Pod 
安全性](/zh/docs/concepts/security/pod-security-admission)了解更多信息。 ### seccomp.security.alpha.kubernetes.io/pod (已弃用) {#seccomp-security-alpha-kubernetes-io-pod} -此注解已于 Kubernetes v1.19 起被弃用,且将于 v1.25 失效。 -要为 Pod 设定具体的安全设置,请在 Pod 规约中加入 `securityContext` 字段。 -Pod 的 [`.spec.securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) +此注解自 Kubernetes v1.19 起已被弃用,将在 v1.25 中失效。 +要为 Pod 指定安全设置,请在 Pod 规范中包含 `securityContext` 字段。 +Pod 的 `.spec` 中的 [`securityContext`](/zh/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) 字段定义了 Pod 级别的安全属性。 -当你[为 Pod 设置安全性上下文](/zh/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod)时, -你设定的配置会被应用到该 Pod 的所有容器中。 +你[为 Pod 设置安全上下文](/zh/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) 时, +你所给出的设置适用于该 Pod 中的所有容器。 -### container.seccomp.security.alpha.kubernetes.io/[NAME](已弃用){#container-seccomp-security-alpha-kubernetes-io} +### container.seccomp.security.alpha.kubernetes.io/[NAME] {#container-seccomp-security-alpha-kubernetes-io} -此注解已于 Kubernetes v1.19 起被弃用,且将于 v1.25 失效。 -[使用 seccomp 限制容器的系统调用](/zh/docs/tutorials/security/seccomp/)教程会指导你完成对 -Pod 或其中的一个容器应用 seccomp 配置文件的全部流程。 -该教程涵盖了 Kubernetes 所支持的配置 seccomp 的机制,此机制基于 Pod 的 `.spec.securityContext`。 +此注解自 Kubernetes v1.19 起已被弃用,将在 v1.25 中失效。 +教程[使用 seccomp 限制容器的系统调用](/zh/docs/tutorials/clusters/seccomp/)将引导你完成将 +seccomp 配置文件应用于 Pod 或其容器的步骤。 +该教程介绍了在 Kubernetes 中配置 seccomp 的支持机制,基于在 Pod 的 `.spec` 中设置 `securityContext`。 -## 用于审计的注解 +## 用于审计的注解 {#annonations-used-for-audit} - [`pod-security.kubernetes.io/exempt`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-exempt) -- [`pod-security.kubernetes.io/enforce-policy`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy) +- [`pod-security.kubernetes.io/enforce-policy`](/zh/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-enforce-policy) - [`pod-security.kubernetes.io/audit-violations`](/zh/docs/reference/labels-annotations-taints/audit-annotations/#pod-security-kubernetes-io-audit-violations) -更多细节请参阅[审计注解](/zh/docs/reference/labels-annotations-taints/audit-annotations/)。 \ No newline at end of file +在[审计注解](/zh/docs/reference/labels-annotations-taints/audit-annotations/)页面上查看更多详细信息。 \ No newline at end of file diff --git a/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md b/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md index 8fe6e438de162..2ab584e4ed668 100644 --- a/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md +++ b/content/zh/docs/reference/labels-annotations-taints/audit-annotations.md @@ -1,5 +1,5 @@ --- -title: 审计注解 +title: "审计注解" weight: 1 --- - -此页面是 kubernetes.io 命名空间中的审计注解的参考文档。 -这些注解会被应用到 `audit.k8s.io` API 组中的 `Event` 对象中。 +该页面作为 kubernetes.io 名字空间的审计注解的参考。这些注解适用于 API 组 `audit.k8s.io` 中的 `Event` 对象。 {{< note >}} -下列注解并未用在 Kubernetes API 中。 -当你在集群中[启用审计](/zh/docs/tasks/debug-application-cluster/audit/)时,审计事件的数据将通过 -`audit.k8s.io` API 组中的 `Event` 对象来记录。 -注解会被应用到审计事件中。审计事件与 -[Event API](/docs/reference/kubernetes-api/cluster-resources/event-v1/)(`events.k8s.io` API 组)中的对象不同。 -{{< /note >}} +Kubernetes API 中不使用以下注解。当你在集群中[启用审计](/zh/docs/tasks/debug-application-cluster/audit/)时, +审计事件数据将使用 API 组 `audit.k8s.io` 中的 `Event` 写入。 +注解适用于审计事件。审计事件不同于[事件 API ](/zh/docs/reference/kubernetes-api/cluster-resources/event-v1/) +(API 组 
`events.k8s.io`)中的对象。 +{{}} - -## pod-security.kubernetes.io/exempt +## pod-security.kubernetes.io/exempt {#pod-security-kubernetes-io-exempt} -示例:`pod-security.kubernetes.io/exempt: namespace` +例子:`pod-security.kubernetes.io/exempt: namespace` -此注解的值**必须**是 `user`、`namespace`、`runtimeClass` 之一,对应 -[Pod 安全性豁免](/zh/docs/concepts/security/pod-security-admission/#exemptions)维度。 -此注解标示了 Pod 安全性豁免的维度。 +值**必须**是对应于 [Pod 安全豁免](/zh/docs/concepts/security/pod-security-admission/#exemptions)维度的 +`user`、`namespace` 或 `runtimeClass` 之一。 +此注解指示 PodSecurity 基于哪个维度的强制豁免执行。 -## pod-security.kubernetes.io/enforce-policy +## pod-security.kubernetes.io/enforce-policy {#pod-security-kubernetes-io-enforce-policy} -示例:`pod-security.kubernetes.io/enforce-policy: restricted:latest` +例子:`pod-security.kubernetes.io/enforce-policy: restricted:latest` -此注解的值**必须**是 `privileged:`、`baseline:`、`restricted:` -之一,对应 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)中定义的级别。 -`` **必须**是 `latest` 或一个以 `v.` 格式表示的有效的 Kubernets 版本号。 -此注解标示了 Pod 安全性准入过程中执行批准或拒绝的级别。 +值**必须**是对应于 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards) 级别的 +`privileged:<版本>`、`baseline:<版本>`、`restricted:<版本>`, +关联的版本**必须**是 `latest` 或格式为 `v.` 的有效 Kubernetes 版本。 +此注解通知有关在 PodSecurity 准入期间允许或拒绝 Pod 的执行级别。 -更多信息请查阅 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)。 +有关详细信息,请参阅 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)。 -## pod-security.kubernetes.io/audit-violations +## pod-security.kubernetes.io/audit-violations {#pod-security-kubernetes-io-audit-violations} -示例:`pod-security.kubernetes.io/audit-violations: would violate +例子:`pod-security.kubernetes.io/audit-violations: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "example" must set securityContext.allowPrivilegeEscalation=false), ...` -此注解详细描述了一次审计策略的违背信息,其中包含了所触犯的 -[Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)级别以及具体的策略。 +注解值给出审计策略违规的详细说明,它包含所违反的 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)级别以及 +PodSecurity 执行中违反的特定策略及对应字段。 -更多信息请查阅 [Pod 安全性标准](/zh/docs/concepts/security/pod-security-standards)。 \ No newline at end of file +有关详细信息,请参阅 [Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)。 \ No newline at end of file From 62149d055f1ca12ef781dd473e49a10a6d8f24f6 Mon Sep 17 00:00:00 2001 From: syxunion <86048991+syxunion@users.noreply.github.com> Date: Wed, 23 Mar 2022 18:10:00 +0800 Subject: [PATCH 069/138] Translate object-field-selector to chinese (#32432) * chinese page object-field-selector * Modify according to comments --- .../object-field-selector.md | 54 +++++++++++++++++++ 1 file changed, 54 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/object-field-selector.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/object-field-selector.md b/content/zh/docs/reference/kubernetes-api/common-definitions/object-field-selector.md new file mode 100644 index 0000000000000..536c9b7f867f7 --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/object-field-selector.md @@ -0,0 +1,54 @@ +--- +api_metadata: + apiVersion: "" + import: "k8s.io/api/core/v1" + kind: "ObjectFieldSelector" +content_type: "api_reference" +description: "ObjectFieldSelector 选择对象的 APIVersioned 字段。" +title: "ObjectFieldSelector" +weight: 6 +auto_generated: true +--- + + + + + +`import "k8s.io/api/core/v1"` + + +ObjectFieldSelector 选择对象的 APIVersioned 字段。 + +
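+
+A brief illustrative use (the environment variable name is arbitrary): exposing a Pod's
+own namespace to one of its containers via the downward API:
+
+```yaml
+env:
+  - name: MY_POD_NAMESPACE
+    valueFrom:
+      fieldRef:
+        apiVersion: v1
+        fieldPath: metadata.namespace
+```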
+ + +- **fieldPath** (string), 必需的 + + 在指定 API 版本中要选择的字段的路径。 + +- **apiVersion** (string) + + `fieldPath` 写入时所使用的模式版本,默认为 "v1"。 + + From bd6b45c4491f1733373cb416bda6266883d05c60 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Wed, 23 Mar 2022 18:15:37 +0800 Subject: [PATCH 070/138] [zh] Update cron-jobs and delete repeat context Signed-off-by: xin.li --- .../concepts/workloads/controllers/cron-jobs.md | 14 +------------- 1 file changed, 1 insertion(+), 13 deletions(-) diff --git a/content/zh/docs/concepts/workloads/controllers/cron-jobs.md b/content/zh/docs/concepts/workloads/controllers/cron-jobs.md index acbde8bf08052..e31fa8edc36b9 100644 --- a/content/zh/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/zh/docs/concepts/workloads/controllers/cron-jobs.md @@ -115,19 +115,7 @@ This example CronJob manifest prints the current time and a hello message every # │ │ ┌───────────── 月的某天 (1 - 31) # │ │ │ ┌───────────── 月份 (1 - 12) # │ │ │ │ ┌───────────── 周的某天 (0 - 6)(周日到周一;在某些系统上,7 也是星期日) -# │ │ │ │ │ -# │ │ │ │ │ -# │ │ │ │ │ -# * * * * * -``` - -``` -# ┌───────────── 分钟 (0 - 59) -# │ ┌───────────── 小时 (0 - 23) -# │ │ ┌───────────── 月的某天 (1 - 31) -# │ │ │ ┌───────────── 月份 (1 - 12) -# │ │ │ │ ┌───────────── 周的某天 (0 - 6) (周日到周一;在某些系统上,7 也是星期日) -# │ │ │ │ │ +# │ │ │ │ │ 或者是 sun,mon,tue,web,thu,fri,sat # │ │ │ │ │ # │ │ │ │ │ # * * * * * From 2a2ab57c82ccd91608436839ca19534d33e83ecb Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Wed, 23 Mar 2022 21:11:18 +0800 Subject: [PATCH 071/138] [zh] Update kops.md Signed-off-by: xin.li --- content/zh/docs/setup/production-environment/tools/kops.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/zh/docs/setup/production-environment/tools/kops.md b/content/zh/docs/setup/production-environment/tools/kops.md index 94b448361af24..73b322da644dc 100644 --- a/content/zh/docs/setup/production-environment/tools/kops.md +++ b/content/zh/docs/setup/production-environment/tools/kops.md @@ -410,12 +410,12 @@ See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to expl ## {{% heading "whatsnext" %}} * 了解有关 Kubernetes 的[概念](/zh/docs/concepts/) 和 - [`kubectl`](/zh/docs/reference/kubectl/overview/) 有关的更多信息。 + [`kubectl`](/zh/docs/reference/kubectl/) 有关的更多信息。 * 了解 `kops` [高级用法](https://github.com/kubernetes/kops)。 * 请参阅 `kops` [文档](https://github.com/kubernetes/kops) 获取教程、 最佳做法和高级配置选项。 From bdf2e6ee97ff512e732865cbf5162691edf154ad Mon Sep 17 00:00:00 2001 From: Natali Vlatko Date: Wed, 23 Mar 2022 19:30:36 +0100 Subject: [PATCH 072/138] Add natalisucks to reviewers/approvers of English language content --- OWNERS_ALIASES | 2 ++ 1 file changed, 2 insertions(+) diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index ce80300961c9f..a98ae58b6f04b 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -24,6 +24,7 @@ aliases: - jimangel - jlbutler - kbhawkey + - natalisucks - onlydole - pi-victor - reylejano @@ -39,6 +40,7 @@ aliases: - jimangel - kbhawkey - mehabhalodiya + - natalisucks - onlydole - rajeshdeshpande02 - sftim From f128a0db145d20174dcec4216d68efd12b01ad77 Mon Sep 17 00:00:00 2001 From: Arhell Date: Thu, 24 Mar 2022 00:16:26 +0200 Subject: [PATCH 073/138] [ru] fix typo --- content/ru/docs/concepts/architecture/controller.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ru/docs/concepts/architecture/controller.md b/content/ru/docs/concepts/architecture/controller.md index 4f28a51836cd5..3df6517fa48c2 100644 --- a/content/ru/docs/concepts/architecture/controller.md +++ 
b/content/ru/docs/concepts/architecture/controller.md @@ -11,7 +11,7 @@ weight: 30 Вот один из примеров контура управления: термостат в помещении. Когда вы устанавливаете температуру, это говорит термостату о вашем *желаемом состоянии*. Фактическая температура в помещении - это -*текущее состояние*. Термостат действует так, чтобы приблизить текущее состояние к елаемому состоянию, путем включения или выключения оборудования. +*текущее состояние*. Термостат действует так, чтобы приблизить текущее состояние к желаемому состоянию, путем включения или выключения оборудования. {{< glossary_definition term_id="controller" length="short">}} From fb744abdd12edd115a7a9827e88b04c544dc3bba Mon Sep 17 00:00:00 2001 From: Kat Cosgrove Date: Wed, 23 Mar 2022 22:31:18 +0000 Subject: [PATCH 074/138] adds 1.24 cluster dockershim removal ready blog (#32303) * adds 1.24 cluster dockershim removal ready blog * small rephrasings * Update content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md Co-authored-by: Nate W. <4453979+nate-double-u@users.noreply.github.com> * Update content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md Co-authored-by: Nate W. <4453979+nate-double-u@users.noreply.github.com> * Update content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md Co-authored-by: Nate W. <4453979+nate-double-u@users.noreply.github.com> * Update content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md Co-authored-by: Nate W. <4453979+nate-double-u@users.noreply.github.com> * Update content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md Co-authored-by: Nate W. <4453979+nate-double-u@users.noreply.github.com> * small rephrasings and link adds Co-authored-by: Nate W. <4453979+nate-double-u@users.noreply.github.com> --- ...2022-03-31-ready-for-dockershim-removal.md | 34 +++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md diff --git a/content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md b/content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md new file mode 100644 index 0000000000000..63d0cc5be5f3d --- /dev/null +++ b/content/en/blog/_posts/2022-03-31-ready-for-dockershim-removal.md @@ -0,0 +1,34 @@ +--- +layout: blog +title: "Is Your Cluster Ready for v1.24?" +date: 2022-03-31 +slug: ready-for-dockershim-removal +--- + +**Author:** Kat Cosgrove + + +Way back in December of 2020, Kubernetes announced the [deprecation of Dockershim](/blog/2020/12/02/dont-panic-kubernetes-and-docker/). In Kubernetes, dockershim is a software shim that allows you to use the entire Docker engine as your container runtime within Kubernetes. In the upcoming v1.24 release, we are removing Dockershim - the delay between deprecation and removal in line with the [project’s policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) of supporting features for at least one year after deprecation. If you are a cluster operator, this guide includes the practical realities of what you need to know going into this release. Also, what do you need to do to ensure your cluster doesn’t fall over! + +## First, does this even affect you? + +If you are rolling your own cluster or are otherwise unsure whether or not this removal affects you, stay on the safe side and [check to see if you have any dependencies on Docker Engine](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/). 
Please note that using Docker Desktop to build your application containers is not a Docker dependency for your cluster. Container images created by Docker are compliant with the [Open Container Initiative (OCI)](https://opencontainers.org/), a Linux Foundation governance structure that defines industry standards around container formats and runtimes. They will work just fine on any container runtime supported by Kubernetes. + +If you are using a managed Kubernetes service from a cloud provider, and you haven’t explicitly changed the container runtime, there may be nothing else for you to do. Amazon EKS, Azure AKS, and Google GKE all default to containerd now, though you should make sure they do not need updating if you have any node customizations. To check the runtime of your nodes, follow [Find Out What Container Runtime is Used on a Node](​​/docs/tasks/administer-cluster/migrating-from-dockershim/find-out-runtime-you-use/). + +Regardless of whether you are rolling your own cluster or using a managed Kubernetes service from a cloud provider, you may need to [migrate telemetry or security agents that rely on Docker Engine](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents/). + +## I have a Docker dependency. What now? + +If your Kubernetes cluster depends on Docker Engine and you intend to upgrade to Kubernetes v1.24 (which you should eventually do for security and similar reasons), you will need to change your container runtime from Docker Engine to something else or use [cri-dockerd](https://github.com/Mirantis/cri-dockerd). Since [containerd](https://containerd.io/) is a graduated CNCF project and the runtime within Docker itself, it’s a safe bet as an alternative container runtime. Fortunately, the Kubernetes project has already documented the process of [changing a node’s container runtime](/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/), using containerd as an example. Instructions are similar for switching to one of the other supported runtimes. + +## I want to upgrade Kubernetes, and I need to maintain compatibility with Docker as a runtime. What are my options? + +Fear not, you aren’t being left out in the cold and you don’t have to take the security risk of staying on an old version of Kubernetes. Mirantis and Docker have jointly released, and are maintaining, a replacement for dockershim. That replacement is called [cri-dockerd](https://github.com/Mirantis/cri-dockerd). If you do need to maintain compatibility with Docker as a runtime, install cri-dockerd following the instructions in the project’s documentation. + +## Is that it? + + +Yes. As long as you go into this release aware of the changes being made and the details of your own clusters, and you make sure to communicate clearly with your development teams, it will be minimally dramatic. You may have some changes to make to your cluster, application code, or scripts, but all of these requirements are documented. Switching from using Docker Engine as your runtime to using [one of the other supported container runtimes](/docs/setup/production-environment/container-runtimes/) effectively means removing the middleman, since the purpose of dockershim is to access the container runtime used by Docker itself. From a practical perspective, this removal is better both for you and for Kubernetes maintainers in the long-run. + +If you still have questions, please first check the [Dockershim Removal FAQ](/blog/2022/02/17/dockershim-faq/). 
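
To make the dependency check described above concrete, here is an editor-added illustration (not part of the original announcement) of how a cluster operator can see which runtime every node reports. The commands and the `containerRuntimeVersion` field are standard Kubernetes; only the interpretation notes in the comments are ours:

```shell
# List nodes along with the container runtime each one reports.
# A value like docker://20.10.x means the node still depends on dockershim;
# containerd://1.x or cri-o://1.x means a CRI-native runtime is in use.
kubectl get nodes -o wide

# The same information, printed as "node <TAB> runtime" pairs:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

The linked task pages remain the authoritative procedure; this is only a quick first-pass signal.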
From 1eae49aec885fbbdba2dd70c5bd435bb3be34bcd Mon Sep 17 00:00:00 2001
From: zhangxyjlu <101087623+zhangxyjlu@users.noreply.github.com>
Date: Thu, 24 Mar 2022 08:44:41 +0800
Subject: [PATCH 075/138] [zh] Add 2022-02-17-updated-dockershim-faq.md
 (#32342)

* [zh] Add 2022-02-17-updated-dockershim-faq.md #32270
* [zh] Add 2022-02-17-updated-dockershim-faq.md
* [zh] Add 2022-02-17-updated-dockershim-faq.md
* [zh] Add 2022-02-17-updated-dockershim-faq.md
---
 .../2022-02-17-updated-dockershim-faq.md      | 373 ++++++++++++++++++
 1 file changed, 373 insertions(+)
 create mode 100644 content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md

diff --git a/content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md b/content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md
new file mode 100644
index 0000000000000..9a1cac8cc255d
--- /dev/null
+++ b/content/zh/blog/_posts/2022-02-17-updated-dockershim-faq.md
@@ -0,0 +1,373 @@
+---
+layout: blog
+title: "更新:弃用 Dockershim 的常见问题"
+date: 2022-02-17
+slug: dockershim-faq
+aliases: [ '/dockershim' ]
+---
+
+**本文是针对 2020 年末发布的[弃用 Dockershim 的常见问题](/zh/blog/2020/12/02/dockershim-faq/)的博客更新。**
+
+本文回顾了自 Kubernetes v1.20 版本[宣布](/zh/blog/2020/12/08/kubernetes-1-20-release-announcement/)弃用
+Dockershim 以来所引发的一些常见问题。关于弃用细节以及这些细节背后的含义,请参考博文
+[别慌:Kubernetes 和 Docker](/zh/blog/2020/12/02/dont-panic-kubernetes-and-docker/)。
+
+你还可以查阅:[检查弃用 Dockershim 对你的影响](/zh/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/)这篇文章,
+以确定弃用 dockershim 会对你或你的组织带来多大的影响。
+
+随着 Kubernetes 1.24 版本发布在即,我们一直在努力让升级能够平稳、顺利地过渡。
+
+- 我们已经写了一篇博文,详细说明了我们的[承诺和后续操作](/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/)。
+- 我们相信可以无障碍地迁移到其他[容器运行时](/zh/docs/setup/production-environment/container-runtimes/#container-runtimes)。
+- 我们撰写了 [dockershim 迁移指南](/docs/tasks/administer-cluster/migrating-from-dockershim/)供你参考。
+- 我们还创建了一个页面来列出[有关 dockershim 移除和使用 CRI 兼容运行时的文章](/zh/docs/reference/node/topics-on-dockershim-and-cri-compatible-runtimes/)。
+  该列表包括一些已经提到的文档,还涵盖了选定的外部资源(包括供应商指南)。
+
+### 为什么会从 Kubernetes 中移除 dockershim?
+
+Kubernetes 的早期版本仅适用于特定的容器运行时:Docker Engine。
+后来,Kubernetes 增加了对使用其他容器运行时的支持。[创建](/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) CRI
+标准是为了实现编排器(如 Kubernetes)和许多不同的容器运行时之间的交互操作。
+Docker Engine 没有实现(CRI)接口,因此 Kubernetes 项目创建了特殊代码来帮助过渡,
+并使 dockershim 代码成为 Kubernetes 的一部分。
+
+dockershim 代码一直是一个临时解决方案(因此得名:shim)。
+你可以阅读 [Kubernetes 移除 Dockershim 增强方案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim)以了解相关的社区讨论和计划。
+事实上,维护 dockershim 已经成为 Kubernetes 维护者的沉重负担。
+
+此外,在较新的 CRI 运行时中实现了与 dockershim 不兼容的功能,例如 cgroups v2 和用户命名空间。
+取消对 dockershim 的支持将加速这些领域的发展。
+
+### 在 Kubernetes 1.23 版本中还可以使用 Docker Engine 吗?
+
+可以使用,在 1.20 版本中唯一的改动是,如果使用 Docker Engine,
+在 [kubelet](/zh/docs/reference/command-line-tools-reference/kubelet/)
+启动时会打印一个警告日志。
+你将在 1.23 版本及以前版本看到此警告。dockershim 将在 Kubernetes 1.24 版本中移除。
+
+### 什么时候移除 dockershim?
+
+考虑到此变更带来的影响,我们使用了一个加长的废弃时间表。
+dockershim 计划在 Kubernetes v1.24 中进行移除,
+参见 [Kubernetes 移除 Dockershim 增强方案](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim)。
+Kubernetes 项目将与供应商和其他生态系统组织密切合作,以确保平稳过渡,并将依据事态的发展评估后续事项。
+
+### 我还可以使用 Docker Engine 作为我的容器运行时吗?
+ + +首先,如果你在自己的电脑上使用 Docker 用来做开发或测试容器:它将与之前没有任何变化。 +无论你为 Kubernetes 集群使用什么容器运行时,你都可以在本地使用 Docker。容器使这种交互成为可能。 + + +Mirantis 和 Docker 已[承诺](https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/) +为 Docker Engine 维护一个替代适配器, +并在 dockershim 从 Kubernetes 移除后维护该适配器。 +替代适配器名为 [`cri-dockerd`](https://github.com/Mirantis/cri-dockerd)。 + + +### 我现有的容器镜像还能正常工作吗? + + +当然可以,`docker build` 创建的镜像适用于任何 CRI 实现。 +所有你的现有镜像将和往常一样工作。 + + +### 私有镜像呢? + + +当然可以。所有 CRI 运行时均支持在 Kubernetes 中相同的拉取(pull)Secret 配置, +无论是通过 PodSpec 还是 ServiceAccount。 + + +### Docker 和容器是一回事吗? + + +Docker 普及了 Linux 容器模式,并在开发底层技术方面发挥了重要作用, +但是 Linux 中的容器已经存在了很长时间。容器的生态相比于 Docker 具有更宽广的领域。 +OCI 和 CRI 等标准帮助许多工具在我们的生态系统中发展壮大, +其中一些替代了 Docker 的某些方面,而另一些则增强了现有功能。 + + +### 现在是否有在生产系统中使用其他运行时的例子? + + +Kubernetes 所有项目在所有版本中出产的工件(Kubernetes 二进制文件)都经过了验证。 + + +此外,[kind](https://kind.sigs.k8s.io/) 项目使用 containerd 已经有一段时间了,并且提高了其用例的稳定性。 +Kind 和 containerd 每天都会被多次使用来验证对 Kubernetes 代码库的任何更改。 +其他相关项目也遵循同样的模式,从而展示了其他容器运行时的稳定性和可用性。 +例如,OpenShift 4.x 从 2019 年 6 月以来,就一直在生产环境中使用 [CRI-O](https://cri-o.io/) 运行时。 + + +至于其他示例和参考资料,你可以查看 containerd 和 CRI-O 的使用者列表, +这两个容器运行时是云原生基金会([CNCF](https://cncf.io))下的项目。 + +- [containerd](https://github.com/containerd/containerd/blob/master/ADOPTERS.md) +- [CRI-O](https://github.com/cri-o/cri-o/blob/master/ADOPTERS.md) + + +### 人们总在谈论 OCI,它是什么? + + +OCI 是 [Open Container Initiative](https://opencontainers.org/about/overview/) 的缩写, +它标准化了容器工具和底层实现之间的大量接口。 +它们维护了打包容器镜像(OCI image)和运行时(OCI runtime)的标准规范。 +它们还以 [runc](https://github.com/opencontainers/runc) 的形式维护了一个 runtime-spec 的真实实现, +这也是 [containerd](https://containerd.io/) 和 [CRI-O](https://cri-o.io/) 依赖的默认运行时。 +CRI 建立在这些底层规范之上,为管理容器提供端到端的标准。 + + +### 我应该用哪个 CRI 实现? + + +这是一个复杂的问题,依赖于许多因素。 +如果你正在使用 Docker,迁移到 containerd 应该是一个相对容易地转换,并将获得更好的性能和更少的开销。 +然而,我们鼓励你探索 [CNCF landscape](https://landscape.cncf.io/card-mode?category=container-runtime&grouping=category) +提供的所有选项,做出更适合你的选择。 + + +### 当切换 CRI 实现时,应该注意什么? + + +虽然 Docker 和大多数 CRI(包括 containerd)之间的底层容器化代码是相同的, +但其周边部分却存在差异。迁移时要考虑如下常见事项: + + +- 日志配置 +- 运行时的资源限制 +- 调用 docker 或通过其控制套接字使用 docker 的节点配置脚本 +- 需要访问 docker 命令或控制套接字的 kubectl 插件 +- 需要直接访问 Docker Engine 的 Kubernetes 工具(例如:已弃用的 'kube-imagepuller' 工具) +- `registry-mirrors` 和不安全注册表等功能的配置 +- 保障 Docker Engine 可用、且运行在 Kubernetes 之外的脚本或守护进程(例如:监视或安全代理) +- GPU 或特殊硬件,以及它们如何与你的运行时和 Kubernetes 集成 + + +如果你只是用了 Kubernetes 资源请求/限制或基于文件的日志收集 DaemonSet,它们将继续稳定工作, +但是如果你用了自定义了 dockerd 配置,则可能需要为新的容器运行时做一些适配工作。 + + +另外还有一个需要关注的点,那就是当创建镜像时,系统维护或嵌入容器方面的任务将无法工作。 +对于前者,可以用 [`crictl`](https://github.com/kubernetes-sigs/cri-tools) 工具作为临时替代方案 +(参阅[从 docker cli 到 crictl 的映射](/zh/docs/tasks/debug-application-cluster/crictl/#mapping-from-docker-cli-to-crictl))。 +对于后者,可以用新的容器创建选项,例如 +[img](https://github.com/genuinetools/img)、 +[buildah](https://github.com/containers/buildah)、 +[kaniko](https://github.com/GoogleContainerTools/kaniko) 或 +[buildkit-cli-for-kubectl](https://github.com/vmware-tanzu/buildkit-cli-for-kubectl), +他们都不需要 Docker。 + + +对于 containerd,你可查阅有关它的[文档](https://github.com/containerd/cri/blob/master/docs/registry.md), +获取迁移时可用的配置选项。 + + +有关如何在 Kubernetes 中使用 containerd 和 CRI-O 的说明, +请参阅 [Kubernetes 相关文档](/docs/setup/production-environment/container-runtimes/) + + +### 我还有其他问题怎么办? 
+ + +如果你使用了供应商支持的 Kubernetes 发行版,你可以咨询供应商他们产品的升级计划。 +对于最终用户的问题,请把问题发到我们的最终用户社区的论坛:https://discuss.kubernetes.io/。 + + +你也可以看看这篇优秀的博客文章:[等等,Docker 被 Kubernetes 弃用了?](https://dev.to/inductor/wait-docker-is-deprecated-in-kubernetes-now-what-do-i-do-e4m) +对这些变化进行更深入的技术讨论。 + + +### 我可以加入吗? + + +当然,只要你愿意,随时随地欢迎。🤗🤗🤗 \ No newline at end of file From d1ea334a70b59d3b5fb10f26f8c852c7fffd86c4 Mon Sep 17 00:00:00 2001 From: shixiuguo <99793706+shixiuguo@users.noreply.github.com> Date: Thu, 24 Mar 2022 08:46:41 +0800 Subject: [PATCH 076/138] Translate docs/reference/kubernetes-api/common-definitions/node-selector-requirement.md into Chinese (#32038) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Translate node-selector-requirement.md into Chinese * Modify as suggested * Modify according to the review results * 删除文档中注释的部分 * Translate node-selector-requirement.md into Chinese Modify according to the review results --- .../node-selector-requirement.md | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100755 content/zh/docs/reference/kubernetes-api/common-definitions/node-selector-requirement.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/node-selector-requirement.md b/content/zh/docs/reference/kubernetes-api/common-definitions/node-selector-requirement.md new file mode 100755 index 0000000000000..e643330a1bc36 --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/node-selector-requirement.md @@ -0,0 +1,82 @@ +--- +api_metadata: + apiVersion: "" + import: "k8s.io/api/core/v1" + kind: "NodeSelectorRequirement" +content_type: "api_reference" +description: "节点选择器是要求包含键、值和关联键和值的运算符的选择器" +title: "NodeSelectorRequirement" +weight: 5 +auto_generated: true +--- + + +`import "k8s.io/api/core/v1"` + + + + 节点选择器是要求包含键、值和关联键和值的运算符的选择器。 + +
+ + +- **key** (string), 必选 + + 选择器适用的标签键。 + + +- **operator** (string), 必选 + + 表示键与一组值的关系的运算符。有效的运算符包括:In、NotIn、Exists、DoesNotExist、Gt 和 Lt。 + + 可选值: + - `"DoesNotExist"` + - `"Exists"` + - `"Gt"` + - `"In"` + - `"Lt"` + - `"NotIn"` + + +- **values** ([]string) + + 字符串数组。如果运算符为 In 或 NotIn,则数组必须为非空。 + 如果运算符为 Exists 或 DoesNotExist,则数组必须为空。 + 如果运算符为 Gt 或 Lt,则数组必须有一个元素,该元素将被译为整数。 + 该数组在合并计划补丁时将被替换。 + + From 49c3eda945d1ffe5f1bcd1a2e8f754469841f0a7 Mon Sep 17 00:00:00 2001 From: 0xff-dev Date: Thu, 24 Mar 2022 09:45:17 +0800 Subject: [PATCH 077/138] [zh] delete pod overhead title --- content/zh/docs/concepts/scheduling-eviction/pod-overhead.md | 5 ----- 1 file changed, 5 deletions(-) diff --git a/content/zh/docs/concepts/scheduling-eviction/pod-overhead.md b/content/zh/docs/concepts/scheduling-eviction/pod-overhead.md index 998a2c3327f97..c0abf0bf582b2 100644 --- a/content/zh/docs/concepts/scheduling-eviction/pod-overhead.md +++ b/content/zh/docs/concepts/scheduling-eviction/pod-overhead.md @@ -32,11 +32,6 @@ _POD 开销_ 是一个特性,用于计算 Pod 基础设施在容器请求和 - - -## Pod 开销 ## 驱逐 API {#the-eviction-api} 如果你不喜欢使用 [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands/#drain) (比如避免调用外部命令,或者更细化地控制 pod 驱逐过程), 你也可以用驱逐 API 通过编程的方式达到驱逐的效果。 - - -首先应该熟悉使用 -[Kubernetes 语言客户端](/zh/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api)。 - -Pod 的 Eviction 子资源可以看作是一种策略控制的 DELETE 操作,作用于 Pod 本身。 -要尝试驱逐(更准确地说,尝试 *创建* 一个 Eviction),需要用 POST 发出所尝试的操作。这里有一个例子: - -{{< tabs name="Eviction_example" >}} -{{% tab name="policy/v1" %}} - -{{< note >}} -`policy/v1` 驱逐在 v1.22+ 中可用。在之前版本中请使用 `policy/v1beta1` 。 -{{< /note >}} - - -```json -{ - "apiVersion": "policy/v1", - "kind": "Eviction", - "metadata": { - "name": "quux", - "namespace": "default" - } -} -``` -{{% /tab %}} -{{% tab name="policy/v1beta1" %}} - -{{< note >}} -在 v1.22 中已弃用,以 `policy/v1` 取代 -{{< /note >}} - -```json -{ - "apiVersion": "policy/v1beta1", - "kind": "Eviction", - "metadata": { - "name": "quux", - "namespace": "default" - } -} -``` -{{% /tab %}} -{{< /tabs >}} - - -你可以使用 `curl` 尝试驱逐: - -```bash -curl -v -H 'Content-type: application/json' http://127.0.0.1:8080/api/v1/namespaces/default/pods/quux/eviction -d @eviction.json -``` - - -API 可以通过以下三种方式之一进行响应: - -- 如果驱逐被授权,那么 Pod 将被删掉,并且你会收到 `200 OK`, - 就像你向 Pod 的 URL 发送了 `DELETE` 请求一样。 -- 如果按照预算中规定,目前的情况不允许的驱逐,你会收到 `429 Too Many Requests`。 - 这通常用于对 *一些* 请求进行通用速率限制, - 但这里我们的意思是:此请求 *现在* 不允许,但以后可能会允许。 - 目前,调用者不会得到任何 `Retry-After` 的提示,但在将来的版本中可能会得到。 -- 如果有一些错误的配置,比如多个预算指向同一个 Pod,你将得到 `500 Internal Server Error`。 - - -对于一个给定的驱逐请求,有两种情况: - -- 没有匹配这个 Pod 的预算。这种情况,服务器总是返回 `200 OK`。 -- 至少匹配一个预算。在这种情况下,上述三种回答中的任何一种都可能适用。 - - -## 驱逐阻塞 - -在某些情况下,应用程序可能会到达一个中断状态,除了 429 或 500 之外,它将永远不会返回任何内容。 -例如 ReplicaSet 创建的替换 Pod 没有变成就绪状态,或者被驱逐的最后一个 -Pod 有很长的终止宽限期,就会发生这种情况。 - - -在这种情况下,有两种可能的解决方案: - -- 中止或暂停自动操作。调查应用程序卡住的原因,并重新启动自动化。 -- 经过适当的长时间等待后,从集群中删除 Pod 而不是使用驱逐 API。 - -Kubernetes 并没有具体说明在这种情况下应该采取什么行为, -这应该由应用程序所有者和集群所有者紧密沟通,并达成对行动一致意见。 +更多信息,请参阅 [API 发起的驱逐](/zh/docs/concepts/scheduling-eviction/api-eviction/)。 ## {{% heading "whatsnext" %}} From 94970599ccabf546e02c1a68617283e57820a72d Mon Sep 17 00:00:00 2001 From: Gerzone Date: Wed, 23 Mar 2022 20:53:26 +0800 Subject: [PATCH 081/138] Translate ObjectReference #32433 --- .../common-definitions/object-reference.md | 111 ++++++++++++++++++ 1 file changed, 111 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/object-reference.md diff --git 
a/content/zh/docs/reference/kubernetes-api/common-definitions/object-reference.md b/content/zh/docs/reference/kubernetes-api/common-definitions/object-reference.md
new file mode 100644
index 0000000000000..2655586d21087
--- /dev/null
+++ b/content/zh/docs/reference/kubernetes-api/common-definitions/object-reference.md
@@ -0,0 +1,111 @@
+---
+api_metadata:
+  apiVersion: ""
+  import: "k8s.io/api/core/v1"
+  kind: "ObjectReference"
+content_type: "api_reference"
+description: "ObjectReference 包含足够的信息,可以让你检查或修改引用的对象。"
+title: "ObjectReference"
+weight: 8
+auto_generated: true
+---
+
+`import "k8s.io/api/core/v1"`
+
+ObjectReference 包含足够的信息,允许你检查或修改引用的对象。
+
+<hr>
+
+- **apiVersion** (string)
+
+  被引用者的 API 版本。
+
+- **fieldPath** (string)
+
+  如果引用的是对象的某一部分而不是整个对象,则此字符串应包含有效的 JSON/Go 字段访问语句,
+  例如 `desiredState.manifest.containers[2]`。
+  例如,如果对象引用针对的是 Pod 中的一个容器,此字段取值类似于
+  `spec.containers{name}`(`name` 指触发事件的容器的名称);
+  如果没有指定容器名称,则类似 `spec.containers[2]`(此 Pod 中索引为 2 的容器)。
+  选择这种表示方式只是为了有一些定义良好的语法来引用对象的某一部分。
+
+- **kind** (string)
+
+  被引用者的类别(kind)。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+- **name** (string)
+
+  被引用对象的名称。更多信息:https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+- **namespace** (string)
+
+  被引用对象的名字空间。更多信息:https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
+
+- **resourceVersion** (string)
+
+  被引用对象的特定资源版本(如果有)。更多信息:https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
+
+- **uid** (string)
+
+  被引用对象的 UID。更多信息:https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids

From dfc52b96ebf36e931938f5cfed4543dad53cfe92 Mon Sep 17 00:00:00 2001
From: Gsealy
Date: Thu, 24 Mar 2022 11:28:50 +0800
Subject: [PATCH 082/138] typo CRS -> CSR

---
 .../access-authn-authz/certificate-signing-requests.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md b/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md
index 0fc00ced0059c..503818b5bad6f 100644
--- a/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md
+++ b/content/zh/docs/reference/access-authn-authz/certificate-signing-requests.md
@@ -702,7 +702,7 @@ status:
 
-驳回(`Denied`)的 CRS:
+驳回(`Denied`)的 CSR:
 
 ```yaml
 apiVersion: certificates.k8s.io/v1

From b501d626a1c79d22c75ff3c4dc316f81018cd85f Mon Sep 17 00:00:00 2001
From: hongming
Date: Thu, 24 Mar 2022 16:34:52 +0800
Subject: [PATCH 083/138] [zh] Fix typo in securing-a-cluster.md

---
 content/zh/docs/tasks/administer-cluster/securing-a-cluster.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md b/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md
index 5731d530b338d..4ce4f0c29e543 100644
--- a/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md
+++ b/content/zh/docs/tasks/administer-cluster/securing-a-cluster.md
@@ -254,7 +254,7 @@ to the metadata API, and avoid using provisioning data to deliver secrets.
 -->
 ### 限制云 metadata API 访问
 
-云平台(AWS, Azure, GCE 等)经常讲 metadate 本地服务暴露给实例。
+云平台(AWS, Azure, GCE 等)经常将 metadata 本地服务暴露给实例。
 默认情况下,这些 API 可由运行在实例上的 Pod 访问,并且可以包含
 该云节点的凭据或配置数据(如 kubelet 凭据)。
 这些凭据可以用于在集群内升级或在同一账户下升级到其他云服务。

From 49436e3dcbea2ce0fde8e0838bf07b6c4279ae53 Mon Sep 17 00:00:00 2001
From: PriyanshuAhlawat
Date: Thu, 24 Mar 2022 17:56:10 +0530
Subject: [PATCH 084/138] Update service.md

---
 .../concepts/services-networking/service.md | 20 ++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 0298854137f46..adfc1b09ee63d 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -184,10 +184,10 @@ In the example above, traffic is routed to the single endpoint defined in
 the YAML: `192.0.2.42:9376` (TCP).
{{< note >}} -The Kubernetes API server does not allow proxying to endpoints that are not mapped to -pods. Actions such as `kubectl proxy ` where the service has no -selector will fail due to this constraint. This prevents the Kubernetes API server -from being used as a proxy to endpoints the caller may not be authorized to access. +The Kubernetes API server does not allow proxying to endpoints that are not mapped to +pods. Actions such as `kubectl proxy ` where the service has no +selector will fail due to this constraint. This prevents the Kubernetes API server +from being used as a proxy to endpoints the caller may not be authorized to access. {{< /note >}} An ExternalName Service is a special case of Service that does not have @@ -251,7 +251,7 @@ There are a few reasons for using proxying for Services: Later in this page you can read about various kube-proxy implementations work. Overall, you should note that, when running `kube-proxy`, kernel level rules may be -modified (for example, iptables rules might get created), which won't get cleaned up, +modified (for example, iptables rules might get created), which won't get cleaned up, in some cases until you reboot. Thus, running kube-proxy is something that should only be done by an administrator which understands the consequences of having a low level, privileged network proxying service on a computer. Although the `kube-proxy` @@ -278,6 +278,8 @@ Lastly, the user-space proxy installs iptables rules which capture traffic to the Service's `clusterIP` (which is virtual) and `port`. The rules redirect that traffic to the proxy port which proxies the backend Pod. +{{< note >}} Kube-proxy in userspace mode is deprecated. {{< /note >}} + By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm. ![Services overview diagram for userspace proxy](/images/docs/services-userspace-overview.svg) @@ -708,13 +710,13 @@ Your cluster must have the `ServiceLoadBalancerClass` [feature gate](/docs/refer other versions of Kubernetes, check the documentation for that release. By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses the cloud provider's default load balancer implementation if the cluster is configured with -a cloud provider using the `--cloud-provider` component flag. +a cloud provider using the `--cloud-provider` component flag. If `spec.loadBalancerClass` is specified, it is assumed that a load balancer implementation that matches the specified class is watching for Services. Any default load balancer implementation (for example, the one provided by the cloud provider) will ignore Services that have this field set. `spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only. -Once set, it cannot be changed. +Once set, it cannot be changed. The value of `spec.loadBalancerClass` must be a label-style identifier, with an optional prefix such as "`internal-vip`" or "`example.com/internal-vip`". Unprefixed names are reserved for end-users. @@ -997,7 +999,7 @@ There are other annotations to manage Classic Elastic Load Balancers that are de service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-53fae93f" # A list of existing security groups to be configured on the ELB created. 
Unlike the annotation - # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB and also overrides the creation + # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB and also overrides the creation # of a uniquely generated security group for this ELB. # The first security group ID on this list is used as a source to permit incoming traffic to target worker nodes (service traffic and health checks). # If multiple ELBs are configured with the same security group ID, only a single permit line will be added to the worker node security groups, that means if you delete any @@ -1007,7 +1009,7 @@ There are other annotations to manage Classic Elastic Load Balancers that are de service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-53fae93f,sg-42efd82e" # A list of additional security groups to be added to the created ELB, this leaves the uniquely generated security group in place, this ensures that every ELB # has a unique security group ID and a matching permit line to allow traffic to the target worker nodes (service traffic and health checks). - # Security groups defined here can be shared between services. + # Security groups defined here can be shared between services. service.beta.kubernetes.io/aws-load-balancer-target-node-labels: "ingress-gw,gw-name=public-api" # A comma separated list of key-value pairs which are used From 2d69c477827e1590ecfd25b6d0cd9c4be572ce88 Mon Sep 17 00:00:00 2001 From: Brett Wolmarans Date: Thu, 24 Mar 2022 10:29:42 -0700 Subject: [PATCH 085/138] Update content/en/docs/reference/kubectl/cheatsheet.md Co-authored-by: Tim Bannister --- content/en/docs/reference/kubectl/cheatsheet.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index cb3aa1f27b3aa..eeee1df93edda 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -40,7 +40,7 @@ echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc ``` ### A Note on --all-namespaces -Appending --all-namespaces happens frequently enough where you should be aware of the shorthand for --all-namespaces: +Appending `--all-namespaces` happens frequently enough where you should be aware of the shorthand for `--all-namespaces`: ```kubectl -A``` From 3980c42945a622bfd5ce05f65a2333de6102030c Mon Sep 17 00:00:00 2001 From: Lukas Hass Date: Thu, 24 Mar 2022 19:58:00 +0100 Subject: [PATCH 086/138] Use cat instead of shell built-in to read checksum --- content/en/docs/tasks/tools/install-kubectl-linux.md | 4 ++-- content/en/docs/tasks/tools/install-kubectl-macos.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/docs/tasks/tools/install-kubectl-linux.md b/content/en/docs/tasks/tools/install-kubectl-linux.md index 987f3a4116310..cf2386dfc7148 100644 --- a/content/en/docs/tasks/tools/install-kubectl-linux.md +++ b/content/en/docs/tasks/tools/install-kubectl-linux.md @@ -52,7 +52,7 @@ For example, to download version {{< param "fullversion" >}} on Linux, type: Validate the kubectl binary against the checksum file: ```bash - echo "$( Date: Fri, 25 Mar 2022 08:39:55 +0800 Subject: [PATCH 087/138] Fix a nit in the feature-state short code --- content/en/docs/concepts/workloads/controllers/daemonset.md | 2 +- 1 file changed, 1 
insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/workloads/controllers/daemonset.md b/content/en/docs/concepts/workloads/controllers/daemonset.md index 7eec771d7d260..ffb1fbd6146c7 100644 --- a/content/en/docs/concepts/workloads/controllers/daemonset.md +++ b/content/en/docs/concepts/workloads/controllers/daemonset.md @@ -107,7 +107,7 @@ If you do not specify either, then the DaemonSet controller will create Pods on ### Scheduled by default scheduler -{{< feature-state state="stable" for-kubernetes-version="1.17" >}} +{{< feature-state for_kubernetes_version="1.17" state="stable" >}} A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the node that a Pod runs on is selected by the Kubernetes scheduler. However, From 3b21d5bc69222fef1df5bd3b94ba393e78eb9415 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Fri, 25 Mar 2022 13:59:04 +0800 Subject: [PATCH 088/138] Fix the ip-masq-agent page --- .../tasks/administer-cluster/ip-masq-agent.md | 73 ++++++++++--------- 1 file changed, 40 insertions(+), 33 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md index 2997508a156f0..e923345d1ab17 100644 --- a/content/en/docs/tasks/administer-cluster/ip-masq-agent.md +++ b/content/en/docs/tasks/administer-cluster/ip-masq-agent.md @@ -4,31 +4,27 @@ content_type: task --- -This page shows how to configure and enable the ip-masq-agent. - +This page shows how to configure and enable the `ip-masq-agent`. ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## IP Masquerade Agent User Guide -The ip-masq-agent configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range. +The `ip-masq-agent` configures iptables rules to hide a pod's IP address behind the cluster node's IP address. This is typically done when sending traffic to destinations outside the cluster's pod [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range. ### **Key Terms** -* **NAT (Network Address Translation)** - Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing. -* **Masquerading** - A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address. -* **CIDR (Classless Inter-Domain Routing)** - Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24. -* **Link Local** - A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation. 
+* **NAT (Network Address Translation)** + Is a method of remapping one IP address to another by modifying either the source and/or destination address information in the IP header. Typically performed by a device doing IP routing. +* **Masquerading** + A form of NAT that is typically used to perform a many to one address translation, where multiple source IP addresses are masked behind a single address, which is typically the device doing the IP routing. In Kubernetes this is the Node's IP address. +* **CIDR (Classless Inter-Domain Routing)** + Based on the variable-length subnet masking, allows specifying arbitrary-length prefixes. CIDR introduced a new method of representation for IP addresses, now commonly known as **CIDR notation**, in which an address or routing prefix is written with a suffix indicating the number of bits of the prefix, such as 192.168.2.0/24. +* **Link Local** + A link-local address is a network address that is valid only for communications within the network segment or the broadcast domain that the host is connected to. Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16 in CIDR notation. The ip-masq-agent configures iptables rules to handle masquerading node/pod IP addresses when sending traffic to destinations outside the cluster node's IP and the Cluster IP range. This essentially hides pod IP addresses behind the cluster node's IP address. In some environments, traffic to "external" addresses must come from a known machine address. For example, in Google Cloud, any traffic to the internet must come from a VM's IP. When containers are used, as in Google Kubernetes Engine, the Pod IP will be rejected for egress. To avoid this, we must hide the Pod IP behind the VM's own IP address - generally known as "masquerade". By default, the agent is configured to treat the three private IP ranges specified by [RFC 1918](https://tools.ietf.org/html/rfc1918) as non-masquerade [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). These ranges are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. The agent will also treat link-local (169.254.0.0/16) as a non-masquerade CIDR by default. The agent is configured to reload its configuration from the location */etc/config/ip-masq-agent* every 60 seconds, which is also configurable. @@ -36,14 +32,20 @@ The ip-masq-agent configures iptables rules to handle masquerading node/pod IP a The agent configuration file must be written in YAML or JSON syntax, and may contain three optional keys: -* **nonMasqueradeCIDRs:** A list of strings in [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges. -* **masqLinkLocal:** A Boolean (true / false) which indicates whether to masquerade traffic to the link local prefix 169.254.0.0/16. False by default. -* **resyncInterval:** A time interval at which the agent attempts to reload config from disk. For example: '30s', where 's' means seconds, 'ms' means milliseconds, etc... +* `nonMasqueradeCIDRs`: A list of strings in + [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation that specify the non-masquerade ranges. +* `masqLinkLocal`: A Boolean (true/false) which indicates whether to masquerade traffic to the + link local prefix `169.254.0.0/16`. False by default. +* `resyncInterval`: A time interval at which the agent attempts to reload config from disk. + For example: '30s', where 's' means seconds, 'ms' means milliseconds. 
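
As a quick illustration of how those three optional keys fit together (an editor's sketch using example values, not text from the original patch), a complete agent configuration file could look like this:

```yaml
# Example ip-masq-agent configuration; the CIDR choices are illustrative.
nonMasqueradeCIDRs:
  - 10.0.0.0/8          # pod traffic to this range keeps its source IP
  - 192.168.0.0/16
masqLinkLocal: false     # keep the default: do not masquerade 169.254.0.0/16
resyncInterval: 60s      # re-read this file from disk every 60 seconds
```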
Traffic to 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) ranges will NOT be masqueraded. Any other traffic (assumed to be internet) will be masqueraded. An example of a local destination from a pod could be its Node's IP address as well as another node's address or one of the IP addresses in Cluster's IP range. Any other traffic will be masqueraded by default. The below entries show the default set of rules that are applied by the ip-masq-agent: -``` +```shell iptables -t nat -L IP-MASQ-AGENT +``` + +```none RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL RETURN all -- anywhere 172.16.0.0/12 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL @@ -52,33 +54,35 @@ MASQUERADE all -- anywhere anywhere /* ip-masq-agent: ``` -By default, in GCE/Google Kubernetes Engine starting with Kubernetes version 1.7.0, if network policy is enabled or you are using a cluster CIDR not in the 10.0.0.0/8 range, the ip-masq-agent will run in your cluster. If you are running in another environment, you can add the ip-masq-agent [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) to your cluster: - - +By default, in GCE/Google Kubernetes Engine, if network policy is enabled or +you are using a cluster CIDR not in the 10.0.0.0/8 range, the `ip-masq-agent` +will run in your cluster. If you are running in another environment, +you can add the `ip-masq-agent` [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) +to your cluster. ## Create an ip-masq-agent To create an ip-masq-agent, run the following kubectl command: -` +```shell kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/ip-masq-agent/master/ip-masq-agent.yaml -` +``` You must also apply the appropriate node label to any nodes in your cluster that you want the agent to run on. -` -kubectl label nodes my-node beta.kubernetes.io/masq-agent-ds-ready=true -` +```shell +kubectl label nodes my-node node.kubernetes.io/masq-agent-ds-ready=true +``` More information can be found in the ip-masq-agent documentation [here](https://github.com/kubernetes-sigs/ip-masq-agent) In most cases, the default set of rules should be sufficient; however, if this is not the case for your cluster, you can create and apply a [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) to customize the IP ranges that are affected. For example, to allow only 10.0.0.0/8 to be considered by the ip-masq-agent, you can create the following [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) in a file called "config". {{< note >}} -It is important that the file is called config since, by default, that will be used as the key for lookup by the ip-masq-agent: +It is important that the file is called config since, by default, that will be used as the key for lookup by the `ip-masq-agent`: -``` +```yaml nonMasqueradeCIDRs: - 10.0.0.0/8 resyncInterval: 60s @@ -87,15 +91,18 @@ resyncInterval: 60s Run the following command to add the config map to your cluster: -``` +```shell kubectl create configmap ip-masq-agent --from-file=config --namespace=kube-system ``` -This will update a file located at */etc/config/ip-masq-agent* which is periodically checked every *resyncInterval* and applied to the cluster node. 
+This will update a file located at `/etc/config/ip-masq-agent` which is periodically checked every `resyncInterval` and applied to the cluster node. After the resync interval has expired, you should see the iptables rules reflect your changes: -``` +```shell iptables -t nat -L IP-MASQ-AGENT +``` + +```none Chain IP-MASQ-AGENT (1 references) target prot opt source destination RETURN all -- anywhere 169.254.0.0/16 /* ip-masq-agent: cluster-local traffic should not be subject to MASQUERADE */ ADDRTYPE match dst-type !LOCAL @@ -103,9 +110,9 @@ RETURN all -- anywhere 10.0.0.0/8 /* ip-masq-agent: MASQUERADE all -- anywhere anywhere /* ip-masq-agent: outbound traffic should be subject to MASQUERADE (this match must come after cluster-local CIDR matches) */ ADDRTYPE match dst-type !LOCAL ``` -By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set *masqLinkLocal* to true in the config map. +By default, the link local range (169.254.0.0/16) is also handled by the ip-masq agent, which sets up the appropriate iptables rules. To have the ip-masq-agent ignore link local, you can set `masqLinkLocal` to true in the ConfigMap. -``` +```yaml nonMasqueradeCIDRs: - 10.0.0.0/8 resyncInterval: 60s From cfc613135fe6a8c9c308e0685e20e56a504d06f1 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Fri, 25 Mar 2022 14:12:41 +0800 Subject: [PATCH 089/138] Fix feature state for ControllerLeaderMigration The feature has been promoted to Beta in 1.22, according to content/en/docs/tasks/administer-cluster/controller-manager-leader-migration.md and upstream source. --- .../reference/command-line-tools-reference/feature-gates.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 4e1b4c6184258..f35801c4b5671 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -63,7 +63,8 @@ different Kubernetes components. | `AllowInsecureBackendProxy` | `true` | Beta | 1.17 | | | `AnyVolumeDataSource` | `false` | Alpha | 1.18 | | | `AppArmor` | `true` | Beta | 1.4 | | -| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | | +| `ControllerManagerLeaderMigration` | `false` | Alpha | 1.21 | 1.21 | +| `ControllerManagerLeaderMigration` | `true` | Beta | 1.22 | | | `CPUManager` | `false` | Alpha | 1.8 | 1.9 | | `CPUManager` | `true` | Beta | 1.10 | | | `CPUManagerPolicyAlphaOptions` | `false` | Alpha | 1.23 | | From 876a1d570c11f42d9e8d0632f558adc1feb87904 Mon Sep 17 00:00:00 2001 From: Arhell Date: Fri, 25 Mar 2022 09:59:41 +0200 Subject: [PATCH 090/138] [zh] sync storage-classes.md --- content/zh/docs/concepts/storage/storage-classes.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md index 753cd37e74a6d..2bfcc0e94d24c 100644 --- a/content/zh/docs/concepts/storage/storage-classes.md +++ b/content/zh/docs/concepts/storage/storage-classes.md @@ -132,7 +132,7 @@ for provisioning PVs. This field must be specified. You are not restricted to specifying the "internal" provisioners listed here (whose names are prefixed with "kubernetes.io" and shipped alongside Kubernetes). 
You can also run and specify external provisioners, -which are independent programs that follow a [specification](https://git.k8s.io/community/contributors/design-proposals/storage/volume-provisioning.md) +which are independent programs that follow a [specification](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/volume-provisioning.md)) defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be run, what volume plugin it uses (including Flex), etc. The repository @@ -389,8 +389,8 @@ allowedTopologies: - matchLabelExpressions: - key: failure-domain.beta.kubernetes.io/zone values: - - us-central1-a - - us-central1-b + - us-central-1a + - us-central-1b ``` -* [AKS 应用程序网关 Ingress 控制器](https://azure.github.io/application-gateway-kubernetes-ingress/) +* [AKS 应用程序网关 Ingress 控制器](https://docs.microsoft.com/azure/application-gateway/tutorial-ingress-controller-add-on-existing?toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json) 是一个配置 [Azure 应用程序网关](https://docs.microsoft.com/azure/application-gateway/overview) 的 Ingress 控制器。 * [Ambassador](https://www.getambassador.io/) API 网关是一个基于 From bbe16c5e269c5fab9c5b40422767b103821e0b4e Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Fri, 25 Mar 2022 22:43:30 +0800 Subject: [PATCH 098/138] [zh] Update cncf-code-of-conduct/readme Signed-off-by: xin.li --- content/zh/community/static/README.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/content/zh/community/static/README.md b/content/zh/community/static/README.md index 6d65182dcc2cb..994a778d9a03b 100644 --- a/content/zh/community/static/README.md +++ b/content/zh/community/static/README.md @@ -1,5 +1,9 @@ +edit them directly, except by replacing them with new versions. +Localization note: you do not need to create localized versions of any of + the files in this directory. +--> 本路径下的文件从其它地方导入。 除了版本更新,不要直接修改。 +本地化说明:你无需为此目录中的任何文件创建本地化版本。 From 5af719eac8167e14acdf056543262544ed449157 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Fri, 25 Mar 2022 22:57:06 +0800 Subject: [PATCH 099/138] [zh] Update install-kubectl-macos.md Signed-off-by: xin.li --- content/zh/docs/tasks/tools/install-kubectl-macos.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/content/zh/docs/tasks/tools/install-kubectl-macos.md b/content/zh/docs/tasks/tools/install-kubectl-macos.md index 1ac2d7793512a..63c17c7901e62 100644 --- a/content/zh/docs/tasks/tools/install-kubectl-macos.md +++ b/content/zh/docs/tasks/tools/install-kubectl-macos.md @@ -178,12 +178,17 @@ The following methods exist for installing kubectl on macOS: 5. 
测试一下,确保你安装的是最新的版本: ```bash kubectl version --client ``` + 或者使用下面命令来查看版本的详细信息: + ```cmd + kubectl version --client --output=yaml + ``` ### 问题 {#questions} @@ -71,7 +71,7 @@ and command-line interfaces (CLIs), such as [`kubectl`](/docs/user-guide/kubectl [教程](/zh/docs/tutorials/)部分则提供对现实世界、特定行业或端到端开发场景的更全面的演练。 [参考](/zh/docs/reference/)部分提供了详细的 [Kubernetes API](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) 文档 -和命令行 (CLI) 接口的文档,例如[`kubectl`](/zh/docs/reference/kubectl/overview/)。 +和命令行 (CLI) 接口的文档,例如[`kubectl`](/zh/docs/reference/kubectl/)。 ### Stack Overflow {#stack-overflow} 社区中的其他人可能已经问过和你类似的问题,也可能能够帮助解决你的问题。 Kubernetes 团队还会监视[带有 Kubernetes 标签的帖子](https://stackoverflow.com/questions/tagged/kubernetes)。 -如果现有的问题对你没有帮助,请[问一个新问题](https://stackoverflow.com/questions/ask?tags=kubernetes)! +如果现有的问题对你没有帮助,在[问一个新问题](https://stackoverflow.com/questions/ask?tags=kubernetes) +之前,**请[确保你的问题是关于 Stack Overflow 的主题](https://stackoverflow.com/help/on-topic) +并且你需要阅读关于[如何提出新问题](https://stackoverflow.com/help/how-to-ask) +的指南。** + + + + + +`import "k8s.io/apimachinery/pkg/apis/meta/v1"` + + + +提供 Patch 是为了给 Kubernetes PATCH 请求正文提供一个具体的名称和类型。 + +
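
(编辑者补充的示意,并非上述补丁的原始内容)客户端通常不直接构造 Patch 对象,而是通过 PATCH 请求提交补丁正文。下面用 `kubectl patch` 演示一个合并(merge)补丁;其中的 Deployment 名称 `my-deployment` 为假设值:

```shell
# 使用 merge patch 将名为 my-deployment 的 Deployment 扩容到 3 个副本
kubectl patch deployment my-deployment --type merge -p '{"spec": {"replicas": 3}}'
```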
+ + + + + From dd54580d17831010605a7ce8700282e21cf997f5 Mon Sep 17 00:00:00 2001 From: zhangxyjlu <101087623+zhangxyjlu@users.noreply.github.com> Date: Sat, 26 Mar 2022 15:43:20 +0800 Subject: [PATCH 106/138] [zh]Add 2021-12-16-StatefulSet-PVC-Auto-Deletion.md (#32492) * [zh]Add 2021-12-16-StatefulSet-PVC-Auto-Deletion.md * [zh]Add 2021-12-16-StatefulSet-PVC-Auto-Deletion.md * [zh]Add 2021-12-16-StatefulSet-PVC-Auto-Deletion.md * [zh]Add 2021-12-16-StatefulSet-PVC-Auto-Deletion.md --- ...021-12-16-StatefulSet-PVC-Auto-Deletion.md | 208 ++++++++++++++++++ 1 file changed, 208 insertions(+) create mode 100644 content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md diff --git a/content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md b/content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md new file mode 100644 index 0000000000000..e5d592c4e648d --- /dev/null +++ b/content/zh/blog/_posts/2021-12-16-StatefulSet-PVC-Auto-Deletion.md @@ -0,0 +1,208 @@ +--- +layout: blog +title: 'Kubernetes 1.23: StatefulSet PVC 自动删除 (alpha)' +date: 2021-12-16 +slug: kubernetes-1-23-statefulset-pvc-auto-deletion +--- + + + +**作者:** Matthew Cary (谷歌) + + +Kubernetes v1.23 为 [StatefulSets](/zh/docs/concepts/workloads/controllers/statefulset/) +引入了一个新的 alpha 级策略,用来控制由 StatefulSet 规约模板生成的 +[PersistentVolumeClaims](/zh/docs/concepts/storage/persistent-volumes/) (PVCs) 的生命周期, +用于当删除 StatefulSet 或减少 StatefulSet 中的 Pods 数量时 PVCs 应该被自动删除的场景。 + + +## 它解决了什么问题? + + +StatefulSet 规约中可以包含 Pod 和 PVC 的模板。当副本先被创建时,如果 PVC 还不存在, +Kubernetes 控制面会为该副本自动创建一个 PVC。在 Kubernetes 1.23 版本之前, +控制面不会删除 StatefulSet 创建的 PVCs——这依赖集群管理员或你需要部署一些额外的适用的自动化工具来处理。 +管理 PVC 的常见模式是通过手动或使用 Helm 等工具,PVC 的具体生命周期由管理它的工具跟踪。 +使用 StatefulSet 时必须自行确定 StatefulSet 创建哪些 PVC,以及它们的生命周期应该是什么。 + + +在这个新特性之前,当一个 StatefulSet 管理的副本消失时,无论是因为 StatefulSet 减少了它的副本数, +还是因为它的 StatefulSet 被删除了,PVC 及其下层的卷仍然存在,需要手动删除。 +当存储数据比较重要时,这样做是合理的,但在许多情况下,这些 PVC 中的持久化数据要么是临时的, +要么可以从另一个源端重建。在这些情况下,删除 StatefulSet 或减少副本后留下的 PVC 及其下层的卷是不必要的, +还会产生成本,需要手动清理。 + + +## 新的 StatefulSet PVC 保留策略 + + +如果你启用这个新 alpha 特性,StatefulSet 规约中就可以包含 PersistentVolumeClaim 的保留策略。 +该策略用于控制是否以及何时删除基于 StatefulSet 的 `volumeClaimTemplate` 属性所创建的 PVCs。 +保留策略的首次迭代包含两种可能删除 PVC 的情况。 + + +第一种情况是 StatefulSet 资源被删除时(这意味着所有副本也被删除),这由 `whenDeleted` 策略控制的。 +第二种情况是 StatefulSet 缩小时,即删除 StatefulSet 部分副本,这由 `whenScaled` 策略控制。 +在这两种情况下,策略即可以是 `Retain` 不涉及相应 PVCs 的改变,也可以是 `Delete` 即删除对应的 PVCs。 +删除是通过普通的[对象删除](/zh/docs/concepts/architecture/garbage-collection/)完成的, +因此,的所有保留策略都会被遵照执行。 + + +该策略形成包含四种情况的矩阵。我将逐一介绍,并为每一种情况给出一个例子。 + + + * **`whenDeleted` 和 `whenScaled` 都是 `Retain`。** 这与 StatefulSets 的现有行为一致, + 即不删除 PVCs。 这也是默认的保留策略。它适用于 StatefulSet + 卷上的数据是不可替代的且只能手动删除的情况。 + + + * **`whenDeleted` 是 `Delete` 但 `whenScaled` 是 `Retain`。** 在这种情况下, + 只有当整个 StatefulSet 被删除时,PVCs 才会被删除。 + 如果减少 StatefulSet 副本,PVCs 不会删除,这意味着如果增加副本时,可以从前一个副本重新连接所有数据。 + 这可能用于临时的 StatefulSet,例如在 CI 实例或 ETL 管道中, + StatefulSet 上的数据仅在 StatefulSet 生命周期内才需要,但在任务运行时数据不易重构。 + 任何保留状态对于所有先缩小后扩大的副本都是必需的。 + + + * **`whenDeleted` 和 `whenScaled` 都是 `Delete`。** 当其副本不再被需要时,PVCs 会立即被删除。 + 注意,这并不包括 Pod 被删除且有新版本被调度的情况,例如当节点被腾空而 Pod 需要迁移到别处时。 + 只有当副本不再被需要时,如按比例缩小或删除 StatefulSet 时,才会删除 PVC。 + 此策略适用于数据生命周期短于副本生命周期的情况。即数据很容易重构, + 且删除未使用的 PVC 所节省的成本比快速增加副本更重要,或者当创建一个新的副本时, + 来自以前副本的任何数据都不可用,必须重新构建。 + + + * **`whenDeleted` 是 `Retain` 但 `whenScaled` 是 `Delete`。** 这与前一种情况类似, + 在增加副本时用保留的 PVCs 快速重构几乎没有什么益处。例如 Elasticsearch 集群就是使用的这种方式。 + 通常,你需要增大或缩小工作负载来满足业务诉求,同时确保最小数量的副本(例如:3)。 + 当减少副本时,数据将从已删除的副本迁移出去,保留这些 PVCs 没有任何用处。 + 但是,这对临时关闭整个 Elasticsearch 集群进行维护时是很有用的。 + 
如果需要使 Elasticsearch 系统脱机,可以通过临时删除 StatefulSet 来实现, + 然后通过重新创建 StatefulSet 来恢复 Elasticsearch 集群。 + 保存 Elasticsearch 数据的 PVCs 不会被删除,新的副本将自动使用它们。 + + +查阅[文档](/zh/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-policies) +获取更多详细信息。 + + +## 下一步是什么? + + +启用该功能并尝试一下!在集群上启用 `StatefulSetAutoDeletePVC` 功能,然后使用新策略创建 StatefulSet。 +测试一下,告诉我们你的体验! + + +我很好奇这个属主引用机制在实践中是否有效。例如,我们意识到 Kubernetes 中没有可以知道谁设置了引用的机制, +因此 StatefulSet 控制器可能会与设置自己的引用的自定义控制器发生冲突。 +幸运的是,维护现有的保留行为不涉及任何新属主引用,因此默认行为是兼容的。 + + +请用标签 `sig/apps` 标记你报告的任何问题,并将它们分配给 Matthew Cary +(在 GitHub上 [@mattcary](https://github.com/mattcary))。 + + +尽情体验吧! + From fc144300d658839071e072e93aa019d2ec363cc4 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Fri, 25 Feb 2022 23:06:10 +0800 Subject: [PATCH 107/138] [zh] Translate diagram guide --- .../zh/docs/contribute/style/diagram-guide.md | 1262 +++++++++++++++++ 1 file changed, 1262 insertions(+) create mode 100644 content/zh/docs/contribute/style/diagram-guide.md diff --git a/content/zh/docs/contribute/style/diagram-guide.md b/content/zh/docs/contribute/style/diagram-guide.md new file mode 100644 index 0000000000000..3f1828275616a --- /dev/null +++ b/content/zh/docs/contribute/style/diagram-guide.md @@ -0,0 +1,1262 @@ +--- +title: 图表指南 +linktitle: 图表指南 +content_type: concept +weight: 15 +--- + + + + + +本指南为你展示如何创建、编辑和分享基于 Mermaid Javascript 库的图表。 +Mermaid.js 允许你使用简单的、类似于 Markdown 的语法来在 Markdown 文件中生成图表。 +你也可以使用 Mermaid 来创建 `.svg` 或 `.png` 图片文件,将其添加到你的文档中。 + +本指南的目标受众是所有希望了解 Mermaid 的用户,以及那些想了解如何创建图表并将其添加到 +Kubernetes 文档中的用户。 + +图 1 概要介绍的是本节所涉及的话题。 + +{{< mermaid >}} +flowchart LR +subgraph m[Mermaid.js] +direction TB +S[ ]-.- +C[使用 markdown 来
构造图表] --> +D[在线
编辑器] +end +A[为什么图表
很有用] --> m +m --> N[3 种创建
图表的方法] +N --> T[示例] +T --> X[样式
与标题] +X --> V[提示] + + + + classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000; + classDef spacewhite fill:#ffffff,stroke:#fff,stroke-width:0px,color:#000 + class A,C,D,N,X,m,T,V box + class S spacewhite + +%% you can hyperlink Mermaid diagram nodes to a URL using click statements + +click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank +click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank +click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank +click N 
"https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank +click T "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank +click X "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank +click V 
"https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgc3ViZ3JhcGggbVtNZXJtYWlkLmpzXVxuICAgIGRpcmVjdGlvbiBUQlxuICAgICAgICBTWyBdLS4tXG4gICAgICAgIENbYnVpbGQ8YnI-ZGlhZ3JhbXM8YnI-d2l0aCBtYXJrZG93bl0gLS0-XG4gICAgICAgIERbb24tbGluZTxicj5saXZlIGVkaXRvcl1cbiAgICBlbmRcbiAgICBBW1doeSBhcmUgZGlhZ3JhbXM8YnI-dXNlZnVsP10gLS0-IG1cbiAgICBtIC0tPiBOWzMgeCBtZXRob2RzPGJyPmZvciBjcmVhdGluZzxicj5kaWFncmFtc11cbiAgICBOIC0tPiBUW0V4YW1wbGVzXVxuICAgIFQgLS0-IFhbU3R5bGluZzxicj5hbmQ8YnI-Y2FwdGlvbnNdXG4gICAgWCAtLT4gVltUaXBzXVxuICAgIFxuIFxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIHNwYWNld2hpdGUgZmlsbDojZmZmZmZmLHN0cm9rZTojZmZmLHN0cm9rZS13aWR0aDowcHgsY29sb3I6IzAwMFxuICAgIGNsYXNzIEEsQyxELE4sWCxtLFQsViBib3hcbiAgICBjbGFzcyBTIHNwYWNld2hpdGUiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOnRydWV9" _blank + +{{< /mermaid >}} + + +图 1. 本节中涉及的话题. + + +开始使用 Mermaid 之前,你需要以下准备: + +* 对 Markdown 有一个基本的了解 +* 使用 Mermaid 在线编辑器 +* 使用 [Hugo 短代码(shortcode)](/zh/docs/contribute/style/hugo-shortcodes/) +* 使用 [Hugo {{}} 短代码](https://gohugo.io/content-management/shortcodes/#figure) +* 执行 [Hugo 本地预览](/zh/docs/contribute/new-content/open-a-pr/#preview-locally) +* 熟悉[贡献新内容](/zh/docs/contribute/new-content/)的流程 + +{{< note >}} + +你可以点击本节中的每个图表来查看其源代码,以及在 Mermaid 在线编辑器中渲染的图表结果。 +{{< /note >}} + + + + +## 你为什么应该在代码中使用图表 + +图表可以增进文档的清晰度,便于理解。对于用户和贡献者而言都有好处。 + + +用户获得的好处有: +* __较为友好的初次体验__:非常详尽的、只包含文本的欢迎页面对用户而言是蛮恐怖的, + 尤其是初次接触 Kubernetes 的用户。 +* __快速理解概念__:图表可以帮助用户理解复杂主题下的要点。 + 你的图表可以作为一种可视化的学习指南,将用户带入主题的细节。 +* __便于记忆__:对某些人而言,图形(图像)要比文字更容易记忆。 + + +对贡献者而言的好处有: + +* __帮助确立所贡献文档的结构和内容__。例如, + 你可以先提供一个覆盖所有顶层要点的图表,然后再逐步展开细节。 +* __培养用户社区并提升其能力__ 容易理解的文档,附以图表,能够吸引新的用户, + 尤其是那些因为预见到复杂性而不愿参与的用户。 + + +你需要考虑你的目标受众。除了一些有经验的 K8s 用户外,你还会遇到很多刚接触 +Kubernetes 的用户。即使一张简单的图表也可以帮助新用户吸收 Kubernetes 概念。 +他们会变得更为大胆和自信,进一步地了解 Kubernetes 及其文档。 + +## Mermaid + + +[Mermaid](https://mermaid-js.github.io/mermaid/#/) 是一个开源的 JavaScript 库, +可以帮助你创建、编辑并很容易地分享图表。这些图表使用简单的、类似 Markdown +的语法开发,并可内嵌到 Markdown 文件中。 + + +下面是 Mermaid 的一些特性: + +* 简单的编码语法 +* 包含基于 Web 的工具,便于你编制和预览你的图表 +* 支持包括流程图、状态图、时序图在内的多种格式 +* 可以通过共享图表的 URL 来与同事方便地合作 +* 有丰富的形状、线条、主题和样式可供选择 + + +使用 Mermaid 的一些好处如下: + +* 不需要使用另外的、非 Mermaid 的图表工具 +* 与现有的 PR 工作流结合的很好。你可以将 Mermaid 代码视为你的 PR 中所包含的 + Markdown 文本 +* 简单的工具生成简单的图表。你不需要精心制作或雕琢过于复杂或详尽的图片。 + 保持简单就好。 + + +Mermaid 提供一种简单的、开放且透明的方法,便于 SIG 社区为新的或现有的文档添加、 +编辑图表并展开协作。 + +{{< note >}} + +即使你的工作环境中不支持,你仍然可以使用 Mermaid 来创建、编辑图表。 +这种方法称作 __Mermaid+SVG__,在后文详细解释。 +{{< /note >}} + + +### 在线编辑器 + +[Mermaid 在线编辑器](https://mermaid-js.github.io/mermaid-live-editor)是一个基于 +Web 的工具,允许你创建、编辑和审阅图表。 + + +在线编辑器的功能主要有: + +* 显示 Mermaid 代码和渲染的图表。 +* 为所保存的每个图表生成一个 URL。该 URL 显示在你的浏览器的 URL 字段中。 + 你可以将 URL 分享给同事,便于他人访问和更改图表。 +* 提供将图表下载为 `.svg` 或 `.png` 文件的选项。 + + +{{< note >}} + +在线编辑器是创建和编辑 Mermaid 图表的最简单的,也是最快的方式。 +{{< /note >}} + + +## 创建图表的方法 {#methods-for-creating-diagrams} + +图 2 给出三种生成和添加图表的方法。 + +{{< mermaid >}} +graph TB +A[贡献者] +B[向 .md 文件

中内嵌
Mermaid 代码] +C[Mermaid+SVG

将 Mermaid 所生成的
SVG 文件添加到 .md 文件] +D[外部工具

添加外部工具
所生成的 SVG
文件到 .md 文件] + + A --> B + A --> C + A --> D + + classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000; + class A,B,C,D box + +%% 你可以使用 click 语句为 Mermaid 节点设置指向某 URL 的超链接 + +click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank + +click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank + +click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank + +click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBBW0NvbnRyaWJ1dG9yXVxuICAgIEJbSW5saW5lPGJyPjxicj5NZXJtYWlkIGNvZGU8YnI-YWRkZWQgdG8gLm1kIGZpbGVdXG4gICAgQ1tNZXJtYWlkK1NWRzxicj48YnI-QWRkIG1lcm1haWQtZ2VuZXJhdGVkPGJyPnN2ZyBmaWxlIHRvIC5tZCBmaWxlXVxuICAgIERbRXh0ZXJuYWwgdG9vbDxicj48YnI-QWRkIGV4dGVybmFsLXRvb2wtPGJyPmdlbmVyYXRlZCBzdmcgZmlsZTxicj50byAubWQgZmlsZV1cblxuICAgIEEgLS0-IEJcbiAgICBBIC0tPiBDXG4gICAgQSAtLT4gRFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3giLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank + +{{< /mermaid >}} + + +图 2. 创建图表的方法 + + + +### 内嵌(Inline) + +图 3 给出的是使用内嵌方法来添加图表所遵循的步骤。 + +{{< mermaid >}} +graph LR +A[1. 使用在线编辑器
来创建或编辑
图表] --> +B[2. 将图表的 URL
保存到某处] --> +C[3. 将 Mermaid 代码
复制到 markdown 文件中] -->
D[4. 添加图表标题]


    classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
    class A,B,C,D box

%% 你可以使用 click 语句为 Mermaid 节点设置指向某 URL 的超链接

click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank

click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank

click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank

click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggTFJcbiAgICBBWzEuIFVzZSBsaXZlIGVkaXRvcjxicj4gdG8gY3JlYXRlL2VkaXQ8YnI-ZGlhZ3JhbV0gLS0-XG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdIC0tPlxuICAgIENbMy4gQ29weSBNZXJtYWlkIGNvZGU8YnI-dG8gcGFnZSBtYXJrZG93biBmaWxlXSAtLT5cbiAgICBEWzQuIEFkZCBjYXB0aW9uXVxuIFxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMsRCBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" _blank

{{< /mermaid >}}


图 3. 内嵌方法的步骤


下面是使用内嵌方法来添加图表时你要执行的步骤：

1. 使用在线编辑器创建你的图表
1. 将图表的 URL 保存在某处以便以后访问
1. 将 Mermaid 代码复制到你的 `.md` 文件中你希望它出现的位置
1. 使用 Markdown 文本在图表下方为其添加标题


Hugo 在构造（网站）过程中会运行 Mermaid 代码，将其转换为图表。

{{< note >}}

你可能认为记录图表 URL 是一个麻烦的过程。如果确实如此，你可以在 `.md` 文件中作一个记录，
标明该 Mermaid 代码是自说明的。贡献者可以将 Mermaid 代码复制到在线编辑器中编辑，
或者将其从在线编辑器中复制出来。
{{< /note >}}


下面是一段包含在某 `.md` 文件中的示例代码片段：

```
---
title: 我的文档
---
图 17 给出从 A 到 B 的一个简单流程。
这里是其他 markdown 文本
...
{{< mermaid >}}
 graph TB
 A --> B
{{< /mermaid >}}

图 17. 从 A 到 B

其他文本
```

{{< note >}}

你必须在 Mermaid 代码块之前和之后分别添加 `{{< mermaid >}}`、`{{< /mermaid >}}`
短代码标记，而且你应该在图表之后为其添加图表标题。
{{< /note >}}


有关添加图表标题的细节，参阅[如何使用标题](#how-to-use-captions)。


使用内嵌方法的好处有：

* 可以直接使用在线编辑器工具
* 很容易在在线编辑器与你的 `.md` 文件之间来回复制 Mermaid 代码
* 不需要额外处理 `.svg` 图片文件
* 内容文字、图表代码和图表标题都位于同一个 `.md` 文件中。


你应该使用[本地](/zh/docs/contribute/new-content/open-a-pr/#preview-locally)和 Netlify
预览来验证图表是可以正常渲染的。

{{< caution >}}

Mermaid 在线编辑器的功能特性可能不支持 K8s 网站的 Mermaid 特性。
你可能在 Hugo 构建过程中看到语法错误提示或者空白屏幕。
如果发生这类情况，可以考虑使用 Mermaid+SVG 方法。
{{< /caution >}}

### Mermaid+SVG


图 4 给出的是使用 Mermaid+SVG 方法添加图表所要遵循的步骤：

{{< mermaid >}}
flowchart LR
A[1. 使用在线编辑器
创建或编辑
图表] +B[2. 将图表的 URL
保存到别处] +C[3. 生成 .svg 文件
并将其下载到
images/ 目录] +subgraph w[ ] +direction TB +D[4. 使用 figure 短代码
来在 .md 文件中
引用 .svg 文件] --> +E[5. 添加图表标题] +end +A --> B +B --> C +C --> w + + classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000; + class A,B,C,D,E,w box + +click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click D "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click E 
"https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgbGl2ZSBlZGl0b3I8YnI-IHRvIGNyZWF0ZS9lZGl0PGJyPmRpYWdyYW1dXG4gICAgQlsyLiBTdG9yZSBkaWFncmFtPGJyPlVSTCBzb21ld2hlcmVdXG4gICAgQ1szLiBHZW5lcmF0ZSAuc3ZnIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSAuc3ZnPGJyPmZpbGUgaW4gcGFnZTxicj4ubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbkEgLS0-IEJcbkIgLS0-IENcbkMgLS0-IHdcblxuICAgIGNsYXNzRGVmIGJveCBmaWxsOiNmZmYsc3Ryb2tlOiMwMDAsc3Ryb2tlLXdpZHRoOjFweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzIEEsQixDLEQsRSx3IGJveFxuICAgICIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + + +{{< /mermaid >}} + + +图 4. Mermaid+SVG 方法的步骤。 + + +使用 Mermaid+SVG 方法来添加图表时你要遵从的步骤: + +1. 使用在线编辑器创建你的图表 +1. 将图表的 URL 保存到某处以便以后访问 +1. 为你的图表生成 `.svg` 文件,并将其下载到合适的 `images/` 目录下 +1. 使用 `{{}}` 短代码在 `.md` 文件中引用该图表 +1. 使用 `{{}}` 短代码的 `caption` 参数为图表设置标题 + + +例如,使用在线编辑器创建一个名为 `boxnet` 的图表。 +将图表的 URL 保存到别处以便以后访问。生成 `boxnet.svg` 文件并将其下载到合适的 +`../images/` 目录下。 + + +在你的 PR 中的 `.md` 文件内使用 `{{}}` 短代码来引用 +`.svg` 图片文件,并为之添加标题。 + + +```json +{{}} +``` + + +关于图表标题的细节,可参阅[如何使用标题](#how-to-use-captions)。 + +{{< note >}} + +使用 `{{}}` 短代码是向你的文档中添加 `.svg` 图片文件的优选方法。 +你也可以使用标准的 markdown 图片语法,即 +`![my boxnet diagram](static/images/boxnet.svg)`。 +如果是后面这种,则需要在图表下面为其添加标题。 +{{< /note >}} + + +你应该使用文本编辑器以注释块的形式在 `.svg` 图片文件中添加在线编辑器的 URL。 +例如,你应该在 `.svg` 图片文件的开头部分包含下面的内容: + +``` + + +``` + + +使用 Mermaid+SVG 方法的好处有: + +* 可以直接使用在线编辑器工具 +* 在线编辑器支持的 Mermaid 特性集合最新 +* 可以利用 K8s 网站用来处理 `.svg` 图片文件的现有方法 +* 工作环境不需要 Mermaid 支持 + + +要使用[本地](/zh/docs/contribute/new-content/open-a-pr/#preview-locally)和 +Netlify 预览来检查你的图表可以正常渲染。 + + +### 外部工具 + +图 5 给出使用外部工具来添加图表时所遵循的步骤。 + +首先,要使用你的外部工具来创建图表,并将其保存为一个 `.svg` 文件或 `.png` 图片文件。 +之后,使用 __Mermaid+SVG__ 方法中相同的步骤添加 `.svg`(`.png`)文件。 + +{{< mermaid >}} +flowchart LR + +A[1. 使用外部工具
来创建或编辑
图表] +B[2. 如果可能保存
图表位置供
其他贡献者访问] +C[3. 生成 .svg 文件
或 .png 文件
并将其下载到
合适的 images/ 目录] + +subgraph w[ ] +direction TB +D[4. 使用 figure 短代码
在你的 .md 文件中
引用该 SVG 或 PNG
文件] --> +E[5. 为图表添加标题] +end +A --> B +B --> C +C --> w + + classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000; + class A,B,C,D,E,w box + +click A "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" + +click B "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" + +click C "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" + +click D 
"https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" + +click E "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZmxvd2NoYXJ0IExSXG4gICAgQVsxLiBVc2UgZXh0ZXJuYWw8YnI-dG9vbCB0byBjcmVhdGUvZWRpdDxicj5kaWFncmFtXVxuICAgIEJbMi4gSWYgcG9zc2libGUsIHNhdmU8YnI-ZGlhZ3JhbSBjb29yZGluYXRlczxicj5mb3IgY29udHJpYnV0b3I8YnI-YWNjZXNzXVxuICAgIENbMy4gR2VuZXJhdGUgLnN2ZyA8YnI-b3IucG5nIGZpbGU8YnI-YW5kIGRvd25sb2FkIHRvPGJyPmFwcHJvcHJpYXRlPGJyPmltYWdlcy8gZm9sZGVyXVxuICAgIHN1YmdyYXBoIHdbIF1cbiAgICBkaXJlY3Rpb24gVEJcbiAgICBEWzQuIFVzZSBmaWd1cmUgc2hvcnRjb2RlPGJyPnRvIHJlZmVyZW5jZSBzdmcgb3I8YnI-cG5nIGZpbGUgaW48YnI-cGFnZSAubWQgZmlsZV0gLS0-XG4gICAgRVs1LiBBZGQgY2FwdGlvbl1cbiAgICBlbmRcbiAgICBBIC0tPiBCXG4gICAgQiAtLT4gQ1xuICAgIEMgLS0-IHdcbiAgICBjbGFzc0RlZiBib3ggZmlsbDojZmZmLHN0cm9rZTojMDAwLHN0cm9rZS13aWR0aDoxcHgsY29sb3I6IzAwMDtcbiAgICBjbGFzcyBBLEIsQyxELEUsdyBib3hcbiAgICAiLCJtZXJtYWlkIjoie1xuICBcInRoZW1lXCI6IFwiZGVmYXVsdFwiXG59IiwidXBkYXRlRWRpdG9yIjpmYWxzZSwiYXV0b1N5bmMiOnRydWUsInVwZGF0ZURpYWdyYW0iOmZhbHNlfQ" + + +{{< /mermaid >}} + + +图 5. 外部工具方法步骤. + + +使用外部工具方法来添加图表时,你要遵从的步骤如下: + +1. 使用你的外部工具来创建图表。 +1. 将图表的位置保存起来供其他贡献者访问。例如,你的工具可能提供一个指向图表的链接, + 或者你可以将源码文件(例如一个 `.xml` 文件)放置到一个公开的仓库, + 以便其他贡献者访问。 +1. 生成图表并将其下载为 `.svg` 或 `.png` 图片文件,保存到合适的 `../images/` 目录下。 +1. 使用 `{{}}` 短代码从 `.md` 文件中引用该图表。 +5. 
使用 `{{}}` 短代码的 `caption` 参数为图表设置标题。 + + +下面是一个用于 `images/apple.svg` 图表的 `{{}}` 短代码: + + +```text +{{}} +``` + + +如果你的外部绘图工具支持: + +* 你可以将多个 `.svg` 或 `.png` 商标、图标或图片整合到你的图表中。 + 不过,你需要确保你查看了版权并遵守了 Kubernetes 文档关于使用第三方内容的 + [指南](/zh/docs/contribute/style/content-guide/)。 +* 你应该将图表的源位置保存起来,以便其他贡献者访问。 + 例如,你的工具可能提供指向图表文件的链接,或者你应该将源代码文件 + (例如一个 `.xml` 文件)放到某处以便其他贡献者访问。 + + +关于 K8s 和 CNCF 商标与图片的详细信息,可参阅 [CNCF Artwork](https://github.com/cncf/artwork)。 + + +使用外部工具的好处有: + +* 贡献者对外部工具更为熟悉 +* 图表可能需要 Mermaid 所无法提供的细节 + +不要忘记使用[本地](/zh/docs/contribute/new-content/open-a-pr/#preview-locally) +和 Netlify 预览来检查你的图表可以正常渲染。 + + +## 示例 + +本节给出 Mermaid 的若干样例。 + +{{< note >}} + +代码块示例中忽略了 Hugo `{{}}`、`{{}}` 短代码标记。 +这样,你就可以将这些代码段复制到在线编辑器中自行实验。 +注意,在线编辑器无法识别 Hugo 短代码。 +{{< /note >}} + + +### 示例 1 - Pod 拓扑分布约束 + +图 6 展示的是 [Pod 拓扑分布约束](/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints/#node-labels) +页面所出现的图表。 + +{{< mermaid >}} + graph TB + subgraph "zoneB" + n3(Node3) + n4(Node4) + end + subgraph "zoneA" + n1(Node1) + n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4 k8s; + class zoneA,zoneB cluster; + +click n3 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click n4 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click n1 
"https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +click n2 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggVEJcbiAgICBzdWJncmFwaCBcInpvbmVCXCJcbiAgICAgICAgbjMoTm9kZTMpXG4gICAgICAgIG40KE5vZGU0KVxuICAgIGVuZFxuICAgIHN1YmdyYXBoIFwiem9uZUFcIlxuICAgICAgICBuMShOb2RlMSlcbiAgICAgICAgbjIoTm9kZTIpXG4gICAgZW5kXG5cbiAgICBjbGFzc0RlZiBwbGFpbiBmaWxsOiNkZGQsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojMDAwO1xuICAgIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICAgIGNsYXNzRGVmIGNsdXN0ZXIgZmlsbDojZmZmLHN0cm9rZTojYmJiLHN0cm9rZS13aWR0aDoycHgsY29sb3I6IzMyNmNlNTtcbiAgICBjbGFzcyBuMSxuMixuMyxuNCBrOHM7XG4gICAgY2xhc3Mgem9uZUEsem9uZUIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0" _blank + +{{< /mermaid >}} + + +图 6. Pod 拓扑分布约束 + +代码块: + +``` +graph TB + subgraph "zoneB" + n3(Node3) + n4(Node4) + end + subgraph "zoneA" + n1(Node1) + n2(Node2) + end + + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class n1,n2,n3,n4 k8s; + class zoneA,zoneB cluster; +``` + + +### 示例 2 - Ingress + +图 7 显示的是 [Ingress 是什么](/zh/docs/concepts/services-networking/ingress/#what-is-ingress) +页面所出现的图表。 + +{{< mermaid >}} +graph LR; +client([客户端])-. Ingress 所管理的
负载均衡器 .->ingress[Ingress]; +ingress-->|路由规则|service[服务]; +subgraph cluster +ingress; +service-->pod1[Pod]; +service-->pod2[Pod]; +end +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class ingress,service,pod1,pod2 k8s; +class client plain; +class cluster cluster; + +click client "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank + +click ingress "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank + +click service "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank + +click pod1 
"https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank + +click pod2 "https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6ZmFsc2V9" _blank + + +{{< /mermaid >}} + + +图 7. Ingress + +代码块: + +```mermaid +graph LR; + client([客户端])-. Ingress 所管理的
负载均衡器 .->ingress[Ingress]; + ingress-->|路由规则|service[服务]; + subgraph cluster + ingress; + service-->pod1[Pod]; + service-->pod2[Pod]; + end + classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; + classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; + classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; + class ingress,service,pod1,pod2 k8s; + class client plain; + class cluster cluster; +``` + + +### 示例 3 - K8s 系统流程 + +图 8 给出的是一个 Mermaid 时序图,展示启动容器时 K8s 组件间的控制流。 + +{{< figure src="/docs/images/diagram-guide-example-3.svg" alt="K8s system flow diagram" class="diagram-large" caption="Figure 8. K8s system flow diagram" link="https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiJSV7aW5pdDp7XCJ0aGVtZVwiOlwibmV1dHJhbFwifX0lJVxuc2VxdWVuY2VEaWFncmFtXG4gICAgYWN0b3IgbWVcbiAgICBwYXJ0aWNpcGFudCBhcGlTcnYgYXMgY29udHJvbCBwbGFuZTxicj48YnI-YXBpLXNlcnZlclxuICAgIHBhcnRpY2lwYW50IGV0Y2QgYXMgY29udHJvbCBwbGFuZTxicj48YnI-ZXRjZCBkYXRhc3RvcmVcbiAgICBwYXJ0aWNpcGFudCBjbnRybE1nciBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5jb250cm9sbGVyPGJyPm1hbmFnZXJcbiAgICBwYXJ0aWNpcGFudCBzY2hlZCBhcyBjb250cm9sIHBsYW5lPGJyPjxicj5zY2hlZHVsZXJcbiAgICBwYXJ0aWNpcGFudCBrdWJlbGV0IGFzIG5vZGU8YnI-PGJyPmt1YmVsZXRcbiAgICBwYXJ0aWNpcGFudCBjb250YWluZXIgYXMgbm9kZTxicj48YnI-Y29udGFpbmVyPGJyPnJ1bnRpbWVcbiAgICBtZS0-PmFwaVNydjogMS4ga3ViZWN0bCBjcmVhdGUgLWYgcG9kLnlhbWxcbiAgICBhcGlTcnYtLT4-ZXRjZDogMi4gc2F2ZSBuZXcgc3RhdGVcbiAgICBjbnRybE1nci0-PmFwaVNydjogMy4gY2hlY2sgZm9yIGNoYW5nZXNcbiAgICBzY2hlZC0-PmFwaVNydjogNC4gd2F0Y2ggZm9yIHVuYXNzaWduZWQgcG9kcyhzKVxuICAgIGFwaVNydi0-PnNjaGVkOiA1LiBub3RpZnkgYWJvdXQgcG9kIHcgbm9kZW5hbWU9XCIgXCJcbiAgICBzY2hlZC0-PmFwaVNydjogNi4gYXNzaWduIHBvZCB0byBub2RlXG4gICAgYXBpU3J2LS0-PmV0Y2Q6IDcuIHNhdmUgbmV3IHN0YXRlXG4gICAga3ViZWxldC0-PmFwaVNydjogOC4gbG9vayBmb3IgbmV3bHkgYXNzaWduZWQgcG9kKHMpXG4gICAgYXBpU3J2LT4-a3ViZWxldDogOS4gYmluZCBwb2QgdG8gbm9kZVxuICAgIGt1YmVsZXQtPj5jb250YWluZXI6IDEwLiBzdGFydCBjb250YWluZXJcbiAgICBrdWJlbGV0LT4-YXBpU3J2OiAxMS4gdXBkYXRlIHBvZCBzdGF0dXNcbiAgICBhcGlTcnYtLT4-ZXRjZDogMTIuIHNhdmUgbmV3IHN0YXRlIiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjp0cnVlfQ" >}} + + + +代码段: + +``` +%%{init:{"theme":"neutral"}}%% +sequenceDiagram + actor me + participant apiSrv as 控制面

api-server + participant etcd as 控制面

etcd 数据存储 + participant cntrlMgr as 控制面

控制器管理器 + participant sched as 控制面

调度器 + participant kubelet as 节点

kubelet + participant container as 节点

容器运行时 + me->>apiSrv: 1. kubectl create -f pod.yaml + apiSrv-->>etcd: 2. 保存新状态 + cntrlMgr->>apiSrv: 3. 检查变更 + sched->>apiSrv: 4. 监视未分派的 Pod(s) + apiSrv->>sched: 5. 通知 nodename=" " 的 Pod + sched->>apiSrv: 6. 指派 Pod 到节点 + apiSrv-->>etcd: 7. 保存新状态 + kubelet->>apiSrv: 8. 查询新指派的 Pod(s) + apiSrv->>kubelet: 9. 将 Pod 绑定到节点 + kubelet->>container: 10. 启动容器 + kubelet->>apiSrv: 11. 更新 Pod 状态 + apiSrv-->>etcd: 12. 保存新状态 +``` + + +## 如何设置图表样式 + + +你可以使用大家都熟悉的 CSS 术语来为一个或多个图表元素设置渲染样式。 +你可以在 Mermaid 代码中使用两种类型的语句来完成这一工作: + +* `classDef` 定义一类样式属性; +* `class` 指定 class 所适用的一种或多种元素。 + + +在[图 7](https://mermaid-js.github.io/mermaid-live-editor/edit/#eyJjb2RlIjoiZ3JhcGggIExSXG4gIGNsaWVudChbY2xpZW50XSktLiBJbmdyZXNzLW1hbmFnZWQgPGJyPiBsb2FkIGJhbGFuY2VyIC4tPmluZ3Jlc3NbSW5ncmVzc107XG4gIGluZ3Jlc3MtLT58cm91dGluZyBydWxlfHNlcnZpY2VbU2VydmljZV07XG4gIHN1YmdyYXBoIGNsdXN0ZXJcbiAgaW5ncmVzcztcbiAgc2VydmljZS0tPnBvZDFbUG9kXTtcbiAgc2VydmljZS0tPnBvZDJbUG9kXTtcbiAgZW5kXG4gIGNsYXNzRGVmIHBsYWluIGZpbGw6I2RkZCxzdHJva2U6I2ZmZixzdHJva2Utd2lkdGg6NHB4LGNvbG9yOiMwMDA7XG4gIGNsYXNzRGVmIGs4cyBmaWxsOiMzMjZjZTUsc3Ryb2tlOiNmZmYsc3Ryb2tlLXdpZHRoOjRweCxjb2xvcjojZmZmO1xuICBjbGFzc0RlZiBjbHVzdGVyIGZpbGw6I2ZmZixzdHJva2U6I2JiYixzdHJva2Utd2lkdGg6MnB4LGNvbG9yOiMzMjZjZTU7XG4gIGNsYXNzIGluZ3Jlc3Msc2VydmljZSxwb2QxLHBvZDIgazhzO1xuICBjbGFzcyBjbGllbnQgcGxhaW47XG4gIGNsYXNzIGNsdXN0ZXIgY2x1c3RlcjtcbiIsIm1lcm1haWQiOiJ7XG4gIFwidGhlbWVcIjogXCJkZWZhdWx0XCJcbn0iLCJ1cGRhdGVFZGl0b3IiOmZhbHNlLCJhdXRvU3luYyI6dHJ1ZSwidXBkYXRlRGlhZ3JhbSI6dHJ1ZX0) +中,你可以看到这两种示例。 + + +``` +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; // 定义 k8s 类的样式 +class ingress,service,pod1,pod2 k8s; // k8s 类会应用到 ingress, service, pod1 and pod2 这些元素之上 +``` + + +你可以在你的图表中包含一个或多个 `classDef` 和 `class` 语句。 +你也可以在你的图表中为 k8s 组件使用官方的 K8s `#326ce5` 十六进制颜色代码。 + +关于样式设置和类的更多信息,可参阅 +[Mermaid Styling and classes docs](https://mermaid-js.github.io/mermaid/#/flowchart?id=styling-and-classes)。 + + +## 如何使用标题 {#how-to-use-captions} + + +标题用来为图表提供简要的描述。标题或短描述都可以作为图表标题。 +标题不是用来替换你在文档中要提供的解释性文字。 +相反,它们是用来在文字与图表之间建立“语境连接”的。 + + +将一些文字和带标题的图表组合到一起,可以为你所想要向用户传递的信息提供一种更为精确的表达。 + +没有标题的话,用户就必须在图表前后的文字中来回阅读,从而了解其含义。 +这会让用户感觉到很沮丧。 + + +图 9 给出合适的标题所需要具备的三要素:图表、图表标题和图表引用。 + +{{< mermaid >}} +flowchart +A[图表本身

内嵌 Mermaid 或
SVG 图片文件] +B[图表标题

添加图表编号和
标题文字] +C[图表引用

在文字中用图表
编号引用图表]

    classDef box fill:#fff,stroke:#000,stroke-width:1px,color:#000;
    class A,B,C box

click A "https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZmxvd2NoYXJ0XG4gICAgQVtEaWFncmFtPGJyPjxicj5JbmxpbmUgTWVybWFpZCBvcjxicj5TVkcgaW1hZ2UgZmlsZXNdXG4gICAgQltEaWFncmFtIENhcHRpb248YnI-PGJyPkFkZCBGaWd1cmUgTnVtYmVyLiBhbmQ8YnI-Q2FwdGlvbiBUZXh0XVxuICAgIENbRGlhZ3JhbSBSZWZlcnJhbDxicj48YnI-UmVmZXJlbmVuY2UgRmlndXJlIE51bWJlcjxicj5pbiB0ZXh0XVxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMgYm94IiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0" _blank

click B "https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZmxvd2NoYXJ0XG4gICAgQVtEaWFncmFtPGJyPjxicj5JbmxpbmUgTWVybWFpZCBvcjxicj5TVkcgaW1hZ2UgZmlsZXNdXG4gICAgQltEaWFncmFtIENhcHRpb248YnI-PGJyPkFkZCBGaWd1cmUgTnVtYmVyLiBhbmQ8YnI-Q2FwdGlvbiBUZXh0XVxuICAgIENbRGlhZ3JhbSBSZWZlcnJhbDxicj48YnI-UmVmZXJlbmVuY2UgRmlndXJlIE51bWJlcjxicj5pbiB0ZXh0XVxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMgYm94IiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0" _blank

click C "https://mermaid-js.github.io/mermaid-live-editor/edit#eyJjb2RlIjoiZmxvd2NoYXJ0XG4gICAgQVtEaWFncmFtPGJyPjxicj5JbmxpbmUgTWVybWFpZCBvcjxicj5TVkcgaW1hZ2UgZmlsZXNdXG4gICAgQltEaWFncmFtIENhcHRpb248YnI-PGJyPkFkZCBGaWd1cmUgTnVtYmVyLiBhbmQ8YnI-Q2FwdGlvbiBUZXh0XVxuICAgIENbRGlhZ3JhbSBSZWZlcnJhbDxicj48YnI-UmVmZXJlbmVuY2UgRmlndXJlIE51bWJlcjxicj5pbiB0ZXh0XVxuXG4gICAgY2xhc3NEZWYgYm94IGZpbGw6I2ZmZixzdHJva2U6IzAwMCxzdHJva2Utd2lkdGg6MXB4LGNvbG9yOiMwMDA7XG4gICAgY2xhc3MgQSxCLEMgYm94IiwibWVybWFpZCI6IntcbiAgXCJ0aGVtZVwiOiBcImRlZmF1bHRcIlxufSIsInVwZGF0ZUVkaXRvciI6ZmFsc2UsImF1dG9TeW5jIjp0cnVlLCJ1cGRhdGVEaWFncmFtIjpmYWxzZX0" _blank

{{< /mermaid >}}


图 9. 标题要素

{{< note >}}

你应该总是为文档中的每个图表添加标题。
{{< /note >}}


**图表本身**

Mermaid+SVG 和外部工具方法都会生成 `.svg` 图片文件。

下面的 `{{< figure >}}` 短代码是针对定义在保存于
`/images/docs/components-of-kubernetes.svg` 中的 `.svg` 图片文件的。

```none
{{< figure src="/images/docs/components-of-kubernetes.svg" alt="..." class="diagram-large" >}}
```


你需要将 `src`、`alt`、`class` 和 `caption` 取值传递给 `{{< figure >}}` 短代码。
你可以使用 `diagram-large`、`diagram-medium` 和 `diagram-small` 类来调整图表的尺寸。

{{< note >}}

使用内嵌方法来创建的图表不使用 `{{< figure >}}` 短代码。
Mermaid 代码定义该图表将如何在页面上渲染。
{{< /note >}}


关于创建图表的不同方法，可参阅[创建图表的方法](#methods-for-creating-diagrams)。


**图表标题**

接下来，添加图表标题。

如果你使用 `.svg` 图片文件来定义你的图表，你就需要使用 `{{< figure >}}`
短代码的 `caption` 参数。


```none
{{< figure src="/images/docs/components-of-kubernetes.svg" alt="..." class="diagram-large" caption="图 4. Kubernetes 架构组件" >}}
```


如果你使用内嵌的 Mermaid 代码来定义图表，则你需要使用 Markdown 文本来添加标题。

```none
图 4. Kubernetes 架构组件
```


添加图表标题时需要考虑的问题：

* 使用 `{{< figure >}}` 短代码来为 Mermaid+SVG 和外部工具方法制作的图表添加标题。
* 对于内嵌方法制作的图表，使用简单的 Markdown 文本来为其添加标题。
* 在你的图表标题前面添加 `图 <编号>.`. 你必须使用 `图` 字样，
  并且编号必须对于文档页面中所有图表而言唯一。
  在编号之后添加一个英文句号。
* 将图表标题添加到 `图 <编号>.` 之后，并且在同一行。
  你必须为图表标题添加英文句点作为其结束标志。尽量保持标题文字简短。
* 图表标题要放在图表 __之后__。


**图表引用**

最后，你可以添加图表引用。图表引用位于你的文档正文中，并且应该出现在图表之前。
这样，用户可以将你的文字与对应的图表关联起来。引用时所给的 `图 <编号>`
部分要与图表标题中对应部分一致。


你要避免使用空间上的相对引用，例如 `..下面的图片..` 或者 `..下面的图形..`。


```text
图 10 展示的是 Kubernetes 体系结构。其控制面 ... 
+``` + + +图表引用是可选的,在有些场合中,添加这类引用是不合适的。 +如果你不是很确定,可以在文字中添加图表引用,以判断是否看起来、读起来比较通顺。 +如果仍然不确定,可以使用图表引用。 + + +**完整全图** + +图 10 展示的是一个 Kubernetes 架构图表,其中包含了图表本身、图表标题和图表引用。 + +这里的 `{{}}` 短代码负责渲染图表,添加标题并包含可选的 `link` +参数,以便于你为图表提供超级链接。图表引用也被包含在段落中。 + +下面是针对此图表的 `{{}}` 短代码。 + + +``` +{{}} +``` + + +{{< figure src="/images/docs/components-of-kubernetes.svg" alt="运行在集群中的 Kubernetes Pod" class="diagram-large" caption="图 10. Kubernetes 架构." link="https://kubernetes.io/docs/concepts/overview/components/" >}} + + +## 提示 {#tips} + +* 总是使用在线编辑器来创建和编辑你的图表。 +* 总是使用 Hugo 本地和 Netlify 预览来检查图表在文档中的渲染效果。 +* 提供图表源代码指针,例如 URL、源代码位置或者标明代码时是说明的。 + +* 总是提供图表标题。 +* 在问题报告或 PR 中包含 `.svg` 或 `.png` 图片与/或 Mermaid 代码会很有帮助。 +* 对于 Mermaid+SVG 方法和外部工具方法而言,尽量使用 `.svg` 图片文件, + 因为这类文件在被放大之后仍能清晰地显示。 +* 对于 `.svg` 文件的最佳实践是将其加载到一个 SVG 编辑工具中,并使用 + “将文字转换为路径”功能完成转换。 + +* Mermaid 不支持额外的图表或艺术形式。 +* Hugo Mermaid 短代码在在线编辑器中无法显示。 +* 如果你想要在在线编辑器中更改图表,你 __必须__ 保存它以便为图表生成新的 URL。 +* 点击本节中的图表,你可以查看其源代码及其在在线编辑器中的渲染效果。 + +* 查看本页的源代码,`diagram-guide.md` 文件,可以将其作为示例。 +* 查阅 [Mermaid docs](https://mermaid-js.github.io/mermaid/#/) 以获得更多的解释和示例。 + + +最重要的一点,__保持图表简单__。 +这样做会节省你和其他贡献者的时间,同时也会方便新的以及有经验的用户阅读。 + From 26e080cc1fefc4bf7b64ff7067904ee8f0c8a0bd Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Sat, 26 Mar 2022 22:23:09 +0800 Subject: [PATCH 108/138] [zh] update parallel-processing-expansion.md Signed-off-by: xin.li --- content/zh/docs/tasks/job/parallel-processing-expansion.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/zh/docs/tasks/job/parallel-processing-expansion.md b/content/zh/docs/tasks/job/parallel-processing-expansion.md index 2f5a8e13c3b13..88240fc8a5c07 100644 --- a/content/zh/docs/tasks/job/parallel-processing-expansion.md +++ b/content/zh/docs/tasks/job/parallel-processing-expansion.md @@ -302,7 +302,7 @@ spec: spec: containers: - name: c - image: busybox + image: busybox:1.28 command: ["sh", "-c", "echo Processing URL {{ url }} && sleep 5"] restartPolicy: Never {% endfor %} From c6c9ab547ac31bd6da6a5fa68e329c5351393355 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Sat, 26 Mar 2022 22:40:05 +0800 Subject: [PATCH 109/138] [zh] Update install-kubectl-linux.md Signed-off-by: xin.li --- .../zh/docs/tasks/tools/install-kubectl-linux.md | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/content/zh/docs/tasks/tools/install-kubectl-linux.md b/content/zh/docs/tasks/tools/install-kubectl-linux.md index e8750446b53c9..b64e4df578789 100644 --- a/content/zh/docs/tasks/tools/install-kubectl-linux.md +++ b/content/zh/docs/tasks/tools/install-kubectl-linux.md @@ -98,7 +98,7 @@ The following methods exist for installing kubectl on Linux: 基于校验和文件,验证 kubectl 的可执行文件: ```bash - echo "$(}} 4. 执行测试,以保障你安装的版本是最新的: ```bash kubectl version --client ``` + + 或者使用如下命令来查看版本的详细信息: + ```cmd + kubectl version --client --output=yaml + ``` -Этот документ каталог связь между плоскостью управления (apiserver) и кластером Kubernetes. Цель состоит в том, чтобы позволить пользователям настраивать свою установку для усиления сетевой конфигурации, чтобы кластер мог работать в ненадежной сети (или на полностью общедоступных IP-адресах облачного провайдера). +Этот документ описывает связь между плоскостью управления (apiserver) и кластером Kubernetes. Цель состоит в том, чтобы позволить пользователям настраивать свою установку для усиления сетевой конфигурации, чтобы кластер мог работать в ненадежной сети (или на полностью общедоступных IP-адресах облачного провайдера). 
## Связь между плоскостью управления и узлом -В Kubernetes имеется API шаблон "hub-and-spoke". Все используемые API из узлов (или которые запускают pod-ы) завершает apiserver. Ни один из других компонентов плоскости управления не предназначен для предоставления удаленных сервисов. Apiserver настроен на прослушивание удаленных подключений через безопасный порт HTTPS. (обычно 443) с одной или несколькими включенными формами [идентификации](/docs/reference/access-authn-authz/authentication/) клиена. +В Kubernetes имеется API шаблон «ступица и спица» (hub-and-spoke). Все используемые API из узлов (или которые запускают pod-ы) завершает apiserver. Ни один из других компонентов плоскости управления не предназначен для предоставления удаленных сервисов. Apiserver настроен на прослушивание удаленных подключений через безопасный порт HTTPS (обычно 443) с одной или несколькими включенными формами [аутентификации](/docs/reference/access-authn-authz/authentication/) клиента. -Должна быть включена одна или несколько форм [идентификации](/docs/reference/access-authn-authz/authorization/), особенно если разрешены [анонимные запросы](/docs/reference/access-authn-authz/authentication/#anonymous-requests) или [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens). +Должна быть включена одна или несколько форм [авторизации](/docs/reference/access-authn-authz/authorization/), особенно, если разрешены [анонимные запросы](/docs/reference/access-authn-authz/authentication/#anonymous-requests) или [ServiceAccount токены](/docs/reference/access-authn-authz/authentication/#service-account-tokens). -Узлы должны быть снабжены общедоступным корневым сертификатом для кластера, чтобы они могли безопасно подключаться к apiserver-у вместе с действительными учетными данными клиента. Хороший подход заключается в том, что учетные данные клиента, предоставляемые kubelet, имеют форму клиентского сертификата. См. Информацию о загрузке Kubelet TLS [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) для автоматической подготовки клиентских сертификатов kubelet. +Узлы должны быть снабжены публичным корневым сертификатом для кластера, чтобы они могли безопасно подключаться к apiserver-у вместе с действительными учетными данными клиента. Хороший подход заключается в том, чтобы учетные данные клиента, предоставляемые kubelet-у, имели форму клиентского сертификата. См. Информацию о загрузке kubelet TLS [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) для автоматической подготовки клиентских сертификатов kubelet. -pod-ы, которые хотят подключиться к apiserver, могут сделать это безопасно, используя учетную запись службы, чтобы Kubernetes автоматически вводил общедоступный корневой сертификат и действительный токен-носитель в pod при его создании. -Служба `kubernetes` (в пространстве имен `default`) is настроен с виртуальным IP-адресом, который перенаправляет (через kube-proxy) к endpoint HTTPS apiserver-а. +Pod-ы, которые хотят подключиться к apiserver, могут сделать это безопасно, используя ServiceAccount, чтобы Kubernetes автоматически вводил общедоступный корневой сертификат и действительный токен-носитель в pod при его создании. +Служба `kubernetes` (в пространстве имен `default`) настроена с виртуальным IP-адресом, который перенаправляет (через kube-proxy) на HTTPS эндпоинт apiserver-а. Компоненты уровня управления также взаимодействуют с кластером apiserver-а через защищенный порт. 
@@ -33,7 +33,7 @@ pod-ы, которые хотят подключиться к apiserver, мог ## Узел к плоскости управления -Существуют две пути взаимодействия от плоскости управления (apiserver) к узлам. Первый - от apiserver-а до kubelet процесса, который выполняется на каждом узле кластера. Второй - от apiserver к любому узлу, pod-у или службе через промежуточную функциональность apiserver-а. +Существуют два пути связи плоскости управления (apiserver) с узлами. Первый - от apiserver-а до kubelet процесса, который выполняется на каждом узле кластера. Второй - от apiserver к любому узлу, pod-у или службе через промежуточную функциональность apiserver-а. ### apiserver в kubelet @@ -43,21 +43,21 @@ pod-ы, которые хотят подключиться к apiserver, мог * Прикрепление (через kubectl) к запущенным pod-ам. * Обеспечение функциональности переадресации портов kubelet. -Эти соединения заверщаются в kubelet в endpoint HTTPS. По умолчанию apiserver не проверяет сертификат обслуживания kubelet-ов, что делает соединение подверженным к атаке человек по середине (man-in-the-middle) и **unsafe** запущенных в ненадежных или общедоступных сетях. +Эти соединения завершаются на HTTPS эндпоинте kubelet-a. По умолчанию apiserver не проверяет сертификат обслуживания kubelet-ов, что делает соединение подверженным к атаке «человек посередине» (man-in-the-middle) и **небезопасным** к запуску в ненадежных и/или общедоступных сетях. -Для проверки этого соединения, используется флаг `--kubelet-certificate-authority` чтобы предоставить apiserver-у набор корневых (root) сертификатов для проверки сертификата обслуживания kubelet-ов. +Для проверки этого соединения используется флаг `--kubelet-certificate-authority` чтобы предоставить apiserver-у набор корневых (root) сертификатов для проверки сертификата обслуживания kubelet-ов. -Если это не возможно, используйте [SSH-тунелирование](#ssh-tunnels) между apiserver-ом и kubelet, если это необходимо во избежании подключения по ненадежной или общедоступной сети. +Если это не возможно, используйте [SSH-тунелирование](#ssh-tunnels) между apiserver-ом и kubelet, если это необходимо, чтобы избежать подключения по ненадежной или общедоступной сети. -Наконец, Должны быть включены [пудентификация или авторизация Kubelet](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) для защиты kubelet API. +Наконец, должны быть включены [аутентификация или авторизация kubelet](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) для защиты kubelet API. ### apiserver для узлов, pod-ов, и служб -Соединение с apiserver-ом к узлу, pod-у или службе по умолчанию осушествяляется по обычному HTTP-соединению и поэтому не проходят проверку подлиности и не шифрование. Они могут быть запущены по защищенному HTTPS-соединению, добавив префикс `https:` к имени узла, pod-а или службы в URL-адресе API, но они не будут проверять сертификат предоставленный HTTPS endpoint, также не будут предоставлять учетные данные клиента. Таким образом, хотя соединение будет зашифровано, оно не обеспечит никаких гарантий целостности. Эти соединения **are not currently safe** запущенных в ненадежных или общедоступных сетях. +Соединения с apiserver к узлу, поду или сервису по умолчанию осуществляются по-обычному HTTP-соединению и поэтому не аутентифицируются, и не шифруются. 
Они могут быть запущены по защищенному HTTPS-соединению, после добавления префикса `https:` к имени узла, пода или сервиса в URL-адресе API, но они не будут проверять сертификат предоставленный HTTPS эндпоинтом, как и не будут предоставлять учетные данные клиента. Таким образом, хотя соединение будет зашифровано, оно не обеспечит никаких гарантий целостности. Эти соединения **в настоящее время небезопасны** для запуска в ненадежных или общедоступных сетях. -### SSH-тунели +### SSH-туннели -Kubernetes поддерживает SSH-туннели для защиты плоскости управления узлов от путей связи. В этой конфигурации apiserver инициирует SSH-туннель для каждого узла в кластере (подключается к ssh-серверу, прослушивая порт 22) и передает весь трафикпредназначенный для kubelet, узлу, pod-у или службе через тунель. Этот тунель гарантирует, что трафик не выводиться за пределы сети, в которой работает узел. +Kubernetes поддерживает SSH-туннели для защиты плоскости управления узлов от путей связи. В этой конфигурации apiserver инициирует SSH-туннель для каждого узла в кластере (подключается к ssh-серверу, прослушивая порт 22) и передает весь трафик предназначенный для kubelet, узлу, pod-у или службе через туннель. Этот туннель гарантирует, что трафик не выводится за пределы сети, в которой работает узел. SSH-туннели в настоящее время устарели, поэтому вы не должны использовать их, если не знаете, что делаете. Служба подключения является заменой этого канала связи. @@ -65,6 +65,6 @@ SSH-туннели в настоящее время устарели, поэто {{< feature-state for_k8s_version="v1.18" state="beta" >}} -В качестве замены SSH-туннелям, служба подключения обеспечивает уровень полномочие TCP для плоскости управления кластерной связи. Служба подключения состоит из двух частей: сервер подключения в сети плоскости управления и агентов подключения в сети узлов. Агенты службы подключения инициируют подключения к серверу подключения и поддерживают сетевое подключение. После включения службы подключения, весь трафик с плоскости управления на узлы проходит через эти соединения. +В качестве замены SSH-туннелям, служба подключения обеспечивает уровень полномочия TCP для плоскости управления кластерной связи. Служба подключения состоит из двух частей: сервер подключения к сети плоскости управления и агентов подключения в сети узлов. Агенты службы подключения инициируют подключения к серверу подключения и поддерживают сетевое подключение. После включения службы подключения, весь трафик с плоскости управления на узлы проходит через эти соединения. -Следуйте инструкциям [Задача службы подключения](/docs/tasks/extend-kubernetes/setup-konnectivity/) чтобы настроить службу подключения в кластере. +Следуйте инструкциям [Задача службы подключения,](/docs/tasks/extend-kubernetes/setup-konnectivity/) чтобы настроить службу подключения в кластере. From 48dc63e27f5ef82b4f118f2bb55cc75fda1b787a Mon Sep 17 00:00:00 2001 From: Ilya Z Date: Sat, 26 Mar 2022 22:37:48 +0400 Subject: [PATCH 118/138] fix_some_typos --- .../docs/concepts/architecture/controller.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/content/ru/docs/concepts/architecture/controller.md b/content/ru/docs/concepts/architecture/controller.md index 3df6517fa48c2..e46fbb022b25a 100644 --- a/content/ru/docs/concepts/architecture/controller.md +++ b/content/ru/docs/concepts/architecture/controller.md @@ -10,8 +10,8 @@ weight: 30 Вот один из примеров контура управления: термостат в помещении. 
-Когда вы устанавливаете температуру, это говорит термостату о вашем *желаемом состоянии*. Фактическая температура в помещении - это -*текущее состояние*. Термостат действует так, чтобы приблизить текущее состояние к желаемому состоянию, путем включения или выключения оборудования. +Когда вы устанавливаете температуру, это говорит термостату о вашем *желаемом состоянии*. Фактическая температура в помещении - это +*текущее состояние*. Термостат действует так, чтобы приблизить текущее состояние к желаемому состоянию, путем включения или выключения оборудования. {{< glossary_definition term_id="controller" length="short">}} @@ -27,7 +27,7 @@ weight: 30 имеют поле спецификации, которое представляет желаемое состояние. Контроллер (ы) для этого ресурса несут ответственность за приближение текущего состояния к желаемому состоянию Контроллер может выполнить это действие сам; чаще всего в Kubernetes, -контроллер будет отправляет сообщения на +контроллер отправляет сообщения на {{< glossary_tooltip text="сервер API" term_id="kube-apiserver" >}} которые имеют полезные побочные эффекты. Пример этого вы можете увидеть ниже. @@ -40,7 +40,7 @@ weight: 30 Контроллер {{< glossary_tooltip term_id="job" >}} является примером встроенного контроллера Kubernetes. Встроенные контроллеры управляют состоянием, взаимодействуя с кластером сервера API. Задание - это ресурс Kubernetes, который запускает -{{< glossary_tooltip term_id="pod" >}}, или возможно несколько Pod-ов, которые выполняют задание и затем останавливаются. +{{< glossary_tooltip term_id="pod" >}}, или возможно несколько Pod-ов, выполняющих задачу и затем останавливающихся. (После [планирования](/docs/concepts/scheduling-eviction/), Pod объекты становятся частью желаемого состояния для kubelet). @@ -50,9 +50,9 @@ weight: 30 {{< glossary_tooltip text="плоскости управления" term_id="control-plane" >}} действуют на основе информации (имеются ли новые запланированные Pod-ы для запуска), и в итоге работа завершается. -После того, как вы создадите новое задание, желаемое состояние для этого задания будет завершено. Контроллер задания приближает текущее состояние этого задания к желаемому состоянию: создает Pod-ы, которые выполняют работу, которую вы хотели для этого задания, чтобы задание было ближе к завершению. +После того как вы создадите новое задание, желаемое состояние для этого задания будет завершено. Контроллер задания приближает текущее состояние этой задачи к желаемому состоянию: создает Pod-ы, выполняющие работу, которую вы хотели для этой задачи, чтобы задание было ближе к завершению. -Контроллеры также обровляют объекты которые их настраивают. +Контроллеры также обновляют объекты которые их настраивают. Например: как только работа выполнена для задания, контроллер задания обновляет этот объект задание, чтобы пометить его как `Завершенный`. (Это немного похоже на то, как некоторые термостаты выключают свет, чтобы указать, что теперь ваша комната имеет установленную вами температуру). @@ -66,20 +66,20 @@ weight: 30 Контроллеры, которые взаимодействуют с внешним состоянием, находят свое желаемое состояние с сервера API, а затем напрямую взаимодействуют с внешней системой, чтобы приблизить текущее состояние. -(На самом деле существует [контроллер](https://github.com/kubernetes/autoscaler/), который горизонтально маштабирует узлы в вашем кластере.) +(На самом деле существует [контроллер](https://github.com/kubernetes/autoscaler/), который горизонтально масштабирует узлы в вашем кластере.) 
Важным моментом здесь является то, что контроллер вносит некоторые изменения, чтобы вызвать желаемое состояние, а затем сообщает текущее состояние обратно на сервер API вашего кластера. Другие контуры управления могут наблюдать за этими отчетными данными и предпринимать собственные действия. -В примере с термостатом, если в помещении очень холодно, тогда другой контроллер может также включить обогреватель для защиты от замерзания. В кластерах Kubernetes, плоскость управления косвенно работает с инструментами управления IP-адресами,службами хранения данных, API облочных провайдеров и другими службами для релизации +В примере с термостатом, если в помещении очень холодно, тогда другой контроллер может также включить обогреватель для защиты от замерзания. В кластерах Kubernetes, плоскость управления косвенно работает с инструментами управления IP-адресами, службами хранения данных, API облачных провайдеров и другими службами для реализации [расширения Kubernetes](/docs/concepts/extend-kubernetes/). ## Желаемое против текущего состояния {#desired-vs-current} -Kubernetes использует систему вида cloud-native и способен справлятся с постоянными изменениями. +Kubernetes использует систему вида cloud-native и способен справляться с постоянными изменениями. -Ваш кластер может изменяться в любой по мере выполнения работы и контуры управления автоматически устранают сбой. Это означает, что потенциально Ваш кластер никогда не достигнет стабильного состояния. +Ваш кластер может изменяться в любой по мере выполнения работы и контуры управления автоматически устраняют сбой. Это означает, что потенциально Ваш кластер никогда не достигнет стабильного состояния. -Пока контроллеры вашего кластера работают и могут вносить полезные изменения, не имеет значения, является ли общее состояние стабильным или нет. +Пока контроллеры вашего кластера работают и могут вносить полезные изменения, не имеет значения, является ли общее состояние стабильным или нет. ## Дизайн @@ -90,7 +90,7 @@ Kubernetes использует систему вида cloud-native и спос {{< note >}} Существует несколько контроллеров, которые создают или обновляют один и тот же тип объекта. За кулисами контроллеры Kubernetes следят за тем, чтобы обращать внимание только на ресурсы, связанные с их контролирующим ресурсом. -Например, у вас могут быть развертывания и задания; они оба создают Pod-ы. Контроллер заданий не удаляет Pod-ы созданные вашим развертиыванием, потому что имеется информационные ({{< glossary_tooltip term_id="label" text="метки" >}}) +Например, у вас могут быть развертывания и задания; они оба создают Pod-ы. Контроллер заданий не удаляет Pod-ы созданные вашим развертыванием, потому что имеется информационные ({{< glossary_tooltip term_id="label" text="метки" >}}) которые могут быть использованы контроллерами тем самым показывая отличие Pod-ов. {{< /note >}} @@ -102,7 +102,7 @@ Kubernetes поставляется с набором встроенных ко Kubernetes позволяет вам запускать устойчивую плоскость управления, так что в случае отказа одного из встроенных контроллеров работу берет на себя другая часть плоскости управления. Вы можете найти контроллеры, которые работают вне плоскости управления, чтобы расширить Kubernetes. -Или, если вы хотите, можете написать новый контроллер самостоятельно. Вы можете запустить свой собственный контроллер виде наборов Pod-ов, +Или, если вы хотите, можете написать новый контроллер самостоятельно. Вы можете запустить свой собственный контроллер в виде наборов Pod-ов, или внешнее в Kubernetes. 
 Что подойдет лучше всего, будет зависеть от того, что делает этот конкретный контроллер.

From 510bd76dd7f7f55922aa8ec5b0f36a40ddbac18c Mon Sep 17 00:00:00 2001
From: Ilya Z
Date: Sat, 26 Mar 2022 22:49:23 +0400
Subject: [PATCH 119/138] gc_typos

---
 .../architecture/garbage-collection.md       | 36 +++++++++----------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/content/ru/docs/concepts/architecture/garbage-collection.md b/content/ru/docs/concepts/architecture/garbage-collection.md
index f7ded8b44e181..e94da8fc623cb 100644
--- a/content/ru/docs/concepts/architecture/garbage-collection.md
+++ b/content/ru/docs/concepts/architecture/garbage-collection.md
@@ -23,7 +23,7 @@ weight: 50
 Многие объекты в Kubernetes ссылаются друг на друга через [*ссылки владельцев*](/docs/concepts/overview/working-with-objects/owners-dependents/).
 Ссылки владельцев сообщают плоскости управления какие объекты зависят от других.
 Kubernetes использует ссылки владельцев, чтобы предоставить плоскости управления и другим API
-клиентам, возможность очистить связанные ресурсы передудалением объекта. В большинстве случаев, Kubernetes автоматический управляет ссылками владельцев.
+клиентам возможность очистить связанные ресурсы перед удалением объекта. В большинстве случаев Kubernetes автоматически управляет ссылками владельцев.

 Владелец отличается от [меток и селекторов](/docs/concepts/overview/working-with-objects/labels/)
 которые также используют некоторые ресурсы. Например, рассмотрим
@@ -36,15 +36,15 @@ Kubernetes использует ссылки владельцев, чтобы п
 {{< note >}}
 Ссылки на владельцев перекрестных пространств имен запрещены по дизайну.
 Зависимости пространства имен могут указывать на область действия кластера или владельцев пространства имен.
-Владелец пространства имен **должен** быть в том же пространстве имен что и зависимости.
-Если это не возможно, cсылка владельца считается отсутствующей и зависимый объект подлежит удалению, как только будет проверено отсутствие всех владельцев.
+Владелец пространства имен **должен** быть в том же пространстве имен, что и зависимости.
+Если это невозможно, ссылка владельца считается отсутствующей, и зависимый объект подлежит удалению, как только будет проверено отсутствие всех владельцев.

 Зависимости области действия кластер может указывать только владельцев области действия кластера.
 В версии v1.20+, если зависимость с областью действия кластера указывает на пространство имен как владелец,
 тогда он рассматривается как имеющий неразрешимую ссылку на владельца и не может быть обработан сборщиком мусора.

 В версии v1.20+, если сборщик мусора обнаружит недопустимое перекрестное пространство имен `ownerReference`,
-или зависящие от облости действия кластера `ownerReference` ссылка на тип пространства имен, предупреждающее событие с причиной `OwnerRefInvalidNamespace` и `involvedObject` сообщающеся о не действительной зависимости.
+или ссылку `ownerReference` зависимости с областью действия кластера на тип пространства имен, то создается предупреждающее событие с причиной `OwnerRefInvalidNamespace`, а `involvedObject` указывает на недействительную зависимость.
 Вы можете проверить наличие такого рода событий, выполнив
 `kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
 {{< /note >}}
@@ -52,22 +52,22 @@ Kubernetes использует ссылки владельцев, чтобы п
 Kubernetes проверяет и удаляет объекты, на которые больше нет ссылок владельцев, так же как и pod-ов, оставленных после удаления ReplicaSet.
 Когда Вы удаляете объект, вы можете контролировать автоматический ли Kubernetes удаляет зависимые объекты автоматически в процессе вызова *каскадного удаления*. Существует два типа каскадного удаления, а именно:

- * Каскадное удалени Foreground
+ * Каскадное удаление Foreground
  * Каскадное удаление Background

 Вы так же можете управлять как и когда сборщик мусора удаляет ресурсы, на которые ссылаются владельцы с помощью Kubernetes {{<glossary_tooltip text="финализаторов" term_id="finalizer">}}.

-### Каскадное удалени Foreground {#foreground-deletion}
+### Каскадное удаление Foreground {#foreground-deletion}

-В Каскадном удалени Foreground, объект владельца, который вы удаляете, сначало переходить в состояние *в процессе удаления*. В этом состоянии с объектом-владельцем происходить следующее:
+В Каскадном удалении Foreground, объект владельца, который вы удаляете, сначала переходит в состояние *в процессе удаления*. В этом состоянии с объектом-владельцем происходит следующее:

  * Сервер Kubernetes API устанавливает полю объекта `metadata.deletionTimestamp` время, когда объект был помечен для удаления.
  * Сервер Kubernetes API так же устанавливает метку `metadata.finalizers`для поля `foregroundDeletion`.
- * Объект остается видимым блогодоря Kubernetes API пока процесс удаления не завершиться
+ * Объект остается видимым благодаря Kubernetes API, пока процесс удаления не завершится

-После того, как владелец объекта переходит в состояние прогресса удаления, контроллер удаляет зависимые объекты. После удаления всех зависимых объектов, контроллер удаляет объект владельца. На этом этапе, объект больше не отображается в Kubernetes API.
+После того как владелец объекта переходит в состояние прогресса удаления, контроллер удаляет зависимые объекты. После удаления всех зависимых объектов, контроллер удаляет объект владельца. На этом этапе, объект больше не отображается в Kubernetes API.

 Во время каскадного удаления foreground, единственным зависимым, которые блокируют удаления владельца, являются те, у кого имеется поле `ownerReference.blockOwnerDeletion=true`.
 Чтобы узнать больше. Смотрите [Использование каскадного удаления foreground](/docs/tasks/administer-cluster/use-cascading-deletion/#use-foreground-cascading-deletion).
@@ -80,16 +80,16 @@ Kubernetes проверяет и удаляет объекты, на котор

 ### Осиротевшие зависимости

-Когда Kubernetes удаляет владельца объекта, оставшиеся зависимости называются *осиротевшыми* объектами. По умолчанию, Kubernetes удаляет зависимые объекты. Чтобы узнать, как переопределить это повидение смотрите [Удаление объектов владельца и осиротевших зависимостей](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy).
+Когда Kubernetes удаляет владельца объекта, оставшиеся зависимости называются *осиротевшими* объектами. По умолчанию Kubernetes удаляет зависимые объекты. Чтобы узнать, как переопределить это поведение, смотрите [Удаление объектов владельца и осиротевших зависимостей](/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy).

-## Сбор мусора из неиспользуемых контейнеров и изобробразов {#containers-images}
+## Сбор мусора из неиспользуемых контейнеров и изображений {#containers-images}

 {{<glossary_tooltip text="kubelet" term_id="kubelet">}} выполняет сбор мусора для неиспользуемых образов каждые пять минут и для неиспользуемых контейнеров каждую минуту.
 Вам следует избегать использования внешних инструментов для сборки мусора, так как они могут нарушить поведение kubelet и удалить контейнеры, которые должны существовать.
-Чтобы настроить параметры для сборшика мусора для неиспользуемого контейнера и сборки мусора образа, подстройте
+Чтобы настроить параметры для сборщика мусора для неиспользуемого контейнера и сборки мусора образа, подстройте
 kubelet использую [конфигурационный файл](/docs/tasks/administer-cluster/kubelet-config-file/)
-и измените параметры, связанные со сборшиком мусора используя тип ресурса
+и измените параметры, связанные со сборщиком мусора, используя тип ресурса
 [`KubeletConfiguration`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration).

 ### Жизненный цикл контейнерных образов Container image lifecycle

@@ -99,19 +99,19 @@ Kubernetes управляет жизненным циклом всех обра
 * `HighThresholdPercent`
 * `LowThresholdPercent`

-Использование диска выше настроенного значения `HighThresholdPercent` запускает сборку мусора, которая удаляет образы в порядке основанном на последнем использовании, начиная с самого старого. kubelet удлаяет образы до тех пор, пока использование диска не достигнет значения `LowThresholdPercent`.
+Использование диска выше настроенного значения `HighThresholdPercent` запускает сборку мусора, которая удаляет образы в порядке, основанном на последнем использовании, начиная с самого старого. Kubelet удаляет образы до тех пор, пока использование диска не достигнет значения `LowThresholdPercent`.

 ### Сборщик мусора контейнерных образов {#container-image-garbage-collection}

-kubelet собирает не используемые контейнеры на основе следующих переменных, которые вы можете определить:
+Kubelet собирает неиспользуемые контейнеры на основе следующих переменных, которые вы можете определить:

  * `MinAge`: минимальный возраст, при котором kubelet может начать собирать мусор контейнеров. Отключить, установив значение `0`.
- * `MaxPerPodContainer`: максимальное количество некативныз контейнеров, которое может быть у каджой пары Pod-ов. Отключить, установив значение меньше чем `0`.
+ * `MaxPerPodContainer`: максимальное количество неактивных контейнеров, которое может быть у каждой пары Pod-ов. Отключить, установив значение меньше чем `0`.
  * `MaxContainers`: максимальное количество не используемых контейнеров, которые могут быть в кластере. Отключить, установив значение меньше чем `0`.

 В дополнение к этим переменным, kubelet собирает неопознанные и удаленные контейнеры, обычно начиная с самого старого.

-`MaxPerPodContainer` и `MaxContainer` могут потенциально конфликтовать друг с другом в ситуациях, когда требуется максимальное количество контейнеров в Pod-е (`MaxPerPodContainer`) выйдет за пределы допустимого общего количества глобальных не используемых контейнеров (`MaxContainers`). В этой ситуации kubelet регулирует `MaxPodPerContainer` для устранения конфликта. наихудшим сценарием было бы понизить `MaxPerPodContainer` да `1` и изгнать самые старые контейнеры.
+`MaxPerPodContainer` и `MaxContainers` могут потенциально конфликтовать друг с другом в ситуациях, когда максимальное количество контейнеров в Pod-е (`MaxPerPodContainer`) выходит за пределы допустимого общего количества глобальных неиспользуемых контейнеров (`MaxContainers`). В этой ситуации kubelet регулирует `MaxPerPodContainer` для устранения конфликта. Наихудшим сценарием было бы понизить `MaxPerPodContainer` до `1` и изгнать самые старые контейнеры.
 Кроме того, владельцы контейнеров в pod-е могут быть удалены, как только они становятся старше чем `MinAge`.
{{}}
@@ -120,7 +120,7 @@ Kubelet собирает мусор только у контейнеров, ко

 ## Настройка сборщик мусора {#configuring-gc}

-Вы можете настроить сборку мусора ресурсов, настроив параметры, специфичные для контроллеров, управляющих этими ресурсами. В последующих страницах показанно, как настроить сборку мусора:
+Вы можете настроить сборку мусора ресурсов, настроив параметры, специфичные для контроллеров, управляющих этими ресурсами. На последующих страницах показано, как настроить сборку мусора:

 * [Настройка каскадного удаления объектов Kubernetes](/docs/tasks/administer-cluster/use-cascading-deletion/)
 * [Настройка очистки завершенных заданий](/docs/concepts/workloads/controllers/ttlafterfinished/)

From 232c79f4e4e2646064c1239a18e934b23677eb9f Mon Sep 17 00:00:00 2001
From: Ilya Z
Date: Sat, 26 Mar 2022 23:03:46 +0400
Subject: [PATCH 120/138] fix_some_typos

---
 .../ru/docs/concepts/architecture/nodes.md    | 54 +++++++++----------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/content/ru/docs/concepts/architecture/nodes.md b/content/ru/docs/concepts/architecture/nodes.md
index 075c3140f531b..613155f0124ad 100644
--- a/content/ru/docs/concepts/architecture/nodes.md
+++ b/content/ru/docs/concepts/architecture/nodes.md
@@ -32,7 +32,7 @@ Kubernetes запускает ваши приложения, помещая ко
 1. Kubelet на узле саморегистрируется в плоскости управления
 2. Вы или другой пользователь вручную добавляете объект Узла

-После того, как вы создадите объект Узла или kubelet на узле самозарегистируется,
+После того как вы создадите объект Узла или kubelet на узле самозарегистрируется,
 плоскость управления проверяет, является ли новый объект Узла валидным (правильным).
 Например, если вы попробуете создать Узел при помощи следующего JSON манифеста:

@@ -50,9 +50,9 @@ Kubernetes запускает ваши приложения, помещая ко
 ```

 Kubernetes создает внутри себя объект Узла (представление). Kubernetes проверяет,
-что kubelet зарегистрировался на API сервере, который совпадает с значением поля `metadata.name` Узла.
+что kubelet зарегистрировался на API сервере, который совпадает со значением поля `metadata.name` Узла.
 Если узел здоров (если все необходимые сервисы запущены),
-он имеет право на запуск Пода. В противном случае, этот узел игнорируется для любой активности кластера
+он имеет право на запуск Пода. В противном случае этот узел игнорируется для любой активности кластера
 до тех пор, пока он не станет здоровым.

 {{< note >}}
@@ -62,13 +62,13 @@ Kubernetes сохраняет объект для невалидного Узл
 остановить проверку доступности узла.
 {{< /note >}}

-Имя объекта Узла дожно быть валидным
+Имя объекта Узла должно быть валидным
 [именем поддомена DNS](/ru/docs/concepts/overview/working-with-objects/names#имена-поддоменов-dns).

 ### Саморегистрация Узлов

 Когда kubelet флаг `--register-node` имеет значение _true_ (по умолчанию), то kubelet будет пытаться
-зарегистрировать себя на API сервере. Это наиболее предпочтительная модель, используемая большиством дистрибутивов.
+зарегистрировать себя на API сервере. Это наиболее предпочтительная модель, используемая большинством дистрибутивов.

 Для саморегистрации kubelet запускается со следующими опциями:

@@ -94,16 +94,16 @@ kubelet'ы имеют право только создавать/изменят

 Когда вы хотите создать объекты Узла вручную, установите kubelet флаг `--register-node=false`.

 Вы можете изменять объекты Узла независимо от настройки `--register-node`.
-Например, вы можете установить метки на существующем Узле или пометить его неназначаемым. 
+Например, вы можете установить метки на существующем Узле или пометить его не назначаемым.

 Вы можете использовать метки на Узлах в сочетании с селекторами узла на Подах для управления планированием.
-Например, вы можете ограничить Под иметь право на запуск только на группе доступных узлов.
+Например, вы можете ограничить Под так, чтобы он имел право запускаться только на определённой группе узлов.

-Маркировка узла как неназначаемого предотвращает размещение планировщиком новых подов на этом Узле,
+Маркировка узла как не назначаемого предотвращает размещение планировщиком новых подов на этом Узле,
 но не влияет на существующие Поды на Узле. Это полезно в качестве подготовительного шага перед перезагрузкой узла
 или другим обслуживанием.

-Чтобы отметить Узел неназначемым, выполните:
+Чтобы отметить Узел не назначаемым, выполните:

```shell
kubectl cordon $NODENAME
```

 {{< note >}}
 Поды, являющиеся частью {{< glossary_tooltip term_id="daemonset" >}} допускают
-запуск на неназначаемом Узле. DaemonSets обычно обеспечивает локальные сервисы узла,
+запуск на не назначаемом Узле. DaemonSets обычно обеспечивает локальные сервисы узла,
 которые должны запускаться на Узле, даже если узел вытесняется для запуска приложений.
 {{< /note >}}

@@ -157,7 +157,7 @@ kubectl describe node

 {{< note >}}
 Если вы используете инструменты командной строки для вывода сведений об блокированном узле, то Условие включает
 `SchedulingDisabled`. `SchedulingDisabled` не является Условием в Kubernetes API;
-вместо этого блокированные узлы помечены как Неназначемые в их спецификации.
+вместо этого блокированные узлы помечены как Не назначаемые в их спецификации.
 {{< /note >}}

 Состояние узла представлено в виде JSON объекта. Например, следующая структура описывает здоровый узел:
@@ -203,8 +203,8 @@ kubectl describe node

 Описывает ресурсы, доступные на узле: CPU, память и максимальное количество подов,
 которые могут быть запланированы на узле.
-Поля в блоке capasity указывают общее количество ресурсов, которые есть на Узле.
-Блок allocatable указывает количество ресурсовна Узле,
+Поля в блоке capacity указывают общее количество ресурсов, которые есть на Узле.
+Блок allocatable указывает количество ресурсов на Узле,
 которые доступны для использования обычными Подами.

 Вы можете прочитать больше о емкости и выделяемых ресурсах, изучая,
 как [зарезервировать вычислительные ресурсы](/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable) на Узле.
@@ -212,7 +212,7 @@ kubectl describe node

 ### Информация (Info)

 Описывает общую информацию об узле, такую как версия ядра, версия Kubernetes (версии kubelet и kube-proxy), версия Docker (если используется) и название ОС.
-Эта информация соберается Kubelet'ом на узле.
+Эта информация собирается Kubelet'ом на узле.

 ### Контроллер узла

@@ -247,16 +247,16 @@ ConditionUnknown, когда узел становится недоступны
 [Lease объект](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
 Каждый узел имеет связанный с ним Lease объект в `kube-node-lease`
 {{< glossary_tooltip term_id="namespace" text="namespace">}}.
-Lease - это легковестный ресурс, который улучшает производительность
+Lease - это легковесный ресурс, который улучшает производительность
 сердцебиений узла при масштабировании кластера.

-Kubelet отвечает за создание и обновление `NodeStatus` и Lease объекта. 
+Kubelet отвечает за создание и обновление `NodeStatus` и Lease объекта.
 - Kubelet обновляет `NodeStatus` либо когда происходит изменение статуса,
-  либо если в течение настронного интервала обновления не было. По умолчанию
+  либо если в течение настроенного интервала обновления не было. По умолчанию
   интервал для обновлений `NodeStatus` составляет 5 минут (намного больше, чем
   40-секундный стандартный таймаут для недоступных узлов).
-- Kubelet созадет и затем обновляет свой Lease объект каждый 10 секунд
+- Kubelet создает и затем обновляет свой Lease объект каждые 10 секунд
   (интервал обновления по умолчанию). Lease обновления происходят независимо от
   `NodeStatus` обновлений. Если обновление Lease завершается неудачно, kubelet повторяет попытку с экспоненциальным откатом, начинающимся с 200 миллисекунд и ограниченным 7 секундами.

@@ -265,7 +265,7 @@

 В большинстве случаев контроллер узла ограничивает скорость выселения до
 `--node-eviction-rate` (по умолчанию 0,1) в секунду, что означает,
-что он не выселяет поды с узлов быстрее чем c 1 узела в 10 секунд.
+что он не выселяет поды с узлов быстрее, чем с одного узла в 10 секунд.

 Поведение выселения узла изменяется, когда узел в текущей зоне доступности
 становится нездоровым. Контроллер узла проверяет, какой процент узлов в зоне
@@ -288,26 +288,26 @@
 выселяет поды с нормальной скоростью `--node-eviction-rate`. Крайний случай -
 когда все зоны полностью нездоровы (т.е. в кластере нет здоровых узлов). В таком
 случае контроллер узла предполагает, что существует некоторая проблема с подключением к мастеру,
-и останавеливает все выселения, пока какое-нибудь подключение не будет восстановлено.
+и останавливает все выселения, пока какое-нибудь подключение не будет восстановлено.

 Контроллер узла также отвечает за выселение подов, запущенных на узлах с
 `NoExecute` ограничениями, за исключением тех подов, которые сопротивляются этим ограничениям.
 Контроллер узла так же добавляет {{< glossary_tooltip text="ограничения" term_id="taint" >}}
-соотвествующие проблемам узла, таким как узел недоступен или не готов. Это означает,
+соответствующие проблемам узла, таким как узел недоступен или не готов. Это означает,
 что планировщик не будет размещать поды на нездоровых узлах.

 {{< caution >}}
-`kubectl cordon` помечает узел как 'неназначемый', что имеет побочный эфект от контроллера сервисов,
+`kubectl cordon` помечает узел как 'не назначаемый', что имеет побочный эффект от контроллера сервисов,
 удаляющего узел из любых списков целей LoadBalancer узла, на которые он ранее имел право,
-эффектино убирая входящий трафик балансировщика нагрузки с блокированного узла(ов).
+эффективно убирая входящий трафик балансировщика нагрузки с блокированного узла(ов).
 {{< /caution >}}

 ### Емкость узла

 Объекты узла отслеживают информацию о емкости ресурсов узла (например, объем
 доступной памяти и количество CPU).
-Узлы, которые [самостоятельно зарегистировались](#саморегистрация-узлов) сообщают
-о свое емкости во время регистрации. Если вы [вручную](#ручное-администрирование-узла)
+Узлы, которые [самостоятельно зарегистрировались](#саморегистрация-узлов), сообщают
+о своей емкости во время регистрации. Если вы [вручную](#ручное-администрирование-узла)
 добавляете узел, то вам нужно задать информацию о емкости узла при его добавлении.
 {{< glossary_tooltip text="Планировщик" term_id="kube-scheduler" >}} Kubernetes гарантирует,
 что для всех Pod-ов на узле достаточно ресурсов. Он проверяет, что сумма запросов
 контейнеров на узле не превышает емкость узла,
 а также исключает любые процессы, запущенные вне контроля kubelet.

 {{< note >}}
-Если вы явно хотите зарезервировать ресурсы для процессов, не связанныз с Подами, смотрите раздел
+Если вы явно хотите зарезервировать ресурсы для процессов, не связанных с Подами, смотрите раздел
 [зарезервировать ресурсы для системных демонов](/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved).
 {{< /note >}}

@@ -339,4 +339,4 @@

 * Подробнее про [Узлы](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) of the architecture design document.
 * Подробнее про [ограничения и допуски](/docs/concepts/configuration/taint-and-toleration/).
-* Подробнее про [автомаштабирование кластера](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).
+* Подробнее про [автомасштабирование кластера](/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling).

From 0c259797db186ad73214f96b036614b6c2745337 Mon Sep 17 00:00:00 2001
From: Ilya Z
Date: Sat, 26 Mar 2022 23:18:08 +0400
Subject: [PATCH 121/138] cluster_admin_typos

---
 .../concepts/cluster-administration/_index.md | 20 +++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/content/ru/docs/concepts/cluster-administration/_index.md b/content/ru/docs/concepts/cluster-administration/_index.md
index b596e111f32ba..77f610bf180b9 100644
--- a/content/ru/docs/concepts/cluster-administration/_index.md
+++ b/content/ru/docs/concepts/cluster-administration/_index.md
@@ -11,7 +11,7 @@ no_list: true
 ---


-Обзор администрирования кластера предназначен для всех, кто создает или администрирует кластер Kubernetes. Это предполагает некоторое знакомство с основными [концепциями] (/docs/concepts/) Kubernetes.
+Обзор администрирования кластера предназначен для всех, кто создает или администрирует кластер Kubernetes. Это предполагает некоторое знакомство с основными [концепциями](/docs/concepts/) Kubernetes.



@@ -19,19 +19,19 @@ no_list: true
 См. Руководства в разделе [настройка](/docs/setup/) для получения примеров того, как планировать, устанавливать и настраивать кластеры Kubernetes. Решения, перечисленные в этой статье, называются *distros*.

- {{< note >}} 
+ {{< note >}}
 не все дистрибутивы активно поддерживаются. Выбирайте дистрибутивы, протестированные с последней версией Kubernetes.
 {{< /note >}}

 Прежде чем выбрать руководство, вот некоторые соображения:

- - Вы хотите опробовать Kubernetes на вашем компьюторе или собрать многоузловой кластер высокой доступности? Выбирайте дистрибутивы, наиболее подходящие для ваших нужд.
- - будете ли вы использовать **размещенный кластер Kubernetes**, такой как [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) или **разместите собственный кластер**?
- - Будет ли ваш кластер **в помещений** или **в облаке (IaaS)**? Kubernetes не поддерживает напрямую гибридные кластеры. Вместо этого вы можете настроить несколько кластеров.
- - **Если вы будете настроаивать Kubernetes в помещений (локально)**, подумайте, какая [сетевая модель](/docs/concepts/cluster-administration/networking/) подходит лучше всего.
- - Будете ли вы запускать Kubernetes на **оборудований "bare metal"** или на **вирутальных машинах (VMs)**? 
- - Вы хотите **запустить кластер** или планируете **активно разворачивать код проекта Kubernetes**? В последнем случае выберите активно разрабатываемый дистрибутив. Некоторые дистрибутивы используют только двоичные выпуски, но предлагают болле широкий выбор.
- - Ознакомьтесь с [компонентами](/docs/concepts/overview/components/) необходивые для запуска кластера.
+ - Вы хотите опробовать Kubernetes на вашем компьютере или собрать многоузловой кластер высокой доступности? Выбирайте дистрибутивы, наиболее подходящие для ваших нужд.
+ - Будете ли вы использовать **размещенный кластер Kubernetes**, такой как [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/), или **разместите собственный кластер**?
+ - Будет ли ваш кластер **в помещении** или **в облаке (IaaS)**? Kubernetes не поддерживает напрямую гибридные кластеры. Вместо этого вы можете настроить несколько кластеров.
+ - **Если вы будете настраивать Kubernetes в помещении (локально)**, подумайте, какая [сетевая модель](/docs/concepts/cluster-administration/networking/) подходит лучше всего.
+ - Будете ли вы запускать Kubernetes на **оборудовании "bare metal"** или на **виртуальных машинах (VMs)**?
+ - Вы хотите **запустить кластер** или планируете **активно разворачивать код проекта Kubernetes**? В последнем случае выберите активно разрабатываемый дистрибутив. Некоторые дистрибутивы используют только двоичные выпуски, но предлагают более широкий выбор.
+ - Ознакомьтесь с [компонентами](/docs/concepts/overview/components/), необходимыми для запуска кластера.

 ## Управление кластером

@@ -54,7 +54,7 @@ no_list: true

 * [Использование контроллеров допуска](/docs/reference/access-authn-authz/admission-controllers/) explains plug-ins
 which intercepts requests to the Kubernetes API server after authentication and authorization.

-* [Использование Sysctls в кластере Kubernetes](/docs/tasks/administer-cluster/sysctl-cluster/) описывает администратору, как использовать sysctlинструмент командной строки для установки параметров ядра.
+* [Использование Sysctls в кластере Kubernetes](/docs/tasks/administer-cluster/sysctl-cluster/) описывает администратору, как использовать инструмент командной строки sysctl для установки параметров ядра.

* [Аудит](/docs/tasks/debug-application-cluster/audit/) описывает, как взаимодействовать с журналами аудита Kubernetes.

From 026b8773ffc530ab387f7a2117287384d110be92 Mon Sep 17 00:00:00 2001
From: Ilya Z
Date: Sat, 26 Mar 2022 23:40:25 +0400
Subject: [PATCH 122/138] addons_typos

---
 .../concepts/cluster-administration/addons.md | 26 +++++++++----------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/content/ru/docs/concepts/cluster-administration/addons.md b/content/ru/docs/concepts/cluster-administration/addons.md
index e86d48594904f..7e93ef82b22c6 100644
--- a/content/ru/docs/concepts/cluster-administration/addons.md
+++ b/content/ru/docs/concepts/cluster-administration/addons.md
@@ -13,33 +13,33 @@ content_type: concept


-## Сеть и сетевыя политика
+## Сеть и сетевая политика

 * [ACI](https://www.github.com/noironetworks/aci-containers) обеспечивает интегрированную сеть контейнеров и сетевую безопасность с помощью Cisco ACI.
 * [Antrea](https://antrea.io/) работает на уровне 3, обеспечивая сетевые службы и службы безопасности для Kubernetes, используя Open vSwitch в качестве уровня сетевых данных.
 * [Calico](https://docs.projectcalico.org/latest/introduction/) Calico поддерживает гибкий набор сетевых опций, поэтому вы можете выбрать наиболее эффективный вариант для вашей ситуации, включая сети без оверлея и оверлейные сети, с или без BGP. Calico использует тот же механизм для обеспечения соблюдения сетевой политики для хостов, модулей и (при использовании Istio и Envoy) приложений на уровне сервисной сети (mesh layer).
-* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) объединяет Flannel и Calico, обеспечивая сеть и сетевую политик.
+* [Canal](https://github.com/tigera/canal/tree/master/k8s-install) объединяет Flannel и Calico, обеспечивая сеть и сетевую политику.
 * [Cilium](https://github.com/cilium/cilium) - это плагин сети L3 и сетевой политики, который может прозрачно применять политики HTTP/API/L7. Поддерживаются как режим маршрутизации, так и режим наложения/инкапсуляции, и он может работать поверх других подключаемых модулей CNI.
 * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) позволяет Kubernetes легко подключаться к выбору плагинов CNI, таких как Calico, Canal, Flannel, Romana или Weave.
-* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), основан на [Tungsten Fabric](https://tungsten.io), представляет собой платформу для виртуализации мультиоблачных сетей с открытым исходным кодом и управления политиками. Contrail и Tungsten Fabric are интегрированы с системами оркестровки, такими как Kubernetes, OpenShift, OpenStack и Mesos, и обеспечивают режимы изоляции для виртуальных машин, контейнеров/pod-ов и рабочих нагрузок без операционной системы.
+* [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), основан на [Tungsten Fabric](https://tungsten.io), представляет собой платформу для виртуализации мультиоблачных сетей с открытым исходным кодом и управления политиками. Contrail и Tungsten Fabric интегрированы с системами оркестрации, такими как Kubernetes, OpenShift, OpenStack и Mesos, и обеспечивают режимы изоляции для виртуальных машин, контейнеров/подов и рабочих нагрузок без операционной системы.
 * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) - это поставщик оверлейной сети, который можно использовать с Kubernetes.
-* [Knitter](https://github.com/ZTE/Knitter/) - это плагин для поддержки нескольких сетевых интерфейсов Kubernetes pod-ов.
+* [Knitter](https://github.com/ZTE/Knitter/) - это плагин для поддержки нескольких сетевых интерфейсов Kubernetes подов.
-* Multus - это плагин Multi для поддержки нексольких сетейв Kubernetes для поддержки всех CNI плагинов (наприме: Calico, Cilium, Contiv, Flannel), в дополнение к рабочим нагрузкам основанных на SRIOV, DPDK, OVS-DPDK и VPP в Kubernetes.
+* [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) - это плагин Multi для работы с несколькими сетями в Kubernetes, который поддерживает большинство самых популярных [CNI](https://github.com/containernetworking/cni) (например: Calico, Cilium, Contiv, Flannel), в дополнение к рабочим нагрузкам, основанным на SRIOV, DPDK, OVS-DPDK и VPP в Kubernetes.
-* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) - это сетевой провайдер для Kubernetes основанный на [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), реализация виртуалной сети a появившейся в результате проекта Open vSwitch (OVS). OVN-Kubernetes обеспечивает сетевую реализацию на основе наложения для Kubernetes, включая реализацию балансировки нагрузки и сетевой политики на основе OVS.
+* [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) - это сетевой провайдер для Kubernetes, основанный на [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), реализации виртуальной сети, появившейся в результате проекта Open vSwitch (OVS). OVN-Kubernetes обеспечивает сетевую реализацию на основе наложения для Kubernetes, включая реализацию балансировки нагрузки и сетевой политики на основе OVS.
-* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) - это подключаемый модуль контроллера CNI на основе OVN для обеспечения облачной цепочки сервисных функций (SFC), несколько наложеных сетей OVN, динамического создания подсети, динамического создания виртуальных сетей, сети поставщика VLAN, сети прямого поставщика и подключаемого к другим Multi Сетевые плагины, идеально подходящие для облачных рабочих нагрузок на периферии в сети с несколькими кластерами.
+* [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) - это подключаемый модуль контроллера CNI на основе OVN для обеспечения облачной цепочки сервисных функций (SFC), нескольких наложенных сетей OVN, динамического создания подсети, динамического создания виртуальных сетей, сети поставщика VLAN, сети прямого поставщика; модуль может подключаться к другим Multi-сетевым плагинам и идеально подходит для облачных рабочих нагрузок на периферии в сети с несколькими кластерами.
 * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) плагин для контейнера (NCP) обеспечивающий интеграцию между VMware NSX-T и контейнерами оркестраторов, таких как Kubernetes, а так же интеграцию между NSX-T и контейнеров на основе платформы CaaS/PaaS, таких как Pivotal Container Service (PKS) и OpenShift.
-* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - эта платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Kubernetes Pod-ами и не Kubernetes окружением с отображением и мониторингом безопасности.
+* [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) - это платформа SDN, которая обеспечивает сетевое взаимодействие на основе политик между Kubernetes подами и не Kubernetes окружением, с отображением и мониторингом безопасности.
-* [Romana](https://romana.io) - это сетевое решение уровня 3 для pod сетей, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize).
+* [Romana](https://romana.io) - это сетевое решение уровня 3 для сетей подов, которое также поддерживает [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Подробности установки Kubeadm доступны [здесь](https://github.com/romana/romana/tree/master/containerize).
+* [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) предоставляет сеть и обеспечивает сетевую политику, будет работать на обеих сторонах сетевого раздела и не требует внешней базы данных.

 ## Обнаружение служб

-* [CoreDNS](https://coredns.io) - это гибкий, расширяемый DNS-сервер, который может быть [установлен](https://github.com/coredns/deployment/tree/master/kubernetes) в качестве внутрикластерного DNS для pod-ов.
+* [CoreDNS](https://coredns.io) - это гибкий, расширяемый DNS-сервер, который может быть [установлен](https://github.com/coredns/deployment/tree/master/kubernetes) в качестве внутрикластерного DNS для подов.

 ## Визуализация и контроль

 * [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) - это веб-интерфейс панели инструментов для Kubernetes.
-* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) - это инструмент для графической визуализации ваших контейнеров, pod-ов, сервисов и т.д. Используйте его вместе с [учетной записью Weave Cloud](https://cloud.weave.works/) или разместите пользовательский интерфейс самостоятельно.
+* [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) - это инструмент для графической визуализации ваших контейнеров, подов, сервисов и т.д. Используйте его вместе с [учетной записью Weave Cloud](https://cloud.weave.works/) или разместите пользовательский интерфейс самостоятельно.

 ## Инфраструктура

@@ -49,4 +49,4 @@ content_type: concept

 В устаревшем каталоге [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) задокументировано несколько других дополнений.

-Ссылки на те, в хорошем состоянии, должны быть здесь. PR приветствуются!
+Ссылки на те дополнения, что находятся в хорошем состоянии, должны быть здесь. Пулреквесты приветствуются!

From 6525c0cab5d755aa3a1baf4bb172266bbd2c2f4b Mon Sep 17 00:00:00 2001
From: Ilya Z
Date: Sat, 26 Mar 2022 23:55:11 +0400
Subject: [PATCH 123/138] kube_api_typos

---
 content/ru/docs/concepts/overview/kubernetes-api.md | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/content/ru/docs/concepts/overview/kubernetes-api.md b/content/ru/docs/concepts/overview/kubernetes-api.md
index e2523c5191915..e9a3debbc3f7b 100644
--- a/content/ru/docs/concepts/overview/kubernetes-api.md
+++ b/content/ru/docs/concepts/overview/kubernetes-api.md
@@ -36,7 +36,7 @@ Kubernetes как таковой состоит из множества комп
 Все детали API документируется с использованием [OpenAPI](https://www.openapis.org/).

 Начиная с Kubernetes 1.10, API-сервер Kubernetes основывается на спецификации OpenAPI через конечную точку `/openapi/v2`.
-Нужный формат устанавливается через HTTP-заголовоки:
+Нужный формат устанавливается через HTTP-заголовки:

 Заголовок | Возможные значения
 ------ | ---------------
@@ -61,11 +61,11 @@ GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github

 Чтобы упростить удаления полей или изменение ресурсов, Kubernetes поддерживает несколько версий API, каждая из которых доступна по собственному пути, например, `/api/v1` или `/apis/extensions/v1beta1`.

-Мы выбрали версионирование API, а не конкретных ресурсов или полей, чтобы API отражал четкое и согласованное представление о системных ресурсах и их поведении, а также, чтобы разграничивать API, которые уже не поддерживаются и/или находятся в экспериментальной стадии. Схемы сериализации JSON и Protobuf следуют одним и тем же правилам по внесению изменений в схему, поэтому описание ниже охватывают оба эти формата. 
+Мы выбрали версионирование API, а не конкретных ресурсов или полей, чтобы API отражал четкое и согласованное представление о системных ресурсах и их поведении, а также чтобы разграничивать API, которые уже не поддерживаются и/или находятся в экспериментальной стадии. Схемы сериализации JSON и Protobuf следуют одним и тем же правилам по внесению изменений в схему, поэтому описание ниже охватывает оба эти формата.

-Обратите внимание, что версиоирование API и программное обеспечение косвенно связаны друг с другом. [Предложение по версионированию API и новых выпусков](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) описывает, как связаны между собой версии API с версиями программного обеспечения.
+Обратите внимание, что версионирование API и программное обеспечение косвенно связаны друг с другом. [Предложение по версионированию API и новых выпусков](https://git.k8s.io/community/contributors/design-proposals/release/versioning.md) описывает, как связаны между собой версии API с версиями программного обеспечения.

-Разные версии API имеют характеризуются разной уровнем стабильностью и поддержкой. Критерии каждого уровня более подробно описаны в [документации изменений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Ниже приводится краткое изложение:
+Разные версии API характеризуются разными уровнями стабильности и поддержки. Критерии каждого уровня более подробно описаны в [документации изменений API](https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions). Ниже приводится краткое изложение:

 - Альфа-версии:
   - Названия версий включают надпись `alpha` (например, `v1alpha1`).
@@ -79,7 +79,7 @@ GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github
   - Поддержка функциональности в целом не будет прекращена, хотя кое-что может измениться.
   - Схема и/или семантика объектов может стать несовместимой с более поздними бета-версиями или стабильными выпусками. Когда это случится, мы даем инструкции по миграции на следующую версию. Это обновление может включать удаление, редактирование и повторного создание API-объектов. Этот процесс может потребовать тщательного анализа. Кроме этого, это может привести к простою приложений, которые используют данную функциональность.
   - Рекомендуется только для неосновного производственного использования из-за риска возникновения возможных несовместимых изменений с будущими версиями. Если у вас есть несколько кластеров, которые возможно обновить независимо, вы можете снять это ограничение.
-  - **Пожалуйста, попробуйте в действии бета-версии функциональности и поделитесь своими впечатлениями! После того, как функциональность выйдет из бета-версии, нам может быть нецелесообразно что-то дальше изменять.**
+  - **Пожалуйста, попробуйте в действии бета-версии функциональности и поделитесь своими впечатлениями! После того как функциональность выйдет из бета-версии, нам может быть нецелесообразно что-то дальше изменять.**
 - Стабильные версии:
   - Имя версии `vX`, где `vX` — целое число.
   - Стабильные версии функциональностей появятся в новых версиях.
@@ -113,5 +113,3 @@ DaemonSets, Deployments, StatefulSet, NetworkPolicies, PodSecurityPolicies и Re
 Например: чтобы включить deployments и daemonsets, используйте флаг `--runtime-config=extensions/v1beta1/deployments=true,extensions/v1beta1/daemonsets=true`.
{{< note >}}Включение/отключение отдельных ресурсов поддерживается только в API-группе `extensions/v1beta1` по историческим причинам.{{< /note >}} - - From 6319b586d6df260b46e013fc1a385af6a543f7be Mon Sep 17 00:00:00 2001 From: Ilya Z Date: Sun, 27 Mar 2022 01:03:40 +0400 Subject: [PATCH 124/138] content-guide_fix --- .../ru/docs/contribute/style/content-guide.md | 80 +++++++++---------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git a/content/ru/docs/contribute/style/content-guide.md b/content/ru/docs/contribute/style/content-guide.md index bfb0073357b63..58cbc6af7cb3c 100644 --- a/content/ru/docs/contribute/style/content-guide.md +++ b/content/ru/docs/contribute/style/content-guide.md @@ -25,72 +25,72 @@ card: ### Контент, полученный из двух источников -Документация Kubernetes не содержит дублированный контент, полученный из разных мест (так называемый **контент из двумя источниками**). Контент из двух источников требует дублирования работы со стороны мейнтейнеров проекта и к тому же быстро теряет актуальность. +Документация Kubernetes не содержит дублированный контент, полученный из разных мест (так называемый **контент из двух источников**). Контент из двух источников требует дублирования работы со стороны мейнтейнеров проекта и к тому же быстро теряет актуальность. Перед добавлением контента, задайте себе вопрос: -- Новая информация относится к действующему проекту CNCF ИЛИ проекту в организациях на GitHub kubernetes или kubernetes-sigs? +- Новая информация относится к действующему проекту CNCF или проекту в организациях на GitHub kubernetes или kubernetes-sigs? - Если да, то: - У этого проекта есть собственная документация? - - если да, то укажите ссылку на документацию проекта в документации Kubernetes - - если нет, добавьте информацию в репозиторий проекта (если это возможно), а затем укажите ссылку на неё в документации Kubernetes + - если да, то укажите ссылку на документацию проекта в документации Kubernetes. + - если нет, добавьте информацию в репозиторий проекта (если это возможно), а затем укажите ссылку на неё в документации Kubernetes. - Если нет, то: - Остановитесь! - - Добавление информации по продуктам от других разработчиков не допускается + - Добавление информации о продуктах от других разработчиков не допускается. - Не разрешено ссылаться на документацию и сайты сторонних разработчиков. ### Разрешенная и запрещённая информация Есть несколько условий, когда в документации Kubernetes может быть информация, относящиеся не к проектам Kubernetes. -Ниже перечислены основные категории по содержанию проектов, не касающихся к Kubernetes, а также приведены рекомендации о том, что разрешено, а что нет: +Ниже перечислены основные категории по содержанию проектов, не касающихся Kubernetes, а также приведены рекомендации о том, что разрешено, а что нет: -1. Инструкции по установке или эксплуатации Kubernetes, которые не связаны с проектами Kubernetes +1. Инструкции по установке или эксплуатации Kubernetes, которые не связаны с проектами Kubernetes. 
- Разрешено: - - Ссылаться на документацию на CNCF-проекта или на проект в GitHub-организациях kubernetes или kubernetes-sigs - - Пример: для установки Kubernetes в процессе обучения нужно обязательно установить и настроить minikube, а также сослаться на соответствующую документацию minikube - - Добавление инструкций для проектов в организации kubernetes или kubernetes-sigs, если по ним нет инструкций - - Пример: добавление инструкций по установке и решению неполадок [kubadm](https://github.com/kubernetes/kubeadm) + - Ссылаться на документацию CNCF-проекта или на проект в GitHub-организациях kubernetes или kubernetes-sigs. + - Пример: для установки Kubernetes в процессе обучения нужно обязательно установить и настроить minikube, а также сослаться на соответствующую документацию minikube. + - Добавление инструкций для проектов в организации kubernetes или kubernetes-sigs, если по ним нет инструкций. + - Пример: добавление инструкций по установке и решению неполадок [kubeadm](https://github.com/kubernetes/kubeadm). - Запрещено: - - Добавление информацию, которая повторяет документацию в другом репозитории + - Добавление информации, которая дублирует документацию в другом репозитории. - Примеры: - - Добавление инструкций по установке и настройке minikube; Minikube имеет собственную [документацию](https://minikube.sigs.k8s.io/docs/), которая включают эти инструкции - - Добавление инструкций по установке Docker, CRI-O, containerd и других окружений для выполнения контейнеров в разных операционных системах + - Добавление инструкций по установке и настройке minikube; Minikube имеет собственную [документацию](https://minikube.sigs.k8s.io/docs/), которая включают эти инструкции. + - Добавление инструкций по установке Docker, CRI-O, containerd и других окружений для выполнения контейнеров в разных операционных системах. - Добавление инструкций по установке Kubernetes в промышленных окружениях, используя разные проекты: - -Kubernetes Rebar Integrated Bootstrap (KRIB) — это проект стороннего разработчика, поэтому все содержимое находится репозитории разработчика. + - Kubernetes Rebar Integrated Bootstrap (KRIB) — это проект стороннего разработчика, поэтому всё содержимое находится в репозитории разработчика. - У проекта [Kubernetes Operations (kops)](https://github.com/kubernetes/kops) есть инструкции по установке и руководства в GitHub-репозитории. - - У проекта [Kubespray](https://kubespray.io) есть собственная документация - - Добавление руководства, в котором объясняется, как выполнить задачу с использованием продукта определенного разработчика или проекта с открытым исходным кодом, не являющиеся CNCF-проектом или проектом в GitHub-организациях kubernetes или kubnetes-sigs. - - Добавление руководства по использованию CNCF-проекта или проекта в GitHub-организациях kubernetes или kubnetes-sigs, если у проекта есть собственная документация -1. Подробное описание технических аспектов по использованию стороннего проекта (не Kubernetes) или как этот проект разработан + - У проекта [Kubespray](https://kubespray.io) есть собственная документация. + - Добавление руководства, в котором объясняется, как выполнить задачу с использованием продукта определенного разработчика или проекта с открытым исходным кодом, не являющиеся CNCF-проектами или проектом в GitHub-организациях kubernetes, или kubernetes-sigs. + - Добавление руководства по использованию CNCF-проекта или проекта в GitHub-организациях kubernetes или kubernetes-sigs, если у проекта есть собственная документация. +1. 
Подробное описание технических аспектов по использованию стороннего проекта (не Kubernetes) или как этот проект разработан. Добавление такого типа информации в документацию Kubernetes не допускается. -1. Информация стороннему проекту +1. Информация стороннему проекту. - Разрешено: - - Добавление краткого введения о CNCF-проекте или проекте в GitHub-организациях kubernetes или kubernetes-sigs; этот абзац может содержать ссылки на проект + - Добавление краткого введения о CNCF-проекте или проекте в GitHub-организациях kubernetes или kubernetes-sigs; этот абзац может содержать ссылки на проект. - Запрещено: - - Добавление информации по продукту определённого разработчика - - Добавление информации по проекту с открытым исходным кодом, который не является CNCF-проектом или проектом в GitHub-организациях kubernetes или kubnetes-sigs - - Добавление информации, дублирующего документацию из другого проекта, независимо от оригинального репозитория - - Пример: добавление документации для проекта [Kubernetes in Docker (KinD)](https://kind.sigs.k8s.io) в документацию Kubernetes -1. Только ссылки на сторонний проект + - Добавление информации по продукту определённого разработчика. + - Добавление информации по проекту с открытым исходным кодом, который не является CNCF-проектом или проектом в GitHub-организациях kubernetes или kubernetes-sigs. + - Добавление информации, дублирующего документацию из другого проекта, независимо от оригинального репозитория. + - Пример: добавление документации для проекта [Kubernetes in Docker (KinD)](https://kind.sigs.k8s.io) в документацию Kubernetes. +1. Только ссылки на сторонний проект. - Разрешено: - - Ссылаться на проекты в GitHub-организациях kubernetes и kubernetes-sigs - - Пример: добавление ссылок на [документацию](https://kind.sigs.k8s.io/docs/user/quick-start) проекта Kubernetes in Docker (KinD), который находится в GitHub-организации kubernetes-sigs - - Добавление ссылок на действующие CNCF-проекты - - Пример: добавление ссылок на [документацию](https://prometheus.io/docs/introduction/overview/) проекта Prometheus; Prometheus — это действующий проект CNCF + - Ссылаться на проекты в GitHub-организациях kubernetes и kubernetes-sigs. + - Пример: добавление ссылок на [документацию](https://kind.sigs.k8s.io/docs/user/quick-start) проекта Kubernetes in Docker (KinD), который находится в GitHub-организации kubernetes-sigs. + - Добавление ссылок на действующие CNCF-проекты. + - Пример: добавление ссылок на [документацию](https://prometheus.io/docs/introduction/overview/) проекта Prometheus; Prometheus — это действующий проект CNCF. - Запрещено: - - Ссылаться на продукты стороннего разработчика - - Ссылаться на архивированные проекты CNCF - - Ссылаться на недействующие проекты в организациях GitHub в kubernetes и kubernetes-sigs - - Ссылаться на проекты с открытым исходным кодом, которые не являются проектами CNCF или не находятся в организациях GitHub kubernetes или kubernetes-sigs. -1. Содержание учебных курсов + - Ссылаться на продукты стороннего разработчика. + - Ссылаться на прекращенные проекты CNCF. + - Ссылаться на недействующие проекты в организациях GitHub в kubernetes и kubernetes-sigs. + - Ссылаться на проекты с открытым исходным кодом, которые не являются проектами CNCF или не находятся в организациях GitHub kubernetes, или kubernetes-sigs. +1. Содержание учебных курсов. 
- Разрешено: - - Ссылаться на независимые от разработчиков учебные курсы Kubernetes, предлагаемыми [CNCF](https://www.cncf.io/), [Linux Foundation](https://www.linuxfoundation.org/) и [Linux Academy](https://linuxacademy.com/) (партнер Linux Foundation) - - Пример: добавление ссылок на курсы Linux Academy, такие как [Kubernetes Quick Start](https://linuxacademy.com/course/kubernetes-quick-start/) в [Kubernetes Security](https://linuxacademy.com/course/kubernetes-security/) + - Ссылаться на независимые от разработчиков учебные курсы Kubernetes, предлагаемыми [CNCF](https://www.cncf.io/), [Linux Foundation](https://www.linuxfoundation.org/) и [Linux Academy](https://linuxacademy.com/) (партнер Linux Foundation). + - Пример: добавление ссылок на курсы Linux Academy, такие как [Kubernetes Quick Start](https://linuxacademy.com/course/kubernetes-quick-start/) и [Kubernetes Security](https://linuxacademy.com/course/kubernetes-security/). - Запрещено: - - Ссылаться на учебныЕе онлайн-курсы, вне CNCF, Linux Foundation или Linux Academy; документация Kubernetes не содержит ссылок на сторонний контент + - Ссылаться на учебные онлайн-курсы, не относящиеся к CNCF, Linux Foundation или Linux Academy; документация Kubernetes не содержит ссылок на сторонний контент. - Пример: добавление ссылок на учебные руководства или курсы Kubernetes на Medium, KodeKloud, Udacity, Coursera, learnk8s и т.д. - - Ссылаться на руководства определённых разработчиков вне зависимости от обучающей организации - - Пример: добавление ссылок на такие курсы Linux Academy, как [Google Kubernetes Engine Deep Dive](https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive) and [Amazon EKS Deep Dive](https://linuxacademy.com/course/amazon-eks-deep-dive/) + - Ссылаться на руководства определённых разработчиков вне зависимости от обучающей организации. + - Пример: добавление ссылок на такие курсы Linux Academy, как [Google Kubernetes Engine Deep Dive](https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive) и [Amazon EKS Deep Dive](https://linuxacademy.com/course/amazon-eks-deep-dive/) Если у вас есть вопросы по поводу допустимого контента, присоединяйтесь к каналу #sig-docs в [Slack Kubernetes](http://slack.k8s.io/)! From 9f0afaf0c344e1f81328ec22e43ffc0667219344 Mon Sep 17 00:00:00 2001 From: Ilya Z Date: Sun, 27 Mar 2022 01:41:03 +0400 Subject: [PATCH 125/138] advanced_guide_edited --- content/ru/docs/contribute/advanced.md | 76 +++++++++++++------------- 1 file changed, 37 insertions(+), 39 deletions(-) diff --git a/content/ru/docs/contribute/advanced.md b/content/ru/docs/contribute/advanced.md index dfe68fa685de3..bb4ff9a35f97a 100644 --- a/content/ru/docs/contribute/advanced.md +++ b/content/ru/docs/contribute/advanced.md @@ -1,5 +1,5 @@ --- -title: Участие для опытных +title: Существенный вклад slug: advanced content_type: concept weight: 30 @@ -22,20 +22,20 @@ weight: 30 - Ежедневно проверять [открытые пулреквесты](https://github.com/kubernetes/website/pulls) для контроля качества и соблюдения рекомендаций по [оформлению](/docs/contribute/style/style-guide/) и [содержимому](/docs/contribute/style/content-guide/). - В первую очередь просматривайте самые маленькие пулреквесты (`size/XS`), и только потом беритесь за самые большие (`size/XXL`). - Проверяйте столько пулреквестов, сколько сможете. -- Проследить, что CLA подписан каждым участником. +- Проследите, что CLA подписан каждым участником. 
- Помогайте новым участникам подписать [CLA](https://github.com/kubernetes/community/blob/master/CLA.md). - - Используйте [этот](https://github.com/zparnold/k8s-docs-pr-botherer) скрипт, чтобы автоматически напомнить участникам, не подписавшим CLA, чтобы они подписали CLA. + - Используйте [этот](https://github.com/zparnold/k8s-docs-pr-botherer) скрипт, чтобы автоматически напомнить участникам, не подписавшим CLA, подписать его. - Оставить свое мнение о предложенных изменениях и поспособствовать в проведении технического обзора от членов других SIG-групп. - Предложить исправления для измененного контента в PR. - Если вы хотите убедиться в правильности контента, прокомментируйте PR и задайте уточняющие вопросы. - - Добавьте нужны метки с `sig/`. - - Если нужно, то назначьте рецензентов из секции `reviewers:` в фронтальной части файла. + - Добавьте нужные метки с `sig/`. + - Если нужно, то назначьте рецензентов из секции `reviewers:` в верхней части файла. - Добавьте метки `Docs Review` и `Tech Review` для установки статуса проверки PR. - Добавьте метку `Needs Doc Review` или `Needs Tech Review` для пулреквестов, которые ещё не были проверены. - Добавьте метку `Doc Review: Open Issues` или `Tech Review: Open Issues` для пулреквестов, которые были проверены и требуют дополнительную информацию и выполнение действия перед слиянием. - Добавьте метки `/lgtm` и `/approve` для пулреквестов, которые могут быть приняты. - Объедините пулреквесты, если они готовы, либо закройте те, которые не могут быть приняты. -- Ежедневно отсортируйте и пометьте новые заявки. Обратитесь к странице [Участие для опытных](/ru/docs/contribute/intermediate/) для получения информации по использование метаданных SIG Docs. +- Ежедневно отсортируйте и пометьте новые заявки. Обратитесь к странице [Участие для опытных](/ru/docs/contribute/intermediate/) для получения информации по использованию метаданных SIG Docs. ### Полезные ссылки на GitHub для дежурных @@ -43,9 +43,9 @@ weight: 30 - [Нет CLA, нет права на слияние](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fen): напомните участнику подписать CLA. Если об этом уже напомнил и бот, и человек, то закройте PR и напишите автору, что он может открыть свой PR после подписания CLA. **Не проверяйте PR, если их авторы не подписали CLA!** -- [Требуется LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): если нужен проверка с технической точки зрения, попросите её провести одного из рецензентов, который предложил бот. Если требуется просмотр пулреквест со стороны группы документации или вычитка, то предложите изменения, либо сами измените PR, чтобы ускорить процесс принятия пулреквеста. -- [Имеет LGTM, нужно одобрение со стороны группы документации](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): выясните, нужно ли внести какие-либо дополнительные изменения или обновления, чтобы принять PR. Если по вашему мнению PR готов к слияния, оставьте комментарий с текстом `/approve`. -- [Быстрые результаты](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22+): если маленький PR направлен в основную ветку и не имеет условий для объединения. 
(поменяйте "XS" в метке с размером при работе с другими пулреквестами [XS, S, M, L, XL, XXL]). +- [Требуется LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): если нужна проверка с технической точки зрения, попросите её провести одного из рецензентов, которого предложил бот. Если требуется просмотр пулреквеста со стороны группы документации или вычитка, то предложите изменения, либо сами измените PR, чтобы ускорить процесс принятия пулреквеста. +- [Имеет LGTM, нужно одобрение со стороны группы документации](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): выясните, нужно ли внести какие-либо дополнительные изменения или обновления, чтобы принять PR. Если по вашему мнению PR готов к слиянию, оставьте комментарий с текстом `/approve`. +- [Быстрые результаты](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22+): если маленький PR направлен в основную ветку и не имеет условий для объединения (поменяйте "XS" в метке с размером при работе с другими пулреквестами [XS, S, M, L, XL, XXL]). - [Вне основной ветки](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-base%3Amaster): если PR отправлен в ветку `dev-`, значит он предназначается для будущего выпуска. Убедитесь, что [release meister](https://github.com/kubernetes/sig-release/tree/master/release-team) знает об этом, добавив комментарий с `/assign @`. Если он направлен в старую ветку, помогите автору PR изменить на более подходящую ветку. ### Когда закрывать пулреквесты @@ -57,7 +57,7 @@ weight: 30 - Закройте любой PR, если автор не отреагировал на комментарии или проверки в течение 2 или более недель. -Не бойтесь закрывать пулреквесты. Участники с лёгкостью открыть и возобновить незаконченную работу. Зачастую уведомление о закрытии стимулировать автора возобновить и закончить свой вклад. +Не бойтесь закрывать пулреквесты. Участники с лёгкостью могут открыть и возобновить незаконченную работу. Зачастую уведомление о закрытии стимулирует автора возобновить и завершить свою работу до конца. Чтобы закрыть пулреквест, оставьте комментарий `/close` в PR. @@ -71,8 +71,8 @@ weight: 30 [Члены](/ru/docs/contribute/participating/#члены) SIG Docs могут предлагать улучшения. -После того, как вы давно начали работать над документацией Kubernetes, у наверняка появились какие-нибудь идеи по улучшению [руководства по оформлению](/docs/contribute/style/style-guide/), [руководства по оформлению](/docs/contribute/style/content-guide/), набору инструментов, который используется для создания документации, стилизации сайта, процессов проверки и объединения пулреквестов. Для максимальной открытости подобные типы предложений по улучшению должны обсуждаться на встречи SIG Docs или в [списке рассылки kubernetes-sig-docs](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). -Помимо этого, это поможет разъяснить, как всё устроено в данный момент, и объяснить, почему так было принято, прежде чем предлагать радикальные изменения. Самый быстрый способ узнать ответы на вопросы о том, как в настоящее время работает документация, это задать их на канале `#sig-docs` Slack на [kubernetes.slack.com](https://kubernetes.slack.com). 
+Если вы давно начали работать над документацией Kubernetes, у вас наверняка появились какие-нибудь идеи по улучшению [руководства по оформлению](/docs/contribute/style/style-guide/), [руководства по содержанию](/docs/contribute/style/content-guide/), набора инструментов, который используется для создания документации, стилизации сайта, процессов проверки и объединения пулреквестов. Для максимальной открытости подобные типы предложений по улучшению должны обсуждаться на встречах SIG Docs или в [списке рассылки kubernetes-sig-docs](https://groups.google.com/forum/#!forum/kubernetes-sig-docs).
+Помимо этого, это поможет разъяснить, как всё устроено в данный момент, и объяснить, почему так было принято, прежде чем предлагать радикальные изменения. Самый быстрый способ узнать ответы на вопросы о том, как в настоящее время работает документация, это задать их в канале `#sig-docs` в [официальном Slack](https://kubernetes.slack.com).

 Когда обсуждение состоялось, а SIG-группа согласилась с желаемым результатом, вы можете работать над предлагаемыми изменениями наиболее приемлемым способом. Например, обновление руководства по оформлению или функциональности сайта может включать открытие пулреквеста, а изменение, связанное с тестированием документации, может предполагать взаимодействие с sig-testing.

@@ -84,11 +84,11 @@ weight: 30

 Представитель SIG Docs для данного выпуска координирует следующие задачи:

-- Мониторинг электронной таблицы с отслеживанием функциональности на наличие новых или измененных возможностей, затрагивают документацию. Если документация для определенной функциональности не будет готова к выпуску, возможно, она не попадет в выпуск.
-- Регулярное посещение встречи sig-release и обновлять информацию о статусе документации в выпуске.
+- Мониторинг электронной таблицы с отслеживанием функциональности на наличие новых или измененных возможностей, затрагивающих документацию. Если документация для определенной функциональности не будет готова к выпуску, возможно, она не попадет в выпуск.
+- Регулярное посещение встречи sig-release и обновление информации о статусе документации к выпуску.
 - Проверка и вычитка документации по функциональности, подготовленной SIG-группой, ответственной за реализацию этой функциональности.
 - Объединение связанных с выпуском пулреквестов и поддержка Git-ветки выпуска.
-- Консультируйте других участников SIG Docs, которые хотят научиться выполнять эту роль в будущем. Это называется сопровождение (shadowing).
+- Консультирование других участников SIG Docs, которые хотят научиться выполнять эту роль в будущем. Это называется сопровождение (shadowing).
 - Публикация изменений в документации, связанные с выпуском при размещении артефактов.

 Координация выпуска обычно занимает 3-4 месяца, а обязанности распределяются между утверждающими SIG Docs.

@@ -101,10 +101,10 @@ weight: 30

 Обязанности амбассадоров новых участников включают в себя:

-- Отвечать на вопросы новых участников на [Slack-канале Kubernetes #sig-docs](https://kubernetes.slack.com).
+- Отвечать на вопросы новых участников в [Slack-канале Kubernetes #sig-docs](https://kubernetes.slack.com).
 - Совместно работать с дежурным по PR, чтобы определять заявки, которые подойдут для решения новыми участниками.
 - Консультировать новых участников в их PR.
-- Помогать новых участникам в создании более сложных PR, чтобы они могли стать членами Kubernetes.
+- Помогать новым участникам в создании более сложных PR, чтобы они могли стать членами Kubernetes.
- [Оказывать содействие участникам](/ru/docs/contribute/advanced/#поддержка-нового-участника) на их пути становления членом в Kubernetes. Текущие амбассадоры новых участников объявляются на каждом собрании SIG Docs и на канале [#sig-docs в Kubernetes](https://kubernetes.slack.com). @@ -115,7 +115,7 @@ weight: 30 Если участник сделал 5 значительных пулреквестов в один или несколько репозиториев Kubernetes, он имеет право на [членство](/ru/docs/contribute/participating#члены) в организации Kubernetes. Членство участника должно быть поддержано двумя спонсорами, которые уже являются рецензентами. -Новые участники документации могут найти спонсоров в канале #sig-docs в [в Slack Kubernetes](https://kubernetes.slack.com) или в [списке рассылки SIG Docs](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). Если вы осознали полезность работы автора заявки на членство, вы добровольно можете поддержать (спонсировать) его. Когда они подадут заявку на членство, отреагируйте на заявку "+1" и напишите подробный комментарий о том, почему вы считаете, что кандидат отлично вписывается в члены организации Kubernetes. +Новые участники документации могут найти спонсоров в канале #sig-docs [в Slack Kubernetes](https://kubernetes.slack.com) или в [списке рассылки SIG Docs](https://groups.google.com/forum/#!forum/kubernetes-sig-docs). Если вы осознали полезность работы автора заявки на членство, вы добровольно можете поддержать (спонсировать) его. Когда они подадут заявку на членство, отреагируйте на заявку "+1" и напишите подробный комментарий о том, почему вы считаете, что кандидат отлично вписывается в члены организации Kubernetes. ## Сопредседатель SIG @@ -125,9 +125,9 @@ weight: 30 Сопредседатели должны соответствовать следующим требованиям: -- Быть утверждающим SIG Docs не меньше 6 месяцев -- [Руководить выпуском документации Kubernetes](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) или сопровождать два выпуска -- Понимание рабочих процессов и инструментов SIG Docs: git, Hugo, локализация, блог +- Быть утверждающими SIG Docs не меньше 6 месяцев. +- [Руководить выпуском документации Kubernetes](/docs/contribute/advanced/#coordinate-docs-for-a-kubernetes-release) или сопроводить два выпуска. +- Понимать рабочие процессы и инструменты SIG Docs: git, Hugo, локализация, блог. - Понимать, как другие SIG-группы и репозитории Kubernetes влияют на рабочий процесс SIG Docs, включая: [команды в k/org](https://github.com/kubernetes/org/blob/master/config/kubernetes/sig-docs/teams.yaml), [процессы в k/community](https://github.com/kubernetes/community/tree/master/sig-docs), плагины в [k/test-infra](https://github.com/kubernetes/test-infra/) и роль [SIG Architecture](https://github.com/kubernetes/community/tree/master/sig-architecture). - Уделять не менее 5 часов в неделю (но зачастую больше) в течение как минимум 6 месяцев для выполнения обязанностей. 
@@ -137,13 +137,13 @@ weight: 30

 Обязанности включают в себя:

-- Сосредоточить группу SIG Docs на достижении максимального счастья для разработчиков через отличную документацию
-- Быть примером соблюдения [норм поведения сообщества]https://github.com/cncf/foundation/blob/master/code-of-conduct.md) и контролировать их выполнение членами SIG
-- Изучение и внедрение передовых практик для SIG-группы, обновляя рекомендации по участию
-- Планирование и проведение встреч SIG: еженедельные обновления информации, ежеквартальные ретроспективные/плановые совещания и многое другое
-- Планирование и проведение спринтов по документации на мероприятиях KubeCon и других конференциях
+- Сосредоточить группу SIG Docs на достижении максимального счастья для разработчиков через отличную документацию.
+- Быть примером соблюдения [норм поведения сообщества](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) и контролировать их выполнение членами SIG.
+- Изучать и внедрять передовые практики для SIG-группы, обновляя рекомендации по участию.
+- Планировать и проводить встречи SIG: еженедельные обновления информации, ежеквартальные ретроспективные/плановые совещания и многое другое.
+- Планировать и проводить спринты по документации на мероприятиях KubeCon и других конференциях.
 - Набирать персонал и выступать в поддержку {{< glossary_tooltip text="CNCF" term_id="cncf" >}} и его платиновых партнеров, включая Google, Oracle, Azure, IBM и Huawei.
-- Поддерживать нормальную работу SIG
+- Поддерживать нормальную работу SIG.

 ### Проведение продуктивных встреч

@@ -155,33 +155,33 @@ weight: 30

 **Сформулируйте четкую повестку дня**:

-- Определите конкретную цель встречи
-- Опубликуйте программу дня заранее
+- Определите конкретную цель встречи.
+- Опубликуйте программу дня заранее.

 Для еженедельных встреч скопируйте примечания из предыдущей недели в раздел "Past meetings".

 **Работайте вместе для создания точных примечаний**:

-- Запишите обсуждение встречи
-- Подумайте над тем, чтобы делегировать роль стенографист кому-нибудь другому
+- Запишите обсуждение встречи.
+- Подумайте над тем, чтобы делегировать роль стенографиста кому-нибудь другому.

 **Определяйте решения по пунктам повестки четко и точно**:

-- Записывайте решения по пунктам, кто будет ими заниматься и ожидаемую дату завершения
+- Записывайте решения по пунктам, кто будет ими заниматься и ожидаемую дату завершения.

 **Руководите обсуждением, когда это необходимо**:

-- Если обсуждение выходит за пределы повестки дня, снова обратите внимание участников на обсуждаемую тему
-- Найдите место для различных стилей ведения обсуждения, не отвлекаясь от темы обсуждения и уважая время людей
+- Если обсуждение выходит за пределы повестки дня, снова обратите внимание участников на обсуждаемую тему.
+- Найдите место для различных стилей ведения обсуждения, не отвлекаясь от темы обсуждения и уважая время людей.

 **Уважайте время людей**:

-- Начинайте и заканчивайте встречи своевременно
+- Начинайте и заканчивайте встречи своевременно.

 **Используйте Zoom эффективно**:

-- Ознакомьтесь с [рекомендациями Zoom для Kubernetes](https://github.com/kubernetes/community/blob/master/communication/zoom-guidelines.md)
-- Попробуйте попроситься быть ведущим в самом начале встречи, введя ключ ведущего
+- Ознакомьтесь с [рекомендациями Zoom для Kubernetes](https://github.com/kubernetes/community/blob/master/communication/zoom-guidelines.md).
+- Попробуйте попроситься быть ведущим в самом начале встречи, введя ключ ведущего.
Исполнение роли ведущего в Zoom @@ -192,5 +192,3 @@ weight: 30 Если нужно остановить запись, нажмите на кнопку Stop. Запись автоматически загрузится на YouTube. - - From bf59d01af7fa73541a57af30cabc205f33c9bcb3 Mon Sep 17 00:00:00 2001 From: Mitesh Jain <47820816+miteshskj@users.noreply.github.com> Date: Sun, 27 Mar 2022 12:26:44 +0530 Subject: [PATCH 126/138] Fix broken link for attach labels in assign-pod-node.md --- content/en/docs/concepts/scheduling-eviction/assign-pod-node.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md index c24e8a8c94ccc..2e38eb6938abd 100644 --- a/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md +++ b/content/en/docs/concepts/scheduling-eviction/assign-pod-node.md @@ -33,7 +33,7 @@ specific Pods: ## Node labels {#built-in-node-labels} Like many other Kubernetes objects, nodes have -[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/confiure-pod-container/assign-pods-nodes/#add-a-label-to-a-node). +[labels](/docs/concepts/overview/working-with-objects/labels/). You can [attach labels manually](/docs/tasks/configure-pod-container/assign-pods-nodes/#add-a-label-to-a-node). Kubernetes also populates a standard set of labels on all nodes in a cluster. See [Well-Known Labels, Annotations and Taints](/docs/reference/labels-annotations-taints/) for a list of common node labels. From ed2ded76e3aafa3d383e71615024c0f528b79c0e Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Sun, 27 Mar 2022 15:19:12 +0800 Subject: [PATCH 127/138] [zh] modify busybox to busybox:1.28 in dir examples Signed-off-by: xin.li --- .../logging/two-files-counter-pod-agent-sidecar.yaml | 2 +- .../logging/two-files-counter-pod-streaming-sidecar.yaml | 6 +++--- .../zh/examples/admin/logging/two-files-counter-pod.yaml | 2 +- content/zh/examples/admin/resource/limit-range-pod-1.yaml | 8 ++++---- content/zh/examples/admin/resource/limit-range-pod-2.yaml | 8 ++++---- content/zh/examples/admin/resource/limit-range-pod-3.yaml | 2 +- content/zh/examples/application/job/cronjob.yaml | 2 +- content/zh/examples/application/job/job-tmpl.yaml | 2 +- content/zh/examples/debug/counter-pod.yaml | 2 +- content/zh/examples/pods/init-containers.yaml | 2 +- content/zh/examples/pods/inject/dependent-envars.yaml | 2 +- content/zh/examples/pods/security/hello-apparmor.yaml | 2 +- content/zh/examples/pods/security/security-context.yaml | 2 +- content/zh/examples/pods/share-process-namespace.yaml | 2 +- .../storage/projected-secret-downwardapi-configmap.yaml | 2 +- .../projected-secrets-nondefault-permission-mode.yaml | 2 +- .../pods/storage/projected-service-account-token.yaml | 2 +- content/zh/examples/pods/storage/projected.yaml | 2 +- .../zh/examples/service/networking/hostaliases-pod.yaml | 2 +- 19 files changed, 27 insertions(+), 27 deletions(-) diff --git a/content/zh/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/zh/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml index b37b616e6f7c7..ddfb8104cb946 100644 --- a/content/zh/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml +++ b/content/zh/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c diff --git 
a/content/zh/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml b/content/zh/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml index 87bd198cfdab7..6b7d1f120106d 100644 --- a/content/zh/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml +++ b/content/zh/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c @@ -22,13 +22,13 @@ spec: - name: varlog mountPath: /var/log - name: count-log-1 - image: busybox + image: busybox:1.28 args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log'] volumeMounts: - name: varlog mountPath: /var/log - name: count-log-2 - image: busybox + image: busybox:1.28 args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log'] volumeMounts: - name: varlog diff --git a/content/zh/examples/admin/logging/two-files-counter-pod.yaml b/content/zh/examples/admin/logging/two-files-counter-pod.yaml index 6ebeb717a1892..31bbed3cf8683 100644 --- a/content/zh/examples/admin/logging/two-files-counter-pod.yaml +++ b/content/zh/examples/admin/logging/two-files-counter-pod.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: - /bin/sh - -c diff --git a/content/zh/examples/admin/resource/limit-range-pod-1.yaml b/content/zh/examples/admin/resource/limit-range-pod-1.yaml index 0457792af94c4..b9bd20d06a2c7 100644 --- a/content/zh/examples/admin/resource/limit-range-pod-1.yaml +++ b/content/zh/examples/admin/resource/limit-range-pod-1.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: busybox-cnt01 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] resources: @@ -16,7 +16,7 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt02 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] resources: @@ -24,7 +24,7 @@ spec: memory: "100Mi" cpu: "100m" - name: busybox-cnt03 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] resources: @@ -32,6 +32,6 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt04 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff --git a/content/zh/examples/admin/resource/limit-range-pod-2.yaml b/content/zh/examples/admin/resource/limit-range-pod-2.yaml index efac440269c6f..40da19c1aee05 100644 --- a/content/zh/examples/admin/resource/limit-range-pod-2.yaml +++ b/content/zh/examples/admin/resource/limit-range-pod-2.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: busybox-cnt01 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt01; sleep 10;done"] resources: @@ -16,7 +16,7 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt02 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"] resources: @@ -24,7 +24,7 @@ spec: memory: "100Mi" cpu: "100m" - name: busybox-cnt03 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt03; sleep 10;done"] resources: @@ -32,6 +32,6 @@ spec: memory: "200Mi" cpu: "500m" - name: busybox-cnt04 - image: busybox + image: busybox:1.28 command: ["/bin/sh"] args: ["-c", "while true; do echo hello from cnt04; sleep 10;done"] diff 
--git a/content/zh/examples/admin/resource/limit-range-pod-3.yaml b/content/zh/examples/admin/resource/limit-range-pod-3.yaml index 8afdb6379cf61..503200a9662fc 100644 --- a/content/zh/examples/admin/resource/limit-range-pod-3.yaml +++ b/content/zh/examples/admin/resource/limit-range-pod-3.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: busybox-cnt01 - image: busybox + image: busybox:1.28 resources: limits: memory: "300Mi" diff --git a/content/zh/examples/application/job/cronjob.yaml b/content/zh/examples/application/job/cronjob.yaml index 9f06ca7bd6758..78d0e2d314792 100644 --- a/content/zh/examples/application/job/cronjob.yaml +++ b/content/zh/examples/application/job/cronjob.yaml @@ -10,7 +10,7 @@ spec: spec: containers: - name: hello - image: busybox + image: busybox:1.28 imagePullPolicy: IfNotPresent command: - /bin/sh diff --git a/content/zh/examples/application/job/job-tmpl.yaml b/content/zh/examples/application/job/job-tmpl.yaml index 790025b38b886..d7dbbafd62bc5 100644 --- a/content/zh/examples/application/job/job-tmpl.yaml +++ b/content/zh/examples/application/job/job-tmpl.yaml @@ -13,6 +13,6 @@ spec: spec: containers: - name: c - image: busybox + image: busybox:1.28 command: ["sh", "-c", "echo Processing item $ITEM && sleep 5"] restartPolicy: Never diff --git a/content/zh/examples/debug/counter-pod.yaml b/content/zh/examples/debug/counter-pod.yaml index f997886386258..a91b2f8915830 100644 --- a/content/zh/examples/debug/counter-pod.yaml +++ b/content/zh/examples/debug/counter-pod.yaml @@ -5,6 +5,6 @@ metadata: spec: containers: - name: count - image: busybox + image: busybox:1.28 args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] diff --git a/content/zh/examples/pods/init-containers.yaml b/content/zh/examples/pods/init-containers.yaml index 667b03eccd2b0..e55895d673f38 100644 --- a/content/zh/examples/pods/init-containers.yaml +++ b/content/zh/examples/pods/init-containers.yaml @@ -14,7 +14,7 @@ spec: # These containers are run during pod initialization initContainers: - name: install - image: busybox + image: busybox:1.28 command: - wget - "-O" diff --git a/content/zh/examples/pods/inject/dependent-envars.yaml b/content/zh/examples/pods/inject/dependent-envars.yaml index 2509c6f47b56d..67d07098baec6 100644 --- a/content/zh/examples/pods/inject/dependent-envars.yaml +++ b/content/zh/examples/pods/inject/dependent-envars.yaml @@ -10,7 +10,7 @@ spec: command: - sh - -c - image: busybox + image: busybox:1.28 env: - name: SERVICE_PORT value: "80" diff --git a/content/zh/examples/pods/security/hello-apparmor.yaml b/content/zh/examples/pods/security/hello-apparmor.yaml index 3e9b3b2a9c6be..000645f1c72c9 100644 --- a/content/zh/examples/pods/security/hello-apparmor.yaml +++ b/content/zh/examples/pods/security/hello-apparmor.yaml @@ -9,5 +9,5 @@ metadata: spec: containers: - name: hello - image: busybox + image: busybox:1.28 command: [ "sh", "-c", "echo 'Hello AppArmor!' 
&& sleep 1h" ] diff --git a/content/zh/examples/pods/security/security-context.yaml b/content/zh/examples/pods/security/security-context.yaml index 35cb1eeebe60a..7903c39c6467c 100644 --- a/content/zh/examples/pods/security/security-context.yaml +++ b/content/zh/examples/pods/security/security-context.yaml @@ -12,7 +12,7 @@ spec: emptyDir: {} containers: - name: sec-ctx-demo - image: busybox + image: busybox:1.28 command: [ "sh", "-c", "sleep 1h" ] volumeMounts: - name: sec-ctx-vol diff --git a/content/zh/examples/pods/share-process-namespace.yaml b/content/zh/examples/pods/share-process-namespace.yaml index af812732a247a..bd48bf0ff6e18 100644 --- a/content/zh/examples/pods/share-process-namespace.yaml +++ b/content/zh/examples/pods/share-process-namespace.yaml @@ -8,7 +8,7 @@ spec: - name: nginx image: nginx - name: shell - image: busybox + image: busybox:1.28 securityContext: capabilities: add: diff --git a/content/zh/examples/pods/storage/projected-secret-downwardapi-configmap.yaml b/content/zh/examples/pods/storage/projected-secret-downwardapi-configmap.yaml index 270db99dcd76b..453dc08c0c7d9 100644 --- a/content/zh/examples/pods/storage/projected-secret-downwardapi-configmap.yaml +++ b/content/zh/examples/pods/storage/projected-secret-downwardapi-configmap.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: container-test - image: busybox + image: busybox:1.28 volumeMounts: - name: all-in-one mountPath: "/projected-volume" diff --git a/content/zh/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml b/content/zh/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml index f69b43161ebf6..b921fd93c5833 100644 --- a/content/zh/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml +++ b/content/zh/examples/pods/storage/projected-secrets-nondefault-permission-mode.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: container-test - image: busybox + image: busybox:1.28 volumeMounts: - name: all-in-one mountPath: "/projected-volume" diff --git a/content/zh/examples/pods/storage/projected-service-account-token.yaml b/content/zh/examples/pods/storage/projected-service-account-token.yaml index 3ad06b5dc7d6e..cc307659a78ef 100644 --- a/content/zh/examples/pods/storage/projected-service-account-token.yaml +++ b/content/zh/examples/pods/storage/projected-service-account-token.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: container-test - image: busybox + image: busybox:1.28 volumeMounts: - name: token-vol mountPath: "/service-account" diff --git a/content/zh/examples/pods/storage/projected.yaml b/content/zh/examples/pods/storage/projected.yaml index 172ca0dee52de..4244048eb7558 100644 --- a/content/zh/examples/pods/storage/projected.yaml +++ b/content/zh/examples/pods/storage/projected.yaml @@ -5,7 +5,7 @@ metadata: spec: containers: - name: test-projected-volume - image: busybox + image: busybox:1.28 args: - sleep - "86400" diff --git a/content/zh/examples/service/networking/hostaliases-pod.yaml b/content/zh/examples/service/networking/hostaliases-pod.yaml index 643813b34a13d..268bffbbf5894 100644 --- a/content/zh/examples/service/networking/hostaliases-pod.yaml +++ b/content/zh/examples/service/networking/hostaliases-pod.yaml @@ -15,7 +15,7 @@ spec: - "bar.remote" containers: - name: cat-hosts - image: busybox + image: busybox:1.28 command: - cat args: From 9a49299b4f15e54e0957c0681f86abd161a492de Mon Sep 17 00:00:00 2001 From: Ilya Z Date: Sun, 27 Mar 2022 12:20:37 +0400 Subject: [PATCH 128/138] cheatsheet_fixes --- 
 content/ru/docs/reference/kubectl/cheatsheet.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/content/ru/docs/reference/kubectl/cheatsheet.md b/content/ru/docs/reference/kubectl/cheatsheet.md
index 02a8a9bc4af1f..8c4721b4fcedd 100644
--- a/content/ru/docs/reference/kubectl/cheatsheet.md
+++ b/content/ru/docs/reference/kubectl/cheatsheet.md
@@ -235,7 +235,7 @@ kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl
 kubectl label pods my-pod new-label=awesome                      # Добавить метку
 kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq       # Добавить аннотацию
-kubectl autoscale deployment foo --min=2 --max=10                # Автоматически промасштабировать развёртывание "foo"
+kubectl autoscale deployment foo --min=2 --max=10                # Автоматически масштабировать развёртывание "foo" в диапазоне от 2 до 10 подов
 ```

 ## Обновление ресурсов

@@ -269,10 +269,10 @@ KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Использовать

 ## Масштабирование ресурсов

 ```bash
-kubectl scale --replicas=3 rs/foo                                 # Промасштабировать набор реплик (replicaset) 'foo' до 3
-kubectl scale --replicas=3 -f foo.yaml                            # Промасштабировать ресурс в "foo.yaml" до 3
-kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # Если количество реплик в развёртывании mysql равен 2, промасштабировать его до 3
-kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Промасштабировать несколько контроллеров репликации
+kubectl scale --replicas=3 rs/foo                                 # Масштабирование набора реплик (replicaset) 'foo' до 3
+kubectl scale --replicas=3 -f foo.yaml                            # Масштабирование ресурса в "foo.yaml" до 3
+kubectl scale --current-replicas=2 --replicas=3 deployment/mysql  # Если количество реплик в развёртывании mysql равно 2, масштабировать его до 3
+kubectl scale --replicas=5 rc/foo rc/bar rc/baz                   # Масштабирование нескольких контроллеров репликации до 5
 ```

 ## Удаление ресурсов

@@ -362,7 +362,7 @@ kubectl api-resources --api-group=extensions # Все ресурсы в API-гр

 ### Уровни детальности вывода и отладки в Kubectl

-Уровни детальности вывода Kubectl регулируются с помощью флагов `-v` или `--v`, за которыми следует целое число, представляющее уровни логирования. Общие соглашения по логированиия Kubernetes и связанные с ними уровни описаны [здесь](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
+Уровни детальности вывода Kubectl регулируются с помощью флагов `-v` или `--v`, за которыми следует целое число, представляющее уровни логирования. Общие соглашения по логированию Kubernetes и связанные с ними уровни описаны [здесь](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
 Уровень детальности | Описание

From f01be79f66100ad40eca6490b3f7a90ba78570f6 Mon Sep 17 00:00:00 2001
From: Ilya Z
Date: Sun, 27 Mar 2022 12:47:02 +0400
Subject: [PATCH 129/138] kubectl_docs_fixes

---
 .../kubectl/docker-cli-to-kubectl.md          |  2 +-
 content/ru/docs/reference/kubectl/jsonpath.md |  2 +-
 content/ru/docs/reference/kubectl/kubectl.md  | 94 +++++++++----------
 3 files changed, 49 insertions(+), 49 deletions(-)

diff --git a/content/ru/docs/reference/kubectl/docker-cli-to-kubectl.md b/content/ru/docs/reference/kubectl/docker-cli-to-kubectl.md
index 4d834098c6c05..c742e1fba430f 100644
--- a/content/ru/docs/reference/kubectl/docker-cli-to-kubectl.md
+++ b/content/ru/docs/reference/kubectl/docker-cli-to-kubectl.md
@@ -70,7 +70,7 @@ kubectl run [-i] [--tty] --attach --image=
 ```

 В отличие от `docker run ...`, если вы укажете `--attach`, то присоедините `stdin`, `stdout` and `stderr`. Нельзя проконтролировать, какие потоки прикрепляются (`docker -a ...`).
-Чтобы отсоединиться от контейнера воспользуетесь комбинацией клавиш Ctrl+P, а затем Ctrl+Q.
+Чтобы отсоединиться от контейнера, воспользуйтесь комбинацией клавиш Ctrl+P, а затем Ctrl+Q.

 Так как команда kubectl run запускает развёртывание для контейнера, то оно начнет перезапускаться, если завершить прикрепленный процесс по нажатию Ctrl+C, в отличие от команды `docker run -it`.
 Для удаления объекта Deployment вместе с подами, необходимо выполнить команду `kubectl delete deployment `.

diff --git a/content/ru/docs/reference/kubectl/jsonpath.md b/content/ru/docs/reference/kubectl/jsonpath.md
index d6bd9b5c484c4..231d3a0dc9ce2 100644
--- a/content/ru/docs/reference/kubectl/jsonpath.md
+++ b/content/ru/docs/reference/kubectl/jsonpath.md
@@ -90,7 +90,7 @@ kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.st
 ```

 {{< note >}}
-В Windows нужно заключить в _двойные_ кавычки JSONPath-шаблон, который содержит пробелы (не в одинарные, как в примерах выше для bash). Таким образом, любые литералы в таких шаблонов нужно оборачивать в одинарные кавычки или экранированные двойные кавычки. Например:
+В Windows нужно заключить в _двойные_ кавычки JSONPath-шаблон, который содержит пробелы (не в одинарные, как в примерах выше для bash). Таким образом, любые литералы в таких шаблонах нужно оборачивать в одинарные кавычки или экранированные двойные кавычки. Например:

 ```cmd
 kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}"

diff --git a/content/ru/docs/reference/kubectl/kubectl.md b/content/ru/docs/reference/kubectl/kubectl.md
index ce01d0b8abbcf..405522525dea6 100644
--- a/content/ru/docs/reference/kubectl/kubectl.md
+++ b/content/ru/docs/reference/kubectl/kubectl.md
@@ -158,14 +158,14 @@ kubectl [flags]

 --default-not-ready-toleration-seconds int     По умолчанию: 300

-Указывает tolerationSeconds для допущения notReady:NoExecute, которое по умолчанию добавляется к каждому поду, у которого нет установлено такое допущение.
+Указывает tolerationSeconds для допущения notReady:NoExecute, которое по умолчанию добавляется к каждому поду, у которого не установлено такое допущение.

 --default-unreachable-toleration-seconds int     По умолчанию: 300

-Указывает tolerationSeconds для допущения unreachable:NoExecute, которое по умолчанию добавляется к каждому поду, у которого нет установлено такое допущение.
+Указывает tolerationSeconds для допущения unreachable:NoExecute, которое по умолчанию добавляется к каждому поду, у которого не установлено такое допущение.
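The two flags above govern the default NoExecute tolerations that the control plane injects into pods. As a quick, hedged check — assuming a pod named `mypod` already exists under the default settings — the JSONPath form documented in this same patch can list them:

```shell
# List key, effect and tolerationSeconds for each toleration on a pod.
# With the default flag values, pods that declare no such tolerations get
# node.kubernetes.io/not-ready and node.kubernetes.io/unreachable
# (both NoExecute, 300 seconds) added automatically.
kubectl get pod mypod -o jsonpath='{range .spec.tolerations[*]}{.key}{"\t"}{.effect}{"\t"}{.tolerationSeconds}{"\n"}{end}'
```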
@@ -221,7 +221,7 @@ kubectl [flags] --docker-tls-cert string     По умолчанию: "cert.pem" -путь к клиентскому сертификату +Путь к клиентскому сертификату @@ -277,7 +277,7 @@ kubectl [flags] --insecure-skip-tls-verify -Если true, значит сертификат сервера не будет проверятся на достоверность. Это сделает подключения через HTTPS небезопасными. +Если true, значит сертификат сервера не будет проверяться на достоверность. Это сделает подключения через HTTPS небезопасными. @@ -333,7 +333,7 @@ kubectl [flags] --logtostderr     По умолчанию: true -Логировать в стандартный поток ошибок вместо сохранения логов в файлы +Вывод логов в стандартный поток ошибок вместо сохранения их в файлы @@ -521,48 +521,48 @@ kubectl [flags] ## {{% heading "seealso" %}} -* [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Обновить аннотации ресурса -* [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - Вывести доступные API-ресурсы на сервере -* [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - Вывести доступные API-версии на сервере в виде "group/version". -* [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) - Внести изменения в конфигурацию ресурса из файла или потока stdin. -* [kubectl attach](/docs/reference/generated/kubectl/kubectl-commands#attach) - Присоединиться к запущенному контейнеру -* [kubectl auth](/docs/reference/generated/kubectl/kubectl-commands#auth) - Проверить разрешение на выполнение определённых действий -* [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale) - Автоматически промасштабировать Deployment, ReplicaSet или ReplicationController -* [kubectl certificate](/docs/reference/generated/kubectl/kubectl-commands#certificate) - Изменить сертификаты ресурсов. -* [kubectl cluster-info](/docs/reference/generated/kubectl/kubectl-commands#cluster-info) - Показать информацию по кластеру -* [kubectl completion](/docs/reference/generated/kubectl/kubectl-commands#completion) - Вывод кода автодополнения указанной командной оболочки (bash или zsh) -* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) - Изменить файлы kubeconfig -* [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) - Конвертировать конфигурационные файлы в различные API-версии -* [kubectl cordon](/docs/reference/generated/kubectl/kubectl-commands#cordon) - Отметить узел как неназначаемый -* [kubectl cp](/docs/reference/generated/kubectl/kubectl-commands#cp) - Копировать файлы и директории в/из контейнеров. -* [kubectl create](/docs/reference/generated/kubectl/kubectl-commands#create) - Создать ресурс из файла или потока stdin. 
-* [kubectl delete](/docs/reference/generated/kubectl/kubectl-commands#delete) - Удалить ресурсы из файла, потока stdin, либо с помощью селекторов меток, имен, селекторов ресурсов или ресурсов -* [kubectl describe](/docs/reference/generated/kubectl/kubectl-commands#describe) - Показать подробную информацию о конкретном ресурсе или группе ресурсов -* [kubectl diff](/docs/reference/generated/kubectl/kubectl-commands#diff) - Сравнить действующую версию с новой (применяемой) -* [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands#drain) - Вытеснить узел для подготовки к эксплуатации -* [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands#edit) - Отредактировать ресурс на сервере -* [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands#exec) - Выполнить команду в контейнере -* [kubectl explain](/docs/reference/generated/kubectl/kubectl-commands#explain) - Получить документацию ресурсов -* [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands#expose) - Создать новый сервис Kubernetes из контроллера репликации, сервиса, развёртывания или пода -* [kubectl get](/docs/reference/generated/kubectl/kubectl-commands#get) - Вывести один или несколько ресурсов -* [kubectl kustomize](/docs/reference/generated/kubectl/kubectl-commands#kustomize) - Собрать ресурсы kustomization из директории или URL-адреса. -* [kubectl label](/docs/reference/generated/kubectl/kubectl-commands#label) - Обновить метки ресурса -* [kubectl logs](/docs/reference/generated/kubectl/kubectl-commands#logs) - Вывести логи контейнера в поде -* [kubectl options](/docs/reference/generated/kubectl/kubectl-commands#options) - Вывести список флагов, применяемых ко всем командам -* [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands#patch) - Обновить один или несколько полей ресурса, используя стратегию слияния патча -* [kubectl plugin](/docs/reference/generated/kubectl/kubectl-commands#plugin) - Команда для работы с плагинами. -* [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands#port-forward) - Переадресовать один или несколько локальных портов в под -* [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands#proxy) - Запустить прокси на API-сервер Kubernetes -* [kubectl replace](/docs/reference/generated/kubectl/kubectl-commands#replace) - Заменить ресурс из определения в файле или потоке stdin. -* [kubectl rollout](/docs/reference/generated/kubectl/kubectl-commands#rollout) - Управление плавающим обновлением ресурса -* [kubectl run](/docs/reference/generated/kubectl/kubectl-commands#run) - Запустить указанный образ в кластере -* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Задать новый размер для Deployment, ReplicaSet или Replication Controller. 
-* [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - Конфигурировать ресурсы в объектах -* [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - Обновить ограничения для одного или нескольких узлов -* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - Показать информацию по использованию системных ресурсов (процессор, память, диск) -* [kubectl uncordon](/docs/reference/generated/kubectl/kubectl-commands#uncordon) - Отметить узел как назначаемый -* [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - Вывести информацию о версии клиента и сервера -* [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - Экспериментально: ожидать выполнения определенного условия в одном или нескольких ресурсах. +* [kubectl annotate](/docs/reference/generated/kubectl/kubectl-commands#annotate) - Обновить аннотации ресурса. +* [kubectl api-resources](/docs/reference/generated/kubectl/kubectl-commands#api-resources) - Вывести доступные API-ресурсы на сервере. +* [kubectl api-versions](/docs/reference/generated/kubectl/kubectl-commands#api-versions) - Вывести доступные API-версии на сервере в виде "group/version". +* [kubectl apply](/docs/reference/generated/kubectl/kubectl-commands#apply) - Внести изменения в конфигурацию ресурса из файла или потока stdin. +* [kubectl attach](/docs/reference/generated/kubectl/kubectl-commands#attach) - Присоединиться к запущенному контейнеру. +* [kubectl auth](/docs/reference/generated/kubectl/kubectl-commands#auth) - Проверить разрешение на выполнение определённых действий. +* [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands#autoscale) - Автоматически масштабировать Deployment, ReplicaSet или ReplicationController. +* [kubectl certificate](/docs/reference/generated/kubectl/kubectl-commands#certificate) - Изменить сертификаты ресурсов. +* [kubectl cluster-info](/docs/reference/generated/kubectl/kubectl-commands#cluster-info) - Показать информацию по кластеру. +* [kubectl completion](/docs/reference/generated/kubectl/kubectl-commands#completion) - Вывод кода автодополнения указанной командной оболочки (bash или zsh). +* [kubectl config](/docs/reference/generated/kubectl/kubectl-commands#config) - Изменить файлы kubeconfig. +* [kubectl convert](/docs/reference/generated/kubectl/kubectl-commands#convert) - Конвертировать конфигурационные файлы в различные API-версии. +* [kubectl cordon](/docs/reference/generated/kubectl/kubectl-commands#cordon) - Отметить узел как неназначаемый. +* [kubectl cp](/docs/reference/generated/kubectl/kubectl-commands#cp) - Копировать файлы и директории в/из контейнеров. +* [kubectl create](/docs/reference/generated/kubectl/kubectl-commands#create) - Создать ресурс из файла или потока stdin. +* [kubectl delete](/docs/reference/generated/kubectl/kubectl-commands#delete) - Удалить ресурсы из файла, потока stdin, либо с помощью селекторов меток, имен, селекторов ресурсов или ресурсов. +* [kubectl describe](/docs/reference/generated/kubectl/kubectl-commands#describe) - Показать подробную информацию о конкретном ресурсе или группе ресурсов. +* [kubectl diff](/docs/reference/generated/kubectl/kubectl-commands#diff) - Сравнить действующую версию с новой (применяемой). +* [kubectl drain](/docs/reference/generated/kubectl/kubectl-commands#drain) - Вытеснить узел для подготовки к эксплуатации. +* [kubectl edit](/docs/reference/generated/kubectl/kubectl-commands#edit) - Отредактировать ресурс на сервере. 
+* [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands#exec) - Выполнить команду в контейнере. +* [kubectl explain](/docs/reference/generated/kubectl/kubectl-commands#explain) - Получить документацию ресурсов. +* [kubectl expose](/docs/reference/generated/kubectl/kubectl-commands#expose) - Создать новый сервис Kubernetes из контроллера репликации, сервиса, развёртывания или пода. +* [kubectl get](/docs/reference/generated/kubectl/kubectl-commands#get) - Вывести один или несколько ресурсов. +* [kubectl kustomize](/docs/reference/generated/kubectl/kubectl-commands#kustomize) - Собрать ресурсы kustomization из директории или URL-адреса. +* [kubectl label](/docs/reference/generated/kubectl/kubectl-commands#label) - Обновить метки ресурса. +* [kubectl logs](/docs/reference/generated/kubectl/kubectl-commands#logs) - Вывести логи контейнера в поде. +* [kubectl options](/docs/reference/generated/kubectl/kubectl-commands#options) - Вывести список флагов, применяемых ко всем командам. +* [kubectl patch](/docs/reference/generated/kubectl/kubectl-commands#patch) - Обновить один или несколько полей ресурса, используя стратегию слияния патча. +* [kubectl plugin](/docs/reference/generated/kubectl/kubectl-commands#plugin) - Команда для работы с плагинами. +* [kubectl port-forward](/docs/reference/generated/kubectl/kubectl-commands#port-forward) - Переадресовать один или несколько локальных портов в под. +* [kubectl proxy](/docs/reference/generated/kubectl/kubectl-commands#proxy) - Запустить прокси на API-сервер Kubernetes. +* [kubectl replace](/docs/reference/generated/kubectl/kubectl-commands#replace) - Заменить ресурс из определения в файле или потоке stdin. +* [kubectl rollout](/docs/reference/generated/kubectl/kubectl-commands#rollout) - Управление плавающим обновлением ресурса. +* [kubectl run](/docs/reference/generated/kubectl/kubectl-commands#run) - Запустить указанный образ в кластере. +* [kubectl scale](/docs/reference/generated/kubectl/kubectl-commands#scale) - Задать новый размер для Deployment, ReplicaSet или Replication Controller. +* [kubectl set](/docs/reference/generated/kubectl/kubectl-commands#set) - Конфигурировать ресурсы в объектах. +* [kubectl taint](/docs/reference/generated/kubectl/kubectl-commands#taint) - Обновить ограничения для одного или нескольких узлов. +* [kubectl top](/docs/reference/generated/kubectl/kubectl-commands#top) - Показать информацию по использованию системных ресурсов (процессор, память, диск). +* [kubectl uncordon](/docs/reference/generated/kubectl/kubectl-commands#uncordon) - Отметить узел как назначаемый. +* [kubectl version](/docs/reference/generated/kubectl/kubectl-commands#version) - Вывести информацию о версии клиента и сервера. +* [kubectl wait](/docs/reference/generated/kubectl/kubectl-commands#wait) - Экспериментально: ожидать выполнения определенного условия в одном или нескольких ресурсах. 
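To put a few of the commands from this reference together, here is a minimal shell sketch; the Deployment name `foo` and the image are illustrative, echoing the cheatsheet examples rather than any real workload:

```shell
# Create a Deployment to act on, then resize it two ways.
kubectl create deployment foo --image=nginx

kubectl scale deployment/foo --replicas=3          # fixed replica count
kubectl autoscale deployment/foo --min=2 --max=10  # HPA-managed count

# Inspect the results.
kubectl get deployment foo
kubectl get hpa foo
```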
From f01be79f66100ad40eca6490b3f7a90ba78570f6 Mon Sep 17 00:00:00 2001 From: Qiming Teng Date: Sun, 27 Mar 2022 19:43:59 +0800 Subject: [PATCH 130/138] [zh] Resync pod topology spread constraints page --- .../pods/pod-topology-spread-constraints.md | 140 ++++++++++-------- 1 file changed, 76 insertions(+), 64 deletions(-) diff --git a/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md b/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md index 5ac4fed19226d..77e2127443f54 100644 --- a/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md +++ b/content/zh/docs/concepts/workloads/pods/pod-topology-spread-constraints.md @@ -75,7 +75,7 @@ node4 Ready 2m43s v1.16.0 node=node4,zone=zoneB -然后从逻辑上看集群如下: +那么,从逻辑上看集群如下: {{}} graph TB @@ -96,11 +96,9 @@ graph TB {{< /mermaid >}} -你可以复用在大多数集群上自动创建和填充的 -[常用标签](/zh/docs/reference/labels-annotations-taints/), +你可以复用在大多数集群上自动创建和填充的[常用标签](/zh/docs/reference/labels-annotations-taints/), 而不是手动添加标签。 +当 Pod 定义了不止一个 `topologySpreadConstraint`,这些约束之间是逻辑与的关系。 +kube-scheduler 会为新的 Pod 寻找一个能够满足所有约束的节点。 + @@ -353,7 +357,6 @@ graph BT class zoneA,zoneB cluster; {{< /mermaid >}} - @@ -374,54 +377,59 @@ The scheduler will skip the non-matching nodes from the skew calculations if the --> ### 节点亲和性与节点选择器的相互作用 {#interaction-with-node-affinity-and-node-selectors} -如果 Pod 定义了 `spec.nodeSelector` 或 `spec.affinity.nodeAffinity`,调度器将从倾斜计算中跳过不匹配的节点。 +如果 Pod 定义了 `spec.nodeSelector` 或 `spec.affinity.nodeAffinity`, +调度器将在偏差计算中跳过不匹配的节点。 - 假设你有一个跨越 zoneA 到 zoneC 的 5 节点集群: +### 示例:TopologySpreadConstraints 与 NodeAffinity - {{}} - graph BT - subgraph "zoneB" - p3(Pod) --> n3(Node3) - n4(Node4) - end - subgraph "zoneA" - p1(Pod) --> n1(Node1) - p2(Pod) --> n2(Node2) - end +假设你有一个跨越 zoneA 到 zoneC 的 5 节点集群: - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n1,n2,n3,n4,p1,p2,p3 k8s; - class p4 plain; - class zoneA,zoneB cluster; - {{< /mermaid >}} +{{}} +graph BT + subgraph "zoneB" + p3(Pod) --> n3(Node3) + n4(Node4) + end + subgraph "zoneA" + p1(Pod) --> n1(Node1) + p2(Pod) --> n2(Node2) + end - {{}} - graph BT - subgraph "zoneC" - n5(Node5) - end +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class n1,n2,n3,n4,p1,p2,p3 k8s; +class p4 plain; +class zoneA,zoneB cluster; +{{< /mermaid >}} + +{{}} +graph BT + subgraph "zoneC" + n5(Node5) + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class n5 k8s; +class zoneC cluster; +{{< /mermaid >}} - classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; - classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; - classDef cluster fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; - class n5 k8s; - class zoneC cluster; - {{< /mermaid >}} - 而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 yaml, - 以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector` - 也要一样处理。 + +而且你知道 "zoneC" 必须被排除在外。在这种情况下,可以按如下方式编写 YAML, +以便将 "mypod" 放置在 "zoneB" 上,而不是 "zoneC" 上。同样,`spec.nodeSelector` +也要一样处理。 - {{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} 
+{{< codenew file="pods/topology-spread-constraints/one-constraint-with-nodeaffinity.yaml" >}} 此外,原来用于提供等同行为的 `SelectorSpread` 插件也会被禁用。 +{{< note >}} + +对于分布约束中所指定的拓扑键而言,`PodTopologySpread` 插件不会为不包含这些主键的节点评分。 +这可能导致在使用默认拓扑约束时,其行为与原来的 `SelectorSpread` 插件的默认行为不同, + -{{< note >}} 如果你的节点不会 **同时** 设置 `kubernetes.io/hostname` 和 `topology.kubernetes.io/zone` 标签,你应该定义自己的约束而不是使用 Kubernetes 的默认约束。 - -插件 `PodTopologySpread` 不会为未设置分布约束中所给拓扑键的节点评分。 {{< /note >}} -如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List` 和 -将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束。 +如果你不想为集群使用默认的 Pod 分布约束,你可以通过设置 `defaultingType` 参数为 `List` +并将 `PodTopologySpread` 插件配置中的 `defaultConstraints` 参数置空来禁用默认 Pod 分布约束。 ```yaml -apiVersion: kubescheduler.config.k8s.io/v1beta1 +apiVersion: kubescheduler.config.k8s.io/v1beta3 kind: KubeSchedulerConfiguration profiles: @@ -613,9 +625,9 @@ scheduled - more packed or more scattered. - 对于 `PodAffinity`,你可以尝试将任意数量的 Pod 集中到符合条件的拓扑域中。 - 对于 `PodAntiAffinity`,只能将一个 Pod 调度到某个拓扑域中。 @@ -627,12 +639,6 @@ cost-saving. This can also help on rolling update workloads and scaling out replicas smoothly. See [Motivation](https://github.com/kubernetes/enhancements/tree/master/keps/sig-scheduling/895-pod-topology-spread#motivation) for more details. - - -The "EvenPodsSpread" feature provides flexible options to distribute Pods evenly across different -topology domains - to achieve high availability or cost-saving. This can also help on rolling update -workloads and scaling out replicas smoothly. -See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/20190221-pod-topology-spread.md#motivation) for more details. --> 要实现更细粒度的控制,你可以设置拓扑分布约束来将 Pod 分布到不同的拓扑域下, 从而实现高可用性或节省成本。这也有助于工作负载的滚动更新和平稳地扩展副本规模。 @@ -642,13 +648,19 @@ See [Motivation](https://github.com/kubernetes/enhancements/blob/master/keps/sig ## 已知局限性 -- Deployment 缩容操作可能导致 Pod 分布不平衡。 -- 具有污点的节点上的 Pods 也会被统计。 +- 当 Pod 被移除时,无法保证约束仍被满足。例如,缩减某 Deployment 的规模时, + Pod 的分布可能不再均衡。 + 你可以使用 [Descheduler](https://github.com/kubernetes-sigs/descheduler) + 来重新实现 Pod 分布的均衡。 + +- 具有污点的节点上匹配的 Pods 也会被统计。 参考 [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921)。 ## {{% heading "whatsnext" %}} From 27ba7df7b2491829a55cb7c0b01ca10fc3093d73 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Sun, 27 Mar 2022 20:58:47 +0800 Subject: [PATCH 131/138] [zh] Update update-daemon-set.md Signed-off-by: xin.li --- content/zh/docs/tasks/manage-daemon/update-daemon-set.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/zh/docs/tasks/manage-daemon/update-daemon-set.md b/content/zh/docs/tasks/manage-daemon/update-daemon-set.md index f36583c90c52e..ee8bc83eaf000 100644 --- a/content/zh/docs/tasks/manage-daemon/update-daemon-set.md +++ b/content/zh/docs/tasks/manage-daemon/update-daemon-set.md @@ -62,17 +62,17 @@ To enable the rolling update feature of a DaemonSet, you must set its 你可能想设置 -[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/zh/docs/concepts/workloads/controllers/deployment/#max-unavailable) (默认为 1), -[`.spec.minReadySeconds`](/zh/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (默认为 0) 和 +[`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/zh/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec) (默认为 1), +[`.spec.minReadySeconds`](/zh/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec) (默认为 0) 和 
[`.spec.updateStrategy.rollingUpdate.maxSurge`](/zh/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/#DaemonSetSpec) (一种 Beta 阶段的特性,默认为 0)。 From 10ff885b1ca5f554a89498b8799c2a1df0c99e1a Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Sun, 27 Mar 2022 21:02:03 +0800 Subject: [PATCH 132/138] [zh] Update update-api-object-kubectl-patch.md Signed-off-by: xin.li --- .../update-api-object-kubectl-patch.md | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/content/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md b/content/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md index a06cfa828b59c..a5c7cd9452357 100644 --- a/content/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md +++ b/content/zh/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch.md @@ -283,13 +283,11 @@ containers: name: patch-demo-ctr ... ``` -```shell - - +```yaml tolerations: - - effect: NoSchedule - key: disktype - value: ssd + - effect: NoSchedule + key: disktype + value: ssd ``` ## 创建 PDB 对象 -你可以通过类似 `kubectl apply -f mypdb.yaml` 的命令来创建 PDB。 +你可以使用 kubectl 创建或更新 PDB 对象。 +```shell +kubectl apply -f mypdb.yaml +``` + + +**作者和采访者:** +[Anubhav Vardhan](https://github.com/anubha-v-ardhan), +[Atharva Shinde](https://github.com/Atharva-Shinde), +[Avinesh Tripathi](https://github.com/AvineshTripathi), +[Brad McCoy](https://github.com/bradmccoydev), +[Debabrata Panigrahi](https://github.com/Debanitrkl), +[Jayesh Srivastava](https://github.com/jayesh-srivastava), +[Kunal Verma](https://github.com/verma-kunal), +[Pranshu Srivastava](https://github.com/PranshuSrivastava), +[Priyanka Saggu](github.com/Priyankasaggu11929/), +[Purneswar Prasad](https://github.com/PurneswarPrasad), +[Vedant Kakde](https://github.com/vedant-kakde) + +--- + + +大家好👋 + + +欢迎来到亚太地区的”认识我们的贡献者”博文系列第二期。 + + +这篇文章将介绍来自澳大利亚和新西兰地区的四位杰出贡献者, +他们在上游 Kubernetes 项目中承担着不同子项目的领导者和社区贡献者的角色。 + + +闲话少说,让我们直接进入主题。 + +## [Caleb Woodbine](https://github.com/BobyMCbobs) + + +Caleb Woodbine 目前是 ii.nz 组织的成员。 + + +他于 2018 年作为 Kubernetes Conformance 工作组的成员开始为 Kubernetes 项目做贡献。 +他积极向上,他从一位来自新西兰的贡献者 [Hippie Hacker](https://github.com/hh) 的早期指导中受益匪浅。 + + +他在 `SIG k8s-infra` 和 `k8s-conformance` 工作组为 Kubernetes 项目做出了重大贡献。 + + +Caleb 也是 [CloudNative NZ](https://www.meetup.com/cloudnative-nz/) +社区活动的联合组织者,该活动旨在扩大 Kubernetes 项目在整个新西兰的影响力,以鼓励科技教育和改善就业机会。 + + +> _亚太地区需要更多的外联活动,教育工作者和大学必须学习 Kubernetes,因为他们非常缓慢, +而且已经落后了8年多。新西兰倾向于在海外付费,而不是教育当地人最新的云技术。_ + +## [Dylan Graham](https://github.com/DylanGraham) + + +Dylan Graham 是来自澳大利亚 Adeliade 的云计算工程师。自 2018 年以来,他一直在为上游 Kubernetes 项目做出贡献。 + + +他表示,成为如此大项目的一份子,最初压力是比较大的,但社区的友好和开放帮助他度过了难关。 + + +开始在项目文档方面做贡献,现在主要致力于为亚太地区提供社区支持。 + + +他相信,持续参加社区/项目会议,承担项目任务,并在需要时寻求社区指导,可以帮助有抱负的新开发人员成为有效的贡献者。 + + +> _成为大社区的一份子感觉真的很特别。我遇到了一些了不起的人,甚至是在现实生活中疫情发生之前。_ + +## [Hippie Hacker](https://github.com/hh) + + +Hippie 来自新西兰,曾在 CNCF.io 作为战略计划承包商工作 5 年多。他是 k8s-infra、 +API 一致性测试、云提供商一致性提交以及上游 Kubernetes 和 CNCF 项目 apisnoop.cncf.io 域的积极贡献者。 + + +他讲述了他们早期参与 Kubernetes 项目的情况,该项目始于大约 5 年前,当时他们的公司 ii.nz +演示了[使用 PXE 从 Raspberry Pi 启动网络,并在集群中运行Gitlab,以便在服务器上安装 Kubernetes ](https://ii.nz/post/bringing-the-cloud-to-your-community/) + + +他描述了自己的贡献经历:一开始,他试图独自完成所有艰巨的任务,但最终看到了团队协作贡献的好处, +分工合作减少了过度疲劳,这让人们能够凭借自己的动力继续前进。 + + +他建议新的贡献者结对编程。 + + +> _针对一个项目,多人关注和交叉交流往往比单独的评审、批准 PR 能产生更大的效果。_ + +## [Nick Young](https://github.com/youngnick) + + +Nick Young 在 VMware 工作,是 CNCF 入口控制器 Contour 的技术负责人。 +他从一开始就积极参与上游 Kubernetes 项目,最终成为 LTS 工作组的主席, +他提倡关注用户。他目前是 SIG Network 
Gateway API 子项目的维护者。 + + +他的贡献之路是引人注目的,因为他很早就在 Kubernetes 项目的主要领域工作,这改变了他的轨迹。 + + +他断言,一个新贡献者能做的最好的事情就是“开始贡献”。当然,如果与他的工作息息相关,那好极了; +然而,把非工作时间投入到贡献中去,从长远来看可以在工作上获得回报。 +他认为,应该鼓励新的贡献者,特别是那些目前是 Kubernetes 用户的人,参与到更高层次的项目讨论中来。 + + +> _只要积极主动,做出贡献,你就可以走很远。一旦你活跃了一段时间,你会发现你能够解答别人的问题, +这意味着会有人请教你或和你讨论,在你意识到这一点之前,你就已经是专家了。_ + +--- + + +如果你对我们接下来应该采访的人有任何意见/建议,请在 #sig-contribex 中告知我们。 +非常感谢你的建议。我们很高兴有更多的人帮助我们接触到社区中更优秀的人。 + + +我们下期再见。祝你有个愉快的贡献之旅!👋 From 99b0fce444787ee8d7acc3615a266e9657ea8028 Mon Sep 17 00:00:00 2001 From: zhangxiaoyang Date: Sat, 26 Mar 2022 15:29:21 +0800 Subject: [PATCH 135/138] [zh]Add 2021-12-08-dual-stack-networking-ga.md --- .../2021-12-08-dual-stack-networking-ga.md | 154 ++++++++++++++++++ 1 file changed, 154 insertions(+) create mode 100644 content/zh/blog/_posts/2021-12-08-dual-stack-networking-ga.md diff --git a/content/zh/blog/_posts/2021-12-08-dual-stack-networking-ga.md b/content/zh/blog/_posts/2021-12-08-dual-stack-networking-ga.md new file mode 100644 index 0000000000000..fa8cd74cc02af --- /dev/null +++ b/content/zh/blog/_posts/2021-12-08-dual-stack-networking-ga.md @@ -0,0 +1,154 @@ +--- +layout: blog +title: 'Kubernetes 1.23:IPv4/IPv6 双协议栈网络达到 GA' +date: 2021-12-08 +slug: dual-stack-networking-ga +--- + + + +**作者:** Bridget Kromhout (微软) + + +“Kubernetes 何时支持 IPv6?” 自从 k8s v1.9 版本中首次添加对 IPv6 的 alpha 支持以来,这个问题的讨论越来越频繁。 +虽然 Kubernetes 从 v1.18 版本开始就支持纯 IPv6 集群,但当时还无法支持 IPv4 迁移到 IPv6。 +[IPv4/IPv6 双协议栈网络](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/563-dual-stack/) +在 Kubernetes v1.23 版本中进入正式发布(GA)阶段。 + +让我们来看看双协议栈网络对你来说意味着什么? + + +## 更新 Service API + + +[Services](/zh/docs/concepts/services-networking/service/) 在 1.20 版本之前是单协议栈的, +因此,使用两个 IP 协议族意味着需为每个 IP 协议族创建一个 Service。在 1.20 版本中对用户体验进行简化, +重新实现了 Service 以支持两个 IP 协议族,这意味着一个 Service 就可以处理 IPv4 和 IPv6 协议。 +对于 Service 而言,任意的 IPv4 和 IPv6 协议组合都可以实现负载均衡。 + + +Service API 现在有了支持双协议栈的新字段,取代了单一的 ipFamily 字段。 +* 你可以通过将 `ipFamilyPolicy` 字段设置为 `SingleStack`、`PreferDualStack` 或 +`RequireDualStack` 来设置 IP 协议族。Service 可以在单协议栈和双协议栈之间进行转换(在某些限制内)。 +* 设置 `ipFamilies` 为指定的协议族列表,可用来设置使用协议族的顺序。 +* 'clusterIPs' 的能力在涵盖了之前的 'clusterIP'的情况下,还允许设置多个 IP 地址。 +所以不再需要运行重复的 Service,在两个 IP 协议族中各运行一个。你可以在两个 IP 协议族中分配集群 IP 地址。 + + +请注意,Pods 也是双协议栈的。对于一个给定的 Pod,不可能在同一协议族中设置多个 IP 地址。 + + +## 默认行为仍然是单协议栈 + + +从 1.20 版本开始,重新实现的双协议栈服务处于 Alpha 阶段,无论集群是否配置了启用双协议栈的特性标志, +Kubernetes 的底层网络都已经包括了双协议栈。 + + +Kubernetes 1.23 删除了这个特性标志,说明该特性已经稳定。 +如果你想要配置双协议栈网络,这一能力总是存在的。 +你可以将集群网络设置为 IPv4 单协议栈 、IPv6 单协议栈或 IPV4/IPV6 双协议栈 。 + + +虽然 Service 是根据你的配置设置的,但 Pod 默认是由 CNI 插件设置的。 +如果你的 CNI 插件分配单协议栈 IP,那么就是单协议栈,除非 `ipFamilyPolicy` 设置为 `PreferDualStack` 或 `RequireDualStack`。 +如果你的 CNI 插件分配双协议栈 IP,则 `pod.status.PodIPs` 默认为双协议栈。 + + +尽管双协议栈是可用的,但并不强制你使用它。 +在[双协议栈服务配置](/zh/docs/concepts/services-networking/dual-stack/#dual-stack-service-configuration-scenarios) +文档中的示例列出了可能出现的各种场景. 
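As a concrete illustration of the `ipFamilyPolicy`, `ipFamilies` and `clusterIPs` fields described above, a minimal sketch follows; the Service name `my-service` and selector `app: my-app` are illustrative, and a dual-stack-capable cluster is assumed:

```shell
# Request both IP families where the cluster supports them;
# PreferDualStack falls back to a single family otherwise.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
EOF

# On a dual-stack cluster, .spec.clusterIPs holds one address per family.
kubectl get service my-service -o jsonpath='{.spec.clusterIPs}'
```

On a single-stack cluster the same manifest still applies; `PreferDualStack` simply results in one cluster IP from the available family.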
+ + +## 现在尝试双协议栈 + + +虽然现在上游 Kubernetes 支持[双协议栈网络](/zh/docs/concepts/services-networking/dual-stack/) +作为 GA(稳定)特性,但每个提供商对双协议栈 Kubernetes 的支持可能会有所不同。节点需要提供可路由的 IPv4/IPv6 网络接口, +Pod 需要是双协议栈的。[网络插件](/zh/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) +是用来为 Pod 分配 IP 地址的,所以集群需要支持双协议栈的网络插件。一些容器网络接口(CNI)插件支持双协议栈,例如 kubenet。 + + +支持双协议栈的生态系统在不断壮大;你可以使用 +[kubeadm 创建双协议栈集群](/zh/docs/setup/production-environment/tools/kubeadm/dual-stack-support/), +在本地尝试用 [KIND 创建双协议栈集群](https://kind.sigs.k8s.io/docs/user/configuration/#ip-family), +还可以将双协议栈集群部署到云上(请先查阅所用 CNI 或 kubenet 可用性的文档)。 + + +## 加入 Network SIG + + +SIG-Network 希望从双协议栈网络的社区体验中学习,以了解更多不断变化的需求和你的用例信息。 +[SIG-Network 在 KubeCon 2021 北美大会上的更新视频](https://www.youtube.com/watch?v=uZ0WLxpmBbY&list=PLj6h78yzYM2Nd1U4RMhv7v88fdiFqeYAP&index=4) +总结了该 SIG 最近的进展,包括双协议栈在 1.23 版本中达到稳定。 + + +当前 SIG-Network 在 GitHub 上的 [KEPs](https://github.com/orgs/kubernetes/projects/10) 和 +[issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aopen+is%3Aissue+label%3Asig%2Fnetwork) +说明了该 SIG 的重点领域。[双协议栈 API 服务器](https://github.com/kubernetes/enhancements/issues/2438) +是一个值得考虑参与贡献的方向。 + + +[SIG-Network 会议](https://github.com/kubernetes/community/tree/master/sig-network#meetings) +是一个友好、热情的场所,你可以与社区联系并分享你的想法。期待你的加入! + + +## 致谢 + + +许多 Kubernetes 贡献者为双协议栈网络做出了贡献。感谢所有贡献了代码、经验报告、文档、代码审查以及其他工作的人。 +Bridget Kromhout 在 [Kubernetes 的双协议栈网络](https://containerjournal.com/features/dual-stack-networking-in-kubernetes/) +中详细介绍了这项社区工作。Tim Hockin 和 Khaled (Kal) Henidak 在 2019 年的 KubeCon 大会演讲 +([Kubernetes 通往 IPv4/IPv6 双协议栈的漫漫长路](https://www.youtube.com/watch?v=o-oMegdZcg4)) +以及 Lachlan Evenson 在 2021 年的演讲([我们来啦,Kubernetes 双协议栈网络](https://www.youtube.com/watch?v=o-oMegdZcg4)) +中讨论了双协议栈的发展历程:这一历程耗时 5 年,涉及海量代码。 From 03926364b56bbe971490cbe0927b10753f491377 Mon Sep 17 00:00:00 2001 From: "xin.li" Date: Mon, 28 Mar 2022 13:16:53 +0800 Subject: [PATCH 136/138] [zh] Update install-kubectl-windows.md Signed-off-by: xin.li --- .../tasks/tools/install-kubectl-windows.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/content/zh/docs/tasks/tools/install-kubectl-windows.md b/content/zh/docs/tasks/tools/install-kubectl-windows.md index 617ca3f3cee26..921a8e13a9494 100644 --- a/content/zh/docs/tasks/tools/install-kubectl-windows.md +++ b/content/zh/docs/tasks/tools/install-kubectl-windows.md @@ -71,20 +71,20 @@ The following methods exist for installing kubectl on Windows: 1. 验证该可执行文件(可选步骤) - 下载 kubectl 校验和文件: + 下载 `kubectl` 校验和文件: ```powershell curl -LO "https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe.sha256" ``` - 基于校验和文件,验证 kubectl 的可执行文件: + 基于校验和文件,验证 `kubectl` 的可执行文件: -1. 将 kubectl 二进制文件夹附加或添加到你的 `PATH` 环境变量中。 +1. 将 `kubectl` 二进制文件夹追加或插入到你的 `PATH` 环境变量中。 1. 测试一下,确保此 `kubectl` 的版本和期望版本一致: @@ -261,22 +261,22 @@ kubectl 为 Bash、Zsh、Fish 和 PowerShell 提供自动补全功能,可以 1. 验证该可执行文件(可选步骤) - 下载 kubectl-convert 校验和文件: + 下载 `kubectl-convert` 校验和文件: ```powershell curl -LO "https://dl.k8s.io/{{< param "fullversion" >}}/bin/windows/amd64/kubectl-convert.exe.sha256" ``` - 基于校验和,验证 kubectl-convert 的可执行文件: + 基于校验和,验证 `kubectl-convert` 的可执行文件: - 用提示的命令对 `CertUtil` 的输出和下载的校验和文件进行手动比较。 @@ -295,11 +295,11 @@ kubectl 为 Bash、Zsh、Fish 和 PowerShell 提供自动补全功能,可以 ``` -1. 将 kubectl 二进制文件夹附加或添加到你的 `PATH` 环境变量中。 +1. 将 kubectl-convert 二进制文件夹附加或添加到你的 `PATH` 环境变量中。 1. 验证插件是否安装成功
From b765f0fee6f2f8eeb0d8d7e9c41ac7f3bbc3938f Mon Sep 17 00:00:00 2001 From: howieyuen Date: Sat, 26 Mar 2022 14:56:22 +0800 Subject: [PATCH 137/138] [zh]translate content/docs/reference/kubernetes-api/common-definitions/quantity.md into Chinese --- .../common-definitions/quantity.md | 135 ++++++++++++++++++ 1 file changed, 135 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/quantity.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/quantity.md b/content/zh/docs/reference/kubernetes-api/common-definitions/quantity.md new file mode 100644 index 0000000000000..84ce0dd899511 --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/quantity.md @@ -0,0 +1,135 @@ +--- +api_metadata: + apiVersion: "" + import: "k8s.io/apimachinery/pkg/api/resource" + kind: "Quantity" +content_type: "api_reference" +description: "数量(Quantity)是数字的定点表示。" +title: "Quantity" +weight: 10 +auto_generated: true +--- + + + + + + + +`import "k8s.io/apimachinery/pkg/api/resource"` + + + +数量(Quantity)是数字的定点表示。 +除了 String() 和 AsInt64() 的访问接口之外, +它以 JSON 和 YAML 形式提供方便的打包和解包方法。 + +序列化格式如下: + + +```
+<quantity>        ::= <signedNumber><suffix>
+
+  (注意:<suffix> 可能为空,例如 <decimalSI> 的 "" 情形。)
+
+<digit>           ::= 0 | 1 | ... | 9
+<digits>          ::= <digit> | <digit><digits>
+<number>          ::= <digits> | <digits>.<digits> | <digits>. | .<digits>
+<sign>            ::= "+" | "-"
+<signedNumber>    ::= <number> | <sign><number>
+<suffix>          ::= <binarySI> | <decimalExponent> | <decimalSI>
+<binarySI>        ::= Ki | Mi | Gi | Ti | Pi | Ei
+
+  (国际单位制;查阅:http://physics.nist.gov/cuu/Units/binary.html)
+
+<decimalSI>       ::= m | "" | k | M | G | T | P | E
+
+  (注意,1024 = 1Ki 但 1000 = 1k;大小写并非我的选择。)
+
+<decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber>
+``` + + +无论使用三种指数形式中的哪一种,任何数量都不能表示大于 2^63-1 的数,也不能有超过 3 位的小数。 +更大或更精确的数字将被截断或向上取整。(例如:0.1m 将向上取整为 1m。) +如果将来我们需要更大或更小的数量,可能会扩展这一格式。 + +当从字符串解析数量时,它会记住自己所具有的后缀类型,并且在序列化时再次使用同一类型。 + + +在序列化之前,数量将被置为“规范形式”。这意味着指数或者后缀将被向上或向下调整(尾数相应增加或减少),并确保:1. 没有精度丢失;2. 不会输出小数数字;3. 指数(或后缀)尽可能大。 +除非数量是负数,否则将省略正负号。 + + +例如: - 1.5 将会被序列化成 “1500m” - 1.5Gi 将会被序列化成 “1536Mi” + + +请注意,数量永远**不会**在内部以浮点数表示。这是本设计的重中之重。 + +只要格式正确,非规范的值仍将被解析,但会以其规范形式重新输出。(所以应该总是使用规范形式,否则就不要执行 diff 比较。) +
+这种格式有意使得人们在不编写某种专门处理代码的情况下很难使用这些数字,进而希望各实现者也使用定点实现。 + +<hr>
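作为上述后缀与规范形式的直观说明,下面是一段示意性的容器资源配置(Pod 名称与镜像均为假设,并非本页原有内容):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quantity-demo                  # 假设的名称,仅作示意
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # 假设的镜像
    resources:
      requests:
        cpu: 500m       # 十进制 SI 后缀 m(毫核):0.5 个 CPU 的规范形式
        memory: 123Mi   # 二进制后缀(binarySI)
      limits:
        cpu: "1"        # 后缀为空("")的纯数字
        memory: 129M    # 十进制后缀;129M = 129000000,略大于 123Mi(128974848)
```

按本页的规范化规则,若把 CPU 写成 `0.5`,回读对象时会看到 `500m`;把内存写成 `1.5Gi`,则会看到 `1536Mi`。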
From 1698664aed79daf8f9bde9fbdb97c360b8f7f108 Mon Sep 17 00:00:00 2001 From: howieyuen Date: Sat, 26 Mar 2022 15:03:49 +0800 Subject: [PATCH 138/138] [zh]translate content/docs/reference/kubernetes-api/common-definitions/resource-field-selector.md into Chinese --- .../resource-field-selector.md | 62 +++++++++++++++++++ 1 file changed, 62 insertions(+) create mode 100644 content/zh/docs/reference/kubernetes-api/common-definitions/resource-field-selector.md diff --git a/content/zh/docs/reference/kubernetes-api/common-definitions/resource-field-selector.md b/content/zh/docs/reference/kubernetes-api/common-definitions/resource-field-selector.md new file mode 100644 index 0000000000000..e77543e25f1c1 --- /dev/null +++ b/content/zh/docs/reference/kubernetes-api/common-definitions/resource-field-selector.md @@ -0,0 +1,62 @@ +--- +api_metadata: + apiVersion: "" + import: "k8s.io/api/core/v1" + kind: "ResourceFieldSelector" +content_type: "api_reference" +description: "ResourceFieldSelector 表示容器资源(CPU,内存)及其输出格式。" +title: "ResourceFieldSelector" +weight: 11 +auto_generated: true +--- + + + + + + + +`import "k8s.io/api/core/v1"` + + + +ResourceFieldSelector 表示容器资源(CPU,内存)及其输出格式。 + +<hr>
+ +- **resource** (string), 必选 + + 必选:要选择的资源 + +- **containerName** (string) + + 容器名称:对卷必选,对环境变量可选 + +- **divisor** (<a href="{{< ref "../common-definitions/quantity/" >}}">Quantity</a>) + + 指定所暴露资源的输出格式,默认值为 “1”
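在实际清单中,ResourceFieldSelector 通过 Downward API 的 `resourceFieldRef` 字段使用。下面是一个把容器资源限制注入环境变量的示意清单(名称与镜像均为假设,并非本页原有内容):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-field-demo           # 假设的名称,仅作示意
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8   # 假设的镜像
    resources:
      limits:
        cpu: "1"
        memory: 256Mi
    env:
    - name: CPU_LIMIT_MILLICORES
      valueFrom:
        resourceFieldRef:
          containerName: app       # 容器名称:环境变量场景下可选,这里显式写出
          resource: limits.cpu     # 要选择的资源
          divisor: 1m              # 输出格式:以毫核计,此例结果为 "1000"
    - name: MEMORY_LIMIT_MI
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
          divisor: 1Mi             # 以 Mi 计,此例结果为 "256"
```

`divisor` 决定数值的计量单位:所选资源的数量会除以该值后再输出,因此同一个 1 CPU 的限制以 `1m` 为除数时输出 "1000",以默认值 "1" 为除数时输出 "1"。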