diff --git a/README-zh.md b/README-zh.md index 55b457d980c51..ee45a6aa28570 100644 --- a/README-zh.md +++ b/README-zh.md @@ -60,14 +60,34 @@ cd website - The Kubernetes website uses the [Docsy Hugo theme](https://github.com/google/docsy#readme). Even if you plan to run the website in a container, we strongly recommend pulling in the submodule and other development dependencies by running the following commands: -```bash -# Pull in the Docsy submodule + +### Windows +```powershell +# Fetch submodule dependencies +git submodule update --init --recursive --depth 1 +``` + + +### Linux / other Unix +```bash +# Fetch submodule dependencies +make module-init +``` ## Design @@ -48,10 +46,10 @@ when new servers are created in your cloud infrastructure. The node controller o hosts running inside your tenancy with the cloud provider. The node controller performs the following functions: 1. Update a Node object with the corresponding server's unique identifier obtained from the cloud provider API. -2. Annotating and labelling the Node object with cloud-specific information, such as the region the node +1. Annotate and label the Node object with cloud-specific information, such as the region the node is deployed into and the resources (CPU, memory, etc) that it has available. -3. Obtain the node's hostname and network addresses. -4. Verifying the node's health. In case a node becomes unresponsive, this controller checks with +1. Obtain the node's hostname and network addresses. +1. Verify the node's health. In case a node becomes unresponsive, this controller checks with your cloud provider's API to see if the server has been deactivated / deleted / terminated. If the node has been deleted from the cloud, the controller deletes the Node object from your Kubernetes cluster. @@ -88,13 +86,13 @@ to read and modify Node objects. `v1/Node`: -- Get -- List -- Create -- Update -- Patch -- Watch -- Delete +- get +- list +- create +- update +- patch +- watch +- delete ### Route controller {#authorization-route-controller} @@ -103,37 +101,42 @@ routes appropriately. It requires Get access to Node objects. `v1/Node`: -- Get +- get ### Service controller {#authorization-service-controller} -The service controller listens to Service object Create, Update and Delete events and then configures Endpoints for those Services appropriately (for EndpointSlices, the kube-controller-manager manages these on demand). +The service controller watches for Service object **create**, **update**, and **delete** events and then +configures Endpoints for those Services appropriately (for EndpointSlices, the +kube-controller-manager manages these on demand). -To access Services, it requires List, and Watch access. To update Services, it requires Patch and Update access. +To access Services, it requires **list** and **watch** access. To update Services, it requires +**patch** and **update** access. -To set up Endpoints resources for the Services, it requires access to Create, List, Get, Watch, and Update. +To set up Endpoints resources for the Services, it requires access to **create**, **list**, +**get**, **watch**, and **update**. `v1/Service`: -- List -- Get -- Watch -- Patch -- Update +- list +- get +- watch +- patch +- update ### Others {#authorization-miscellaneous} -The implementation of the core of the cloud controller manager requires access to create Event objects, and to ensure secure operation, it requires access to create ServiceAccounts. +The implementation of the core of the cloud controller manager requires access to create Event +objects, and to ensure secure operation, it requires access to create ServiceAccounts. 
`v1/Event`: -- Create -- Patch -- Update +- create +- patch +- update `v1/ServiceAccount`: -- Create +- create The {{< glossary_tooltip term_id="rbac" text="RBAC" >}} ClusterRole for the cloud controller manager looks like: @@ -206,12 +209,21 @@ rules: [Cloud Controller Manager Administration](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager) has instructions on running and managing the cloud controller manager. -To upgrade a HA control plane to use the cloud controller manager, see [Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/). +To upgrade an HA control plane to use the cloud controller manager, see +[Migrate Replicated Control Plane To Use Cloud Controller Manager](/docs/tasks/administer-cluster/controller-manager-leader-migration/). Want to know how to implement your own cloud controller manager, or extend an existing project? -The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. Specifically, it uses the `CloudProvider` interface defined in [`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.21/cloud.go#L42-L69) from [kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider). +The cloud controller manager uses Go interfaces to allow implementations from any cloud to be plugged in. +Specifically, it uses the `CloudProvider` interface defined in +[`cloud.go`](https://github.com/kubernetes/cloud-provider/blob/release-1.26/cloud.go#L43-L69) from +[kubernetes/cloud-provider](https://github.com/kubernetes/cloud-provider). + +The implementation of the shared controllers highlighted in this document (Node, Route, and Service), +and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. +Implementations specific to cloud providers are outside the core of Kubernetes and implement the +`CloudProvider` interface. -The implementation of the shared controllers highlighted in this document (Node, Route, and Service), and some scaffolding along with the shared cloudprovider interface, is part of the Kubernetes core. Implementations specific to cloud providers are outside the core of Kubernetes and implement the `CloudProvider` interface. +For more information about developing plugins, see +[Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). -For more information about developing plugins, see [Developing Cloud Controller Manager](/docs/tasks/administer-cluster/developing-cloud-controller-manager/). diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index 785040cda316e..2cfa37d5c59bc 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -11,7 +11,8 @@ aliases: -This document catalogs the communication paths between the API server and the Kubernetes cluster. +This document catalogs the communication paths between the {{< glossary_tooltip term_id="kube-apiserver" text="API server" >}} +and the Kubernetes {{< glossary_tooltip text="cluster" term_id="cluster" length="all" >}}. The intent is to allow users to customize their installation to harden the network configuration such that the cluster can be run on an untrusted network (or on fully public IPs on a cloud provider). 
@@ -30,28 +31,28 @@ enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/a or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed. -Nodes should be provisioned with the public root certificate for the cluster such that they can +Nodes should be provisioned with the public root {{< glossary_tooltip text="certificate" term_id="certificate" >}} for the cluster such that they can connect securely to the API server along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. -Pods that wish to connect to the API server can do so securely by leveraging a service account so +{{< glossary_tooltip text="Pods" term_id="pod" >}} that wish to connect to the API server can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated. The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is -redirected (via `kube-proxy`) to the HTTPS endpoint on the API server. +redirected (via `{{< glossary_tooltip text="kube-proxy" term_id="kube-proxy" >}}`) to the HTTPS endpoint on the API server. The control plane components also communicate with the API server over the secure port. As a result, the default operating mode for connections from the nodes and pods running on the nodes to the control plane is secured by default and can run over untrusted and/or public networks. ## Control plane to node There are two primary communication paths from the control plane (the API server) to the nodes. -The first is from the API server to the kubelet process which runs on each node in the cluster. +The first is from the API server to the {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} process which runs on each node in the cluster. The second is from the API server to any node, pod, or service through the API server's _proxy_ functionality. @@ -89,7 +90,7 @@ connections **are not currently safe** to run over untrusted or public networks. ### SSH tunnels -Kubernetes supports SSH tunnels to protect the control plane to nodes communication paths. In this +Kubernetes supports [SSH tunnels](https://www.ssh.com/academy/ssh/tunneling) to protect the control plane to nodes communication paths. In this configuration, the API server initiates an SSH tunnel to each node in the cluster (connecting to the SSH server listening on port 22) and passes all traffic destined for a kubelet, node, pod, or service through the tunnel. @@ -117,3 +118,12 @@ connections. Follow the [Konnectivity service task](/docs/tasks/extend-kubernetes/setup-konnectivity/) to set up the Konnectivity service in your cluster. 
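As a rough sketch of the wiring that the Konnectivity task sets up, the API server is pointed at the Konnectivity server through an egress selector configuration file. The socket path below is illustrative, and the linked task remains the authoritative reference:

```yaml
# Hypothetical egress selector configuration for kube-apiserver
# (passed via --egress-selector-config-file); values are illustrative.
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
# "cluster" selects API server traffic destined for nodes, pods, and services.
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        # Unix domain socket shared with the Konnectivity server.
        udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```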
+## {{% heading "whatsnext" %}} + +* Read about the [Kubernetes control plane components](/docs/concepts/overview/components/#control-plane-components) +* Learn more about the [hub and spoke model](https://book.kubebuilder.io/multiversion-tutorial/conversion-concepts.html#hubs-spokes-and-other-wheel-metaphors) +* Learn how to [Secure a Cluster](/docs/tasks/administer-cluster/securing-a-cluster/) +* Learn more about the [Kubernetes API](/docs/concepts/overview/kubernetes-api/) +* [Set up Konnectivity service](/docs/tasks/extend-kubernetes/setup-konnectivity/) +* [Use Port Forwarding to Access Applications in a Cluster](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) +* Learn how to [Fetch logs for Pods](/docs/tasks/debug/debug-application/debug-running-pod/#examine-pod-logs), [use kubectl port-forward](/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod) \ No newline at end of file diff --git a/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md b/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md index 0215c4380347e..d5607f48f5927 100644 --- a/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md +++ b/content/en/docs/concepts/scheduling-eviction/pod-priority-preemption.md @@ -63,9 +63,10 @@ The name of a PriorityClass object must be a valid and it cannot be prefixed with `system-`. A PriorityClass object can have any 32-bit integer value smaller than or equal -to 1 billion. Larger numbers are reserved for critical system Pods that should -not normally be preempted or evicted. A cluster admin should create one -PriorityClass object for each such mapping that they want. +to 1 billion. This means that the range of values for a PriorityClass object is +from -2147483648 to 1000000000 inclusive. Larger numbers are reserved for +built-in PriorityClasses that represent critical system Pods. A cluster +admin should create one PriorityClass object for each such mapping that they want +(a minimal example manifest appears at the end of this section). PriorityClass also has two optional fields: `globalDefault` and `description`. The `globalDefault` field indicates that the value of this PriorityClass should diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index df4895e52fbab..bb6b1d3750029 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -16,7 +16,7 @@ weight: 10 -{{< glossary_definition term_id="service" length="short" >}} +{{< glossary_definition term_id="service" length="short" prepend="In Kubernetes, a Service is" >}} A key aim of Services in Kubernetes is that you don't need to modify your existing application to use an unfamiliar service discovery mechanism. 
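To make that concrete, here is a minimal sketch of a Service manifest; the name, selector label, and ports are placeholder values rather than anything defined on this page:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                    # hypothetical name
spec:
  selector:
    app.kubernetes.io/name: my-app    # must match the labels on the backing Pods
  ports:
    - protocol: TCP
      port: 80                        # port the Service exposes to clients
      targetPort: 8080                # port the Pods actually listen on
```

Clients keep talking to the stable Service name and IP while the set of Pods matching the selector changes freely underneath.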
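Similarly, the PriorityClass value range described earlier is easiest to see in a manifest. A minimal sketch, with an illustrative name, value, and description:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority     # hypothetical name; must not be prefixed with "system-"
value: 1000000            # any 32-bit integer up to 1000000000 for user-defined classes
globalDefault: false      # optional; at most one PriorityClass may set this to true
description: "Use for important service workloads only."
```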
diff --git a/content/en/docs/concepts/windows/intro.md b/content/en/docs/concepts/windows/intro.md index fb813b20f2c3e..16a7ea3d8a119 100644 --- a/content/en/docs/concepts/windows/intro.md +++ b/content/en/docs/concepts/windows/intro.md @@ -238,11 +238,11 @@ work between Windows and Linux: The following list documents differences between how Pod specifications work between Windows and Linux: * `hostIPC` and `hostPID` - host namespace sharing is not possible on Windows -* `hostNetwork` - [see below](/docs/concepts/windows/intro#compatibility-v1-pod-spec-containers-hostnetwork) +* `hostNetwork` - [see below](#compatibility-v1-pod-spec-containers-hostnetwork) * `dnsPolicy` - setting the Pod `dnsPolicy` to `ClusterFirstWithHostNet` is not supported on Windows because host networking is not provided. Pods always run with a container network. -* `podSecurityContext` [see below](/docs/concepts/windows/intro#compatibility-v1-pod-spec-containers-securitycontext) +* `podSecurityContext` [see below](#compatibility-v1-pod-spec-containers-securitycontext) * `shareProcessNamespace` - this is a beta feature, and depends on Linux namespaces which are not implemented on Windows. Windows cannot share process namespaces or the container's root filesystem. Only the network can be shared. diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index e5fc14f64d732..646aaa28349f6 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -45,8 +45,8 @@ The following is an example of a Deployment. It creates a ReplicaSet to bring up In this example: * A Deployment named `nginx-deployment` is created, indicated by the - `.metadata.name` field. This name will become the basis for the ReplicaSets - and Pods which are created later. See [Writing a Deployment Spec](#writing-a-deployment-spec) + `.metadata.name` field. This name will become the basis for the ReplicaSets + and Pods which are created later. See [Writing a Deployment Spec](#writing-a-deployment-spec) for more details. * The Deployment creates a ReplicaSet that creates three replicated Pods, indicated by the `.spec.replicas` field. * The `.spec.selector` field defines how the created ReplicaSet finds which Pods to manage. @@ -71,14 +71,12 @@ In this example: Before you begin, make sure your Kubernetes cluster is up and running. Follow the steps given below to create the above Deployment: - 1. Create the Deployment by running the following command: ```shell kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml ``` - 2. Run `kubectl get deployments` to check if the Deployment was created. If the Deployment is still being created, the output is similar to the following: @@ -125,7 +123,7 @@ Follow the steps given below to create the above Deployment: * `AGE` displays the amount of time that the application has been running. Notice that the name of the ReplicaSet is always formatted as - `[DEPLOYMENT-NAME]-[HASH]`. This name will become the basis for the Pods + `[DEPLOYMENT-NAME]-[HASH]`. This name will become the basis for the Pods which are created. The `HASH` string is the same as the `pod-template-hash` label on the ReplicaSet. @@ -169,56 +167,56 @@ Follow the steps given below to update your Deployment: 1. Let's update the nginx Pods to use the `nginx:1.16.1` image instead of the `nginx:1.14.2` image. 
- ```shell - kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 - ``` + ```shell + kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1 + ``` + + or use the following command: - or use the following command: + ```shell + kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 + ``` - ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 - ``` - - The output is similar to: + The output is similar to: - ``` - deployment.apps/nginx-deployment image updated - ``` + ``` + deployment.apps/nginx-deployment image updated + ``` - Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`: + Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.14.2` to `nginx:1.16.1`: - ```shell - kubectl edit deployment/nginx-deployment - ``` + ```shell + kubectl edit deployment/nginx-deployment + ``` - The output is similar to: + The output is similar to: - ``` - deployment.apps/nginx-deployment edited - ``` + ``` + deployment.apps/nginx-deployment edited + ``` 2. To see the rollout status, run: - ```shell - kubectl rollout status deployment/nginx-deployment - ``` + ```shell + kubectl rollout status deployment/nginx-deployment + ``` - The output is similar to this: + The output is similar to this: - ``` - Waiting for rollout to finish: 2 out of 3 new replicas have been updated... - ``` + ``` + Waiting for rollout to finish: 2 out of 3 new replicas have been updated... + ``` - or + or - ``` - deployment "nginx-deployment" successfully rolled out - ``` + ``` + deployment "nginx-deployment" successfully rolled out + ``` Get more details on your updated Deployment: * After the rollout succeeds, you can view the Deployment by running `kubectl get deployments`. - The output is similar to this: + The output is similar to this: ```ini NAME READY UP-TO-DATE AVAILABLE AGE @@ -228,44 +226,44 @@ Get more details on your updated Deployment: * Run `kubectl get rs` to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. - ```shell - kubectl get rs - ``` + ```shell + kubectl get rs + ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-deployment-1564180365 3 3 3 6s - nginx-deployment-2035384211 0 0 0 36s - ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 6s + nginx-deployment-2035384211 0 0 0 36s + ``` * Running `get pods` should now show only the new Pods: - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - The output is similar to this: - ``` - NAME READY STATUS RESTARTS AGE - nginx-deployment-1564180365-khku8 1/1 Running 0 14s - nginx-deployment-1564180365-nacti 1/1 Running 0 14s - nginx-deployment-1564180365-z9gth 1/1 Running 0 14s - ``` + The output is similar to this: + ``` + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-khku8 1/1 Running 0 14s + nginx-deployment-1564180365-nacti 1/1 Running 0 14s + nginx-deployment-1564180365-z9gth 1/1 Running 0 14s + ``` - Next time you want to update these Pods, you only need to update the Deployment's Pod template again. + Next time you want to update these Pods, you only need to update the Deployment's Pod template again. - Deployment ensures that only a certain number of Pods are down while they are being updated. 
By default, - it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). + Deployment ensures that only a certain number of Pods are down while they are being updated. By default, + it ensures that at least 75% of the desired number of Pods are up (25% max unavailable). - Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. - By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge). + Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. + By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge). - For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, - then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of - new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. - It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of - a Deployment with 4 replicas, the number of Pods would be between 3 and 5. + For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, + then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of + new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. + It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of + a Deployment with 4 replicas, the number of Pods would be between 3 and 5. * Get details of your Deployment: ```shell @@ -309,13 +307,13 @@ up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas. Normal ScalingReplicaSet 19s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 1 Normal ScalingReplicaSet 19s deployment-controller Scaled up replica set nginx-deployment-1564180365 to 3 Normal ScalingReplicaSet 14s deployment-controller Scaled down replica set nginx-deployment-2035384211 to 0 - ``` - Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) - and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet - (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet - to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times. - It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. - Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0. + ``` + Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211) + and scaled it up to 3 replicas directly. When you updated the Deployment, it created a new ReplicaSet + (nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. Then it scaled down the old ReplicaSet + to 2 and scaled up the new ReplicaSet to 2 so that at least 3 Pods were available and at most 4 Pods were created at all times. + It then continued scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. 
+ Finally, you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0. {{< note >}} Kubernetes doesn't count terminating Pods when calculating the number of `availableReplicas`, which must be between @@ -333,7 +331,7 @@ ReplicaSet is scaled to `.spec.replicas` and all old ReplicaSets is scaled to 0. If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet as per the update and start scaling that up, and rolls over the ReplicaSet that it was scaling up previously - -- it will add it to its list of old ReplicaSets and start scaling it down. +-- it will add it to its list of old ReplicaSets and start scaling it down. For example, suppose you create a Deployment to create 5 replicas of `nginx:1.14.2`, but then update the Deployment to create 5 replicas of `nginx:1.16.1`, when only 3 @@ -378,107 +376,107 @@ rolled back. * Suppose that you made a typo while updating the Deployment, by putting the image name as `nginx:1.161` instead of `nginx:1.16.1`: - ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:1.161 - ``` + ```shell + kubectl set image deployment/nginx-deployment nginx=nginx:1.161 + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment image updated - ``` + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` * The rollout gets stuck. You can verify it by checking the rollout status: - ```shell - kubectl rollout status deployment/nginx-deployment - ``` + ```shell + kubectl rollout status deployment/nginx-deployment + ``` - The output is similar to this: - ``` - Waiting for rollout to finish: 1 out of 3 new replicas have been updated... - ``` + The output is similar to this: + ``` + Waiting for rollout to finish: 1 out of 3 new replicas have been updated... + ``` * Press Ctrl-C to stop the above rollout status watch. For more information on stuck rollouts, [read more here](#deployment-status). * You see that the number of old replicas (`nginx-deployment-1564180365` and `nginx-deployment-2035384211`) is 2, and new replicas (nginx-deployment-3066724191) is 1. - ```shell - kubectl get rs - ``` + ```shell + kubectl get rs + ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-deployment-1564180365 3 3 3 25s - nginx-deployment-2035384211 0 0 0 36s - nginx-deployment-3066724191 1 1 0 6s - ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1564180365 3 3 3 25s + nginx-deployment-2035384211 0 0 0 36s + nginx-deployment-3066724191 1 1 0 6s + ``` * Looking at the Pods created, you see that 1 Pod created by new ReplicaSet is stuck in an image pull loop. - ```shell - kubectl get pods - ``` + ```shell + kubectl get pods + ``` - The output is similar to this: - ``` - NAME READY STATUS RESTARTS AGE - nginx-deployment-1564180365-70iae 1/1 Running 0 25s - nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s - nginx-deployment-1564180365-hysrc 1/1 Running 0 25s - nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s - ``` + The output is similar to this: + ``` + NAME READY STATUS RESTARTS AGE + nginx-deployment-1564180365-70iae 1/1 Running 0 25s + nginx-deployment-1564180365-jbqqo 1/1 Running 0 25s + nginx-deployment-1564180365-hysrc 1/1 Running 0 25s + nginx-deployment-3066724191-08mng 0/1 ImagePullBackOff 0 6s + ``` - {{< note >}} - The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. 
This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified. Kubernetes by default sets the value to 25%. - {{< /note >}} + {{< note >}} + The Deployment controller stops the bad rollout automatically, and stops scaling up the new ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified. Kubernetes by default sets the value to 25%. + {{< /note >}} * Get the description of the Deployment: - ```shell - kubectl describe deployment - ``` - - The output is similar to this: - ``` - Name: nginx-deployment - Namespace: default - CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 - Labels: app=nginx - Selector: app=nginx - Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable - StrategyType: RollingUpdate - MinReadySeconds: 0 - RollingUpdateStrategy: 25% max unavailable, 25% max surge - Pod Template: - Labels: app=nginx - Containers: - nginx: - Image: nginx:1.161 - Port: 80/TCP - Host Port: 0/TCP - Environment: - Mounts: - Volumes: - Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True ReplicaSetUpdated - OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) - NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) - Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 - 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 - 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 - 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 - 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 - 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 - 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 - 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 - ``` + ```shell + kubectl describe deployment + ``` + + The output is similar to this: + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Tue, 15 Mar 2016 14:48:04 -0700 + Labels: app=nginx + Selector: app=nginx + Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.161 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True ReplicaSetUpdated + OldReplicaSets: nginx-deployment-1564180365 (3/3 replicas created) + NewReplicaSet: nginx-deployment-3066724191 (1/1 replicas created) + Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-2035384211 to 3 + 22s 22s 1 {deployment-controller } 
Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 1 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 2 + 22s 22s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 2 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 1 + 21s 21s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-1564180365 to 3 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled down replica set nginx-deployment-2035384211 to 0 + 13s 13s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-deployment-3066724191 to 1 + ``` To fix this, you need to roll back to a previous revision of the Deployment that is stable. @@ -487,131 +485,131 @@ rolled back. Follow the steps given below to check the rollout history: 1. First, check the revisions of this Deployment: - ```shell - kubectl rollout history deployment/nginx-deployment - ``` - The output is similar to this: - ``` - deployments "nginx-deployment" - REVISION CHANGE-CAUSE - 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml - 2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 - 3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161 - ``` - - `CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the`CHANGE-CAUSE` message by: - - * Annotating the Deployment with `kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"` - * Manually editing the manifest of the resource. + ```shell + kubectl rollout history deployment/nginx-deployment + ``` + The output is similar to this: + ``` + deployments "nginx-deployment" + REVISION CHANGE-CAUSE + 1 kubectl apply --filename=https://k8s.io/examples/controllers/nginx-deployment.yaml + 2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 + 3 kubectl set image deployment/nginx-deployment nginx=nginx:1.161 + ``` + + `CHANGE-CAUSE` is copied from the Deployment annotation `kubernetes.io/change-cause` to its revisions upon creation. You can specify the `CHANGE-CAUSE` message by: + + * Annotating the Deployment with `kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="image updated to 1.16.1"` + * Manually editing the manifest of the resource. 2. To see the details of each revision, run: - ```shell - kubectl rollout history deployment/nginx-deployment --revision=2 - ``` - - The output is similar to this: - ``` - deployments "nginx-deployment" revision 2 - Labels: app=nginx - pod-template-hash=1159050644 - Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 - Containers: - nginx: - Image: nginx:1.16.1 - Port: 80/TCP - QoS Tier: - cpu: BestEffort - memory: BestEffort - Environment Variables: - No volumes. - ``` + ```shell + kubectl rollout history deployment/nginx-deployment --revision=2 + ``` + + The output is similar to this: + ``` + deployments "nginx-deployment" revision 2 + Labels: app=nginx + pod-template-hash=1159050644 + Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 + Containers: + nginx: + Image: nginx:1.16.1 + Port: 80/TCP + QoS Tier: + cpu: BestEffort + memory: BestEffort + Environment Variables: + No volumes. 
+ ``` ### Rolling Back to a Previous Revision Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2. 1. Now you've decided to undo the current rollout and roll back to the previous revision: - ```shell - kubectl rollout undo deployment/nginx-deployment - ``` + ```shell + kubectl rollout undo deployment/nginx-deployment + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment rolled back - ``` - Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`: + The output is similar to this: + ``` + deployment.apps/nginx-deployment rolled back + ``` + Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`: - ```shell - kubectl rollout undo deployment/nginx-deployment --to-revision=2 - ``` + ```shell + kubectl rollout undo deployment/nginx-deployment --to-revision=2 + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment rolled back - ``` + The output is similar to this: + ``` + deployment.apps/nginx-deployment rolled back + ``` - For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout). + For more details about rollout related commands, read [`kubectl rollout`](/docs/reference/generated/kubectl/kubectl-commands#rollout). - The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event - for rolling back to revision 2 is generated from Deployment controller. + The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event + for rolling back to revision 2 is generated from the Deployment controller. 2. To check if the rollback was successful and the Deployment is running as expected, run: - ```shell - kubectl get deployment nginx-deployment - ``` - - The output is similar to this: - ``` - NAME READY UP-TO-DATE AVAILABLE AGE - nginx-deployment 3/3 3 3 30m - ``` + ```shell + kubectl get deployment nginx-deployment + ``` + + The output is similar to this: + ``` + NAME READY UP-TO-DATE AVAILABLE AGE + nginx-deployment 3/3 3 3 30m + ``` 3. 
Get the description of the Deployment: - ```shell - kubectl describe deployment nginx-deployment - ``` - The output is similar to this: - ``` - Name: nginx-deployment - Namespace: default - CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 - Labels: app=nginx - Annotations: deployment.kubernetes.io/revision=4 - kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 - Selector: app=nginx - Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable - StrategyType: RollingUpdate - MinReadySeconds: 0 - RollingUpdateStrategy: 25% max unavailable, 25% max surge - Pod Template: - Labels: app=nginx - Containers: - nginx: - Image: nginx:1.16.1 - Port: 80/TCP - Host Port: 0/TCP - Environment: - Mounts: - Volumes: - Conditions: - Type Status Reason - ---- ------ ------ - Available True MinimumReplicasAvailable - Progressing True NewReplicaSetAvailable - OldReplicaSets: - NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) - Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 - Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 - Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 - Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 - Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 - Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 - Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0 - ``` + ```shell + kubectl describe deployment nginx-deployment + ``` + The output is similar to this: + ``` + Name: nginx-deployment + Namespace: default + CreationTimestamp: Sun, 02 Sep 2018 18:17:55 -0500 + Labels: app=nginx + Annotations: deployment.kubernetes.io/revision=4 + kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 + Selector: app=nginx + Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable + StrategyType: RollingUpdate + MinReadySeconds: 0 + RollingUpdateStrategy: 25% max unavailable, 25% max surge + Pod Template: + Labels: app=nginx + Containers: + nginx: + Image: nginx:1.16.1 + Port: 80/TCP + Host Port: 0/TCP + Environment: + Mounts: + Volumes: + Conditions: + Type Status Reason + ---- ------ ------ + Available True MinimumReplicasAvailable + Progressing True NewReplicaSetAvailable + OldReplicaSets: + NewReplicaSet: nginx-deployment-c4747d96c (3/3 replicas created) + Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal ScalingReplicaSet 12m deployment-controller Scaled up replica set nginx-deployment-75675f5897 to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 2 + Normal ScalingReplicaSet 11m 
deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 2 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 1 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-c4747d96c to 3 + Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set nginx-deployment-75675f5897 to 0 + Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set nginx-deployment-595696685f to 1 + Normal DeploymentRollback 15s deployment-controller Rolled back deployment "nginx-deployment" to revision 2 + Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-595696685f to 0 + ``` ## Scaling a Deployment @@ -658,26 +656,26 @@ For example, you are running a Deployment with 10 replicas, [maxSurge](#max-surg ``` * You update to a new image which happens to be unresolvable from inside the cluster. - ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:sometag - ``` + ```shell + kubectl set image deployment/nginx-deployment nginx=nginx:sometag + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment image updated - ``` + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` * The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it's blocked due to the `maxUnavailable` requirement that you mentioned above. Check out the rollout status: - ```shell - kubectl get rs - ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-deployment-1989198191 5 5 0 9s - nginx-deployment-618515232 8 8 8 1m - ``` + ```shell + kubectl get rs + ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-deployment-1989198191 5 5 0 9s + nginx-deployment-618515232 8 8 8 1m + ``` * Then a new scaling request for the Deployment comes along. The autoscaler increments the Deployment replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. 
If you weren't using @@ -741,103 +739,103 @@ apply multiple fixes in between pausing and resuming without triggering unnecess ``` * Pause by running the following command: - ```shell - kubectl rollout pause deployment/nginx-deployment - ``` + ```shell + kubectl rollout pause deployment/nginx-deployment + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment paused - ``` + The output is similar to this: + ``` + deployment.apps/nginx-deployment paused + ``` * Then update the image of the Deployment: - ```shell - kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 - ``` + ```shell + kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment image updated - ``` + The output is similar to this: + ``` + deployment.apps/nginx-deployment image updated + ``` * Notice that no new rollout started: - ```shell - kubectl rollout history deployment/nginx-deployment - ``` - - The output is similar to this: - ``` - deployments "nginx" - REVISION CHANGE-CAUSE - 1 - ``` + ```shell + kubectl rollout history deployment/nginx-deployment + ``` + + The output is similar to this: + ``` + deployments "nginx" + REVISION CHANGE-CAUSE + 1 + ``` * Get the rollout status to verify that the existing ReplicaSet has not changed: - ```shell - kubectl get rs - ``` + ```shell + kubectl get rs + ``` - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 3 3 3 2m - ``` + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 3 3 3 2m + ``` * You can make as many updates as you wish, for example, update the resources that will be used: - ```shell - kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi - ``` + ```shell + kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi + ``` - The output is similar to this: - ``` - deployment.apps/nginx-deployment resource requirements updated - ``` + The output is similar to this: + ``` + deployment.apps/nginx-deployment resource requirements updated + ``` - The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to - the Deployment will not have any effect as long as the Deployment rollout is paused. + The initial state of the Deployment prior to pausing its rollout will continue its function, but new updates to + the Deployment will not have any effect as long as the Deployment rollout is paused. * Eventually, resume the Deployment rollout and observe a new ReplicaSet coming up with all the new updates: - ```shell - kubectl rollout resume deployment/nginx-deployment - ``` - - The output is similar to this: - ``` - deployment.apps/nginx-deployment resumed - ``` + ```shell + kubectl rollout resume deployment/nginx-deployment + ``` + + The output is similar to this: + ``` + deployment.apps/nginx-deployment resumed + ``` * Watch the status of the rollout until it's done. 
- ```shell - kubectl get rs -w - ``` - - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 2 2 2 2m - nginx-3926361531 2 2 0 6s - nginx-3926361531 2 2 1 18s - nginx-2142116321 1 2 2 2m - nginx-2142116321 1 2 2 2m - nginx-3926361531 3 2 1 18s - nginx-3926361531 3 2 1 18s - nginx-2142116321 1 1 1 2m - nginx-3926361531 3 3 1 18s - nginx-3926361531 3 3 2 19s - nginx-2142116321 0 1 1 2m - nginx-2142116321 0 1 1 2m - nginx-2142116321 0 0 0 2m - nginx-3926361531 3 3 3 20s - ``` + ```shell + kubectl get rs -w + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 2 2 2 2m + nginx-3926361531 2 2 0 6s + nginx-3926361531 2 2 1 18s + nginx-2142116321 1 2 2 2m + nginx-2142116321 1 2 2 2m + nginx-3926361531 3 2 1 18s + nginx-3926361531 3 2 1 18s + nginx-2142116321 1 1 1 2m + nginx-3926361531 3 3 1 18s + nginx-3926361531 3 3 2 19s + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 1 1 2m + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 20s + ``` * Get the status of the latest rollout: - ```shell - kubectl get rs - ``` - - The output is similar to this: - ``` - NAME DESIRED CURRENT READY AGE - nginx-2142116321 0 0 0 2m - nginx-3926361531 3 3 3 28s - ``` + ```shell + kubectl get rs + ``` + + The output is similar to this: + ``` + NAME DESIRED CURRENT READY AGE + nginx-2142116321 0 0 0 2m + nginx-3926361531 3 3 3 28s + ``` {{< note >}} You cannot rollback a paused Deployment until you resume it. {{< /note >}} @@ -1084,9 +1082,9 @@ For general information about working with config files, see configuring containers, and [using kubectl to manage resources](/docs/concepts/overview/working-with-objects/object-management/) documents. When the control plane creates new Pods for a Deployment, the `.metadata.name` of the -Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid +Deployment is part of the basis for naming those Pods. The name of a Deployment must be a valid [DNS subdomain](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names) -value, but this can produce unexpected results for the Pod hostnames. For best compatibility, +value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a [DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names). @@ -1153,11 +1151,11 @@ the default value. All existing Pods are killed before new ones are created when `.spec.strategy.type==Recreate`. {{< note >}} -This will only guarantee Pod termination previous to creation for upgrades. If you upgrade a Deployment, all Pods -of the old revision will be terminated immediately. Successful removal is awaited before any Pod of the new -revision is created. If you manually delete a Pod, the lifecycle is controlled by the ReplicaSet and the -replacement will be created immediately (even if the old Pod is still in a Terminating state). If you need an -"at most" guarantee for your Pods, you should consider using a +This will only guarantee Pod termination previous to creation for upgrades. If you upgrade a Deployment, all Pods +of the old revision will be terminated immediately. Successful removal is awaited before any Pod of the new +revision is created. If you manually delete a Pod, the lifecycle is controlled by the ReplicaSet and the +replacement will be created immediately (even if the old Pod is still in a Terminating state). 
If you need an +"at most" guarantee for your Pods, you should consider using a [StatefulSet](/docs/concepts/workloads/controllers/statefulset/). {{< /note >}} diff --git a/content/en/docs/concepts/workloads/pods/_index.md b/content/en/docs/concepts/workloads/pods/_index.md index 29446151282f1..e31978f7fa193 100644 --- a/content/en/docs/concepts/workloads/pods/_index.md +++ b/content/en/docs/concepts/workloads/pods/_index.md @@ -296,14 +296,14 @@ Your {{< glossary_tooltip text="container runtime" term_id="container-runtime" > Any container in a pod can run in privileged mode to use operating system administrative capabilities that would otherwise be inaccessible. This is available for both Windows and Linux. -### Linux priviledged containers +### Linux privileged containers In Linux, any container in a Pod can enable privileged mode using the `privileged` (Linux) flag on the [security context](/docs/tasks/configure-pod-container/security-context/) of the container spec. This is useful for containers that want to use operating system administrative capabilities such as manipulating the network stack or accessing hardware devices. -### Windows priviledged containers +### Windows privileged containers {{< feature-state for_k8s_version="v1.26" state="stable" >}} diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 651da5e946148..dce6eb291ff30 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -459,12 +459,8 @@ Do | Don't Update the title in the front matter of the page or blog post. | Use first level heading, as Hugo automatically converts the title in the front matter of the page into a first-level heading. Use ordered headings to provide a meaningful high-level outline of your content. | Use headings level 4 through 6, unless it is absolutely necessary. If your content is that detailed, it may need to be broken into separate articles. Use pound or hash signs (`#`) for non-blog post content. | Use underlines (`---` or `===`) to designate first-level headings. -Use sentence case for headings in the page body. For example, -**Extend kubectl with plugins** | Use title case for headings in the page body. For example, **Extend Kubectl With Plugins** -Use title case for the page title in the front matter. For example, -`title: Kubernetes API Server Bypass Risks` | Use sentence case for page titles -in the front matter. For example, don't use -`title: Kubernetes API server bypass risks` +Use sentence case for headings in the page body. For example, **Extend kubectl with plugins** | Use title case for headings in the page body. For example, **Extend Kubectl With Plugins** +Use title case for the page title in the front matter. For example, `title: Kubernetes API Server Bypass Risks` | Use sentence case for page titles in the front matter. For example, don't use `title: Kubernetes API server bypass risks` {{< /table >}} ### Paragraphs diff --git a/content/en/docs/reference/_index.md b/content/en/docs/reference/_index.md index 05db47b7b46e3..a24535ba0ce98 100644 --- a/content/en/docs/reference/_index.md +++ b/content/en/docs/reference/_index.md @@ -89,6 +89,7 @@ operator to use or manage a cluster. 
* [kube-scheduler configuration (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2/), [kube-scheduler configuration (v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/) and [kube-scheduler configuration (v1)](/docs/reference/config-api/kube-scheduler-config.v1/) +* [kube-controller-manager configuration (v1alpha1)](/docs/reference/config-api/kube-controller-manager-config.v1alpha1/) * [kube-proxy configuration (v1alpha1)](/docs/reference/config-api/kube-proxy-config.v1alpha1/) * [`audit.k8s.io/v1` API](/docs/reference/config-api/apiserver-audit.v1/) * [Client authentication API (v1beta1)](/docs/reference/config-api/client-authentication.v1beta1/) and diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index f58fe099d9577..9d1b17796daff 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -107,7 +107,7 @@ CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultI {{< note >}} The [`ValidatingAdmissionPolicy`](#validatingadmissionpolicy) admission plugin is enabled -by default, but is only active if you enable the the `ValidatingAdmissionPolicy` +by default, but is only active if you enable the `ValidatingAdmissionPolicy` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) **and** the `admissionregistration.k8s.io/v1alpha1` API. {{< /note >}} diff --git a/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md new file mode 100644 index 0000000000000..4ec29226a5d0c --- /dev/null +++ b/content/en/docs/reference/config-api/kube-controller-manager-config.v1alpha1.md @@ -0,0 +1,1811 @@ +--- +title: kube-controller-manager Configuration (v1alpha1) +content_type: tool-reference +package: controllermanager.config.k8s.io/v1alpha1 +auto_generated: true +--- + + +## Resource Types + + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + + + +## `ControllerLeaderConfiguration` {#controllermanager-config-k8s-io-v1alpha1-ControllerLeaderConfiguration} + + +**Appears in:** + +- [LeaderMigrationConfiguration](#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration) + + +

ControllerLeaderConfiguration provides the configuration for a migrating leader lock.

+ + + + + + + + + + + + + + +
FieldDescription
name [Required]
+string +
+

Name is the name of the controller being migrated, +e.g. service-controller, route-controller, cloud-node-controller, etc.

+
component [Required]
+string +
+

Component is the name of the component in which the controller should be running, +e.g. kube-controller-manager, cloud-controller-manager, etc., +or '*', meaning the controller can be run under any component that participates in the migration.

+
+ +## `GenericControllerManagerConfiguration` {#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

GenericControllerManagerConfiguration holds configuration for a generic controller-manager.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldDescription
Port [Required]
+int32 +
+

port is the port that the controller-manager's HTTP service runs on.

+
Address [Required]
+string +
+

address is the IP address to serve on (set to 0.0.0.0 for all interfaces).

+
MinResyncPeriod [Required]
+meta/v1.Duration +
+

minResyncPeriod is the resync period in reflectors; will be random between +minResyncPeriod and 2*minResyncPeriod.

+
ClientConnection [Required]
+ClientConnectionConfiguration +
+

ClientConnection specifies the kubeconfig file and client connection +settings for the proxy server to use when communicating with the apiserver.

+
ControllerStartInterval [Required]
+meta/v1.Duration +
+

How long to wait between starting controller managers

+
LeaderElection [Required]
+LeaderElectionConfiguration +
+

leaderElection defines the configuration of leader election client.

+
Controllers [Required]
+[]string +
+

Controllers is the list of controllers to enable or disable: +'*' means "all enabled-by-default controllers", +'foo' means "enable 'foo'", +'-foo' means "disable 'foo'"; +the first item for a particular name wins.

+
Debugging [Required]
+DebuggingConfiguration +
+

DebuggingConfiguration holds configuration for Debugging related features.

+
LeaderMigrationEnabled [Required]
+bool +
+

LeaderMigrationEnabled indicates whether Leader Migration should be enabled for the controller manager.

+
LeaderMigration [Required]
+LeaderMigrationConfiguration +
+

LeaderMigration holds the configuration for Leader Migration.

+
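+
+As a hedged illustration of the `Controllers` enable/disable syntax described above; the
+controller name `foo` and the YAML field casing are assumptions for the example, not values
+taken from a rendered configuration:
+
+```yaml
+# Enable every controller that is on by default, then disable the
+# hypothetical controller named 'foo'. The first item for a particular
+# name wins, so a later "foo" entry could not re-enable it.
+Controllers:
+  - "*"
+  - "-foo"
+```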
+ +## `LeaderMigrationConfiguration` {#controllermanager-config-k8s-io-v1alpha1-LeaderMigrationConfiguration} + + +**Appears in:** + +- [GenericControllerManagerConfiguration](#controllermanager-config-k8s-io-v1alpha1-GenericControllerManagerConfiguration) + + +

LeaderMigrationConfiguration provides versioned configuration for all migrating leader locks.

+ + + + + + + + + + + + + + + + + +
Field | Description
leaderName [Required]
+string +
+

LeaderName is the name of the leader election resource that protects the migration, +e.g. 1-20-KCM-to-1-21-CCM.

+
resourceLock [Required]
+string +
+

ResourceLock indicates the resource object type that will be used to lock. +It should be either "leases" or "endpoints".

+
controllerLeaders [Required]
+[]ControllerLeaderConfiguration +
+

ControllerLeaders contains a list of migrating leader lock configurations

+
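+
+Putting these fields together, a minimal manifest might look like the sketch below. The
+values are lifted from the field examples in this document; treat the exact `apiVersion`
+and controller names as assumptions rather than a tested configuration:
+
+```yaml
+apiVersion: controllermanager.config.k8s.io/v1alpha1
+kind: LeaderMigrationConfiguration
+leaderName: 1-20-KCM-to-1-21-CCM   # leader election resource protecting the migration
+resourceLock: leases               # must be "leases" or "endpoints"
+controllerLeaders:
+  - name: route-controller
+    component: "*"                 # may run under any participating component
+  - name: service-controller
+    component: cloud-controller-manager
+```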
+ + + + +## `KubeControllerManagerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration} + + + +

KubeControllerManagerConfiguration contains elements describing kube-controller manager.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
apiVersion
string
kubecontrollermanager.config.k8s.io/v1alpha1
kind
string
KubeControllerManagerConfiguration
Generic [Required]
+GenericControllerManagerConfiguration +
+

Generic holds configuration for a generic controller-manager

+
KubeCloudShared [Required]
+KubeCloudSharedConfiguration +
+

KubeCloudSharedConfiguration holds configuration for features shared +between the cloud controller manager and the kube-controller manager.

+
AttachDetachController [Required]
+AttachDetachControllerConfiguration +
+

AttachDetachControllerConfiguration holds configuration for +AttachDetachController related features.

+
CSRSigningController [Required]
+CSRSigningControllerConfiguration +
+

CSRSigningControllerConfiguration holds configuration for +CSRSigningController related features.

+
DaemonSetController [Required]
+DaemonSetControllerConfiguration +
+

DaemonSetControllerConfiguration holds configuration for DaemonSetController +related features.

+
DeploymentController [Required]
+DeploymentControllerConfiguration +
+

DeploymentControllerConfiguration holds configuration for +DeploymentController related features.

+
StatefulSetController [Required]
+StatefulSetControllerConfiguration +
+

StatefulSetControllerConfiguration holds configuration for +StatefulSetController related features.

+
DeprecatedController [Required]
+DeprecatedControllerConfiguration +
+

DeprecatedControllerConfiguration holds configuration for some deprecated +features.

+
EndpointController [Required]
+EndpointControllerConfiguration +
+

EndpointControllerConfiguration holds configuration for EndpointController +related features.

+
EndpointSliceController [Required]
+EndpointSliceControllerConfiguration +
+

EndpointSliceControllerConfiguration holds configuration for +EndpointSliceController related features.

+
EndpointSliceMirroringController [Required]
+EndpointSliceMirroringControllerConfiguration +
+

EndpointSliceMirroringControllerConfiguration holds configuration for +EndpointSliceMirroringController related features.

+
EphemeralVolumeController [Required]
+EphemeralVolumeControllerConfiguration +
+

EphemeralVolumeControllerConfiguration holds configuration for EphemeralVolumeController +related features.

+
GarbageCollectorController [Required]
+GarbageCollectorControllerConfiguration +
+

GarbageCollectorControllerConfiguration holds configuration for +GarbageCollectorController related features.

+
HPAController [Required]
+HPAControllerConfiguration +
+

HPAControllerConfiguration holds configuration for HPAController related features.

+
JobController [Required]
+JobControllerConfiguration +
+

JobControllerConfiguration holds configuration for JobController related features.

+
CronJobController [Required]
+CronJobControllerConfiguration +
+

CronJobControllerConfiguration holds configuration for CronJobController related features.

+
NamespaceController [Required]
+NamespaceControllerConfiguration +
+

NamespaceControllerConfiguration holds configuration for NamespaceController +related features.

+
NodeIPAMController [Required]
+NodeIPAMControllerConfiguration +
+

NodeIPAMControllerConfiguration holds configuration for NodeIPAMController +related features.

+
NodeLifecycleController [Required]
+NodeLifecycleControllerConfiguration +
+

NodeLifecycleControllerConfiguration holds configuration for +NodeLifecycleController related features.

+
PersistentVolumeBinderController [Required]
+PersistentVolumeBinderControllerConfiguration +
+

PersistentVolumeBinderControllerConfiguration holds configuration for +PersistentVolumeBinderController related features.

+
PodGCController [Required]
+PodGCControllerConfiguration +
+

PodGCControllerConfiguration holds configuration for PodGCController +related features.

+
ReplicaSetController [Required]
+ReplicaSetControllerConfiguration +
+

ReplicaSetControllerConfiguration holds configuration for ReplicaSet related features.

+
ReplicationController [Required]
+ReplicationControllerConfiguration +
+

ReplicationControllerConfiguration holds configuration for +ReplicationController related features.

+
ResourceQuotaController [Required]
+ResourceQuotaControllerConfiguration +
+

ResourceQuotaControllerConfiguration holds configuration for +ResourceQuotaController related features.

+
SAController [Required]
+SAControllerConfiguration +
+

SAControllerConfiguration holds configuration for ServiceAccountController +related features.

+
ServiceController [Required]
+ServiceControllerConfiguration +
+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

+
TTLAfterFinishedController [Required]
+TTLAfterFinishedControllerConfiguration +
+

TTLAfterFinishedControllerConfiguration holds configuration for +TTLAfterFinishedController related features.

+
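+
+The `apiVersion` and `kind` rows above identify the configuration file itself. A skeletal,
+hypothetical file that sets only a couple of the generic fields might look like this (field
+casing follows the Go field names listed in the tables; values are placeholders):
+
+```yaml
+apiVersion: kubecontrollermanager.config.k8s.io/v1alpha1
+kind: KubeControllerManagerConfiguration
+Generic:
+  Address: 0.0.0.0       # serve on all interfaces
+  Controllers: ["*"]     # all controllers that are enabled by default
+```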
+ +## `AttachDetachControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-AttachDetachControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

AttachDetachControllerConfiguration contains elements describing AttachDetachController.

+ + + + + + + + + + + + + + +
Field | Description
DisableAttachDetachReconcilerSync [Required]
+bool +
+

Reconciler runs a periodic loop to reconcile the desired state of the world with +the actual state of the world by triggering attach/detach operations. +This flag enables or disables the reconciler. It is false by default, which means reconciliation is enabled.

+
ReconcilerSyncLoopPeriod [Required]
+meta/v1.Duration +
+

ReconcilerSyncLoopPeriod is the amount of time the reconciler sync states loop +waits between successive executions. It is set to 5 sec by default.

+
+ +## `CSRSigningConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningConfiguration} + + +**Appears in:** + +- [CSRSigningControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration) + + +

CSRSigningConfiguration holds information about a particular CSR signer

+ + + + + + + + + + + + + + +
Field | Description
CertFile [Required]
+string +
+

certFile is the filename containing a PEM-encoded +X509 CA certificate used to issue certificates

+
KeyFile [Required]
+string +
+

keyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue certificates

+
+ +## `CSRSigningControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CSRSigningControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

CSRSigningControllerConfiguration contains elements describing CSRSigningController.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
ClusterSigningCertFile [Required]
+string +
+

clusterSigningCertFile is the filename containing a PEM-encoded +X509 CA certificate used to issue cluster-scoped certificates

+
ClusterSigningKeyFile [Required]
+string +
+

clusterSigningKeyFile is the filename containing a PEM-encoded +RSA or ECDSA private key used to issue cluster-scoped certificates

+
KubeletServingSignerConfiguration [Required]
+CSRSigningConfiguration +
+

kubeletServingSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kubelet-serving signer

+
KubeletClientSignerConfiguration [Required]
+CSRSigningConfiguration +
+

kubeletClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client-kubelet signer

+
KubeAPIServerClientSignerConfiguration [Required]
+CSRSigningConfiguration +
+

kubeAPIServerClientSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/kube-apiserver-client signer

+
LegacyUnknownSignerConfiguration [Required]
+CSRSigningConfiguration +
+

legacyUnknownSignerConfiguration holds the certificate and key used to issue certificates for the kubernetes.io/legacy-unknown signer

+
ClusterSigningDuration [Required]
+meta/v1.Duration +
+

clusterSigningDuration is the maximum length of duration that signed certificates will be given. +Individual CSRs may request shorter certs by setting spec.expirationSeconds.

+
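+
+As a sketch of how the per-signer blocks compose (all file paths are placeholders, and the
+field casing mirrors the Go field names above):
+
+```yaml
+CSRSigningController:
+  ClusterSigningCertFile: /etc/kubernetes/pki/ca.crt        # fallback CA for signers
+  ClusterSigningKeyFile: /etc/kubernetes/pki/ca.key
+  KubeletServingSignerConfiguration:                        # override for one signer
+    CertFile: /etc/kubernetes/pki/kubelet-serving-ca.crt    # hypothetical path
+    KeyFile: /etc/kubernetes/pki/kubelet-serving-ca.key     # hypothetical path
+```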
+ +## `CronJobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-CronJobControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

CronJobControllerConfiguration contains elements describing CronJob2Controller.

+ + + + + + + + + + + +
Field | Description
ConcurrentCronJobSyncs [Required]
+int32 +
+

concurrentCronJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

+
+ +## `DaemonSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DaemonSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

DaemonSetControllerConfiguration contains elements describing DaemonSetController.

+ + + + + + + + + + + +
Field | Description
ConcurrentDaemonSetSyncs [Required]
+int32 +
+

concurrentDaemonSetSyncs is the number of daemonset objects that are +allowed to sync concurrently. Larger number = more responsive daemonset, +but more CPU (and network) load.

+
+ +## `DeploymentControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeploymentControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

DeploymentControllerConfiguration contains elements describing DeploymentController.

+ + + + + + + + + + + +
Field | Description
ConcurrentDeploymentSyncs [Required]
+int32 +
+

concurrentDeploymentSyncs is the number of deployment objects that are +allowed to sync concurrently. Larger number = more responsive deployments, +but more CPU (and network) load.

+
+ +## `DeprecatedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-DeprecatedControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

DeprecatedControllerConfiguration contains elements that are deprecated.

+ + + + +## `EndpointControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EndpointControllerConfiguration contains elements describing EndpointController.

+ + + + + + + + + + + + + + +
Field | Description
ConcurrentEndpointSyncs [Required]
+int32 +
+

concurrentEndpointSyncs is the number of endpoint syncing operations +that will be done concurrently. Larger number = faster endpoint updating, +but more CPU (and network) load.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

+
+ +## `EndpointSliceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EndpointSliceControllerConfiguration contains elements describing +EndpointSliceController.

+ + + + + + + + + + + + + + + + + +
Field | Description
ConcurrentServiceEndpointSyncs [Required]
+int32 +
+

concurrentServiceEndpointSyncs is the number of service endpoint syncing +operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

+
MaxEndpointsPerSlice [Required]
+int32 +
+

maxEndpointsPerSlice is the maximum number of endpoints that will be +added to an EndpointSlice. More endpoints per slice will result in fewer +and larger endpoint slices, but larger resources.

+
EndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

EndpointUpdatesBatchPeriod describes the length of endpoint updates batching period. +Processing of pod changes will be delayed by this duration to join them with potential +upcoming updates and reduce the overall number of endpoints updates.

+
+ +## `EndpointSliceMirroringControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EndpointSliceMirroringControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EndpointSliceMirroringControllerConfiguration contains elements describing +EndpointSliceMirroringController.

+ + + + + + + + + + + + + + + + + +
Field | Description
MirroringConcurrentServiceEndpointSyncs [Required]
+int32 +
+

mirroringConcurrentServiceEndpointSyncs is the number of service endpoint +syncing operations that will be done concurrently. Larger number = faster +endpoint slice updating, but more CPU (and network) load.

+
MirroringMaxEndpointsPerSubset [Required]
+int32 +
+

mirroringMaxEndpointsPerSubset is the maximum number of endpoints that +will be mirrored to an EndpointSlice for an EndpointSubset.

+
MirroringEndpointUpdatesBatchPeriod [Required]
+meta/v1.Duration +
+

mirroringEndpointUpdatesBatchPeriod can be used to batch EndpointSlice +updates. All updates triggered by EndpointSlice changes will be delayed +by up to 'mirroringEndpointUpdatesBatchPeriod'. If other addresses in the +same Endpoints resource change in that period, they will be batched to a +single EndpointSlice update. Default 0 value means that each Endpoints +update triggers an EndpointSlice update.

+
+ +## `EphemeralVolumeControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-EphemeralVolumeControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

EphemeralVolumeControllerConfiguration contains elements describing EphemeralVolumeController.

+ + + + + + + + + + + +
Field | Description
ConcurrentEphemeralVolumeSyncs [Required]
+int32 +
+

ConcurrentEphemeralVolumeSyncs is the number of ephemeral volume syncing operations +that will be done concurrently. Larger number = faster ephemeral volume updating, +but more CPU (and network) load.

+
+ +## `GarbageCollectorControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

GarbageCollectorControllerConfiguration contains elements describing GarbageCollectorController.

+ + + + + + + + + + + + + + + + + +
Field | Description
EnableGarbageCollector [Required]
+bool +
+

enableGarbageCollector enables the generic garbage collector. It MUST be synced with the +corresponding flag of the kube-apiserver. WARNING: the generic garbage +collector is an alpha feature.

+
ConcurrentGCSyncs [Required]
+int32 +
+

concurrentGCSyncs is the number of garbage collector workers that are +allowed to sync concurrently.

+
GCIgnoredResources [Required]
+[]GroupResource +
+

gcIgnoredResources is the list of GroupResources that garbage collection should ignore.

+
+ +## `GroupResource` {#kubecontrollermanager-config-k8s-io-v1alpha1-GroupResource} + + +**Appears in:** + +- [GarbageCollectorControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-GarbageCollectorControllerConfiguration) + + +

GroupResource describes a group resource.

+ + + + + + + + + + + + + + +
Field | Description
Group [Required]
+string +
+

group is the group portion of the GroupResource.

+
Resource [Required]
+string +
+

resource is the resource portion of the GroupResource.

+
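+
+A sketch of how a GroupResource pair feeds into `GCIgnoredResources` (the group/resource
+values are illustrative, not defaults quoted from this document):
+
+```yaml
+GarbageCollectorController:
+  EnableGarbageCollector: true
+  GCIgnoredResources:
+    - Group: ""          # the core API group
+      Resource: events   # skip garbage collection for Events
+```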
+ +## `HPAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-HPAControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

HPAControllerConfiguration contains elements describing HPAController.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
ConcurrentHorizontalPodAutoscalerSyncs [Required]
+int32 +
+

ConcurrentHorizontalPodAutoscalerSyncs is the number of HPA objects that are allowed to sync concurrently. +Larger number = more responsive HPA processing, but more CPU (and network) load.

+
HorizontalPodAutoscalerSyncPeriod [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerSyncPeriod is the period for syncing the number of +pods in horizontal pod autoscaler.

+
HorizontalPodAutoscalerUpscaleForbiddenWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerUpscaleForbiddenWindow is the period after which the next upscale is allowed.

+
HorizontalPodAutoscalerDownscaleStabilizationWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerDownscaleStabilizationWindow is the period for which the autoscaler will look +backwards and not scale down below any recommendation it made during that period.

+
HorizontalPodAutoscalerDownscaleForbiddenWindow [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerDownscaleForbiddenWindow is the period after which the next downscale is allowed.

+
HorizontalPodAutoscalerTolerance [Required]
+float64 +
+

HorizontalPodAutoscalerTolerance is the tolerance applied when deciding whether +resource usage calls for upscaling or downscaling.

+
HorizontalPodAutoscalerCPUInitializationPeriod [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerCPUInitializationPeriod is the period after pod start when CPU samples +might be skipped.

+
HorizontalPodAutoscalerInitialReadinessDelay [Required]
+meta/v1.Duration +
+

HorizontalPodAutoscalerInitialReadinessDelay is the period after pod start during which readiness +changes are treated as readiness being set for the first time. The only effect of this is that +the HPA will disregard CPU samples from unready pods whose last readiness change occurred during that +period.

+
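+
+A hedged sketch of the HPA timing knobs expressed as configuration; the durations mirror
+commonly cited kube-controller-manager flag defaults, which can differ between releases:
+
+```yaml
+HPAController:
+  HorizontalPodAutoscalerSyncPeriod: 15s
+  HorizontalPodAutoscalerDownscaleStabilizationWindow: 5m0s
+  HorizontalPodAutoscalerTolerance: 0.1    # ignore usage within 10% of the target
+  HorizontalPodAutoscalerCPUInitializationPeriod: 5m0s
+  HorizontalPodAutoscalerInitialReadinessDelay: 30s
+```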
+ +## `JobControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-JobControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

JobControllerConfiguration contains elements describing JobController.

+ + + + + + + + + + + +
Field | Description
ConcurrentJobSyncs [Required]
+int32 +
+

concurrentJobSyncs is the number of job objects that are +allowed to sync concurrently. Larger number = more responsive jobs, +but more CPU (and network) load.

+
+ +## `NamespaceControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NamespaceControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NamespaceControllerConfiguration contains elements describing NamespaceController.

+ + + + + + + + + + + + + + +
Field | Description
NamespaceSyncPeriod [Required]
+meta/v1.Duration +
+

namespaceSyncPeriod is the period for syncing namespace life-cycle +updates.

+
ConcurrentNamespaceSyncs [Required]
+int32 +
+

concurrentNamespaceSyncs is the number of namespace objects that are +allowed to sync concurrently.

+
+ +## `NodeIPAMControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeIPAMControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NodeIPAMControllerConfiguration contains elements describing NodeIpamController.

+ + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
ServiceCIDR [Required]
+string +
+

serviceCIDR is the CIDR range for Services in the cluster.

+
SecondaryServiceCIDR [Required]
+string +
+

secondaryServiceCIDR is the CIDR range for Services in the cluster. This is used in dual-stack clusters. SecondaryServiceCIDR must be of a different IP family than ServiceCIDR.

+
NodeCIDRMaskSize [Required]
+int32 +
+

NodeCIDRMaskSize is the mask size for the node CIDR in the cluster.

+
NodeCIDRMaskSizeIPv4 [Required]
+int32 +
+

NodeCIDRMaskSizeIPv4 is the mask size for the IPv4 node CIDR in a dual-stack cluster.

+
NodeCIDRMaskSizeIPv6 [Required]
+int32 +
+

NodeCIDRMaskSizeIPv6 is the mask size for the IPv6 node CIDR in a dual-stack cluster.

+
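+
+To make the mask-size fields concrete: with a /16 cluster CIDR and `NodeCIDRMaskSizeIPv4: 24`,
+each Node receives one /24, i.e. room for 256 pod addresses. A sketch with illustrative values:
+
+```yaml
+NodeIPAMController:
+  ServiceCIDR: 10.96.0.0/12    # illustrative Service CIDR
+  NodeCIDRMaskSizeIPv4: 24     # one /24 per node (dual-stack IPv4)
+  NodeCIDRMaskSizeIPv6: 64     # one /64 per node (dual-stack IPv6)
+```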
+ +## `NodeLifecycleControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-NodeLifecycleControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
EnableTaintManager [Required]
+bool +
+

If set to true, enables NoExecute taints and will evict all non-tolerating +Pods running on Nodes tainted with this kind of taint.

+
NodeEvictionRate [Required]
+float32 +
+

nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy

+
SecondaryNodeEvictionRate [Required]
+float32 +
+

secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy

+
NodeStartupGracePeriod [Required]
+meta/v1.Duration +
+

nodeStartupGracePeriod is the amount of time that we allow a starting node to +be unresponsive before marking it unhealthy.

+
NodeMonitorGracePeriod [Required]
+meta/v1.Duration +
+

nodeMonitorGracePeriod is the amount of time that we allow a running node to be +unresponsive before marking it unhealthy. It must be N times more than the kubelet's +nodeStatusUpdateFrequency, where N is the number of retries allowed for the kubelet +to post node status.

+
PodEvictionTimeout [Required]
+meta/v1.Duration +
+

podEvictionTimeout is the grace period for deleting pods on failed nodes.

+
LargeClusterSizeThreshold [Required]
+int32 +
+

largeClusterSizeThreshold is the cluster size at or below which secondaryNodeEvictionRate is implicitly overridden to 0.

+
UnhealthyZoneThreshold [Required]
+float32 +
+

A zone is treated as unhealthy for nodeEvictionRate and secondaryNodeEvictionRate purposes when at least +unhealthyZoneThreshold (no fewer than 3) of the Nodes in the zone are NotReady.

+
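+
+As a worked illustration of the eviction knobs (the numbers are examples, not defaults taken
+from this document): with `NodeEvictionRate: 0.1`, pods are evicted from at most one failed
+Node every 10 seconds while the zone is healthy; once the NotReady fraction in a zone reaches
+`UnhealthyZoneThreshold`, the controller slows down to `SecondaryNodeEvictionRate` instead:
+
+```yaml
+NodeLifecycleController:
+  NodeEvictionRate: 0.1            # ~1 node drained per 10s in a healthy zone
+  SecondaryNodeEvictionRate: 0.01  # ~1 node drained per 100s in an unhealthy zone
+  UnhealthyZoneThreshold: 0.55     # zone unhealthy when >55% of its Nodes are NotReady
+```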
+ +## `PersistentVolumeBinderControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

PersistentVolumeBinderControllerConfiguration contains elements describing +PersistentVolumeBinderController.

+ + + + + + + + + + + + + + + + + + + + +
Field | Description
PVClaimBinderSyncPeriod [Required]
+meta/v1.Duration +
+

pvClaimBinderSyncPeriod is the period for syncing persistent volumes +and persistent volume claims.

+
VolumeConfiguration [Required]
+VolumeConfiguration +
+

volumeConfiguration holds configuration for volume related features.

+
VolumeHostCIDRDenylist [Required]
+[]string +
+

VolumeHostCIDRDenylist is a list of CIDRs that should not be reachable by the +controller from plugins.

+
VolumeHostAllowLocalLoopback [Required]
+bool +
+

VolumeHostAllowLocalLoopback indicates if local loopback hosts (127.0.0.1, etc) +should be allowed from plugins.

+
+ +## `PersistentVolumeRecyclerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeRecyclerConfiguration} + + +**Appears in:** + +- [VolumeConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration) + + +

PersistentVolumeRecyclerConfiguration contains elements describing persistent volume plugins.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
MaximumRetry [Required]
+int32 +
+

maximumRetry is the number of retries the PV recycler will execute on failure to recycle a +PV.

+
MinimumTimeoutNFS [Required]
+int32 +
+

minimumTimeoutNFS is the minimum ActiveDeadlineSeconds to use for an NFS Recycler +pod.

+
PodTemplateFilePathNFS [Required]
+string +
+

podTemplateFilePathNFS is the file path to a pod definition used as a template for +NFS persistent volume recycling

+
IncrementTimeoutNFS [Required]
+int32 +
+

incrementTimeoutNFS is the increment of time added per Gi to ActiveDeadlineSeconds +for an NFS scrubber pod.

+
PodTemplateFilePathHostPath [Required]
+string +
+

podTemplateFilePathHostPath is the file path to a pod definition used as a template for +HostPath persistent volume recycling. This is for development and testing only and +will not work in a multi-node cluster.

+
MinimumTimeoutHostPath [Required]
+int32 +
+

minimumTimeoutHostPath is the minimum ActiveDeadlineSeconds to use for a HostPath +Recycler pod. This is for development and testing only and will not work in a multi-node +cluster.

+
IncrementTimeoutHostPath [Required]
+int32 +
+

incrementTimeoutHostPath is the increment of time added per Gi to ActiveDeadlineSeconds +for a HostPath scrubber pod. This is for development and testing only and will not work +in a multi-node cluster.

+
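+
+Reading the NFS fields together suggests a recycler pod deadline of roughly
+`MinimumTimeoutNFS + IncrementTimeoutNFS × volumeSizeGi`; this formula is inferred from the
+field descriptions above rather than quoted from the implementation. For example, with
+`MinimumTimeoutNFS: 300` and `IncrementTimeoutNFS: 30`, recycling a 10 Gi NFS volume would
+get an ActiveDeadlineSeconds of about 300 + 30 × 10 = 600 seconds.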
+ +## `PodGCControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-PodGCControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

PodGCControllerConfiguration contains elements describing PodGCController.

+ + + + + + + + + + + +
Field | Description
TerminatedPodGCThreshold [Required]
+int32 +
+

terminatedPodGCThreshold is the number of terminated pods that can exist +before the terminated pod garbage collector starts deleting terminated pods. +If <= 0, the terminated pod garbage collector is disabled.

+
+ +## `ReplicaSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicaSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ReplicaSetControllerConfiguration contains elements describing ReplicaSetController.

+ + + + + + + + + + + +
Field | Description
ConcurrentRSSyncs [Required]
+int32 +
+

concurrentRSSyncs is the number of replica sets that are allowed to sync +concurrently. Larger number = more responsive replica management, but more +CPU (and network) load.

+
+ +## `ReplicationControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ReplicationControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ReplicationControllerConfiguration contains elements describing ReplicationController.

+ + + + + + + + + + + +
Field | Description
ConcurrentRCSyncs [Required]
+int32 +
+

concurrentRCSyncs is the number of replication controllers that are +allowed to sync concurrently. Larger number = more responsive replica +management, but more CPU (and network) load.

+
+ +## `ResourceQuotaControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-ResourceQuotaControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ResourceQuotaControllerConfiguration contains elements describing ResourceQuotaController.

+ + + + + + + + + + + + + + +
Field | Description
ResourceQuotaSyncPeriod [Required]
+meta/v1.Duration +
+

resourceQuotaSyncPeriod is the period for syncing quota usage status +in the system.

+
ConcurrentResourceQuotaSyncs [Required]
+int32 +
+

concurrentResourceQuotaSyncs is the number of resource quotas that are +allowed to sync concurrently. Larger number = more responsive quota +management, but more CPU (and network) load.

+
+ +## `SAControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-SAControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

SAControllerConfiguration contains elements describing ServiceAccountController.

+ + + + + + + + + + + + + + + + + +
Field | Description
ServiceAccountKeyFile [Required]
+string +
+

serviceAccountKeyFile is the filename containing a PEM-encoded private RSA key +used to sign service account tokens.

+
ConcurrentSATokenSyncs [Required]
+int32 +
+

concurrentSATokenSyncs is the number of service account token syncing operations +that will be done concurrently.

+
RootCAFile [Required]
+string +
+

rootCAFile is the root certificate authority that will be included in the service +account's token secret. This must be a valid PEM-encoded CA bundle.

+
+ +## `StatefulSetControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-StatefulSetControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

StatefulSetControllerConfiguration contains elements describing StatefulSetController.

+ + + + + + + + + + + +
Field | Description
ConcurrentStatefulSetSyncs [Required]
+int32 +
+

concurrentStatefulSetSyncs is the number of statefulset objects that are +allowed to sync concurrently. Larger number = more responsive statefulsets, +but more CPU (and network) load.

+
+ +## `TTLAfterFinishedControllerConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-TTLAfterFinishedControllerConfiguration} + + +**Appears in:** + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

TTLAfterFinishedControllerConfiguration contains elements describing TTLAfterFinishedController.

+ + + + + + + + + + + +
Field | Description
ConcurrentTTLSyncs [Required]
+int32 +
+

concurrentTTLSyncs is the number of TTL-after-finished collector workers that are +allowed to sync concurrently.

+
+ +## `VolumeConfiguration` {#kubecontrollermanager-config-k8s-io-v1alpha1-VolumeConfiguration} + + +**Appears in:** + +- [PersistentVolumeBinderControllerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-PersistentVolumeBinderControllerConfiguration) + + +

VolumeConfiguration contains all enumerated flags meant to configure all volume +plugins. From this config, the controller-manager binary will create many instances of +volume.VolumeConfig, each containing only the configuration needed for that plugin, which +is then passed to the appropriate plugin. The ControllerManager binary is the only part +of the code which knows what plugins are supported and which flags correspond to each plugin.

+ + + + + + + + + + + + + + + + + + + + +
Field | Description
EnableHostPathProvisioning [Required]
+bool +
+

enableHostPathProvisioning enables HostPath PV provisioning when running without a +cloud provider. This allows testing and development of provisioning features. HostPath +provisioning is not supported in any way, won't work in a multi-node cluster, and +should not be used for anything other than testing or development.

+
EnableDynamicProvisioning [Required]
+bool +
+

enableDynamicProvisioning enables the provisioning of volumes when running within an environment +that supports dynamic provisioning. Defaults to true.

+
PersistentVolumeRecyclerConfiguration [Required]
+PersistentVolumeRecyclerConfiguration +
+

persistentVolumeRecyclerConfiguration holds configuration for persistent volume plugins.

+
FlexVolumePluginDir [Required]
+string +
+

flexVolumePluginDir is the full path of the directory in which the flex +volume plugin should search for additional third-party volume plugins

+
+ + + + +## `ServiceControllerConfiguration` {#ServiceControllerConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

ServiceControllerConfiguration contains elements describing ServiceController.

+ + + + + + + + + + + +
Field | Description
ConcurrentServiceSyncs [Required]
+int32 +
+

concurrentServiceSyncs is the number of services that are +allowed to sync concurrently. Larger number = more responsive service +management, but more CPU (and network) load.

+
+ + + +## `CloudControllerManagerConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration} + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
apiVersion
string
cloudcontrollermanager.config.k8s.io/v1alpha1
kind
string
CloudControllerManagerConfiguration
Generic [Required]
+GenericControllerManagerConfiguration +
+

Generic holds configuration for a generic controller-manager

+
KubeCloudShared [Required]
+KubeCloudSharedConfiguration +
+

KubeCloudSharedConfiguration holds configuration for features shared +between the cloud controller manager and the kube-controller manager.

+
ServiceController [Required]
+ServiceControllerConfiguration +
+

ServiceControllerConfiguration holds configuration for ServiceController +related features.

+
NodeStatusUpdateFrequency [Required]
+meta/v1.Duration +
+

NodeStatusUpdateFrequency is the frequency at which the controller updates nodes' status

+
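+
+A skeletal, hypothetical cloud-controller-manager configuration file using the `apiVersion`
+and `kind` shown above (the provider name and file path are placeholders):
+
+```yaml
+apiVersion: cloudcontrollermanager.config.k8s.io/v1alpha1
+kind: CloudControllerManagerConfiguration
+KubeCloudShared:
+  CloudProvider:
+    Name: my-cloud                               # placeholder provider name
+    CloudConfigFile: /etc/kubernetes/cloud.conf  # placeholder path
+NodeStatusUpdateFrequency: 5m0s
+```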
+ +## `CloudProviderConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudProviderConfiguration} + + +**Appears in:** + +- [KubeCloudSharedConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration) + + +

CloudProviderConfiguration contains basic elements describing the cloud provider.

+ + + + + + + + + + + + + + +
Field | Description
Name [Required]
+string +
+

Name is the provider for cloud services.

+
CloudConfigFile [Required]
+string +
+

cloudConfigFile is the path to the cloud provider configuration file.

+
+ +## `KubeCloudSharedConfiguration` {#cloudcontrollermanager-config-k8s-io-v1alpha1-KubeCloudSharedConfiguration} + + +**Appears in:** + +- [CloudControllerManagerConfiguration](#cloudcontrollermanager-config-k8s-io-v1alpha1-CloudControllerManagerConfiguration) + +- [KubeControllerManagerConfiguration](#kubecontrollermanager-config-k8s-io-v1alpha1-KubeControllerManagerConfiguration) + + +

KubeCloudSharedConfiguration contains elements shared by both the kube-controller manager +and the cloud-controller manager, but not the generic configuration.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field | Description
CloudProvider [Required]
+CloudProviderConfiguration +
+

CloudProviderConfiguration holds configuration for CloudProvider related features.

+
ExternalCloudVolumePlugin [Required]
+string +
+

externalCloudVolumePlugin specifies the plugin to use when cloudProvider is "external". +It is currently used by the in-repo cloud providers to handle node and volume control in the KCM.

+
UseServiceAccountCredentials [Required]
+bool +
+

useServiceAccountCredentials indicates whether controllers should be run with +individual service account credentials.

+
AllowUntaggedCloud [Required]
+bool +
+

allowUntaggedCloud allows the controller manager to run with untagged cloud instances.

+
RouteReconciliationPeriod [Required]
+meta/v1.Duration +
+

routeReconciliationPeriod is the period for reconciling routes created for Nodes by the cloud provider.

+
NodeMonitorPeriod [Required]
+meta/v1.Duration +
+

nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.

+
ClusterName [Required]
+string +
+

clusterName is the instance prefix for the cluster.

+
ClusterCIDR [Required]
+string +
+

clusterCIDR is the CIDR range for Pods in the cluster.

+
AllocateNodeCIDRs [Required]
+bool +
+

AllocateNodeCIDRs enables CIDRs for Pods to be allocated and, if +ConfigureCloudRoutes is true, to be set on the cloud provider.

+
CIDRAllocatorType [Required]
+string +
+

CIDRAllocatorType determines what kind of pod CIDR allocator will be used.

+
ConfigureCloudRoutes [Required]
+bool +
+

configureCloudRoutes enables CIDRs allocated with allocateNodeCIDRs +to be configured on the cloud provider.

+
NodeSyncPeriod [Required]
+meta/v1.Duration +
+

nodeSyncPeriod is the period for syncing nodes from the cloud provider. Longer +periods will result in fewer calls to the cloud provider, but may delay the addition +of new nodes to the cluster.

+
+ \ No newline at end of file diff --git a/content/en/docs/reference/glossary/kubectl.md b/content/en/docs/reference/glossary/kubectl.md index 61f93b9cf6244..7963cd77b2ab4 100644 --- a/content/en/docs/reference/glossary/kubectl.md +++ b/content/en/docs/reference/glossary/kubectl.md @@ -2,7 +2,7 @@ title: Kubectl id: kubectl date: 2018-04-12 -full_link: /docs/user-guide/kubectl-overview/ +full_link: /docs/reference/kubectl/ short_description: > A command line tool for communicating with a Kubernetes cluster. diff --git a/content/en/docs/reference/glossary/service.md b/content/en/docs/reference/glossary/service.md index eb2b745e222ab..305418dbc4677 100644 --- a/content/en/docs/reference/glossary/service.md +++ b/content/en/docs/reference/glossary/service.md @@ -5,14 +5,21 @@ date: 2018-04-12 full_link: /docs/concepts/services-networking/service/ short_description: > A way to expose an application running on a set of Pods as a network service. - -aka: tags: - fundamental - core-object --- -An abstract way to expose an application running on a set of {{< glossary_tooltip text="Pods" term_id="pod" >}} as a network service. +A method for exposing a network application that is running as one or more +{{< glossary_tooltip text="Pods" term_id="pod" >}} in your cluster. - The set of Pods targeted by a Service is (usually) determined by a {{< glossary_tooltip text="selector" term_id="selector" >}}. If more Pods are added or removed, the set of Pods matching the selector will change. The Service makes sure that network traffic can be directed to the current set of Pods for the workload. +The set of Pods targeted by a Service is (usually) determined by a +{{< glossary_tooltip text="selector" term_id="selector" >}}. If more Pods are added or removed, +the set of Pods matching the selector will change. The Service makes sure that network traffic +can be directed to the current set of Pods for the workload. + +Kubernetes Services either use IP networking (IPv4, IPv6, or both), or reference an external name in +the Domain Name System (DNS). + +The Service abstraction enables other mechanisms, such as Ingress and Gateway. diff --git a/content/en/docs/reference/instrumentation/cri-pod-container-metrics.md b/content/en/docs/reference/instrumentation/cri-pod-container-metrics.md new file mode 100644 index 0000000000000..c526d4b20edbf --- /dev/null +++ b/content/en/docs/reference/instrumentation/cri-pod-container-metrics.md @@ -0,0 +1,38 @@ +--- +title: CRI Pod & Container Metrics +content_type: reference +weight: 50 +description: >- + Collection of Pod & Container metrics via the CRI. +--- + + + + +{{< feature-state for_k8s_version="v1.23" state="alpha" >}} + +The [kubelet](/docs/reference/command-line-tools-reference/kubelet/) collects pod and +container metrics via [cAdvisor](https://github.com/google/cadvisor). As an alpha feature, +Kubernetes lets you configure the collection of pod and container +metrics via the {{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI). You +must enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and +use a compatible CRI implementation (containerd >= 1.6.0, CRI-O >= 1.23.0) to +use the CRI based collection mechanism. + + + +## CRI Pod & Container Metrics + +With `PodAndContainerStatsFromCRI` enabled, the kubelet polls the underlying container +runtime for pod and container stats instead of inspecting the host system directly using cAdvisor. 
+The benefits of relying on the container runtime for this information as opposed to direct +collection with cAdvisor include: + +- Potentially improved performance if the container runtime already collects this information + during normal operations. In this case, the data can be re-used instead of being aggregated + again by the kubelet. + +- It further decouples the kubelet and the container runtime, allowing collection of metrics for + container runtimes that don't run processes directly on the host with the kubelet, where they would be + observable by cAdvisor (for example: container runtimes that use virtualization). + \ No newline at end of file diff --git a/content/en/docs/reference/instrumentation/node-metrics.md b/content/en/docs/reference/instrumentation/node-metrics.md index ce5984e5ed89a..32eab955bdce2 100644 --- a/content/en/docs/reference/instrumentation/node-metrics.md +++ b/content/en/docs/reference/instrumentation/node-metrics.md @@ -37,17 +37,11 @@ kubelet endpoint, and not `/stats/summary`. ## Summary metrics API source {#summary-api-source} By default, Kubernetes fetches node summary metrics data using an embedded -[cAdvisor](https://github.com/google/cadvisor) that runs within the kubelet. - -## Summary API data via CRI {#pod-and-container-stats-from-cri} - -{{< feature-state for_k8s_version="v1.23" state="alpha" >}} - -If you enable the `PodAndContainerStatsFromCRI` -[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) in your -cluster, and you use a container runtime that supports statistics access via +[cAdvisor](https://github.com/google/cadvisor) that runs within the kubelet. If you +enable the `PodAndContainerStatsFromCRI` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) +in your cluster, and you use a container runtime that supports statistics access via {{< glossary_tooltip term_id="cri" text="Container Runtime Interface">}} (CRI), then -the kubelet fetches Pod- and container-level metric data using CRI, and not via cAdvisor. +the kubelet [fetches Pod- and container-level metric data using CRI](/docs/reference/instrumentation/cri-pod-container-metrics), and not via cAdvisor. ## {{% heading "whatsnext" %}} diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index 0e3e3f285d54e..0f5e6921ede92 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -41,7 +41,7 @@ echo '[[ $commands[kubectl] ]] && source <(kubectl completion zsh)' >> ~/.zshrc ``` ### A note on `--all-namespaces` -Appending `--all-namespaces` happens frequently enough where you should be aware of the shorthand for `--all-namespaces`: +Appending `--all-namespaces` happens frequently enough that you should be aware of the shorthand for `--all-namespaces`: ```kubectl -A``` diff --git a/content/en/docs/reference/labels-annotations-taints/_index.md b/content/en/docs/reference/labels-annotations-taints/_index.md index 9b836415a3d0c..b343c556cd206 100644 --- a/content/en/docs/reference/labels-annotations-taints/_index.md +++ b/content/en/docs/reference/labels-annotations-taints/_index.md @@ -431,6 +431,17 @@ Used on: PersistentVolumeClaim This annotation has been deprecated.
+### volume.beta.kubernetes.io/storage-class (deprecated) + +Example: `volume.beta.kubernetes.io/storage-class: "example-class"` + +Used on: PersistentVolume, PersistentVolumeClaim + +This annotation can be used for a PersistentVolume (PV) or PersistentVolumeClaim (PVC) to specify the name of a [StorageClass](/docs/concepts/storage/storage-classes/). When both the `storageClassName` attribute and the `volume.beta.kubernetes.io/storage-class` annotation are specified, the annotation `volume.beta.kubernetes.io/storage-class` takes precedence over the `storageClassName` attribute. + +This annotation has been deprecated. Instead, set the [`storageClassName` field](/docs/concepts/storage/persistent-volumes/#class) +for the PersistentVolumeClaim or PersistentVolume. + ### volume.beta.kubernetes.io/mount-options (deprecated) {#mount-options} Example : `volume.beta.kubernetes.io/mount-options: "ro,soft"` diff --git a/content/en/docs/reference/node/_index.md b/content/en/docs/reference/node/_index.md index 9d015e7e3c7ff..13363202a5f15 100644 --- a/content/en/docs/reference/node/_index.md +++ b/content/en/docs/reference/node/_index.md @@ -14,4 +14,4 @@ Kubernetes documentation, including: * [Node Metrics Data](/docs/reference/instrumentation/node-metrics). - +* [CRI Pod & Container Metrics](/docs/reference/instrumentation/cri-pod-container-metrics). \ No newline at end of file diff --git a/content/en/docs/tasks/access-application-cluster/_index.md b/content/en/docs/tasks/access-application-cluster/_index.md index 4d7af48310008..e6556d9c923b8 100644 --- a/content/en/docs/tasks/access-application-cluster/_index.md +++ b/content/en/docs/tasks/access-application-cluster/_index.md @@ -1,6 +1,6 @@ --- title: "Access Applications in a Cluster" description: Configure load balancing, port forwarding, or setup firewall or DNS configurations to access applications in a cluster.
-weight: 60 +weight: 100 --- diff --git a/content/en/docs/tasks/access-application-cluster/access-cluster-services.md b/content/en/docs/tasks/access-application-cluster/access-cluster-services.md index 456662692ee25..8d2d47e349190 100644 --- a/content/en/docs/tasks/access-application-cluster/access-cluster-services.md +++ b/content/en/docs/tasks/access-application-cluster/access-cluster-services.md @@ -1,6 +1,7 @@ --- title: Access Services Running on Clusters content_type: task +weight: 140 --- diff --git a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md index 897d44a54f926..69aa7a9668e16 100644 --- a/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md +++ b/content/en/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume.md @@ -1,7 +1,7 @@ --- title: Communicate Between Containers in the Same Pod Using a Shared Volume content_type: task -weight: 110 +weight: 120 --- diff --git a/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md b/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md index 3535fdb8bcdf8..71ba2694a7772 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md +++ b/content/en/docs/tasks/access-application-cluster/configure-dns-cluster.md @@ -1,6 +1,6 @@ --- title: Configure DNS for a Cluster -weight: 120 +weight: 130 content_type: concept --- diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md index 9459b7b665350..6f0ecda6caf67 100644 --- a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -1,7 +1,7 @@ --- title: Set up Ingress on Minikube with the NGINX Ingress Controller content_type: task -weight: 100 +weight: 110 min-kubernetes-server-version: 1.19 --- diff --git a/content/en/docs/tasks/administer-cluster/certificates.md b/content/en/docs/tasks/administer-cluster/certificates.md index 3da130ca64a80..8901bc34fefaf 100644 --- a/content/en/docs/tasks/administer-cluster/certificates.md +++ b/content/en/docs/tasks/administer-cluster/certificates.md @@ -18,7 +18,7 @@ manually through [`easyrsa`](https://github.com/OpenVPN/easy-rsa), [`openssl`](h 1. Download, unpack, and initialize the patched version of `easyrsa3`. 
```shell - curl -LO https://storage.googleapis.com/kubernetes-release/easy-rsa/easy-rsa.tar.gz + curl -LO https://dl.k8s.io/easy-rsa/easy-rsa.tar.gz tar xzf easy-rsa.tar.gz cd easy-rsa-master/easyrsa3 ./easyrsa init-pki diff --git a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md index 267d614ef9dc0..d31f708043d54 100644 --- a/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md +++ b/content/en/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-removal-affects-you.md @@ -8,7 +8,7 @@ weight: 50 -The `dockershim` component of Kubernetes allows to use Docker as a Kubernetes's +The `dockershim` component of Kubernetes allows the use of Docker as Kubernetes's {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}. Kubernetes' built-in `dockershim` component was removed in release v1.24. @@ -40,11 +40,11 @@ dependency on Docker: 1. Third-party tools that perform above mentioned privileged operations. See [Migrating telemetry and security agents from dockershim](/docs/tasks/administer-cluster/migrating-from-dockershim/migrating-telemetry-and-security-agents) for more information. -1. Make sure there is no indirect dependencies on dockershim behavior. +1. Make sure there are no indirect dependencies on dockershim behavior. This is an edge case and unlikely to affect your application. Some tooling may be configured to react to Docker-specific behaviors, for example, raise alert on specific metrics or search for a specific log message as part of troubleshooting instructions. - If you have such tooling configured, test the behavior on test + If you have such tooling configured, test the behavior on a test cluster before migration. ## Dependency on Docker explained {#role-of-dockershim} @@ -74,7 +74,7 @@ before to check on these containers is no longer available. You cannot get container information using `docker ps` or `docker inspect` commands. As you cannot list containers, you cannot get logs, stop containers, -or execute something inside container using `docker exec`. +or execute something inside a container using `docker exec`. {{< note >}} diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 3fa2f64098cd8..05d87ee6bb927 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -161,7 +161,7 @@ kubectl config set-context prod --namespace=production \ --user=lithe-cocoa-92103_kubernetes ``` -By default, the above commands adds two contexts that are saved into file +By default, the above commands add two contexts that are saved into file
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index af266688d5e64..be41448480974 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -82,7 +82,7 @@ policies using an example application. ## Deploying Cilium for Production Use For detailed instructions around deploying Cilium for production, see: -[Cilium Kubernetes Installation Guide](https://docs.cilium.io/en/stable/concepts/kubernetes/intro/) +[Cilium Kubernetes Installation Guide](https://docs.cilium.io/en/stable/network/kubernetes/concepts/) This documentation includes detailed requirements, instructions and example production DaemonSet files. diff --git a/content/en/docs/tasks/configmap-secret/_index.md b/content/en/docs/tasks/configmap-secret/_index.md index d80692c96701f..900d96aa7e593 100644 --- a/content/en/docs/tasks/configmap-secret/_index.md +++ b/content/en/docs/tasks/configmap-secret/_index.md @@ -1,6 +1,6 @@ --- title: "Managing Secrets" -weight: 28 +weight: 60 description: Managing confidential settings data using Secrets. --- diff --git a/content/en/docs/tasks/configure-pod-container/_index.md b/content/en/docs/tasks/configure-pod-container/_index.md index 462b19e4e9385..230cb8da91163 100644 --- a/content/en/docs/tasks/configure-pod-container/_index.md +++ b/content/en/docs/tasks/configure-pod-container/_index.md @@ -1,6 +1,6 @@ --- title: "Configure Pods and Containers" description: Perform common configuration tasks for Pods and containers. -weight: 20 +weight: 30 --- diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md index ff5df861df9c1..27fd1d3a6ce0f 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md +++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity.md @@ -2,7 +2,7 @@ title: Assign Pods to Nodes using Node Affinity min-kubernetes-server-version: v1.10 content_type: task -weight: 120 +weight: 160 --- diff --git a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md index 1e19a26fbba88..9c70faca16815 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md +++ b/content/en/docs/tasks/configure-pod-container/assign-pods-nodes.md @@ -1,7 +1,7 @@ --- title: Assign Pods to Nodes content_type: task -weight: 120 +weight: 150 --- diff --git a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md index c952ab361cbbc..c84c8dd4b1561 100644 --- a/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md +++ b/content/en/docs/tasks/configure-pod-container/attach-handler-lifecycle-event.md @@ -1,7 +1,6 @@ --- title: Attach Handlers to Container Lifecycle Events -content_type: task -weight: 140 +weight: 180 --- diff --git a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md index e191de41ef21c..b74c0f46ea350 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-gmsa.md +++ 
b/content/en/docs/tasks/configure-pod-container/configure-gmsa.md @@ -1,7 +1,7 @@ --- title: Configure GMSA for Windows Pods and containers content_type: task -weight: 20 +weight: 30 --- diff --git a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md index 19d0a9cfa33e9..dd6f8eaf13507 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md +++ b/content/en/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes.md @@ -1,7 +1,7 @@ --- title: Configure Liveness, Readiness and Startup Probes content_type: task -weight: 110 +weight: 140 --- diff --git a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md index 11a8dd44b2336..f60b36f7128bc 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-persistent-volume-storage.md @@ -1,7 +1,7 @@ --- title: Configure a Pod to Use a PersistentVolume for Storage content_type: task -weight: 60 +weight: 90 --- @@ -12,27 +12,24 @@ for storage. Here is a summary of the process: 1. You, as cluster administrator, create a PersistentVolume backed by physical -storage. You do not associate the volume with any Pod. + storage. You do not associate the volume with any Pod. 1. You, now taking the role of a developer / cluster user, create a -PersistentVolumeClaim that is automatically bound to a suitable -PersistentVolume. + PersistentVolumeClaim that is automatically bound to a suitable + PersistentVolume. 1. You create a Pod that uses the above PersistentVolumeClaim for storage. - - ## {{% heading "prerequisites" %}} - * You need to have a Kubernetes cluster that has only one Node, and the -{{< glossary_tooltip text="kubectl" term_id="kubectl" >}} -command-line tool must be configured to communicate with your cluster. If you -do not already have a single-node cluster, you can create one by using -[Minikube](https://minikube.sigs.k8s.io/docs/). + {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} + command-line tool must be configured to communicate with your cluster. If you + do not already have a single-node cluster, you can create one by using + [Minikube](https://minikube.sigs.k8s.io/docs/). * Familiarize yourself with the material in -[Persistent Volumes](/docs/concepts/storage/persistent-volumes/). + [Persistent Volumes](/docs/concepts/storage/persistent-volumes/). @@ -50,7 +47,6 @@ In your shell on that Node, create a `/mnt/data` directory: sudo mkdir /mnt/data ``` - In the `/mnt/data` directory, create an `index.html` file: ```shell @@ -71,6 +67,7 @@ cat /mnt/data/index.html ``` The output should be: + ``` Hello from Kubernetes storage ``` @@ -116,8 +113,10 @@ kubectl get pv task-pv-volume The output shows that the PersistentVolume has a `STATUS` of `Available`. This means it has not yet been bound to a PersistentVolumeClaim. 
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE - task-pv-volume 10Gi RWO Retain Available manual 4s +``` +NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE +task-pv-volume 10Gi RWO Retain Available manual 4s +``` ## Create a PersistentVolumeClaim @@ -132,7 +131,9 @@ Here is the configuration file for the PersistentVolumeClaim: Create the PersistentVolumeClaim: - kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml +```shell +kubectl apply -f https://k8s.io/examples/pods/storage/pv-claim.yaml +``` After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control @@ -147,8 +148,10 @@ kubectl get pv task-pv-volume Now the output shows a `STATUS` of `Bound`. - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE - task-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 2m +``` +NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE +task-pv-volume 10Gi RWO Retain Bound default/task-pv-claim manual 2m +``` Look at the PersistentVolumeClaim: @@ -159,8 +162,10 @@ kubectl get pvc task-pv-claim The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, `task-pv-volume`. - NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE - task-pv-claim Bound task-pv-volume 10Gi RWO manual 30s +``` +NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE +task-pv-claim Bound task-pv-volume 10Gi RWO manual 30s +``` ## Create a Pod @@ -206,15 +211,16 @@ curl http://localhost/ The output shows the text that you wrote to the `index.html` file on the hostPath volume: - Hello from Kubernetes storage - +``` +Hello from Kubernetes storage +``` If you see that message, you have successfully configured a Pod to use storage from a PersistentVolumeClaim. ## Clean up -Delete the Pod, the PersistentVolumeClaim and the PersistentVolume: +Delete the Pod, the PersistentVolumeClaim and the PersistentVolume: ```shell kubectl delete pod task-pv-pod @@ -242,8 +248,8 @@ You can now close the shell to your Node. You can perform 2 volume mounts on your nginx container: -`/usr/share/nginx/html` for the static website -`/etc/nginx/nginx.conf` for the default config +- `/usr/share/nginx/html` for the static website +- `/etc/nginx/nginx.conf` for the default config @@ -256,6 +262,7 @@ with a GID. Then the GID is automatically added to any Pod that uses the PersistentVolume. Use the `pv.beta.kubernetes.io/gid` annotation as follows: + ```yaml apiVersion: v1 kind: PersistentVolume @@ -264,6 +271,7 @@ metadata: annotations: pv.beta.kubernetes.io/gid: "1234" ``` + When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID is applied to all containers in the Pod in the same way that GIDs specified in the Pod's security context are. Every GID, whether it originates from a PersistentVolume @@ -275,12 +283,8 @@ When a Pod consumes a PersistentVolume, the GIDs associated with the PersistentVolume are not present on the Pod resource itself. {{< /note >}} - - - ## {{% heading "whatsnext" %}} - * Learn more about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). * Read the [Persistent Storage design document](https://git.k8s.io/design-proposals-archive/storage/persistent-storage.md). @@ -290,7 +294,3 @@ PersistentVolume are not present on the Pod resource itself. 
* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core) * [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core) * [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core) - - - - diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index 61a1fefe00fa0..b15467ab94d04 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -1,7 +1,7 @@ --- title: Configure a Pod to Use a ConfigMap content_type: task -weight: 150 +weight: 190 card: name: tasks weight: 50 @@ -9,61 +9,92 @@ card: Many applications rely on configuration which is used during either application initialization or runtime. -Most of the times there is a requirement to adjust values assigned to configuration parameters. -ConfigMaps are the Kubernetes way to inject application pods with configuration data. -ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps. +Most times, there is a requirement to adjust values assigned to configuration parameters. +ConfigMaps are a Kubernetes mechanism that lets you inject configuration data into application +{{< glossary_tooltip text="pods" term_id="pod" >}}. +The ConfigMap concept allows you to decouple configuration artifacts from image content to +keep containerized applications portable. For example, you can download and run the same +{{< glossary_tooltip text="container image" term_id="image" >}} to spin up containers for +the purposes of local development, system test, or running a live end-user workload. -## {{% heading "prerequisites" %}} - +This page provides a series of usage examples demonstrating how to create ConfigMaps and +configure Pods using data stored in ConfigMaps. -{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} +## {{% heading "prerequisites" %}} +{{< include "task-tutorial-prereqs.md" >}} +You need to have the `wget` tool installed. If you have a different tool +such as `curl`, and you do not have `wget`, you will need to adapt the +step that downloads example data. - ## Create a ConfigMap -You can use either `kubectl create configmap` or a ConfigMap generator in `kustomization.yaml` to create a ConfigMap. Note that `kubectl` starts to support `kustomization.yaml` since 1.14. -### Create a ConfigMap Using kubectl create configmap +You can use either `kubectl create configmap` or a ConfigMap generator in `kustomization.yaml` +to create a ConfigMap.
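As a quick orientation before the detailed steps, the two approaches look like this. This is a hedged sketch: `demo-config` and its literal value are illustrative names, not part of the task that follows.

```shell
# Imperative: kubectl creates the ConfigMap object directly
kubectl create configmap demo-config --from-literal=color=blue

# Declarative: a configMapGenerator in ./kustomization.yaml describes the object,
# and applying the kustomization directory creates it
kubectl apply -k .
```

Both routes are covered in detail below.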
-Use the `kubectl create configmap` command to create ConfigMaps from [directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files), or [literal values](#create-configmaps-from-literal-values): +### Create a ConfigMap using `kubectl create configmap` + +Use the `kubectl create configmap` command to create ConfigMaps from +[directories](#create-configmaps-from-directories), [files](#create-configmaps-from-files), +or [literal values](#create-configmaps-from-literal-values): ```shell kubectl create configmap <map-name> <data-source> ``` -where \<map-name> is the name you want to assign to the ConfigMap and \<data-source> is the directory, file, or literal value to draw the data from. +where \<map-name> is the name you want to assign to the ConfigMap and \<data-source> is the +directory, file, or literal value to draw the data from. The name of a ConfigMap object must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names). -When you are creating a ConfigMap based on a file, the key in the \<data-source> defaults to the basename of the file, and the value defaults to the file content. +When you are creating a ConfigMap based on a file, the key in the \<data-source> defaults to +the basename of the file, and the value defaults to the file content. You can use [`kubectl describe`](/docs/reference/generated/kubectl/kubectl-commands/#describe) or [`kubectl get`](/docs/reference/generated/kubectl/kubectl-commands/#get) to retrieve information about a ConfigMap. -#### Create ConfigMaps from directories +#### Create a ConfigMap from a directory {#create-configmaps-from-directories} -You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same directory. When you are creating a ConfigMap based on a directory, kubectl identifies files whose basename is a valid key in the directory and packages each of those files into the new ConfigMap. Any directory entries except regular files are ignored (e.g. subdirectories, symlinks, devices, pipes, etc). +You can use `kubectl create configmap` to create a ConfigMap from multiple files in the same +directory. When you are creating a ConfigMap based on a directory, kubectl identifies files +whose filename is a valid key in the directory and packages each of those files into the new +ConfigMap. Any directory entries except regular files are ignored (for example: subdirectories, +symlinks, devices, pipes, and more). -For example: +{{< note >}} +Each filename being used for ConfigMap creation must consist of only acceptable characters, +which are: letters (`A` to `Z` and `a` to `z`), digits (`0` to `9`), '-', '_', or '.'. +If you use `kubectl create configmap` with a directory where any of the file names contains +an unacceptable character, the `kubectl` command may fail. + +The `kubectl` command does not print an error when it encounters an invalid filename.
+{{< /note >}} + +Create the local directory: ```shell -# Create the local directory mkdir -p configure-pod-container/configmap/ +``` + +Now, download the sample configuration and create the ConfigMap: +```shell # Download the sample files into `configure-pod-container/configmap/` directory wget https://kubernetes.io/examples/configmap/game.properties -O configure-pod-container/configmap/game.properties wget https://kubernetes.io/examples/configmap/ui.properties -O configure-pod-container/configmap/ui.properties -# Create the configmap +# Create the ConfigMap kubectl create configmap game-config --from-file=configure-pod-container/configmap/ ``` -The above command packages each file, in this case, `game.properties` and `ui.properties` in the `configure-pod-container/configmap/` directory into the game-config ConfigMap. You can display details of the ConfigMap using the following command: +The above command packages each file, in this case, `game.properties` and `ui.properties` +in the `configure-pod-container/configmap/` directory into the game-config ConfigMap. You can +display details of the ConfigMap using the following command: ```shell kubectl describe configmaps game-config @@ -95,7 +126,8 @@ allow.textmode=true how.nice.to.look=fairlyNice ``` -The `game.properties` and `ui.properties` files in the `configure-pod-container/configmap/` directory are represented in the `data` section of the ConfigMap. +The `game.properties` and `ui.properties` files in the `configure-pod-container/configmap/` +directory are represented in the `data` section of the ConfigMap. ```shell kubectl get configmaps game-config -o yaml ``` The output is similar to this: ```yaml apiVersion: v1 kind: ConfigMap metadata: - creationTimestamp: 2016-02-18T18:52:05Z + creationTimestamp: 2022-02-18T18:52:05Z name: game-config namespace: default resourceVersion: "516" @@ -129,7 +161,8 @@ data: #### Create ConfigMaps from files -You can use `kubectl create configmap` to create a ConfigMap from an individual file, or from multiple files. +You can use `kubectl create configmap` to create a ConfigMap from an individual file, or from +multiple files. For example, @@ -164,7 +197,8 @@ secret.code.allowed=true secret.code.lives=30 ``` -You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple data sources. +You can pass in the `--from-file` argument multiple times to create a ConfigMap from multiple +data sources. ```shell kubectl create configmap game-config-2 --from-file=configure-pod-container/configmap/game.properties --from-file=configure-pod-container/configmap/ui.properties @@ -203,9 +237,6 @@ allow.textmode=true how.nice.to.look=fairlyNice ``` -When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary data sources can be combined in one ConfigMap. -If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run `kubectl get configmap -o jsonpath='{.binaryData}' <name>`. - Use the option `--from-env-file` to create a ConfigMap from an env-file, for example: ```shell @@ -234,18 +265,18 @@ kubectl create configmap game-config-env-file \ --from-env-file=configure-pod-container/configmap/game-env-file.properties ``` -would produce the following ConfigMap: +would produce a ConfigMap.
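An env-file uses a simple syntax: each line is a `VAR=VAL` entry that becomes one key/value pair, lines beginning with `#` are treated as comments, and blank lines are ignored. As an illustration, the downloaded file contains entries along these lines (assumed values shown for the sketch, not guaranteed to match the sample file byte for byte):

```
# game-env-file.properties: one environment-style entry per line
enemies=aliens
lives=3
allowed="true"
```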
View the ConfigMap: ```shell kubectl get configmap game-config-env-file -o yaml ``` -where the output is similar to this: +The output is similar to: ```yaml apiVersion: v1 kind: ConfigMap metadata: - creationTimestamp: 2017-12-27T18:36:28Z + creationTimestamp: 2019-12-27T18:36:28Z name: game-config-env-file namespace: default resourceVersion: "809965" @@ -276,7 +307,7 @@ where the output is similar to this: apiVersion: v1 kind: ConfigMap metadata: - creationTimestamp: 2017-12-27T18:38:34Z + creationTimestamp: 2019-12-27T18:38:34Z name: config-multi-env-files namespace: default resourceVersion: "810136" @@ -292,13 +323,15 @@ data: #### Define the key to use when creating a ConfigMap from a file -You can define a key other than the file name to use in the `data` section of your ConfigMap when using the `--from-file` argument: +You can define a key other than the file name to use in the `data` section of your ConfigMap +when using the `--from-file` argument: ```shell kubectl create configmap game-config-3 --from-file=<my-key-name>=<path-to-file> ``` -where `<my-key-name>` is the key you want to use in the ConfigMap and `<path-to-file>` is the location of the data source file you want the key to represent. +where `<my-key-name>` is the key you want to use in the ConfigMap and `<path-to-file>` is the +location of the data source file you want the key to represent. For example: @@ -316,7 +349,7 @@ where the output is similar to this: apiVersion: v1 kind: ConfigMap metadata: - creationTimestamp: 2016-02-18T18:54:22Z + creationTimestamp: 2022-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" @@ -334,13 +367,15 @@ data: #### Create ConfigMaps from literal values -You can use `kubectl create configmap` with the `--from-literal` argument to define a literal value from the command line: +You can use `kubectl create configmap` with the `--from-literal` argument to define a literal +value from the command line: ```shell kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm ``` -You can pass in multiple key-value pairs. Each pair provided on the command line is represented as a separate entry in the `data` section of the ConfigMap. +You can pass in multiple key-value pairs. Each pair provided on the command line is represented +as a separate entry in the `data` section of the ConfigMap. ```shell kubectl get configmaps special-config -o yaml ``` The output is similar to this: ```yaml apiVersion: v1 kind: ConfigMap metadata: - creationTimestamp: 2016-02-18T19:14:38Z + creationTimestamp: 2022-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" @@ -362,26 +397,33 @@ data: ``` ### Create a ConfigMap from generator -`kubectl` supports `kustomization.yaml` since 1.14. -You can also create a ConfigMap from generators and then apply it to create the object on -the Apiserver. The generators -should be specified in a `kustomization.yaml` inside a directory. + +You can also create a ConfigMap from generators and then apply it to create the object +in the cluster's API server. +You should specify the generators in a `kustomization.yaml` file within a directory.
#### Generate ConfigMaps from files + For example, to generate a ConfigMap from files `configure-pod-container/configmap/game.properties` + ```shell # Create a kustomization.yaml file with ConfigMapGenerator cat <<EOF >./kustomization.yaml configMapGenerator: - name: game-config-4 + labels: + game-config: config-4 files: - configure-pod-container/configmap/game.properties EOF ``` -Apply the kustomization directory to create the ConfigMap object. +Apply the kustomization directory to create the ConfigMap object: + ```shell kubectl apply -k . +``` +``` configmap/game-config-4-m9dm2f92bt created ``` @@ -389,14 +431,21 @@ You can check that the ConfigMap was created like this: ```shell kubectl get configmap +``` +``` NAME DATA AGE game-config-4-m9dm2f92bt 1 37s +``` +and also: +```shell kubectl describe configmaps/game-config-4-m9dm2f92bt +``` +``` Name: game-config-4-m9dm2f92bt Namespace: default -Labels: +Labels: game-config=config-4 Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","data":{"game.properties":"enemies=aliens\nlives=3\nenemies.cheat=true\nenemies.cheat.level=noGoodRotten\nsecret.code.p... @@ -414,10 +463,11 @@ secret.code.lives=30 Events: ``` -Note that the generated ConfigMap name has a suffix appended by hashing the contents. This ensures that a -new ConfigMap is generated each time the content is modified. +Notice that the generated ConfigMap name has a suffix appended by hashing the contents. This +ensures that a new ConfigMap is generated each time the content is modified. #### Define the key to use when generating a ConfigMap from a file + You can define a key other than the file name to use in the ConfigMap generator. For example, to generate a ConfigMap from files `configure-pod-container/configmap/game.properties` with the key `game-special-key` @@ -427,6 +477,8 @@ with the key `game-special-key` ```shell # Create a kustomization.yaml file with ConfigMapGenerator cat <<EOF >./kustomization.yaml configMapGenerator: - name: game-config-5 + labels: + game-config: config-5 files: - game-special-key=configure-pod-container/configmap/game.properties EOF ``` @@ -435,28 +487,51 @@ EOF Apply the kustomization directory to create the ConfigMap object. ```shell kubectl apply -k . +``` +``` configmap/game-config-5-m67dt67794 created ``` -#### Generate ConfigMaps from Literals -To generate a ConfigMap from literals `special.type=charm` and `special.how=very`, -you can specify the ConfigMap generator in `kustomization.yaml` as -```shell -# Create a kustomization.yaml file with ConfigMapGenerator -cat <<EOF >./kustomization.yaml +#### Generate ConfigMaps from literals + +This example shows you how to create a `ConfigMap` from two literal key/value pairs: +`special.type=charm` and `special.how=very`, using Kustomize and kubectl. To achieve +this, you can specify the `ConfigMap` generator. Create (or replace) +`kustomization.yaml` so that it has the following contents: + +```yaml +--- +# kustomization.yaml contents for creating a ConfigMap from literals configMapGenerator: - name: special-config-2 literals: - special.how=very - special.type=charm -EOF ``` + +Apply the kustomization directory to create the ConfigMap object: ```shell kubectl apply -k .
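# A hedged aside, not part of the original page: `kubectl apply -k <dir>` builds the
# kustomization in the given directory and applies the result; the generated ConfigMap
# name carries a content-hash suffix, as the output below shows.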
+``` +``` configmap/special-config-2-c92b5mmcf2 created ``` +## Interim cleanup + +Before proceeding, clean up some of the ConfigMaps you made: + +```bash +kubectl delete configmap special-config +kubectl delete configmap env-config +kubectl delete configmap -l 'game-config in (config-4,config-5)' +``` + +Now that you have learned to define ConfigMaps, you can move on to the next +section, and learn how to use these objects with Pods. + +--- + ## Define container environment variables using ConfigMap data ### Define a container environment variable with data from a single ConfigMap @@ -467,7 +542,8 @@ configmap/special-config-2-c92b5mmcf2 created kubectl create configmap special-config --from-literal=special.how=very ``` -2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY` environment variable in the Pod specification. +2. Assign the `special.how` value defined in the ConfigMap to the `SPECIAL_LEVEL_KEY` + environment variable in the Pod specification. {{< codenew file="pods/pod-single-configmap-env-variable.yaml" >}} @@ -481,11 +557,12 @@ configmap/special-config-2-c92b5mmcf2 created ### Define container environment variables with data from multiple ConfigMaps -* As with the previous example, create the ConfigMaps first. +As with the previous example, create the ConfigMaps first. +Here is the manifest you will use: - {{< codenew file="configmap/configmaps.yaml" >}} +{{< codenew file="configmap/configmaps.yaml" >}} - Create the ConfigMap: +* Create the ConfigMap: ```shell kubectl create -f https://kubernetes.io/examples/configmap/configmaps.yaml @@ -503,6 +580,11 @@ configmap/special-config-2-c92b5mmcf2 created Now, the Pod's output includes environment variables `SPECIAL_LEVEL_KEY=very` and `LOG_LEVEL=INFO`. + Once you're happy to move on, delete that Pod: + ```shell + kubectl delete pod dapi-test-pod --now + ``` + ## Configure all key-value pairs in a ConfigMap as container environment variables * Create a ConfigMap containing multiple key-value pairs. @@ -515,7 +597,8 @@ configmap/special-config-2-c92b5mmcf2 created kubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.yaml ``` -* Use `envFrom` to define all of the ConfigMap's data as container environment variables. The key from the ConfigMap becomes the environment variable name in the Pod. +* Use `envFrom` to define all of the ConfigMap's data as container environment variables. The + key from the ConfigMap becomes the environment variable name in the Pod. {{< codenew file="pods/pod-configmap-envFrom.yaml" >}} @@ -524,35 +607,47 @@ configmap/special-config-2-c92b5mmcf2 created ```shell kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-envFrom.yaml ``` + Now, the Pod's output includes environment variables `SPECIAL_LEVEL=very` and + `SPECIAL_TYPE=charm`. - Now, the Pod's output includes environment variables `SPECIAL_LEVEL=very` and `SPECIAL_TYPE=charm`. - + Once you're happy to move on, delete that Pod: + ```shell + kubectl delete pod dapi-test-pod --now + ``` ## Use ConfigMap-defined environment variables in Pod commands -You can use ConfigMap-defined environment variables in the `command` and `args` of a container using the `$(VAR_NAME)` Kubernetes substitution syntax. +You can use ConfigMap-defined environment variables in the `command` and `args` of a container +using the `$(VAR_NAME)` Kubernetes substitution syntax.
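The kubelet expands `$(VAR_NAME)` references when it starts the container, before the entrypoint runs. A minimal sketch of the shape, reusing names from the earlier examples (this fragment is illustrative, not the exact manifest that the next step applies):

```yaml
# Fragment of a container spec
env:
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        name: special-config
        key: special.how
command: ["/bin/echo", "$(SPECIAL_LEVEL_KEY)"]   # expands to "very" at container start
```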
-For example, the following Pod specification +For example, the following Pod manifest: {{< codenew file="pods/pod-configmap-env-var-valueFrom.yaml" >}} -created by running +Create that Pod, by running: ```shell kubectl create -f https://kubernetes.io/examples/pods/pod-configmap-env-var-valueFrom.yaml ``` -produces the following output in the `test-container` container: +That pod produces the following output from the `test-container` container: ``` very charm ``` +Once you're happy to move on, delete that Pod: +```shell +kubectl delete pod dapi-test-pod --now +``` + ## Add ConfigMap data to a Volume -As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create a ConfigMap using ``--from-file``, the filename becomes a key stored in the `data` section of the ConfigMap. The file contents become the key's value. +As explained in [Create ConfigMaps from files](#create-configmaps-from-files), when you create +a ConfigMap using `--from-file`, the filename becomes a key stored in the `data` section of +the ConfigMap. The file contents become the key's value. -The examples in this section refer to a ConfigMap named special-config, shown below. +The examples in this section refer to a ConfigMap named `special-config`: {{< codenew file="configmap/configmap-multikeys.yaml" >}} @@ -565,8 +660,9 @@ kubectl create -f https://kubernetes.io/examples/configmap/configmap-multikeys.y ### Populate a Volume with data stored in a ConfigMap Add the ConfigMap name under the `volumes` section of the Pod specification. -This adds the ConfigMap data to the directory specified as `volumeMounts.mountPath` (in this case, `/etc/config`). -The `command` section lists directory files with names that match the keys in ConfigMap. +This adds the ConfigMap data to the directory specified as `volumeMounts.mountPath` (in this +case, `/etc/config`). The `command` section lists directory files with names that match the +keys in ConfigMap. {{< codenew file="pods/pod-configmap-volume.yaml" >}} @@ -583,14 +679,20 @@ SPECIAL_LEVEL SPECIAL_TYPE ``` -{{< caution >}} -If there are some files in the `/etc/config/` directory, they will be deleted. -{{< /caution >}} +Text data is exposed as files using the UTF-8 character encoding. To use some other +character encoding, use `binaryData` +(see [ConfigMap object](/docs/concepts/configuration/configmap/#configmap-object) for more details). {{< note >}} -Text data is exposed as files using the UTF-8 character encoding. To use some other character encoding, use binaryData. +If there are any files in the `/etc/config` directory of that container image, the volume +mount will make those files from the image inaccessible. {{< /note >}} +Once you're happy to move on, delete that Pod: +```shell +kubectl delete pod dapi-test-pod --now +``` + ### Add ConfigMap data to a specific path in the Volume Use the `path` field to specify the desired file path for specific ConfigMap items. @@ -614,24 +716,63 @@ very Like before, all previous files in the `/etc/config/` directory will be deleted. {{< /caution >}} +Delete that Pod: +```shell +kubectl delete pod dapi-test-pod --now +``` + ### Project keys to specific paths and file permissions You can project keys to specific paths and specific permissions on a per-file -basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) user guide explains the syntax. +basis. The +[Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) +guide explains the syntax. 
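As a brief illustration of that syntax applied to a ConfigMap, here is a hedged sketch: the Pod name and file paths are made up for this example, while `special-config` and `SPECIAL_LEVEL` come from the manifests used earlier on this page.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-projection-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      command: ["/bin/sh", "-c", "ls -l /etc/config/keys"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
        items:
          - key: SPECIAL_LEVEL
            path: keys/special-level   # projected as /etc/config/keys/special-level
            mode: 0400                 # per-file permissions, in octal
```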
+### Optional references +A ConfigMap reference may be marked _optional_. If the ConfigMap is non-existent, the mounted +volume will be empty. If the ConfigMap exists, but the referenced key is non-existent, the path +will be absent beneath the mount point. See [Optional ConfigMaps](#optional-configmaps) for more +details. + +### Mounted ConfigMaps are updated automatically + +When a mounted ConfigMap is updated, the projected content is eventually updated too. +This applies in the case where an optionally referenced ConfigMap comes into +existence after a pod has started. + +Kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, +it uses its local TTL-based cache for getting the current value of the ConfigMap. As a +result, the total delay from the moment when the ConfigMap is updated to the moment +when new keys are projected to the pod can be as long as kubelet sync period (1 +minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. You +can trigger an immediate refresh by updating one of the pod's annotations. + +{{< note >}} +A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) +volume will not receive ConfigMap updates. +{{< /note >}} ## Understanding ConfigMaps and Pods -The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to [Secrets](/docs/concepts/configuration/secret/), but provides a means of working with strings that don't contain sensitive information. Users and system components alike can store configuration data in ConfigMap. +The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed +in pods or provide the configurations for system components such as controllers. ConfigMap is +similar to [Secrets](/docs/concepts/configuration/secret/), but provides a means of working +with strings that don't contain sensitive information. Users and system components alike can +store configuration data in ConfigMap. {{< note >}} -ConfigMaps should reference properties files, not replace them. Think of the ConfigMap as representing something similar to the Linux `/etc` directory and its contents. For example, if you create a [Kubernetes Volume](/docs/concepts/storage/volumes/) from a ConfigMap, each data item in the ConfigMap is represented by an individual file in the volume. +ConfigMaps should reference properties files, not replace them. Think of the ConfigMap as +representing something similar to the Linux `/etc` directory and its contents. For example, +if you create a [Kubernetes Volume](/docs/concepts/storage/volumes/) from a ConfigMap, each +data item in the ConfigMap is represented by an individual file in the volume. {{< /note >}} -The ConfigMap's `data` field contains the configuration data. As shown in the example below, this can be simple -- like individual properties defined using `--from-literal` -- or complex -- like configuration files or JSON blobs defined using `--from-file`. +The ConfigMap's `data` field contains the configuration data. As shown in the example below, +this can be simple (like individual properties defined using `--from-literal`) or complex +(like configuration files or JSON blobs defined using `--from-file`). 
```yaml apiVersion: v1 @@ -651,33 +792,24 @@ data: property.3=value-3 ``` -### Restrictions - -- You must create the `ConfigMap` object before you reference it in a Pod specification. Alternatively, mark the ConfigMap reference as `optional` in the Pod spec (see [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist and you don't mark the reference as `optional`, the Pod won't start. Similarly, references to keys that don't exist in the ConfigMap will also prevent the Pod from starting, unless you mark the key references as `optional`. - -- If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered invalid will be skipped. The pod will be allowed to start, but the invalid names will be recorded in the event log (`InvalidVariableNames`). The log message lists each skipped key. For example: - - ```shell - kubectl get events - ``` - - The output is similar to this: - ``` - LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE - 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames {kubelet, 127.0.0.1} Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names. - ``` +When `kubectl` creates a ConfigMap from inputs that are not ASCII or UTF-8, the tool puts +these into the `binaryData` field of the ConfigMap, and not in `data`. Both text and binary +data sources can be combined in one ConfigMap. -- ConfigMaps reside in a specific {{< glossary_tooltip term_id="namespace" >}}. A ConfigMap can only be referenced by pods residing in the same namespace. +If you want to view the `binaryData` keys (and their values) in a ConfigMap, you can run +`kubectl get configmap -o jsonpath='{.binaryData}' <name>`. -- You can't use ConfigMaps for {{< glossary_tooltip text="static pods" term_id="static-pod" >}}, because the Kubelet does not support this. +Pods can load data from a ConfigMap that uses either `data` or `binaryData`. -### Optional ConfigMaps +## Optional ConfigMaps You can mark a reference to a ConfigMap as _optional_ in a Pod specification. -If the ConfigMap doesn't exist, the configuration for which it provides data in the Pod (e.g. environment variable, mounted volume) will be empty. +If the ConfigMap doesn't exist, the configuration for which it provides data in the Pod +(for example: environment variable, mounted volume) will be empty. If the ConfigMap exists, but the referenced key is non-existent, the data is also empty. -For example, the following Pod specification marks an environment variable from a ConfigMap as optional: +For example, the following Pod specification marks an environment variable from a ConfigMap +as optional: ```yaml apiVersion: v1 @@ -688,7 +820,7 @@ spec: containers: - name: test-container image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "env" ] + command: ["/bin/sh", "-c", "env"] env: - name: SPECIAL_LEVEL_KEY valueFrom: @@ -704,8 +836,9 @@ If you run this pod, and there is a ConfigMap named `a-config` but that ConfigMa a key named `akey`, the output is also empty. If you do set a value for `akey` in the `a-config` ConfigMap, this pod prints that value and then terminates. -You can also mark the volumes and files provided by a ConfigMap as optional. Kubernetes always creates the mount paths for the volume, even if the referenced ConfigMap or key doesn't exist.
For example, the following -Pod specification marks a volume that references a ConfigMap as optional: +You can also mark the volumes and files provided by a ConfigMap as optional. Kubernetes always +creates the mount paths for the volume, even if the referenced ConfigMap or key doesn't exist. For +example, the following Pod specification marks a volume that references a ConfigMap as optional: ```yaml apiVersion: v1 @@ -716,7 +849,7 @@ spec: containers: - name: test-container image: gcr.io/google_containers/busybox - command: [ "/bin/sh", "-c", "ls /etc/config" ] + command: ["/bin/sh", "-c", "ls /etc/config"] volumeMounts: - name: config-volume mountPath: /etc/config @@ -730,17 +863,70 @@ spec: ### Mounted ConfigMaps are updated automatically -When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into -existence after a pod has started. +When a mounted ConfigMap is updated, the projected content is eventually updated too. +This applies in the case where an optionally referenced ConfigMap comes into existence after +a pod has started. -The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the -ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as -kubelet sync period (1 minute by default) + TTL of ConfigMaps cache (1 minute by default) in kubelet. +The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it +uses its local TTL-based cache for getting the current value of the ConfigMap. As a result, +the total delay from the moment when the ConfigMap is updated to the moment when new keys +are projected to the pod can be as long as kubelet sync period (1 minute by default) + TTL of +ConfigMaps cache (1 minute by default) in kubelet. {{< note >}} -A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. +A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) +volume will not receive ConfigMap updates. {{< /note >}} +## Restrictions + +- You must create the `ConfigMap` object before you reference it in a Pod + specification. Alternatively, mark the ConfigMap reference as `optional` in the Pod spec (see + [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist + and you don't mark the reference as `optional`, the Pod won't start. Similarly, references + to keys that don't exist in the ConfigMap will also prevent the Pod from starting, unless + you mark the key references as `optional`. + +- If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered + invalid will be skipped. The pod will be allowed to start, but the invalid names will be + recorded in the event log (`InvalidVariableNames`). The log message lists each skipped + key. For example: + + ```shell + kubectl get events + ``` + + The output is similar to this: + ``` + LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE + 0s 0s 1 dapi-test-pod Pod Warning InvalidEnvironmentVariableNames {kubelet, 127.0.0.1} Keys [1badkey, 2alsobad] from the EnvFrom configMap default/myconfig were skipped since they are considered invalid environment variable names. 
+ ``` + +- ConfigMaps reside in a specific {{< glossary_tooltip term_id="namespace" >}}. + Pods can only refer to ConfigMaps that are in the same namespace as the Pod. + +- You can't use ConfigMaps for + {{< glossary_tooltip text="static pods" term_id="static-pod" >}}, because the + kubelet does not support this. + +## {{% heading "cleanup" %}} + +Delete the ConfigMaps and Pods that you made: + +```bash +kubectl delete configmaps/game-config configmaps/game-config-2 configmaps/game-config-3 \ + configmaps/game-config-env-file +kubectl delete pod dapi-test-pod --now + +# You might already have removed the next set +kubectl delete configmaps/special-config configmaps/env-config +kubectl delete configmap -l 'game-config in (config-4,config-5)' +``` + +If you created a directory `configure-pod-container` and no longer need it, you should remove that too, +or move it into the trash can / deleted files location. + ## {{% heading "whatsnext" %}} -* Follow a real world example of [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/). +* Follow a real world example of + [Configuring Redis using a ConfigMap](/docs/tutorials/configuration/configure-redis-using-configmap/). diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md index 97457b55f1115..5f185341dcef5 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-initialization.md @@ -1,22 +1,18 @@ --- title: Configure Pod Initialization content_type: task -weight: 130 +weight: 170 --- + This page shows how to use an Init Container to initialize a Pod before an application Container runs. - - ## {{% heading "prerequisites" %}} - {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Create a Pod that has an Init Container @@ -37,55 +33,63 @@ shared Volume at `/work-dir`, and the application container mounts the shared Volume at `/usr/share/nginx/html`. The init container runs the following command and then terminates: - wget -O /work-dir/index.html http://info.cern.ch +```shell +wget -O /work-dir/index.html http://info.cern.ch +``` Notice that the init container writes the `index.html` file in the root directory of the nginx server. Create the Pod: - kubectl apply -f https://k8s.io/examples/pods/init-containers.yaml +```shell +kubectl apply -f https://k8s.io/examples/pods/init-containers.yaml +``` Verify that the nginx container is running: - kubectl get pod init-demo +```shell +kubectl get pod init-demo +``` The output shows that the nginx container is running: - NAME READY STATUS RESTARTS AGE - init-demo 1/1 Running 0 1m +``` +NAME READY STATUS RESTARTS AGE +init-demo 1/1 Running 0 1m +``` Get a shell into the nginx container running in the init-demo Pod: - kubectl exec -it init-demo -- /bin/bash +```shell +kubectl exec -it init-demo -- /bin/bash +``` In your shell, send a GET request to the nginx server: - root@nginx:~# apt-get update - root@nginx:~# apt-get install curl - root@nginx:~# curl localhost +``` +root@nginx:~# apt-get update +root@nginx:~# apt-get install curl +root@nginx:~# curl localhost +``` The output shows that nginx is serving the web page that was written by the init container:
-    <html><head></head><body><header>
-    <title>http://info.cern.ch</title>
-    </header>
-
-    <h1>http://info.cern.ch - home of the first website</h1>
-      ...
-      <li><a href="http://info.cern.ch/hypertext/WWW/TheProject.html">Browse the first website</a></li>
-      ...
+```html
+<html><head></head><body><header>
+<title>http://info.cern.ch</title>
+</header>
+
+<h1>http://info.cern.ch - home of the first website</h1>
+...
+<li><a href="http://info.cern.ch/hypertext/WWW/TheProject.html">Browse the first website</a></li>
+...
+```
## {{% heading "whatsnext" %}} - * Learn more about -[communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). + [communicating between Containers running in the same Pod](/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/). * Learn more about [Init Containers](/docs/concepts/workloads/pods/init-containers/). * Learn more about [Volumes](/docs/concepts/storage/volumes/). * Learn more about [Debugging Init Containers](/docs/tasks/debug/debug-application/debug-init-containers/) - - - diff --git a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md index fb558931db6fe..a3a7ec10f3c2a 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-projected-volume-storage.md @@ -4,7 +4,7 @@ reviewers: - pmorie title: Configure a Pod to Use a Projected Volume for Storage content_type: task -weight: 70 +weight: 100 --- diff --git a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md index 58028f9c8983a..43f60a7f73c9d 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-runasusername.md +++ b/content/en/docs/tasks/configure-pod-container/configure-runasusername.md @@ -1,7 +1,7 @@ --- title: Configure RunAsUserName for Windows pods and containers content_type: task -weight: 20 +weight: 40 --- diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md index 5b783d2dcca01..50d68e4bc0ecc 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md +++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md @@ -5,7 +5,7 @@ reviewers: - thockin title: Configure Service Accounts for Pods content_type: task -weight: 90 +weight: 120 --- Kubernetes offers two distinct ways for clients that run within your diff --git a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md index 1ee34aa225721..d33d221a4e137 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md +++ b/content/en/docs/tasks/configure-pod-container/configure-volume-storage.md @@ -1,7 +1,7 @@ --- title: Configure a Pod to Use a Volume for Storage content_type: task -weight: 50 +weight: 80 --- diff --git a/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md b/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md index 24b8efea5a8cd..4ba0b26fe76bc 100644 --- a/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md +++ b/content/en/docs/tasks/configure-pod-container/create-hostprocess-pod.md @@ -1,7 +1,7 @@ --- title: Create a Windows HostProcess Pod content_type: task -weight: 20 +weight: 50 min-kubernetes-server-version: 1.23 --- diff --git a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md index 614d5c3b56ade..5f11dba1a1451 100644 --- a/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md +++
b/content/en/docs/tasks/configure-pod-container/enforce-standards-admission-controller.md @@ -4,6 +4,7 @@ reviewers: - tallclair - liggitt content_type: task +weight: 240 --- Kubernetes provides a built-in [admission controller](/docs/reference/access-authn-authz/admission-controllers/#podsecurity) diff --git a/content/en/docs/tasks/configure-pod-container/enforce-standards-namespace-labels.md b/content/en/docs/tasks/configure-pod-container/enforce-standards-namespace-labels.md index a9f767b269e5c..1bfce66d1d2df 100644 --- a/content/en/docs/tasks/configure-pod-container/enforce-standards-namespace-labels.md +++ b/content/en/docs/tasks/configure-pod-container/enforce-standards-namespace-labels.md @@ -4,6 +4,7 @@ reviewers: - tallclair - liggitt content_type: task +weight: 250 --- Namespaces can be labeled to enforce the [Pod Security Standards](/docs/concepts/security/pod-security-standards). The three policies diff --git a/content/en/docs/tasks/configure-pod-container/extended-resource.md b/content/en/docs/tasks/configure-pod-container/extended-resource.md index 25fa11b0d9f6b..6b9d8446648c6 100644 --- a/content/en/docs/tasks/configure-pod-container/extended-resource.md +++ b/content/en/docs/tasks/configure-pod-container/extended-resource.md @@ -1,7 +1,7 @@ --- title: Assign Extended Resources to a Container content_type: task -weight: 40 +weight: 70 --- diff --git a/content/en/docs/tasks/configure-pod-container/migrate-from-psp.md b/content/en/docs/tasks/configure-pod-container/migrate-from-psp.md index 39096921626df..f48ad1e7bf32b 100644 --- a/content/en/docs/tasks/configure-pod-container/migrate-from-psp.md +++ b/content/en/docs/tasks/configure-pod-container/migrate-from-psp.md @@ -5,6 +5,7 @@ reviewers: - liggitt content_type: task min-kubernetes-server-version: v1.22 +weight: 260 --- diff --git a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md index 05193031f45c2..0b21934cfcce1 100644 --- a/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md +++ b/content/en/docs/tasks/configure-pod-container/pull-image-private-registry.md @@ -1,7 +1,6 @@ --- title: Pull an Image from a Private Registry -content_type: task -weight: 100 +weight: 130 --- diff --git a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md index 15d324b955082..d242f105f2d8c 100644 --- a/content/en/docs/tasks/configure-pod-container/quality-service-pod.md +++ b/content/en/docs/tasks/configure-pod-container/quality-service-pod.md @@ -1,7 +1,7 @@ --- title: Configure Quality of Service for Pods content_type: task -weight: 30 +weight: 60 --- diff --git a/content/en/docs/tasks/configure-pod-container/security-context.md b/content/en/docs/tasks/configure-pod-container/security-context.md index 7e1a04e9a439e..756222eb426eb 100644 --- a/content/en/docs/tasks/configure-pod-container/security-context.md +++ b/content/en/docs/tasks/configure-pod-container/security-context.md @@ -5,7 +5,7 @@ reviewers: - thockin title: Configure a Security Context for a Pod or Container content_type: task -weight: 80 +weight: 110 --- @@ -442,7 +442,7 @@ To assign SELinux labels, the SELinux security module must be loaded on the host {{< feature-state for_k8s_version="v1.25" state="alpha" >}} -By default, the contrainer runtime recursively assigns SELinux label to all +By default, the container runtime recursively assigns SELinux label to 
all files on all Pod volumes. To speed up this process, Kubernetes can change the SELinux label of a volume instantly by using a mount option `-o context=