diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index 291db36bf2345..43f27a88b292f 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -41,7 +41,6 @@ aliases:
sig-docs-en-owners: # Admins for English content
- bradtopol
- daminisatya
- - gochist
- jaredbhatti
- jimangel
- kbarnard10
@@ -58,7 +57,6 @@ aliases:
sig-docs-en-reviews: # PR reviews for English content
- bradtopol
- daminisatya
- - gochist
- jaredbhatti
- jimangel
- kbarnard10
@@ -192,10 +190,12 @@ aliases:
- femrtnz
- jcjesus
- devlware
+ - jhonmike
sig-docs-pt-reviews: # PR reviews for Portuguese content
- femrtnz
- jcjesus
- devlware
+ - jhonmike
sig-docs-vi-owners: # Admins for Vietnamese content
- huynguyennovem
- ngtuna
diff --git a/README-id.md b/README-id.md
index b178b220bc38e..1d6b830f2e8d8 100644
--- a/README-id.md
+++ b/README-id.md
@@ -9,7 +9,7 @@ Selamat datang! Repositori ini merupakan wadah bagi semua komponen yang dibutuhk
Pertama, kamu dapat menekan tombol **Fork** yang berada pada bagian atas layar, untuk menyalin repositori pada akun Github-mu. Salinan ini disebut sebagai **fork**. Kamu dapat menambahkan konten pada **fork** yang kamu miliki, setelah kamu merasa cukup untuk menambahkan konten yang kamu miliki dan ingin memberikan konten tersebut pada kami, kamu dapat melihat **fork** yang telah kamu buat dan membuat **pull request** untuk memberi tahu kami bahwa kamu ingin menambahkan konten yang telah kamu buat.
-Setelah kamu membuat sebuah **pull request**, seorang **reviewer** akan memberikan masukan terhadap konten yang kamu sediakan serta beberapa hal yang dapat kamu lakukan apabila perbaikan diperlukan terhadap konten yang telah kamu sediakan. Sebagai seorang yang membuat **pull request**, **sudah menjadi kewajiban kamu untuk melakukan modifikasi terhadap konten yang kamu berikan sesuai dengan masukan yang diberikan oleh seorang reviewer Kubernetes**. Perlu kamu ketahui bahwa kamu dapat saja memiliki lebih dari satu orang **reviewer Kubernetes** atau dalam kasus kamu bisa saja mendapatkan **reviewer Kubernetes** yang berbeda dengan **reviewer Kubernetes** awal yang ditugaskan untuk memberikan masukan terhadap konten yang kamu sediakan. Selain itu, seorang **reviewer Kubernetes** bisa saja meminta masukan teknis dari [reviewer teknis Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) jika diperlukan.
+Setelah kamu membuat sebuah **pull request**, seorang **reviewer** akan memberikan masukan terhadap konten yang kamu sediakan serta beberapa hal yang dapat kamu lakukan apabila perbaikan diperlukan terhadap konten yang telah kamu sediakan. Sebagai seorang yang membuat **pull request**, **sudah menjadi kewajiban kamu untuk melakukan modifikasi terhadap konten yang kamu berikan sesuai dengan masukan yang diberikan oleh seorang reviewer Kubernetes**. Perlu kamu ketahui bahwa kamu dapat saja memiliki lebih dari satu orang **reviewer Kubernetes** atau dalam kasus kamu bisa saja mendapatkan **reviewer Kubernetes** yang berbeda dengan **reviewer Kubernetes** awal yang ditugaskan untuk memberikan masukan terhadap konten yang kamu sediakan. Selain itu, seorang **reviewer Kubernetes** bisa saja meminta masukan teknis dari [reviewer teknis Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) jika diperlukan.
Untuk informasi lebih lanjut mengenai tata cara melakukan kontribusi, kamu dapat melihat tautan di bawah ini:
@@ -21,11 +21,11 @@ Untuk informasi lebih lanjut mengenai tata cara melakukan kontribusi, kamu dapat
## Menjalankan Dokumentasi Kubernetes pada Mesin Lokal Kamu
-Petunjuk yang disarankan untuk menjalankan Dokumentasi Kubernetes pada mesin lokal kamus adalah dengan menggunakan [Docker](https://docker.com) **image** yang memiliki **package** [Hugo](https://gohugo.io), **Hugo** sendiri merupakan generator website statis.
+Petunjuk yang disarankan untuk menjalankan Dokumentasi Kubernetes pada mesin lokal kamu adalah dengan menggunakan [Docker](https://docker.com) **image** yang memiliki **package** [Hugo](https://gohugo.io), **Hugo** sendiri merupakan generator website statis.
> Jika kamu menggunakan Windows, kamu mungkin membutuhkan beberapa langkah tambahan untuk melakukan instalasi perangkat lunak yang dibutuhkan. Instalasi ini dapat dilakukan dengan menggunakan [Chocolatey](https://chocolatey.org). `choco install make`
-> Jika kamu ingin menjalankan **website** tanpa menggunakan **Docker**, kamu dapat melihat tautan berikut [Petunjuk untuk menjalankan website pada mesin lokal dengan menggunakan Hugo](#petunjuk-untuk-menjalankan-website-pada-mesin-lokal-denga-menggunakan-hugo) di bagian bawah.
+> Jika kamu ingin menjalankan **website** tanpa menggunakan **Docker**, kamu dapat melihat tautan berikut [Petunjuk untuk menjalankan website pada mesin lokal dengan menggunakan Hugo](#petunjuk-untuk-menjalankan-website-pada-mesin-lokal-dengan-menggunakan-hugo) di bagian bawah.
Jika kamu sudah memiliki **Docker** [yang sudah dapat digunakan](https://www.docker.com/get-started), kamu dapat melakukan **build** `kubernetes-hugo` **Docker image** secara lokal:
@@ -44,7 +44,7 @@ Buka **browser** kamu ke http://localhost:1313 untuk melihat laman dokumentasi.
## Petunjuk untuk menjalankan website pada mesin lokal dengan menggunakan Hugo
-Kamu dapat melihat [dokumentasi resmi Hugo](https://gohugo.io/getting-started/installing/) untuk mengetahui langkah yang diperlukan untuk melakukan instalasi **Hugo**. Pastikan kamu melakukan instalasi versi **Hugo** sesuai dengan versi yang tersedia pada **environment variable** `HUGO_VERSION` pada **file**[`netlify.toml`](netlify.toml#L9).
+Kamu dapat melihat [dokumentasi resmi Hugo](https://gohugo.io/getting-started/installing/) untuk mengetahui langkah yang diperlukan untuk melakukan instalasi **Hugo**. Pastikan kamu melakukan instalasi versi **Hugo** sesuai dengan versi yang tersedia pada **environment variable** `HUGO_VERSION` pada **file** [`netlify.toml`](netlify.toml#L9).
Untuk menjalankan laman pada mesin lokal setelah instalasi **Hugo**, kamu dapat menjalankan perintah berikut:
diff --git a/README-vi.md b/README-vi.md
index 454d687d02352..b06b6df368b66 100644
--- a/README-vi.md
+++ b/README-vi.md
@@ -26,7 +26,7 @@ Cách được đề xuất để chạy trang web Kubernetes cục bộ là dù
> Nếu bạn làm việc trên môi trường Windows, bạn sẽ cần thêm môt vài công cụ mà bạn có thể cài đặt với [Chocolatey](https://chocolatey.org). `choco install make`
-> Nếu bạn không muốn dùng Docker để chạy trang web cục bộ, hãy xem [Chạy website cục bộ dùng Hugo](#Chạy website cục bộ dùng Hugo) dưới đây.
+> Nếu bạn không muốn dùng Docker để chạy trang web cục bộ, hãy xem [Chạy website cục bộ dùng Hugo](#chạy-website-cục-bộ-dùng-hugo) dưới đây.
Nếu bạn có Docker đang [up và running](https://www.docker.com/get-started), build `kubernetes-hugo` Docker image cục bộ:
diff --git a/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md b/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md
index 9639e87a5366a..587080617c7c6 100644
--- a/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md
+++ b/content/en/blog/_posts/2019-07-18-some-apis-are-being-deprecated.md
@@ -23,30 +23,30 @@ The **v1.16** release will stop serving the following deprecated API versions in
Existing persisted data can be retrieved/updated via the new version.
* Notable changes:
* `spec.templateGeneration` is removed
- * `spec.selector` is now required and immutable after creation
- * `spec.updateStrategy.type` now defaults to `RollingUpdate`
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+ * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `extensions/v1beta1` was `OnDelete`)
* Deployment in the **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions is no longer served
* Migrate to use the **apps/v1** API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.
* Notable changes:
* `spec.rollbackTo` is removed
- * `spec.selector` is now required and immutable after creation
- * `spec.progressDeadlineSeconds` now defaults to `600` seconds
- * `spec.revisionHistoryLimit` now defaults to `10`
- * `maxSurge` and `maxUnavailable` now default to `25%`
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+ * `spec.progressDeadlineSeconds` now defaults to `600` seconds (the default in `extensions/v1beta1` was no deadline)
+ * `spec.revisionHistoryLimit` now defaults to `10` (the default in `apps/v1beta1` was `2`, the default in `extensions/v1beta1` was to retain all)
+ * `maxSurge` and `maxUnavailable` now default to `25%` (the default in `extensions/v1beta1` was `1`)
* StatefulSet in the **apps/v1beta1** and **apps/v1beta2** API versions is no longer served
* Migrate to use the **apps/v1** API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.
* Notable changes:
- * `spec.selector` is now required and immutable after creation
- * `spec.updateStrategy.type` now defaults to `RollingUpdate`
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
+ * `spec.updateStrategy.type` now defaults to `RollingUpdate` (the default in `apps/v1beta1` was `OnDelete`)
* ReplicaSet in the **extensions/v1beta1**, **apps/v1beta1**, and **apps/v1beta2** API versions is no longer served
* Migrate to use the **apps/v1** API version, available since v1.9.
Existing persisted data can be retrieved/updated via the new version.
* Notable changes:
- * `spec.selector` is now required and immutable after creation
+ * `spec.selector` is now required and immutable after creation; use the existing template labels as the selector for seamless upgrades
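+
+For the `spec.selector` change noted above, here is a minimal sketch of a workload manifest under `apps/v1` (the name, labels, and image are illustrative); the selector simply repeats the existing template labels:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-app
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: my-app          # must match the existing .spec.template.metadata.labels
+  template:
+    metadata:
+      labels:
+        app: my-app
+    spec:
+      containers:
+      - name: my-app
+        image: my-app:1.0
+```
+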
-The **v1.20** release will stop serving the following deprecated API versions in favor of newer and more stable API versions:
+The **v1.22** release will stop serving the following deprecated API versions in favor of newer and more stable API versions:
* Ingress in the **extensions/v1beta1** API version will no longer be served
* Migrate to use the **networking.k8s.io/v1beta1** API version, available since v1.14.
@@ -84,8 +84,8 @@ apiserver startup arguments:
Deprecations are announced in the Kubernetes release notes. You can see these
announcements in
-[1.14](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#deprecations)
-and [1.15](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#deprecations-and-removals).
+[1.14](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md#deprecations)
+and [1.15](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#deprecations-and-removals).
You can read more [in our deprecation policy document](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api)
about the deprecation policies for Kubernetes APIs, and other Kubernetes components.
diff --git a/content/en/docs/concepts/architecture/nodes.md b/content/en/docs/concepts/architecture/nodes.md
index cf7bdac64b5c7..338e9a240897c 100644
--- a/content/en/docs/concepts/architecture/nodes.md
+++ b/content/en/docs/concepts/architecture/nodes.md
@@ -131,6 +131,8 @@ Kubernetes creates a node object internally (the representation), and
validates the node by health checking based on the `metadata.name` field. If the node is valid -- that is, if all necessary
services are running -- it is eligible to run a pod. Otherwise, it is
ignored for any cluster activity until it becomes valid.
+The name of a Node object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
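+
+For illustration, a minimal sketch of a Node manifest with a conforming name (the name and label are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Node
+metadata:
+  name: node-01.example.internal   # must be a valid DNS subdomain name
+  labels:
+    kubernetes.io/hostname: node-01
+```
+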
{{< note >}}
Kubernetes keeps the object for the invalid node and keeps checking to see whether it becomes valid.
diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md
index 43e71b15f73d1..576a008ba9866 100644
--- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md
+++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md
@@ -191,7 +191,7 @@ resource limits, see the
The resource usage of a Pod is reported as part of the Pod status.
-If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md)
+If [optional monitoring](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)
is configured for your cluster, then Pod resource usage can be retrieved from
the monitoring system.
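+
+For example, assuming a metrics pipeline such as metrics-server is installed (the Pod name and namespace below are illustrative):
+
+```shell
+# Show current CPU and memory usage for a single Pod
+kubectl top pod my-app-pod --namespace my-namespace
+```
+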
diff --git a/content/en/docs/concepts/configuration/pod-priority-preemption.md b/content/en/docs/concepts/configuration/pod-priority-preemption.md
index 5c2edb1a0a2b0..77fbd93c58723 100644
--- a/content/en/docs/concepts/configuration/pod-priority-preemption.md
+++ b/content/en/docs/concepts/configuration/pod-priority-preemption.md
@@ -117,6 +117,9 @@ priority class name to the integer value of the priority. The name is specified
in the `name` field of the PriorityClass object's metadata. The value is
specified in the required `value` field. The higher the value, the higher the
priority.
+The name of a PriorityClass object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names),
+and it cannot be prefixed with `system-`.
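+
+As a quick sketch, a PriorityClass manifest might look like this (the name, value, and description are illustrative):
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+  name: high-priority          # must be a valid DNS subdomain name, not prefixed with "system-"
+value: 1000000                 # higher values mean higher priority
+globalDefault: false
+description: "Priority class for important workloads."
+```
+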
A PriorityClass object can have any 32-bit integer value smaller than or equal
to 1 billion. Larger numbers are reserved for critical system Pods that should
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index 61995234e3db7..356c252fb50bd 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -68,6 +68,8 @@ echo -n '1f2d1e2e67df' > ./password.txt
The `kubectl create secret` command packages these files into a Secret and creates
the object on the API server.
+The name of a Secret object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
```shell
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
@@ -137,8 +139,10 @@ See [decoding a secret](#decoding-a-secret) to learn how to view the contents of
#### Creating a Secret manually
You can also create a Secret in a file first, in JSON or YAML format,
-and then create that object. The
-[Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
+and then create that object.
+The name of a Secret object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+The [Secret](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#secret-v1-core)
contains two maps:
`data` and `stringData`. The `data` field is used to store arbitrary data, encoded using
base64. The `stringData` field is provided for convenience, and allows you to provide
diff --git a/content/en/docs/concepts/containers/runtime-class.md b/content/en/docs/concepts/containers/runtime-class.md
index 00bd9fae34a4f..9ea21fa6b0536 100644
--- a/content/en/docs/concepts/containers/runtime-class.md
+++ b/content/en/docs/concepts/containers/runtime-class.md
@@ -82,6 +82,9 @@ metadata:
handler: myconfiguration # The name of the corresponding CRI configuration
```
+The name of a RuntimeClass object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
{{< note >}}
It is recommended that RuntimeClass write operations (create/update/patch/delete) be
restricted to the cluster administrator. This is typically the default. See [Authorization
diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md
index bf2d5501920e9..8bc6e22861761 100644
--- a/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md
+++ b/content/en/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation.md
@@ -5,30 +5,34 @@ reviewers:
- cheftako
- chenopis
content_template: templates/concept
-weight: 10
+weight: 20
---
{{% capture overview %}}
-The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs.
+The aggregation layer allows Kubernetes to be extended with additional APIs, beyond what is offered by the core Kubernetes APIs.
+The additional APIs can either be ready-made solutions such as [service-catalog](/docs/concepts/extend-kubernetes/service-catalog/), or APIs that you develop yourself.
+
+The aggregation layer is different from [Custom Resources](/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which are a way to make the {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}} recognise new kinds of object.
{{% /capture %}}
{{% capture body %}}
-## Overview
+## Aggregation layer
+
+The aggregation layer runs in-process with the kube-apiserver. Until an extension resource is registered, the aggregation layer will do nothing. To register an API, you add an _APIService_ object, which "claims" the URL path in the Kubernetes API. At that point, the aggregation layer will proxy anything sent to that API path (e.g. `/apis/myextension.mycompany.io/v1/…`) to the registered APIService.
-The aggregation layer enables installing additional Kubernetes-style APIs in your cluster. These can either be pre-built, existing 3rd party solutions, such as [service-catalog](https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md), or user-created APIs like [apiserver-builder](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/README.md), which can get you started.
+The most common way to implement the APIService is to run an *extension API server* in Pod(s) that run in your cluster. If you're using the extension API server to manage resources in your cluster, the extension API server (also written as "extension-apiserver") is typically paired with one or more {{< glossary_tooltip text="controllers" term_id="controller" >}}. The apiserver-builder library provides a skeleton for both extension API servers and the associated controller(s).
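+
+As a sketch, an APIService object that claims the `/apis/myextension.mycompany.io/v1/…` path might look like this (the group, version, and backing Service details are illustrative):
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+  name: v1.myextension.mycompany.io
+spec:
+  group: myextension.mycompany.io
+  version: v1
+  service:
+    name: myextension-apiserver    # Service that fronts the extension API server Pods
+    namespace: myextension
+    port: 443
+  groupPriorityMinimum: 1000
+  versionPriority: 15
+  insecureSkipTLSVerify: true      # sketch only; configure caBundle for real deployments
+```
+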
-The aggregation layer runs in-process with the kube-apiserver. Until an extension resource is registered, the aggregation layer will do nothing. To register an API, users must add an APIService object, which "claims" the URL path in the Kubernetes API. At that point, the aggregation layer will proxy anything sent to that API path (e.g. /apis/myextension.mycompany.io/v1/…) to the registered APIService.
+### Response latency
-Ordinarily, the APIService will be implemented by an *extension-apiserver* in a pod running in the cluster. This extension-apiserver will normally need to be paired with one or more controllers if active management of the added resources is needed. As a result, the apiserver-builder will actually provide a skeleton for both. As another example, when the service-catalog is installed, it provides both the extension-apiserver and controller for the services it provides.
+Extension API servers should have low latency networking to and from the kube-apiserver.
+Discovery requests are required to round-trip from the kube-apiserver in five seconds or less.
-Extension-apiservers should have low latency connections to and from the kube-apiserver.
-In particular, discovery requests are required to round-trip from the kube-apiserver in five seconds or less.
-If your deployment cannot achieve this, you should consider how to change it. For now, setting the
-`EnableAggregatedDiscoveryTimeout=false` feature gate on the kube-apiserver
-will disable the timeout restriction. It will be removed in a future release.
+If your extension API server cannot achieve that latency requirement, consider making changes that let you meet it. You can also set the
+`EnableAggregatedDiscoveryTimeout=false` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the kube-apiserver
+to disable the timeout restriction. This deprecated feature gate will be removed in a future release.
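+
+As a sketch, that feature gate is passed on the kube-apiserver command line like any other flag (remaining flags omitted):
+
+```shell
+kube-apiserver --feature-gates=EnableAggregatedDiscoveryTimeout=false
+```
+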
{{% /capture %}}
@@ -37,7 +41,6 @@ will disable the timeout restriction. It will be removed in a future release.
* To get the aggregator working in your environment, [configure the aggregation layer](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/).
* Then, [setup an extension api-server](/docs/tasks/access-kubernetes-api/setup-extension-api-server/) to work with the aggregation layer.
* Also, learn how to [extend the Kubernetes API using Custom Resource Definitions](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
+* Read the specification for [APIService](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#apiservice-v1-apiregistration-k8s-io)
{{% /capture %}}
-
-
diff --git a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
index d0b990e0da6f6..f2d4b814ad09e 100644
--- a/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
+++ b/content/en/docs/concepts/extend-kubernetes/api-extension/custom-resources.md
@@ -4,7 +4,7 @@ reviewers:
- enisoc
- deads2k
content_template: templates/concept
-weight: 20
+weight: 10
---
{{% capture overview %}}
@@ -167,7 +167,7 @@ CRDs are easier to create than Aggregated APIs.
| CRDs | Aggregated API |
| --------------------------- | -------------- |
-| Do not require programming. Users can choose any language for a CRD controller. | Requires programming in Go and building binary and image. Users can choose any language for a CRD controller. |
+| Do not require programming. Users can choose any language for a CRD controller. | Requires programming in Go and building binary and image. |
| No additional service to run; CRs are handled by API Server. | An additional service to create and that could fail. |
| No ongoing support once the CRD is created. Any bug fixes are picked up as part of normal Kubernetes Master upgrades. | May need to periodically pickup bug fixes from upstream and rebuild and update the Aggregated APIserver. |
| No need to handle multiple versions of your API. For example: when you control the client for this resource, you can upgrade it in sync with the API. | You need to handle multiple versions of your API, for example: when developing an extension to share with the world. |
diff --git a/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md b/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md
index 1afbc17c09b94..4c5ab12c03aae 100644
--- a/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md
+++ b/content/en/docs/concepts/extend-kubernetes/poseidon-firmament-alternate-scheduler.md
@@ -1,117 +1,111 @@
---
-title: Poseidon-Firmament - An alternate scheduler
+title: Poseidon-Firmament Scheduler
content_template: templates/concept
weight: 80
---
{{% capture overview %}}
-**Current release of Poseidon-Firmament scheduler is an alpha release.**
+{{< feature-state for_k8s_version="v1.6" state="alpha" >}}
-Poseidon-Firmament scheduler is an alternate scheduler that can be deployed alongside the default Kubernetes scheduler.
+The Poseidon-Firmament scheduler is an alternate scheduler that can be deployed alongside the default Kubernetes scheduler.
{{% /capture %}}
{{% capture body %}}
-## Introduction
+## Introduction
-Poseidon is a service that acts as the integration glue for the [Firmament scheduler](https://github.com/Huawei-PaaS/firmament) with Kubernetes. Poseidon-Firmament scheduler augments the current Kubernetes scheduling capabilities. It incorporates novel flow network graph based scheduling capabilities alongside the default Kubernetes Scheduler. Firmament scheduler models workloads and clusters as flow networks and runs min-cost flow optimizations over these networks to make scheduling decisions.
+Poseidon is a service that acts as the integration glue between the [Firmament scheduler](https://github.com/Huawei-PaaS/firmament) and Kubernetes. Poseidon-Firmament augments the current Kubernetes scheduling capabilities. It incorporates novel flow network graph based scheduling capabilities alongside the default Kubernetes scheduler. The Firmament scheduler models workloads and clusters as flow networks and runs min-cost flow optimizations over these networks to make scheduling decisions.
-It models the scheduling problem as a constraint-based optimization over a flow network graph. This is achieved by reducing scheduling to a min-cost max-flow optimization problem. The Poseidon-Firmament scheduler dynamically refines the workload placements.
+Firmament models the scheduling problem as a constraint-based optimization over a flow network graph. This is achieved by reducing scheduling to a min-cost max-flow optimization problem. The Poseidon-Firmament scheduler dynamically refines the workload placements.
-Poseidon-Firmament scheduler runs alongside the default Kubernetes Scheduler as an alternate scheduler, so multiple schedulers run simultaneously.
+Poseidon-Firmament scheduler runs alongside the default Kubernetes scheduler as an alternate scheduler. You can simultaneously run multiple, different schedulers.
-## Key Advantages
+Flow graph scheduling with the Poseidon-Firmament scheduler provides the following advantages:
-### Flow graph scheduling based Poseidon-Firmament scheduler provides the following key advantages:
-- Workloads (pods) are bulk scheduled to enable scheduling at massive scale..
-- Based on the extensive performance test results, Poseidon-Firmament scales much better than the Kubernetes default scheduler as the number of nodes increase in a cluster. This is due to the fact that Poseidon-Firmament is able to amortize more and more work across workloads.
-- Poseidon-Firmament Scheduler outperforms the Kubernetes default scheduler by a wide margin when it comes to throughput performance numbers for scenarios where compute resource requirements are somewhat uniform across jobs (Replicasets/Deployments/Jobs). Poseidon-Firmament scheduler end-to-end throughput performance numbers, including bind time, consistently get better as the number of nodes in a cluster increase. For example, for a 2,700 node cluster (shown in the graphs [here](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/benchmark/README.md)), Poseidon-Firmament scheduler achieves a 7X or greater end-to-end throughput than the Kubernetes default scheduler, which includes bind time.
+- Workloads (Pods) are bulk scheduled to enable scheduling at massive scale.
+ The Poseidon-Firmament scheduler outperforms the Kubernetes default scheduler by a wide margin when it comes to throughput performance for scenarios where compute resource requirements are somewhat uniform across your workload (Deployments, ReplicaSets, Jobs).
+- The Poseidon-Firmament scheduler's end-to-end throughput performance and bind time improve as the number of nodes in a cluster increases. As you scale out, the Poseidon-Firmament scheduler is able to amortize more and more work across workloads.
+- Scheduling in Poseidon-Firmament is dynamic; it keeps cluster resources in a global optimal state during every scheduling run.
+- The Poseidon-Firmament scheduler supports scheduling complex rule constraints.
-- Availability of complex rule constraints.
-- Scheduling in Poseidon-Firmament is dynamic; it keeps cluster resources in a global optimal state during every scheduling run.
-- Highly efficient resource utilizations.
+## How the Poseidon-Firmament scheduler works
-## Poseidon-Firmament Scheduler - How it works
+Kubernetes supports [using multiple schedulers](/docs/tasks/administer-cluster/configure-multiple-schedulers/). You can specify that a particular Pod is scheduled by a custom scheduler (“poseidon” in this case) by setting the `schedulerName` field in the PodSpec at the time of Pod creation. The default scheduler then ignores that Pod and allows the Poseidon-Firmament scheduler to schedule the Pod on a relevant node.
-As part of the Kubernetes multiple schedulers support, each new pod is typically scheduled by the default scheduler. Kubernetes can be instructed to use another scheduler by specifying the name of another custom scheduler (“poseidon” in our case) in the **schedulerName** field of the PodSpec at the time of pod creation. In this case, the default scheduler will ignore that Pod and allow Poseidon scheduler to schedule the Pod on a relevant node.
+For example:
```yaml
apiVersion: v1
kind: Pod
-
...
spec:
- schedulerName: poseidon
-```
-
-
-{{< note >}}
-For details about the design of this project see the [design document](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/design/README.md).
-{{< /note >}}
+ schedulerName: poseidon
+...
+```
-## Possible Use Case Scenarios - When to use it
+## Batch scheduling
As mentioned earlier, Poseidon-Firmament scheduler enables an extremely high throughput scheduling environment at scale due to its bulk scheduling approach versus Kubernetes pod-at-a-time approach. In our extensive tests, we have observed substantial throughput benefits as long as resource requirements (CPU/Memory) for incoming Pods are uniform across jobs (Replicasets/Deployments/Jobs), mainly due to efficient amortization of work across jobs.
Although, Poseidon-Firmament scheduler is capable of scheduling various types of workloads, such as service, batch, etc., the following are a few use cases where it excels the most:
-1. For “Big Data/AI” jobs consisting of large number of tasks, throughput benefits are tremendous.
-2. Service or batch jobs where workload resource requirements are uniform across jobs (Replicasets/Deployments/Jobs).
+1. For “Big Data/AI” jobs consisting of a large number of tasks, throughput benefits are tremendous.
+2. Service or batch jobs where workload resource requirements are uniform across jobs (Replicasets/Deployments/Jobs).
-## Current Project Stage
+## Feature state
-- **Alpha Release - Incubation repo.** at https://github.com/kubernetes-sigs/poseidon.
-- Currently, Poseidon-Firmament scheduler **does not provide support for high availability**, our implementation assumes that the scheduler cannot fail. The [design document](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/design/README.md) describes possible ways to enable high availability, but we leave this to future work.
-- We are **not aware of any production deployment** of Poseidon-Firmament scheduler at this time.
-- Poseidon-Firmament is supported from Kubernetes release 1.6 and works with all subsequent releases.
-- Release process for Poseidon and Firmament repos are in lock step. The current Poseidon release can be found [here](https://github.com/kubernetes-sigs/poseidon/releases) and the corresponding Firmament release can be found [here](https://github.com/Huawei-PaaS/firmament/releases).
+Poseidon-Firmament is designed to work with Kubernetes release 1.6 and all subsequent releases.
-## Features Comparison Matrix
+{{< caution >}}
+Poseidon-Firmament scheduler does not provide support for high availability; its implementation assumes that the scheduler cannot fail.
+{{< /caution >}}
+## Feature comparison {#feature-comparison-matrix}
+{{< table caption="Feature comparison of Kubernetes and Poseidon-Firmament schedulers." >}}
|Feature|Kubernetes Default Scheduler|Poseidon-Firmament Scheduler|Notes|
|--- |--- |--- |--- |
|Node Affinity/Anti-Affinity|Y|Y||
-|Pod Affinity/Anti-Affinity - including support for pod anti-affinity symmetry|Y|Y|Currently, the default scheduler outperforms the Poseidon-Firmament scheduler pod affinity/anti-affinity functionality. We are working towards resolving this.|
+|Pod Affinity/Anti-Affinity - including support for pod anti-affinity symmetry|Y|Y|The default scheduler outperforms the Poseidon-Firmament scheduler for pod affinity/anti-affinity functionality.|
|Taints & Tolerations|Y|Y||
-|Baseline Scheduling capability in accordance to available compute resources (CPU & Memory) on a node|Y|Y**|Not all Predicates & Priorities are supported at this time.|
-|Extreme Throughput at scale|Y**|Y|Bulk scheduling approach scales or increases workload placement. Substantial throughput benefits using Firmament scheduler as long as resource requirements (CPU/Memory) for incoming Pods is uniform across Replicasets/Deployments/Jobs. This is mainly due to efficient amortization of work across Replicasets/Deployments/Jobs . 1) For “Big Data/AI” jobs consisting of large no. of tasks, throughput benefits are tremendous. 2) Substantial throughput benefits also for service or batch job scenarios where workload resource requirements are uniform across Replicasets/Deployments/Jobs.|
-|Optimal Scheduling|Pod-by-Pod scheduler, processes one pod at a time (may result into sub-optimal scheduling)|Bulk Scheduling (Optimal scheduling)|Pod-by-Pod Kubernetes default scheduler may assign tasks to a sub-optimal machine. By contrast, Firmament considers all unscheduled tasks at the same time together with their soft and hard constraints.|
-|Colocation Interference Avoidance|N|N**|Planned in Poseidon-Firmament.|
-|Priority Pre-emption|Y|N**|Partially exists in Poseidon-Firmament versus extensive support in Kubernetes default scheduler.|
-|Inherent Re-Scheduling|N|Y**|Poseidon-Firmament scheduler supports workload re-scheduling. In each scheduling run it considers all the pods, including running pods, and as a result can migrate or evict pods – a globally optimal scheduling environment.|
+|Baseline Scheduling capability in accordance to available compute resources (CPU & Memory) on a node|Y|Y†|**†** Not all Predicates & Priorities are supported with Poseidon-Firmament.|
+|Extreme Throughput at scale|Y†|Y|**†** Bulk scheduling approach scales or increases workload placement. Firmament scheduler offers high throughput when resource requirements (CPU/Memory) for incoming Pods are uniform across ReplicaSets/Deployments/Jobs.|
+|Colocation Interference Avoidance|N|N||
+|Priority Preemption|Y|N†|**†** Partially exists in Poseidon-Firmament versus extensive support in Kubernetes default scheduler.|
+|Inherent Rescheduling|N|Y†|**†** Poseidon-Firmament scheduler supports workload re-scheduling. In each scheduling run, Poseidon-Firmament considers all Pods, including running Pods, and as a result can migrate or evict Pods – a globally optimal scheduling environment.|
|Gang Scheduling|N|Y||
|Support for Pre-bound Persistence Volume Scheduling|Y|Y||
-|Support for Local Volume & Dynamic Persistence Volume Binding Scheduling|Y|N**|Planned.|
-|High Availability|Y|N**|Planned.|
-|Real-time metrics based scheduling|N|Y**|Initially supported using Heapster (now deprecated) for placing pods using actual cluster utilization statistics rather than reservations. Plans to switch over to "metric server".|
+|Support for Local Volume & Dynamic Persistence Volume Binding Scheduling|Y|N||
+|High Availability|Y|N||
+|Real-time metrics based scheduling|N|Y†|**†** Partially supported in Poseidon-Firmament using Heapster (now deprecated) for placing Pods using actual cluster utilization statistics rather than reservations.|
|Support for Max-Pod per node|Y|Y|Poseidon-Firmament scheduler seamlessly co-exists with Kubernetes default scheduler.|
|Support for Ephemeral Storage, in addition to CPU/Memory|Y|Y||
+{{< /table >}}
+## Installation
-## Installation
-
-For in-cluster installation of Poseidon, please start at the [Installation instructions](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/install/README.md).
-
-
-## Development
+The [Poseidon-Firmament installation guide](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/install/README.md#Installation) explains how to deploy Poseidon-Firmament to your cluster.
-For developers, please refer to the [Developer Setup instructions](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/devel/README.md).
+## Performance comparison
-## Latest Throughput Performance Testing Results
+{{< note >}}
+ Please refer to the [latest benchmark results](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/benchmark/README.md) for detailed throughput performance comparison test results between Poseidon-Firmament scheduler and the Kubernetes default scheduler.
+{{< /note >}}
-Pod-by-pod schedulers, such as the Kubernetes default scheduler, typically process one pod at a time. These schedulers have the following crucial drawbacks:
+Pod-by-pod schedulers, such as the Kubernetes default scheduler, process Pods in small batches (typically one at a time). These schedulers have the following crucial drawbacks:
1. The scheduler commits to a pod placement early and restricts the choices for other pods that wait to be placed.
-2. There is limited opportunities for amortizing work across pods because they are considered for placement individually.
+2. There are limited opportunities for amortizing work across pods because they are considered for placement individually.
These downsides of pod-by-pod schedulers are addressed by batching or bulk scheduling in Poseidon-Firmament scheduler. Processing several pods in a batch allows the scheduler to jointly consider their placement, and thus to find the best trade-off for the whole batch instead of one pod. At the same time it amortizes work across pods resulting in much higher throughput.
-{{< note >}}
- Please refer to the [latest benchmark results](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/benchmark/README.md) for detailed throughput performance comparison test results between Poseidon-Firmament scheduler and the Kubernetes default scheduler.
-{{< /note >}}
-
+{{% /capture %}}
+{{% capture whatsnext %}}
+* See [Poseidon-Firmament](https://github.com/kubernetes-sigs/poseidon#readme) on GitHub for more information.
+* See the [design document](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/design/README.md) for Poseidon.
+* Read [Firmament: Fast, Centralized Cluster Scheduling at Scale](https://www.usenix.org/system/files/conference/osdi16/osdi16-gog.pdf), the academic paper on the Firmament scheduling design.
+* If you'd like to contribute to Poseidon-Firmament, refer to the [developer setup instructions](https://github.com/kubernetes-sigs/poseidon/blob/master/docs/devel/README.md).
{{% /capture %}}
diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md
index 34e1ba2f8fadf..6849083ccf0f5 100644
--- a/content/en/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/en/docs/concepts/overview/what-is-kubernetes.md
@@ -48,7 +48,7 @@ Containers have become popular because they provide extra benefits, such as:
* Resource isolation: predictable application performance.
* Resource utilization: high efficiency and density.
-## Why you need Kubernetes and what can it do
+## Why you need Kubernetes and what it can do {#why-you-need-kubernetes-and-what-can-it-do}
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?
diff --git a/content/en/docs/concepts/policy/limit-range.md b/content/en/docs/concepts/policy/limit-range.md
index b4a9579a36308..3493983bdedc9 100644
--- a/content/en/docs/concepts/policy/limit-range.md
+++ b/content/en/docs/concepts/policy/limit-range.md
@@ -33,7 +33,7 @@ one of its arguments.
A limit range is enforced in a particular namespace when there is a
`LimitRange` object in that namespace.
-### Overview of Limit Range:
+### Overview of Limit Range
- The administrator creates one `LimitRange` in one namespace.
- Users create resources like Pods, Containers, and PersistentVolumeClaims in the namespace.
@@ -43,7 +43,6 @@ A limit range is enforced in a particular namespace when there is a
requests or limits for those values; otherwise, the system may reject pod creation.
- LimitRange validations occurs only at Pod Admission stage, not on Running pods.
-
Examples of policies that could be created using limit range are:
- In a 2 node cluster with a capacity of 8 GiB RAM, and 16 cores, constrain Pods in a namespace to request 100m and not exceeds 500m for CPU , request 200Mi and not exceed 600Mi
@@ -76,6 +75,8 @@ Here is the configuration file for a LimitRange object:
{{< codenew file="admin/resource/limit-mem-cpu-container.yaml" >}}
This object defines minimum and maximum Memory/CPU limits, default CPU/Memory requests, and default limits for CPU/Memory resources to be applied to containers.
+The name of a LimitRange object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
Create the `limit-mem-cpu-per-container` LimitRange in the `limitrange-demo` namespace with the following kubectl command:
diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md
index 45b48f62aea94..44c4a41e7a862 100644
--- a/content/en/docs/concepts/policy/pod-security-policy.md
+++ b/content/en/docs/concepts/policy/pod-security-policy.md
@@ -197,6 +197,8 @@ alias kubectl-user='kubectl --as=system:serviceaccount:psp-example:fake-user -n
Define the example PodSecurityPolicy object in a file. This is a policy that
simply prevents the creation of privileged pods.
+The name of a PodSecurityPolicy object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
{{< codenew file="policy/example-psp.yaml" >}}
diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md
index 92f43fe3a3ba2..d48c2db88ade8 100644
--- a/content/en/docs/concepts/policy/resource-quotas.md
+++ b/content/en/docs/concepts/policy/resource-quotas.md
@@ -37,6 +37,9 @@ Resource quotas work like this:
the `LimitRanger` admission controller to force defaults for pods that make no compute resource requirements.
See the [walkthrough](/docs/tasks/administer-cluster/quota-memory-cpu-namespace/) for an example of how to avoid this problem.
+The name of a `ResourceQuota` object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
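+A minimal sketch of a ResourceQuota manifest (the name, namespace, and quantities are illustrative):
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: compute-quota          # must be a valid DNS subdomain name
+  namespace: team-a
+spec:
+  hard:
+    requests.cpu: "10"
+    requests.memory: 20Gi
+    limits.cpu: "16"
+    limits.memory: 32Gi
+```
+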
Examples of policies that could be created using namespaces and quotas are:
- In a cluster with a capacity of 32 GiB RAM, and 16 cores, let team A use 20 GiB and 10 cores,
diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md
index 6a958ea31e8de..bc17b74d15b89 100644
--- a/content/en/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/en/docs/concepts/services-networking/connect-applications-service.md
@@ -422,10 +422,8 @@ LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.el
{{% capture whatsnext %}}
-Kubernetes also supports Federated Services, which can span multiple
-clusters and cloud providers, to provide increased availability,
-better fault tolerance and greater scalability for your services. See
-the [Federated Services User Guide](/docs/concepts/cluster-administration/federation-service-discovery/)
-for further information.
+* Learn more about [Using a Service to Access an Application in a Cluster](/docs/tasks/access-application-cluster/service-access-application-cluster/)
+* Learn more about [Connecting a Front End to a Back End Using a Service](/docs/tasks/access-application-cluster/connecting-frontend-backend/)
+* Learn more about [Creating an External Load Balancer](/docs/tasks/access-application-cluster/create-external-load-balancer/)
{{% /capture %}}
diff --git a/content/en/docs/concepts/services-networking/endpoint-slices.md b/content/en/docs/concepts/services-networking/endpoint-slices.md
index 99df54759277a..4b347f47ec5aa 100644
--- a/content/en/docs/concepts/services-networking/endpoint-slices.md
+++ b/content/en/docs/concepts/services-networking/endpoint-slices.md
@@ -32,6 +32,8 @@ for a Kubernetes Service when a {{< glossary_tooltip text="selector"
term_id="selector" >}} is specified. These EndpointSlices will include
references to any Pods that match the Service selector. EndpointSlices group
network endpoints together by unique Service and Port combinations.
+The name of an EndpointSlice object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
As an example, here's a sample EndpointSlice resource for the `example`
Kubernetes Service.
diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md
index 6739cf1bd06ac..f69cce446f54b 100644
--- a/content/en/docs/concepts/services-networking/ingress.md
+++ b/content/en/docs/concepts/services-networking/ingress.md
@@ -78,11 +78,13 @@ spec:
servicePort: 80
```
- As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields.
- For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
+As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields.
+The name of an Ingress object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md).
- Different [Ingress controller](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
+Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
your choice of Ingress controller to learn which annotations are supported.
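+
+For example, a sketch of an Ingress that uses that annotation (this assumes the NGINX Ingress controller; the name, path, and backend Service are illustrative):
+
+```yaml
+apiVersion: networking.k8s.io/v1beta1
+kind: Ingress
+metadata:
+  name: rewrite-example
+  annotations:
+    nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+  rules:
+  - http:
+      paths:
+      - path: /app
+        backend:
+          serviceName: app-service
+          servicePort: 80
+```
+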
The Ingress [spec](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)
@@ -134,10 +136,10 @@ kubectl get ingress test-ingress
```
NAME           HOSTS     ADDRESS           PORTS     AGE
-test-ingress   *         107.178.254.228   80        59s
+test-ingress   *         203.0.113.123     80        59s
```
-Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy
+Where `203.0.113.123` is the IP allocated by the Ingress controller to satisfy
this Ingress.
{{< note >}}
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index c568b36231fde..4e8326b4aefef 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -73,6 +73,8 @@ balancer in between your application and the backend Pods.
A Service in Kubernetes is a REST object, similar to a Pod. Like all of the
REST objects, you can `POST` a Service definition to the API server to create
a new instance.
+The name of a Service object must be a valid
+[DNS label name](/docs/concepts/overview/working-with-objects/names#dns-label-names).
For example, suppose you have a set of Pods that each listen on TCP port 9376
and carry a label `app=MyApp`:
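+
+A Service manifest for that scenario might look like this sketch (the Service name is illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service         # must be a valid DNS label name
+spec:
+  selector:
+    app: MyApp
+  ports:
+    - protocol: TCP
+      port: 80
+      targetPort: 9376
+```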
@@ -167,6 +169,9 @@ subsets:
- port: 9376
```
+The name of the Endpoints object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
{{< note >}}
The endpoint IPs _must not_ be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or
link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
@@ -1173,19 +1178,6 @@ SCTP is not supported on Windows based nodes.
The kube-proxy does not support the management of SCTP associations when it is in userspace mode.
{{< /warning >}}
-## Future work
-
-In the future, the proxy policy for Services can become more nuanced than
-simple round-robin balancing, for example master-elected or sharded. We also
-envision that some Services will have "real" load balancers, in which case the
-virtual IP address will simply transport the packets there.
-
-The Kubernetes project intends to improve support for L7 (HTTP) Services.
-
-The Kubernetes project intends to have more flexible ingress modes for Services
-that encompass the current ClusterIP, NodePort, and LoadBalancer modes and more.
-
-
{{% /capture %}}
{{% capture whatsnext %}}
diff --git a/content/en/docs/concepts/storage/dynamic-provisioning.md b/content/en/docs/concepts/storage/dynamic-provisioning.md
index 6bdca6b8af93a..77885981f7cac 100644
--- a/content/en/docs/concepts/storage/dynamic-provisioning.md
+++ b/content/en/docs/concepts/storage/dynamic-provisioning.md
@@ -46,6 +46,9 @@ To enable dynamic provisioning, a cluster administrator needs to pre-create
one or more StorageClass objects for users.
StorageClass objects define which provisioner should be used and what parameters
should be passed to that provisioner when dynamic provisioning is invoked.
+The name of a StorageClass object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
The following manifest creates a storage class "slow" which provisions standard
disk-like persistent disks.
diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md
index c59cb2ee3c3e2..a4451f2867361 100644
--- a/content/en/docs/concepts/storage/persistent-volumes.md
+++ b/content/en/docs/concepts/storage/persistent-volumes.md
@@ -4,6 +4,7 @@ reviewers:
- saad-ali
- thockin
- msau42
+- xing-yang
title: Persistent Volumes
feature:
title: Storage orchestration
@@ -41,7 +42,6 @@ resource.
See the [detailed walkthrough with working examples](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/).
-
## Lifecycle of a volume and claim
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:
@@ -51,9 +51,11 @@ PVs are resources in the cluster. PVCs are requests for those resources and also
There are two ways PVs may be provisioned: statically or dynamically.
#### Static
+
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
#### Dynamic
+
When none of the static PVs the administrator created match a user's `PersistentVolumeClaim`,
the cluster may try to dynamically provision a volume specially for the PVC.
This provisioning is based on `StorageClasses`: the PVC must request a
@@ -286,6 +288,8 @@ Expanding EBS volumes is a time-consuming operation. Also, there is a per-volume
## Persistent Volumes
Each PV contains a spec and status, which is the specification and status of the volume.
+The name of a PersistentVolume object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
```yaml
apiVersion: v1
@@ -440,6 +444,8 @@ The CLI will show the name of the PVC bound to the PV.
## PersistentVolumeClaims
Each PVC contains a spec and status, which is the specification and status of the claim.
+The name of a PersistentVolumeClaim object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
```yaml
apiVersion: v1
@@ -574,6 +580,7 @@ Support for the additional plugins was added in 1.10.
{{< /note >}}
### Persistent Volumes using a Raw Block Volume
+
```yaml
apiVersion: v1
kind: PersistentVolume
@@ -592,6 +599,7 @@ spec:
readOnly: false
```
### Persistent Volume Claim requesting a Raw Block Volume
+
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
@@ -605,7 +613,9 @@ spec:
requests:
storage: 10Gi
```
+
### Pod specification adding Raw Block Device path in container
+
```yaml
apiVersion: v1
kind: Pod
@@ -654,7 +664,7 @@ Only statically provisioned volumes are supported for alpha release. Administrat
## Volume Snapshot and Restore Volume from Snapshot Support
-{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+{{< feature-state for_k8s_version="v1.17" state="beta" >}}
Volume snapshot feature was added to support CSI Volume Plugins only. For details, see [volume snapshots](/docs/concepts/storage/volume-snapshots/).
@@ -662,6 +672,7 @@ To enable support for restoring a volume from a volume snapshot data source, ena
`VolumeSnapshotDataSource` feature gate on the apiserver and controller-manager.
### Create Persistent Volume Claim from Volume Snapshot
+
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
@@ -690,6 +701,7 @@ To enable support for cloning a volume from a PVC data source, enable the
`VolumePVCDataSource` feature gate on the apiserver and controller-manager.
### Create Persistent Volume Claim from an existing pvc
+
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
@@ -732,5 +744,17 @@ and need persistent storage, it is recommended that you use the following patter
dynamic storage support (in which case the user should create a matching PV)
or the cluster has no storage system (in which case the user cannot deploy
config requiring PVCs).
+{{% /capture %}}
+{{% capture whatsnext %}}
+
+* Learn more about [Creating a Persistent Volume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume).
+* Learn more about [Creating a Persistent Volume Claim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim).
+* Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md).
+
+### Reference
+* [PersistentVolume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolume-v1-core)
+* [PersistentVolumeSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumespec-v1-core)
+* [PersistentVolumeClaim](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaim-v1-core)
+* [PersistentVolumeClaimSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#persistentvolumeclaimspec-v1-core)
{{% /capture %}}
diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
index c56467322b183..f1bfca95193c4 100644
--- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
@@ -27,7 +27,8 @@ that the cron job controller uses.
{{< /caution >}}
When creating the manifest for a CronJob resource, make sure the name you provide
-is no longer than 52 characters. This is because the CronJob controller will automatically
+is a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+The name must be no longer than 52 characters. This is because the CronJob controller will automatically
append 11 characters to the job name provided and there is a constraint that the
maximum length of a Job name is no more than 63 characters.
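+
+For example (the name and suffix are illustrative), a CronJob named `nightly-report` creates Jobs with names like `nightly-report-1593964800`: the controller appends a hyphen plus a time-based suffix of 10 digits, so 52 + 11 = 63 characters stays within the Job name limit.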
diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
index 70f5c7e0faa94..8024984f481f0 100644
--- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
+++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -114,6 +114,7 @@ The output is similar to this:
## Writing a Job Spec
As with all other Kubernetes config, a Job needs `apiVersion`, `kind`, and `metadata` fields.
+Its name must be a valid [DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
A Job also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
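As a hedged sketch of these requirements, a minimal Job manifest could look like this; the `pi` name and the perl image are illustrative only:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  # the Job name must be a valid DNS subdomain name
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
```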
diff --git a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
index d214fca6120cd..bf020b958b49a 100644
--- a/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
+++ b/content/en/docs/concepts/workloads/controllers/replicationcontroller.md
@@ -116,6 +116,8 @@ specifies an expression that just gets the name from each pod in the returned li
## Writing a ReplicationController Spec
As with all other Kubernetes config, a ReplicationController needs `apiVersion`, `kind`, and `metadata` fields.
+The name of a ReplicationController object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
For general information about working with config files, see [object management ](/docs/concepts/overview/working-with-objects/object-management/).
A ReplicationController also needs a [`.spec` section](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status).
diff --git a/content/en/docs/concepts/workloads/pods/pod.md b/content/en/docs/concepts/workloads/pods/pod.md
index 7dff25cbb5d41..d64227be48edc 100644
--- a/content/en/docs/concepts/workloads/pods/pod.md
+++ b/content/en/docs/concepts/workloads/pods/pod.md
@@ -175,7 +175,7 @@ An example flow:
1. The Pod in the API server is updated with the time beyond which the Pod is considered "dead" along with the grace period.
1. Pod shows up as "Terminating" when listed in client commands
1. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the Pod shutdown process.
- 1. If one of the Pod's containers has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the container. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
+ 1. If one of the Pod's containers has defined a [preStop hook](/docs/concepts/containers/container-lifecycle-hooks/#hook-details), it is invoked inside of the container. If the `preStop` hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) one-time extended grace period. You must modify `terminationGracePeriodSeconds` if the `preStop` hook needs longer to complete (see the sketch after this list).
1. The container is sent the TERM signal. Note that not all containers in the Pod will receive the TERM signal at the same time and may each require a `preStop` hook if the order in which they shut down matters.
1. (simultaneous with 3) Pod is removed from endpoints list for service, and are no longer considered part of the set of running Pods for replication controllers. Pods that shutdown slowly cannot continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
1. When the grace period expires, any processes still running in the Pod are killed with SIGKILL.
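As a hedged sketch of how these pieces fit together, a Pod spec might combine a `preStop` hook with a longer `terminationGracePeriodSeconds`; the name, image, and durations below are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo
spec:
  # raise this if the preStop hook needs more time than the default 30 seconds
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx
    lifecycle:
      preStop:
        exec:
          # give in-flight requests time to drain before TERM is sent
          command: ["/bin/sh", "-c", "sleep 15"]
```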
@@ -203,5 +203,7 @@ Your container runtime must support the concept of a privileged container for th
Pod is a top-level resource in the Kubernetes REST API.
The [Pod API object](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#pod-v1-core) definition
describes the object in detail.
+When creating the manifest for a Pod object, make sure the name specified is a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
{{% /capture %}}
diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md
index 2da6ccf5232d5..9e477a90a4df1 100644
--- a/content/en/docs/contribute/intermediate.md
+++ b/content/en/docs/contribute/intermediate.md
@@ -911,8 +911,8 @@ deadlines. Some deadlines related to documentation are:
If your feature is an Alpha feature and is behind a feature gate, make sure you
add it to [Feature gates](/docs/reference/command-line-tools-reference/feature-gates/)
-as part of your pull request. If your feature is moving out of Alpha, make sure to
-remove it from that file.
+as part of your pull request. If your feature is moving to Beta
+or to General Availability, update the feature gates file.
## Contribute to other repos
diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md
index acd5a5bfdf2e9..181e35968229e 100644
--- a/content/en/docs/contribute/start.md
+++ b/content/en/docs/contribute/start.md
@@ -209,7 +209,7 @@ to base your work on. Use these guidelines to make the decision:
- Some localization teams work with a series of long-lived branches, and
periodically merge these to `master`. This kind of branch has a name like
dev-\<version>-\<language code>.\<team milestone>; for example:
- `dev-{{< release-branch >}}-ja.1`.
+ `dev-{{< latest-semver >}}-ja.1`
- If you're writing or updating documentation for a feature change release,
then you need to know the major and minor version of Kubernetes that
the change will first appear in.
@@ -217,8 +217,8 @@ to base your work on. Use these guidelines to make the decision:
to beta in the next minor version, you need to know what the next minor
version number is.
- Find the release branch named for that version. For example, features that
- changed in the v{{< release-branch >}} release got documented in the branch
- named `dev-{{< release-branch >}}`.
+ changed in the {{< latest-version >}} release got documented in the branch
+ named `dev-{{< latest-semver >}}`.
If you're still not sure which branch to choose, ask in `#sig-docs` on Slack or
attend a weekly SIG Docs meeting to get clarity.
diff --git a/content/en/docs/contribute/style/content-organization.md b/content/en/docs/contribute/style/content-organization.md
index 55997dcaf5ab7..e93cf8126edc4 100644
--- a/content/en/docs/contribute/style/content-organization.md
+++ b/content/en/docs/contribute/style/content-organization.md
@@ -107,7 +107,6 @@ Another widely used example is the `includes` bundle. It sets `headless: true` i
```bash
en/includes
├── default-storage-class-prereqs.md
-├── federated-task-tutorial-prereqs.md
├── index.md
├── partner-script.js
├── partner-style.css
diff --git a/content/en/docs/home/_index.md b/content/en/docs/home/_index.md
index 31f37880ff631..6ce04e800b2a7 100644
--- a/content/en/docs/home/_index.md
+++ b/content/en/docs/home/_index.md
@@ -36,9 +36,14 @@ cards:
button_path: "/docs/setup"
- name: tasks
title: "Learn how to use Kubernetes"
- description: "Look up common tasks and how to perform them using a short sequence of steps."
+ description: "Look up common tasks and how to perform them using a short sequence of steps."
button: "View Tasks"
button_path: "/docs/tasks"
+- name: training
+ title: "Training"
+ description: "Get certified in Kubernetes and make your cloud native projects successful!"
+ button: "View training"
+ button_path: "/training"
- name: reference
title: Look up reference information
description: Browse terminology, command line syntax, API resource types, and setup tool documentation.
diff --git a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md
index 4da9bc951c13c..c57bcdaf341fe 100644
--- a/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/extensible-admission-controllers.md
@@ -114,7 +114,7 @@ webhooks:
service:
namespace: "example-namespace"
name: "example-service"
- caBundle: "Ci0tLS0tQk......tLS0K"
+ caBundle: "Ci0tLS0tQk...<`caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.>...tLS0K"
admissionReviewVersions: ["v1", "v1beta1"]
sideEffects: None
timeoutSeconds: 5
@@ -139,7 +139,7 @@ webhooks:
service:
namespace: "example-namespace"
name: "example-service"
- caBundle: "Ci0tLS0tQk......tLS0K"
+ caBundle: "Ci0tLS0tQk...<`caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate>...tLS0K"
admissionReviewVersions: ["v1beta1"]
timeoutSeconds: 5
```
@@ -1122,7 +1122,7 @@ kind: MutatingWebhookConfiguration
webhooks:
- name: my-webhook.example.com
clientConfig:
- caBundle: "Ci0tLS0tQk......tLS0K"
+ caBundle: "Ci0tLS0tQk...<`caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate>...tLS0K"
service:
namespace: my-service-namespace
name: my-service-name
diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md
index 852e73fd799bc..3e18ae283ef3b 100644
--- a/content/en/docs/reference/access-authn-authz/rbac.md
+++ b/content/en/docs/reference/access-authn-authz/rbac.md
@@ -74,6 +74,9 @@ rules:
verbs: ["get", "watch", "list"]
```
+The name of a Role or a ClusterRole object must be a valid
+[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
+
### RoleBinding and ClusterRoleBinding
A role binding grants the permissions defined in a role to a user or set of users.
@@ -81,6 +84,9 @@ It holds a list of subjects (users, groups, or service accounts), and a referenc
Permissions can be granted within a namespace with a `RoleBinding`, or cluster-wide with a `ClusterRoleBinding`.
A `RoleBinding` may reference a `Role` in the same namespace.
+The name of a `RoleBinding` object must be a valid
+[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
+
The following `RoleBinding` grants the "pod-reader" role to the user "jane" within the "default" namespace.
This allows "jane" to read pods in the "default" namespace.
@@ -129,8 +135,10 @@ roleRef:
apiGroup: rbac.authorization.k8s.io
```
-Finally, a `ClusterRoleBinding` may be used to grant permission at the cluster level and in all
-namespaces. The following `ClusterRoleBinding` allows any user in the group "manager" to read
+Finally, a `ClusterRoleBinding` may be used to grant permission at the cluster level and in all namespaces.
+The name of a `ClusterRoleBinding` object must be a valid
+[path segment name](/docs/concepts/overview/working-with-objects/names#path-segment-names).
+The following `ClusterRoleBinding` allows any user in the group "manager" to read
secrets in any namespace.
```yaml
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index adb7fb8b6f339..38d723e9815cd 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -42,7 +42,7 @@ complete -F __start_kubectl k
```bash
source <(kubectl completion zsh) # setup autocomplete in zsh into the current shell
-echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # add autocomplete permanently to your zsh shell
+echo "[[ $commands[kubectl] ]] && source <(kubectl completion zsh)" >> ~/.zshrc # add autocomplete permanently to your zsh shell
```
## Kubectl Context and Configuration
@@ -95,7 +95,7 @@ kubectl apply -f ./my1.yaml -f ./my2.yaml # create from multiple files
kubectl apply -f ./dir # create resource(s) in all manifest files in dir
kubectl apply -f https://git.io/vPieo # create resource(s) from url
kubectl create deployment nginx --image=nginx # start a single instance of nginx
-kubectl explain pods,svc # get the documentation for pod and svc manifests
+kubectl explain pods # get the documentation for pod manifests
# Create multiple YAML objects from stdin
cat <
`annotate` | kubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Add or update the annotations of one or more resources.
`api-versions` | `kubectl api-versions [flags]` | List the API versions that are available.
`apply` | `kubectl apply -f FILENAME [flags]`| Apply a configuration change to a resource from a file or stdin.
`attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | Attach to a running container either to view the output stream or interact with the container (stdin).
-`autoscale` | `kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]` | Automatically scale the set of pods that are managed by a replication controller.
+`autoscale` | kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags] | Automatically scale the set of pods that are managed by a replication controller.
`cluster-info` | `kubectl cluster-info [flags]` | Display endpoint information about the master and services in the cluster.
`config` | `kubectl config SUBCOMMAND [flags]` | Modifies kubeconfig files. See the individual subcommands for details.
`create` | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin.
-`delete` | `kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags]` | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
-`describe` | `kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags]` | Display the detailed state of one or more resources.
+`delete` | kubectl delete (-f FILENAME | TYPE [NAME | /NAME | -l label | --all]) [flags] | Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
+`describe` | kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | /NAME | -l label]) [flags] | Display the detailed state of one or more resources.
`diff` | `kubectl diff -f FILENAME [flags]`| Diff file or stdin against live configuration (**BETA**)
-`edit` | `kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags]` | Edit and update the definition of one or more resources on the server by using the default editor.
+`edit` | kubectl edit (-f FILENAME | TYPE NAME | TYPE/NAME) [flags] | Edit and update the definition of one or more resources on the server by using the default editor.
`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod.
`explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources. For instance pods, nodes, services, etc.
-`expose` | `kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags]` | Expose a replication controller, service, or pod as a new Kubernetes service.
-`get` | `kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags]` | List one or more resources.
-`label` | `kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Add or update the labels of one or more resources.
+`expose` | kubectl expose (-f FILENAME | TYPE NAME | TYPE/NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags] | Expose a replication controller, service, or pod as a new Kubernetes service.
+`get` | kubectl get (-f FILENAME | TYPE [NAME | /NAME | -l label]) [--watch] [--sort-by=FIELD] [[-o | --output]=OUTPUT_FORMAT] [flags] | List one or more resources.
+`label` | kubectl label (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Add or update the labels of one or more resources.
`logs` | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | Print the logs for a container in a pod.
-`patch` | `kubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags]` | Update one or more fields of a resource by using the strategic merge patch process.
+`patch` | kubectl patch (-f FILENAME | TYPE NAME | TYPE/NAME) --patch PATCH [flags] | Update one or more fields of a resource by using the strategic merge patch process.
`port-forward` | `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [flags]` | Forward one or more local ports to a pod.
`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Run a proxy to the Kubernetes API server.
`replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin.
-`rolling-update` | `kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags]` | Perform a rolling update by gradually replacing the specified replication controller and its pods.
+`rolling-update` | kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC) [flags] | Perform a rolling update by gradually replacing the specified replication controller and its pods.
`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
-`scale` | `kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]` | Update the size of the specified replication controller.
+`scale` | kubectl scale (-f FILENAME | TYPE NAME | TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags] | Update the size of the specified replication controller.
`version` | `kubectl version [--client] [flags]` | Display the Kubernetes version running on the client and server.
Remember: For more about command operations, see the [kubectl](/docs/user-guide/kubectl/) reference documentation.
diff --git a/content/en/docs/reference/using-api/client-libraries.md b/content/en/docs/reference/using-api/client-libraries.md
index 093490b3453e1..4f76e16352ecd 100644
--- a/content/en/docs/reference/using-api/client-libraries.md
+++ b/content/en/docs/reference/using-api/client-libraries.md
@@ -60,6 +60,7 @@ their authors, not the Kubernetes team.
| PHP | [github.com/allansun/kubernetes-php-client](https://github.com/allansun/kubernetes-php-client) |
| PHP | [github.com/travisghansen/kubernetes-client-php](https://github.com/travisghansen/kubernetes-client-php) |
| Python | [github.com/eldarion-gondor/pykube](https://github.com/eldarion-gondor/pykube) |
+| Python | [github.com/fiaas/k8s](https://github.com/fiaas/k8s) |
| Python | [github.com/mnubo/kubernetes-py](https://github.com/mnubo/kubernetes-py) |
| Python | [github.com/tomplus/kubernetes_asyncio](https://github.com/tomplus/kubernetes_asyncio) |
| Ruby | [github.com/Ch00k/kuber](https://github.com/Ch00k/kuber) |
diff --git a/content/en/docs/setup/_index.md b/content/en/docs/setup/_index.md
index d0af24d7f91d7..880dd460241aa 100644
--- a/content/en/docs/setup/_index.md
+++ b/content/en/docs/setup/_index.md
@@ -24,7 +24,7 @@ This section covers different options to set up and run Kubernetes.
Different Kubernetes solutions meet different requirements: ease of maintenance, security, control, available resources, and expertise required to operate and manage a cluster.
-You can deploy a Kubernetes cluster on a local machine, cloud, on-prem datacenter; or choose a managed Kubernetes cluster. You can also create custom solutions across a wide range of cloud providers, or bare metal environments.
+You can deploy a Kubernetes cluster on a local machine, cloud, on-prem datacenter, or choose a managed Kubernetes cluster. You can also create custom solutions across a wide range of cloud providers, or bare metal environments.
More simply, you can create a Kubernetes cluster in learning and production environments.
diff --git a/content/en/docs/setup/learning-environment/minikube.md b/content/en/docs/setup/learning-environment/minikube.md
index 7233f98c0b72a..439e4a10a2920 100644
--- a/content/en/docs/setup/learning-environment/minikube.md
+++ b/content/en/docs/setup/learning-environment/minikube.md
@@ -24,7 +24,7 @@ Minikube supports the following Kubernetes features:
* NodePorts
* ConfigMaps and Secrets
* Dashboards
-* Container Runtime: Docker, [CRI-O](https://cri-o.io/), and [containerd](https://github.com/containerd/containerd)
+* Container Runtime: [Docker](https://www.docker.com/), [CRI-O](https://cri-o.io/), and [containerd](https://github.com/containerd/containerd)
* Enabling CNI (Container Network Interface)
* Ingress
diff --git a/content/en/docs/setup/production-environment/tools/kops.md b/content/en/docs/setup/production-environment/tools/kops.md
index 03d45f7827a13..10ae6dfa65285 100644
--- a/content/en/docs/setup/production-environment/tools/kops.md
+++ b/content/en/docs/setup/production-environment/tools/kops.md
@@ -49,14 +49,12 @@ Download the latest release with the command:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
```
-To download a specific version, replace the
+To download a specific version, replace the following portion of the command with the specific kops version.
```shell
$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)
```
-portion of the command with the specific version.
-
For example, to download kops version v1.15.0 type:
```shell
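# A hedged sketch, not from the original page: with the portion shown above
# replaced, and assuming the release tag for this version is v1.15.0, the
# command would look roughly like this:
curl -LO https://github.com/kubernetes/kops/releases/download/v1.15.0/kops-darwin-amd64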
diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
index 31f94ef137e6c..c0c1e2b0b223d 100644
--- a/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm.md
@@ -307,17 +307,17 @@ The tracking issue for this problem is [here](https://github.com/kubernetes/kube
*Note: This [issue](https://github.com/kubernetes/kubeadm/issues/1358) only applies to tools that marshal kubeadm types (e.g. to a YAML configuration file). It will be fixed in kubeadm API v1beta2.*
-By default, kubeadm applies the `role.kubernetes.io/master:NoSchedule` taint to control-plane nodes.
+By default, kubeadm applies the `node-role.kubernetes.io/master:NoSchedule` taint to control-plane nodes.
If you prefer kubeadm to not taint the control-plane node, and set `InitConfiguration.NodeRegistration.Taints` to an empty slice,
the field will be omitted when marshalling. When the field is omitted, kubeadm applies the default taint.
There are at least two workarounds:
-1. Use the `role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](/docs/concepts/configuration/taint-and-toleration/), unless other nodes have capacity.
+1. Use the `node-role.kubernetes.io/master:PreferNoSchedule` taint instead of an empty slice. [Pods will get scheduled on masters](/docs/concepts/configuration/taint-and-toleration/), unless other nodes have capacity.
2. Remove the taint after kubeadm init exits:
```bash
-kubectl taint nodes NODE_NAME role.kubernetes.io/master:NoSchedule-
+kubectl taint nodes NODE_NAME node-role.kubernetes.io/master:NoSchedule-
```
## `/usr` is mounted read-only on nodes {#usr-mounted-read-only}
diff --git a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
index 2366f61018dc7..a4f177b364ace 100644
--- a/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
+++ b/content/en/docs/setup/production-environment/windows/user-guide-windows-containers.md
@@ -107,6 +107,14 @@ Port mapping is also supported, but for simplicity in this example the container
Windows container hosts are not able to access the IP of services scheduled on them due to current platform limitations of the Windows networking stack. Only Windows pods are able to access service IPs.
{{< /note >}}
+## Observability
+
+### Capturing logs from workloads
+
+Logs are an important element of observability; they enable users to gain insights into the operational aspect of workloads and are a key ingredient to troubleshooting issues. Because Windows containers and workloads inside Windows containers behave differently from Linux containers, users had a hard time collecting logs, limiting operational visibility. Windows workloads, for example, are usually configured to log to ETW (Event Tracing for Windows) or push entries to the application event log. [LogMonitor](https://github.com/microsoft/windows-container-tools/tree/master/LogMonitor), an open source tool by Microsoft, is the recommended way to monitor configured log sources inside a Windows container. LogMonitor supports monitoring event logs, ETW providers, and custom application logs, piping them to STDOUT for consumption by `kubectl logs <pod>`.
+
+Follow the instructions in the LogMonitor GitHub page to copy its binaries and configuration files to all your containers and add the necessary entrypoints for LogMonitor to push your logs to STDOUT.
+
## Using configurable Container usernames
Starting with Kubernetes v1.16, Windows containers can be configured to run their entrypoints and processes with different usernames than the image defaults. The way this is achieved is a bit different from the way it is done for Linux containers. Learn more about it [here](/docs/tasks/configure-pod-container/configure-runasusername/).
diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
index 3cb90383c7abb..123271be44e80 100644
--- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
+++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
@@ -39,7 +39,7 @@ frontend and backend are connected using a Kubernetes
{{% capture lessoncontent %}}
-### Creating the backend using a Deployment
+## Creating the backend using a Deployment
The backend is a simple hello greeter microservice. Here is the configuration
file for the backend Deployment:
@@ -95,7 +95,7 @@ Events:
...
```
-### Creating the backend Service object
+## Creating the backend Service object
The key to connecting a frontend to a backend is the backend
Service. A Service creates a persistent IP address and DNS name entry
@@ -119,7 +119,7 @@ kubectl apply -f https://k8s.io/examples/service/access/hello-service.yaml
At this point, you have a backend Deployment running, and you have a
Service that can route traffic to it.
-### Creating the frontend
+## Creating the frontend
Now that you have your backend, you can create a frontend that connects to the backend.
The frontend connects to the backend worker Pods by using the DNS name
@@ -158,7 +158,7 @@ be to use a
so that you can change the configuration more easily.
{{< /note >}}
-### Interact with the frontend Service
+## Interact with the frontend Service
Once you’ve created a Service of type LoadBalancer, you can use this
command to find the external IP:
@@ -186,7 +186,7 @@ frontend LoadBalancer 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m
That IP can now be used to interact with the `frontend` service from outside the
cluster.
-### Send traffic through the frontend
+## Send traffic through the frontend
The frontend and backends are now connected. You can hit the endpoint
by using the curl command on the external IP of your frontend Service.
diff --git a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md
index 8b4480de2f0fa..b3fb886d1143a 100644
--- a/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md
+++ b/content/en/docs/tasks/access-application-cluster/list-all-running-container-images.md
@@ -23,7 +23,7 @@ In this exercise you will use kubectl to fetch all of the Pods
running in a cluster, and format the output to pull out the list
of Containers for each.
-## List all Containers in all namespaces
+## List all Container images in all namespaces
- Fetch all Pods in all namespaces using `kubectl get pods --all-namespaces`
- Format the output to include only the list of Container image names
@@ -68,7 +68,7 @@ the `.items[*]` portion of the path should be omitted because a single
Pod is returned instead of a list of items.
{{< /note >}}
-## List Containers by Pod
+## List Container images by Pod
The formatting can be controlled further by using the `range` operation to
iterate over elements individually.
@@ -78,7 +78,7 @@ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata
sort
```
-## List Containers filtering by Pod label
+## List Container images filtering by Pod label
To target only Pods matching a specific label, use the -l flag. The
following matches only Pods with labels matching `app=nginx`.
@@ -87,7 +87,7 @@ following matches only Pods with labels matching `app=nginx`.
kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx
```
-## List Containers filtering by Pod namespace
+## List Container images filtering by Pod namespace
To target only pods in a specific namespace, use the namespace flag. The
following matches only Pods in the `kube-system` namespace.
@@ -96,7 +96,7 @@ following matches only Pods in the `kube-system` namespace.
kubectl get pods --namespace kube-system -o jsonpath="{..image}"
```
-## List Containers using a go-template instead of jsonpath
+## List Container images using a go-template instead of jsonpath
As an alternative to jsonpath, Kubectl supports using [go-templates](https://golang.org/pkg/text/template/)
for formatting the output:
diff --git a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
index 4c3351956d325..a2070bcfe3fc0 100644
--- a/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
+++ b/content/en/docs/tasks/administer-cluster/change-default-storage-class.md
@@ -62,10 +62,10 @@ for details about addon manager and how to disable individual addons.
To mark a StorageClass as non-default, you need to change its value to `false`:
```bash
- kubectl patch storageclass -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
+ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
- where `` is the name of your chosen StorageClass.
+ where `standard` is the name of your chosen StorageClass.
1. Mark a StorageClass as default:
@@ -73,7 +73,7 @@ for details about addon manager and how to disable individual addons.
`storageclass.kubernetes.io/is-default-class=true`.
```bash
- kubectl patch storageclass -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+ kubectl patch storageclass gold -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
Please note that at most one StorageClass can be marked as default. If two
diff --git a/content/en/docs/tasks/administer-cluster/declare-network-policy.md b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
index edb389c46f3ac..0fdbff57b8758 100644
--- a/content/en/docs/tasks/administer-cluster/declare-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/declare-network-policy.md
@@ -90,10 +90,11 @@ To limit the access to the `nginx` service so that only Pods with the label `acc
{{< codenew file="service/networking/nginx-policy.yaml" >}}
-{{< note >}}
+The name of a NetworkPolicy object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+{{< note >}}
NetworkPolicy includes a `podSelector` which selects the grouping of Pods to which the policy applies. You can see this policy selects Pods with the label `app=nginx`. The label was automatically added to the Pod in the `nginx` Deployment. An empty `podSelector` selects all pods in the namespace.
-
{{< /note >}}
## Assign the policy to the service
diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 352a7093865aa..8c494935b65ad 100644
--- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -265,15 +265,6 @@ work properly owing to a known issue with Alpine.
Check [here](https://github.com/kubernetes/kubernetes/issues/30215)
for more information.
-## Kubernetes Federation (Multiple Zone support)
-
-Release 1.3 introduced Cluster Federation support for multi-site Kubernetes
-installations. This required some minor (backward-compatible) changes to the
-way the Kubernetes cluster DNS server processes DNS queries, to facilitate
-the lookup of federated services (which span multiple Kubernetes clusters).
-See the [Cluster Federation Administrators' Guide](/docs/concepts/cluster-administration/federation/)
-for more details on Cluster Federation and multi-site support.
-
## References
- [DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/)
diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
index bd136e3ae2b48..9a69058ceb795 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
@@ -224,12 +224,14 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some contents.
+{{< codenew file="admin/snowflake-deployment.yaml" >}}
+
+Apply the manifest to create a Deployment
+
```shell
-kubectl run snowflake --image=k8s.gcr.io/serve_hostname --replicas=2
+kubectl apply -f https://k8s.io/examples/admin/snowflake-deployment.yaml
```
We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname.
-Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
-If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details.
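The exact contents of `admin/snowflake-deployment.yaml` live in the examples repository; as a rough, assumed sketch it resembles a Deployment like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snowflake
  labels:
    app: snowflake
spec:
  replicas: 2
  selector:
    matchLabels:
      app: snowflake
  template:
    metadata:
      labels:
        app: snowflake
    spec:
      containers:
      - name: snowflake
        # basic container that serves its own hostname
        image: k8s.gcr.io/serve_hostname
```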
```shell
kubectl get deployment
diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md
index ef12b24c98963..de2230a6a4864 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces.md
@@ -101,7 +101,8 @@ See the [design doc](https://git.k8s.io/community/contributors/design-proposals/
kubectl create namespace <insert-namespace-name-here>
```
-Note that the name of your namespace must be a DNS compatible label.
+The name of your namespace must be a valid
+[DNS label](/docs/concepts/overview/working-with-objects/names#dns-label-names).
There's an optional field `finalizers`, which allows observables to purge resources whenever the namespace is deleted. Keep in mind that if you specify a nonexistent finalizer, the namespace will be created but will get stuck in the `Terminating` state if the user tries to delete it.
@@ -187,88 +188,22 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace.
- We first check what is the current context:
-
- ```shell
- kubectl config view
- ```
- ```yaml
- apiVersion: v1
- clusters:
- cluster:
- certificate-authority-data: REDACTED
- server: https://130.211.122.180
- name: lithe-cocoa-92103_kubernetes
- contexts:
- context:
- cluster: lithe-cocoa-92103_kubernetes
- user: lithe-cocoa-92103_kubernetes
- name: lithe-cocoa-92103_kubernetes
- current-context: lithe-cocoa-92103_kubernetes
- kind: Config
- preferences: {}
- users:
- name: lithe-cocoa-92103_kubernetes
- user:
- client-certificate-data: REDACTED
- client-key-data: REDACTED
- token: 65rZW78y8HbwXXtSXuUw9DbP4FLjHi4b
- name: lithe-cocoa-92103_kubernetes-basic-auth
- user:
- password: h5M0FtUUIflBSdI7
- username: admin
- ```
-
- ```shell
- kubectl config current-context
- ```
- ```
- lithe-cocoa-92103_kubernetes
- ```
-
- The next step is to define a context for the kubectl client to work in each namespace. The values of "cluster" and "user" fields are copied from the current context.
-
- ```shell
- kubectl config set-context dev --namespace=development --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
- kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-92103_kubernetes --user=lithe-cocoa-92103_kubernetes
- ```
-
- The above commands provided two request contexts you can alternate against depending on what namespace you
- wish to work against.
-
- Let's switch to operate in the `development` namespace.
-
- ```shell
- kubectl config use-context dev
- ```
-
- You can verify your current context by doing the following:
-
```shell
- kubectl config current-context
- dev
- ```
-
- At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace.
-
- Let's create some contents.
-
- ```shell
- kubectl run snowflake --image=k8s.gcr.io/serve_hostname --replicas=2
+ kubectl run snowflake --image=k8s.gcr.io/serve_hostname --replicas=2 -n=development
```
We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname.
Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details.
```shell
- kubectl get deployment
+ kubectl get deployment -n=development
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
snowflake 2/2 2 2 2m
```
```shell
- kubectl get pods -l run=snowflake
+ kubectl get pods -l run=snowflake -n=development
```
```
NAME READY STATUS RESTARTS AGE
@@ -280,23 +215,19 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other.
- ```shell
- kubectl config use-context prod
- ```
-
The `production` namespace should be empty, and the following commands should return nothing.
```shell
- kubectl get deployment
- kubectl get pods
+ kubectl get deployment -n=production
+ kubectl get pods -n=production
```
Production likes to run cattle, so let's create some cattle pods.
```shell
- kubectl run cattle --image=k8s.gcr.io/serve_hostname --replicas=5
+ kubectl run cattle --image=k8s.gcr.io/serve_hostname --replicas=5 -n=production
- kubectl get deployment
+ kubectl get deployment -n=production
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
@@ -304,7 +235,7 @@ This delete is asynchronous, so for a time you will see the namespace in the `Te
```
```shell
- kubectl get pods -l run=cattle
+ kubectl get pods -l run=cattle -n=production
```
```
NAME READY STATUS RESTARTS AGE
diff --git a/content/en/docs/tasks/administer-cluster/nodelocaldns.md b/content/en/docs/tasks/administer-cluster/nodelocaldns.md
index 7d15596112976..f23525e377433 100644
--- a/content/en/docs/tasks/administer-cluster/nodelocaldns.md
+++ b/content/en/docs/tasks/administer-cluster/nodelocaldns.md
@@ -2,6 +2,7 @@
reviewers:
- bowei
- zihongz
+- sftim
title: Using NodeLocal DNSCache in Kubernetes clusters
content_template: templates/task
---
@@ -47,18 +48,44 @@ This is the path followed by DNS Queries after NodeLocal DNSCache is enabled:
{{< figure src="/images/docs/nodelocaldns.jpg" alt="NodeLocal DNSCache flow" title="Nodelocal DNSCache flow" caption="This image shows how NodeLocal DNSCache handles DNS queries." >}}
## Configuration
-
-This feature can be enabled using the command:
-
-`KUBE_ENABLE_NODELOCAL_DNS=true kubetest --up`
-
-This works for e2e clusters created on GCE. On all other environments, the following steps will setup NodeLocal DNSCache:
-
-* A yaml similar to [this](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) can be applied using `kubectl create -f` command.
-* No need to modify the --cluster-dns flag since NodeLocal DNSCache listens on both the kube-dns service IP as well as a link-local IP (169.254.20.10 by default)
+{{< note >}} The local listen IP address for NodeLocal DNSCache can be any IP in the 169.254.20.0/16 space or any other IP address that can be guaranteed to not collide with any existing IP. This document uses 169.254.20.10 as an example.
+{{< /note >}}
+
+This feature can be enabled using the following steps:
+
+* Prepare a manifest similar to the sample [`nodelocaldns.yaml`](https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml) and save it as `nodelocaldns.yaml`.
+* Substitute the variables in the manifest with the right values:
+
+ * kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
+
+ * domain=`<cluster-domain>`
+
+ * localdns=`<node-local-address>`
+
+ `<cluster-domain>` is "cluster.local" by default. `<node-local-address>` is the local listen IP address chosen for NodeLocal DNSCache.
+
+ * If kube-proxy is running in IPTABLES mode:
+
+ ``` bash
+ sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
+ ```
+
+ `__PILLAR__CLUSTER__DNS__` and `__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.
+ In this mode, node-local-dns pods listen on both the kube-dns service IP as well as `<node-local-address>`, so pods can look up DNS records using either IP address.
+
+ * If kube-proxy is running in IPVS mode:
+
+ ``` bash
+ sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__//g; s/__PILLAR__CLUSTER__DNS__/$kubedns/g" nodelocaldns.yaml
+ ```
+ In this mode, node-local-dns pods listen only on `<node-local-address>`. The node-local-dns interface cannot bind the kube-dns cluster IP since the interface used for IPVS load balancing already uses this address.
+ `__PILLAR__UPSTREAM__SERVERS__` will be populated by the node-local-dns pods.
+
+* Run `kubectl create -f nodelocaldns.yaml`
+* If using kube-proxy in IPVS mode, the `--cluster-dns` flag passed to kubelet needs to be modified to use the `<node-local-address>` that NodeLocal DNSCache is listening on.
+ Otherwise, there is no need to modify the value of the `--cluster-dns` flag, since NodeLocal DNSCache listens on both the kube-dns service IP as well as `<node-local-address>`. A combined example of these steps follows below.
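Putting the IPTABLES-mode steps together, an assumed end-to-end sequence (reusing the example values 169.254.20.10 and cluster.local from above) might look like:

```bash
# pick up the kube-dns service IP and substitute all variables in one pass
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})
domain=cluster.local
localdns=169.254.20.10
sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml
kubectl create -f nodelocaldns.yaml
```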
Once enabled, node-local-dns Pods will run in the kube-system namespace on each of the cluster nodes. This Pod runs [CoreDNS](https://github.com/coredns/coredns) in cache mode, so all CoreDNS metrics exposed by the different plugins will be available on a per-node basis.
-The feature can be disabled by removing the daemonset, using `kubectl delete -f` command. On e2e clusters created on GCE, the daemonset can be removed by deleting the node-local-dns yaml from `/etc/kubernetes/addons/0-dns/nodelocaldns.yaml`
-
+You can disable this feature by removing the DaemonSet, using `kubectl delete -f <manifest>`. You should also revert any changes you made to the kubelet configuration.
{{% /capture %}}
diff --git a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
index e4fe4a5ac9815..39d0e825b8d89 100644
--- a/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
+++ b/content/en/docs/tasks/administer-cluster/reserve-compute-resources.md
@@ -94,13 +94,7 @@ be configured to use the `systemd` cgroup driver.
`kube-reserved` is meant to capture resource reservation for kubernetes system
daemons like the `kubelet`, `container runtime`, `node problem detector`, etc.
It is not meant to reserve resources for system daemons that are run as pods.
-`kube-reserved` is typically a function of `pod density` on the nodes. [This
-performance dashboard](http://node-perf-dash.k8s.io/#/builds) exposes `cpu` and
-`memory` usage profiles of `kubelet` and `docker engine` at multiple levels of
-pod density. [This blog
-post](https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node-dashboard)
-explains how the dashboard can be interpreted to come up with a suitable
-`kube-reserved` reservation.
+`kube-reserved` is typically a function of `pod density` on the nodes.
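As a hedged illustration (the values are arbitrary and should be derived from your own pod density and node size), the reservation is expressed through the kubelet's `--kube-reserved` flag:

```shell
# reserve CPU, memory, ephemeral storage, and process IDs for Kubernetes system daemons;
# the pid= key only takes effect where process ID reservation is supported
kubelet --kube-reserved=cpu=100m,memory=256Mi,ephemeral-storage=1Gi,pid=1000
```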
In addition to `cpu`, `memory`, and `ephemeral-storage`, `pid` may be
specified to reserve the specified number of process IDs for
diff --git a/content/en/docs/tasks/administer-cluster/safely-drain-node.md b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
index e7a4a093ac6ef..29006ff754318 100644
--- a/content/en/docs/tasks/administer-cluster/safely-drain-node.md
+++ b/content/en/docs/tasks/administer-cluster/safely-drain-node.md
@@ -9,7 +9,7 @@ content_template: templates/task
---
{{% capture overview %}}
-This page shows how to safely drain a machine, respecting the PodDisruptionBudget you have defined.
+This page shows how to safely drain a node, respecting the PodDisruptionBudget you have defined.
{{% /capture %}}
{{% capture prerequisites %}}
@@ -156,6 +156,7 @@ application owners and cluster owners to establish an agreement on behavior in t
{{% capture whatsnext %}}
* Follow steps to protect your application by [configuring a Pod Disruption Budget](/docs/tasks/run-application/configure-pdb/).
+* Learn more about [maintenance on a node](/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node).
{{% /capture %}}
diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
index dcba78d81a5d8..71fad95de9c60 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -33,6 +33,8 @@ kubectl create configmap
```
where \<map-name> is the name you want to assign to the ConfigMap and \<data-source> is the directory, file, or literal value to draw the data from.
+The name of a ConfigMap object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
When you are creating a ConfigMap based on a file, the key in the \<data-source> defaults to the basename of the file, and the value defaults to the file content.
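For illustration (the ConfigMap names and data below are made up), the same command can take either a file or a literal as its data source:

```shell
# create a ConfigMap named "game-config" from a file; the key defaults to the file's basename
kubectl create configmap game-config --from-file=./game.properties

# create a ConfigMap named "special-config" from a literal key=value pair
kubectl create configmap special-config --from-literal=special.how=very
```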
diff --git a/content/en/docs/tasks/configure-pod-container/configure-service-account.md b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
index f42bc8e1fd6f3..f4917b36a91a8 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-service-account.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-service-account.md
@@ -95,6 +95,9 @@ metadata:
EOF
```
+The name of a ServiceAccount object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
If you get a complete dump of the service account object, like this:
```shell
diff --git a/content/en/docs/tasks/debug-application-cluster/audit.md b/content/en/docs/tasks/debug-application-cluster/audit.md
index ed95d353ac29d..8e7497bdb8007 100644
--- a/content/en/docs/tasks/debug-application-cluster/audit.md
+++ b/content/en/docs/tasks/debug-application-cluster/audit.md
@@ -235,6 +235,8 @@ spec:
```
For the complete API definition, see [AuditSink](/docs/reference/generated/kubernetes-api/v1.13/#auditsink-v1alpha1-auditregistration). Multiple objects will exist as independent solutions.
+The name of an AuditSink object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
Existing static backends that you configure with runtime flags are not affected by this feature. However, the dynamic backends share the truncate options of the static webhook. If webhook truncate options are set with runtime flags, they are applied to all dynamic backends.
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-cluster.md b/content/en/docs/tasks/debug-application-cluster/debug-cluster.md
index ae56c42411053..495545ee1ac1e 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-cluster.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-cluster.md
@@ -29,6 +29,11 @@ kubectl get nodes
And verify that all of the nodes you expect to see are present and that they are all in the `Ready` state.
+To get detailed information about the overall health of your cluster, you can run:
+
+```shell
+kubectl cluster-info dump
+```
## Looking at logs
For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md
index e0683fbd02e93..a065c9fa85009 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-service.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md
@@ -8,57 +8,30 @@ title: Debug Services
{{% capture overview %}}
An issue that comes up rather frequently for new installations of Kubernetes is
-that a `Service` is not working properly. You've run your `Deployment` and
-created a `Service`, but you get no response when you try to access it.
-This document will hopefully help you to figure out what's going wrong.
+that a Service is not working properly. You've run your Pods through a
+Deployment (or other workload controller) and created a Service, but you
+get no response when you try to access it. This document will hopefully help
+you to figure out what's going wrong.
{{% /capture %}}
{{% capture body %}}
-## Conventions
-
-Throughout this doc you will see various commands that you can run. Some
-commands need to be run within a `Pod`, others on a Kubernetes `Node`, and others
-can run anywhere you have `kubectl` and credentials for the cluster. To make it
-clear what is expected, this document will use the following conventions.
-
-If the command "COMMAND" is expected to run in a `Pod` and produce "OUTPUT":
-
-```shell
-u@pod$ COMMAND
-OUTPUT
-```
-
-If the command "COMMAND" is expected to run on a `Node` and produce "OUTPUT":
-
-```shell
-u@node$ COMMAND
-OUTPUT
-```
-
-If the command is "kubectl ARGS":
-
-```shell
-kubectl ARGS
-OUTPUT
-```
-
## Running commands in a Pod
-For many steps here you will want to see what a `Pod` running in the cluster
-sees. The simplest way to do this is to run an interactive alpine `Pod`:
+For many steps here you will want to see what a Pod running in the cluster
+sees. The simplest way to do this is to run an interactive alpine Pod:
```none
kubectl run -it --rm --restart=Never alpine --image=alpine sh
-/ #
```
+
{{< note >}}
If you don't see a command prompt, try pressing enter.
{{< /note >}}
-If you already have a running `Pod` that you prefer to use, you can run a
+If you already have a running Pod that you prefer to use, you can run a
command in it using:
```shell
@@ -67,21 +40,23 @@ kubectl exec -c --
## Setup
-For the purposes of this walk-through, let's run some `Pods`. Since you're
-probably debugging your own `Service` you can substitute your own details, or you
+For the purposes of this walk-through, let's run some Pods. Since you're
+probably debugging your own Service you can substitute your own details, or you
can follow along and get a second data point.
```shell
kubectl run hostnames --image=k8s.gcr.io/serve_hostname \
- --labels=app=hostnames \
- --port=9376 \
- --replicas=3
+ --replicas=3
+```
+```none
deployment.apps/hostnames created
```
`kubectl` commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.
+
{{< note >}}
-This is the same as if you started the `Deployment` with the following YAML:
+This is the same as if you had started the Deployment with the following
+YAML:
```yaml
apiVersion: apps/v1
@@ -91,61 +66,111 @@ metadata:
spec:
selector:
matchLabels:
- app: hostnames
+ run: hostnames
replicas: 3
template:
metadata:
labels:
- app: hostnames
+ run: hostnames
spec:
containers:
- name: hostnames
image: k8s.gcr.io/serve_hostname
- ports:
- - containerPort: 9376
- protocol: TCP
```
+
+The label "run" is automatically set by `kubectl run` to the name of the
+Deployment.
{{< /note >}}
-Confirm your `Pods` are running:
+You can confirm your Pods are running:
```shell
-kubectl get pods -l app=hostnames
+kubectl get pods -l run=hostnames
+```
+```none
NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 2m
hostnames-632524106-ly40y 1/1 Running 0 2m
hostnames-632524106-tlaok 1/1 Running 0 2m
```
+You can also confirm that your Pods are serving. You can get the list of
+Pod IP addresses and test them directly.
+
+```shell
+kubectl get pods -l run=hostnames \
+ -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
+```
+```none
+10.244.0.5
+10.244.0.6
+10.244.0.7
+```
+
+The example container used for this walk-through simply serves its own hostname
+via HTTP on port 9376, but if you are debugging your own app, you'll want to
+use whatever port number your Pods are listening on.
+
+From within a Pod:
+
+```shell
+for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
+ wget -qO- $ep
+done
+```
+
+This should produce something like:
+
+```
+hostnames-0uton
+hostnames-bvc05
+hostnames-yp2kp
+```
+
+If you are not getting the responses you expect at this point, your Pods
+might not be healthy or might not be listening on the port you think they are.
+You might find `kubectl logs` to be useful for seeing what is happening, or
+perhaps you need to `kubectl exec` directly into your Pods and debug from
+there.
+
+Assuming everything has gone to plan so far, you can start to investigate why
+your Service doesn't work.
+
## Does the Service exist?
-The astute reader will have noticed that we did not actually create a `Service`
+The astute reader will have noticed that you did not actually create a Service
yet - that is intentional. This is a step that sometimes gets forgotten, and
is the first thing to check.
-So what would happen if I tried to access a non-existent `Service`? Assuming you
-have another `Pod` that consumes this `Service` by name you would get something
-like:
+What would happen if you tried to access a non-existent Service? If
+you have another Pod that consumes this Service by name you would get
+something like:
```shell
-u@pod$ wget -O- hostnames
+wget -O- hostnames
+```
+```none
Resolving hostnames (hostnames)... failed: Name or service not known.
wget: unable to resolve host address 'hostnames'
```
-So the first thing to check is whether that `Service` actually exists:
+The first thing to check is whether that Service actually exists:
```shell
kubectl get svc hostnames
+```
+```none
No resources found.
Error from server (NotFound): services "hostnames" not found
```
-So we have a culprit, let's create the `Service`. As before, this is for the
-walk-through - you can use your own `Service`'s details here.
+Let's create the Service. As before, this is for the walk-through - you can
+use your own Service's details here.
```shell
kubectl expose deployment hostnames --port=80 --target-port=9376
+```
+```none
service/hostnames exposed
```
@@ -153,11 +178,16 @@ And read it back, just to be sure:
```shell
kubectl get svc hostnames
+```
+```none
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames ClusterIP 10.0.1.175 80/TCP 5s
```
-As before, this is the same as if you had started the `Service` with YAML:
+Now you know that the Service exists.
+
+{{< note >}}
+As before, this is the same as if you had started the Service with YAML:
```yaml
apiVersion: v1
@@ -166,7 +196,7 @@ metadata:
name: hostnames
spec:
selector:
- app: hostnames
+ run: hostnames
ports:
- name: default
protocol: TCP
@@ -174,25 +204,35 @@ spec:
targetPort: 9376
```
-Now you can confirm that the `Service` exists.
+In order to highlight the full range of configuration, the Service you created
+here uses a different port number than the Pods. For many real-world
+Services, these values might be the same.
+{{< /note >}}
+
+## Does the Service work by DNS name?
-## Does the Service work by DNS?
+One of the most common ways that clients consume a Service is through a DNS
+name.
-From a `Pod` in the same `Namespace`:
+From a Pod in the same Namespace:
```shell
-u@pod$ nslookup hostnames
+nslookup hostnames
+```
+```none
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: hostnames
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
```
-If this fails, perhaps your `Pod` and `Service` are in different
-`Namespaces`, try a namespace-qualified name:
+If this fails, perhaps your Pod and Service are in different
+Namespaces, try a namespace-qualified name (again, from within a Pod):
```shell
-u@pod$ nslookup hostnames.default
+nslookup hostnames.default
+```
+```none
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: hostnames.default
@@ -200,11 +240,13 @@ Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
```
If this works, you'll need to adjust your app to use a cross-namespace name, or
-run your app and `Service` in the same `Namespace`. If this still fails, try a
+run your app and Service in the same Namespace. If this still fails, try a
fully-qualified name:
```shell
-u@pod$ nslookup hostnames.default.svc.cluster.local
+nslookup hostnames.default.svc.cluster.local
+```
+```none
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
Name: hostnames.default.svc.cluster.local
@@ -212,18 +254,20 @@ Address 1: 10.0.1.175 hostnames.default.svc.cluster.local
```
Note the suffix here: "default.svc.cluster.local". The "default" is the
-`Namespace` we're operating in. The "svc" denotes that this is a `Service`.
+Namespace you're operating in. The "svc" denotes that this is a Service.
The "cluster.local" is your cluster domain, which COULD be different in your
own cluster.
-You can also try this from a `Node` in the cluster:
+You can also try this from a Node in the cluster:
{{< note >}}
-10.0.0.10 is my DNS `Service`, yours might be different.
+10.0.0.10 is the cluster's DNS Service IP; yours might be different.
{{< /note >}}
```shell
-u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10
+nslookup hostnames.default.svc.cluster.local 10.0.0.10
+```
+```none
Server: 10.0.0.10
Address: 10.0.0.10#53
@@ -232,39 +276,49 @@ Address: 10.0.1.175
```
If you are able to do a fully-qualified name lookup but not a relative one, you
-need to check that your `/etc/resolv.conf` file is correct.
+need to check that your `/etc/resolv.conf` file in your Pod is correct. From
+within a Pod:
```shell
-u@pod$ cat /etc/resolv.conf
+cat /etc/resolv.conf
+```
+
+You should see something like:
+
+```
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local example.com
options ndots:5
```
-The `nameserver` line must indicate your cluster's DNS `Service`. This is
+The `nameserver` line must indicate your cluster's DNS Service. This is
passed into `kubelet` with the `--cluster-dns` flag.
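+
+To cross-check that IP against the DNS Service itself (a sketch; this assumes
+the cluster DNS Service is named `kube-dns` and runs in the `kube-system`
+namespace, which is the common default even when CoreDNS is the DNS server):
+
+```shell
+kubectl get svc --namespace=kube-system kube-dns -o jsonpath='{.spec.clusterIP}{"\n"}'
+```
+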
The `search` line must include an appropriate suffix for you to find the
-`Service` name. In this case it is looking for `Services` in the local
-`Namespace` (`default.svc.cluster.local`), `Services` in all `Namespaces`
-(`svc.cluster.local`), and the cluster (`cluster.local`). Depending on your own
-install you might have additional records after that (up to 6 total). The
-cluster suffix is passed into `kubelet` with the `--cluster-domain` flag. We
-assume that is "cluster.local" in this document, but yours might be different,
-in which case you should change that in all of the commands above.
+Service name. In this case it is looking for Services in the local
+Namespace ("default.svc.cluster.local"), Services in all Namespaces
+("svc.cluster.local"), and lastly for names in the cluster ("cluster.local").
+Depending on your own install you might have additional records after that (up
+to 6 total). The cluster suffix is passed into `kubelet` with the
+`--cluster-domain` flag. Throughout this document, the cluster suffix is
+assumed to be "cluster.local". Your own clusters might be configured
+differently, in which case you should change that in all of the previous
+commands.
The `options` line must set `ndots` high enough that your DNS client library
considers search paths at all. Kubernetes sets this to 5 by default, which is
high enough to cover all of the DNS names it generates.
-### Does any Service exist in DNS?
+### Does any Service work by DNS name? {#does-any-service-exist-in-dns}
-If the above still fails - DNS lookups are not working for your `Service` - we
+If the above still fails, DNS lookups are not working for your Service. You
can take a step back and see what else is not working. The Kubernetes master
-`Service` should always work:
+Service should always work. From within a Pod:
```shell
-u@pod$ nslookup kubernetes.default
+nslookup kubernetes.default
+```
+```none
Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
@@ -272,34 +326,37 @@ Name: kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
```
-If this fails, you might need to go to the kube-proxy section of this doc, or
-even go back to the top of this document and start over, but instead of
-debugging your own `Service`, debug DNS.
+If this fails, please see the [kube-proxy](#is-the-kube-proxy-working) section
+of this document, or even go back to the top of this document and start over,
+but instead of debugging your own Service, debug the DNS Service.
## Does the Service work by IP?
-Assuming we can confirm that DNS works, the next thing to test is whether your
-`Service` works at all. From a node in your cluster, access the `Service`'s
-IP (from `kubectl get` above).
+Assuming you have confirmed that DNS works, the next thing to test is whether your
+Service works by its IP address. From a Pod in your cluster, access the
+Service's IP (from `kubectl get` above).
```shell
-u@node$ curl 10.0.1.175:80
-hostnames-0uton
+for i in $(seq 1 3); do
+ wget -qO- 10.0.1.175:80
+done
+```
-u@node$ curl 10.0.1.175:80
-hostnames-yp2kp
+This should produce something like:
-u@node$ curl 10.0.1.175:80
+```
+hostnames-0uton
hostnames-bvc05
+hostnames-yp2kp
```
-If your `Service` is working, you should get correct responses. If not, there
+If your Service is working, you should get correct responses. If not, there
are a number of things that could be going wrong. Read on.
-## Is the Service correct?
+## Is the Service defined correctly?
It might sound silly, but you should really double and triple check that your
-`Service` is correct and matches your `Pod`'s port. Read back your `Service`
+Service is correct and matches your Pod's port. Read back your Service
and verify it:
```shell
@@ -316,7 +373,7 @@ kubectl get service hostnames -o json
"resourceVersion": "347189",
"creationTimestamp": "2015-07-07T15:24:29Z",
"labels": {
- "app": "hostnames"
+ "run": "hostnames"
}
},
"spec": {
@@ -330,7 +387,7 @@ kubectl get service hostnames -o json
}
],
"selector": {
- "app": "hostnames"
+ "run": "hostnames"
},
"clusterIP": "10.0.1.175",
"type": "ClusterIP",
@@ -342,110 +399,116 @@ kubectl get service hostnames -o json
}
```
-* Is the port you are trying to access in `spec.ports[]`?
-* Is the `targetPort` correct for your `Pods` (many `Pods` choose to use a different port than the `Service`)?
-* If you meant it to be a numeric port, is it a number (9376) or a
-string "9376"?
-* If you meant it to be a named port, do your `Pods` expose a port
-with the same name?
-* Is the port's `protocol` the same as the `Pod`'s?
+* Is the Service port you are trying to access listed in `spec.ports[]`?
+* Is the `targetPort` correct for your Pods (some Pods use a different port than the Service)?
+* If you meant to use a numeric port, is it a number (9376) or a string "9376"?
+* If you meant to use a named port, do your Pods expose a port with the same name?
+* Is the port's `protocol` correct for your Pods?
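+
+A quick way to print the fields mentioned above (a sketch using the Service
+from this walk-through):
+
+```shell
+kubectl get service hostnames -o jsonpath='{.spec.ports[*].port} {.spec.ports[*].targetPort} {.spec.ports[*].protocol}{"\n"}'
+```
+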
## Does the Service have any Endpoints?
-If you got this far, we assume that you have confirmed that your `Service`
-exists and is resolved by DNS. Now let's check that the `Pods` you ran are
-actually being selected by the `Service`.
+If you got this far, you have confirmed that your Service is correctly
+defined and is resolved by DNS. Now let's check that the Pods you ran are
+actually being selected by the Service.
-Earlier we saw that the `Pods` were running. We can re-check that:
+Earlier you saw that the Pods were running. You can re-check that:
```shell
-kubectl get pods -l app=hostnames
+kubectl get pods -l run=hostnames
+```
+```none
NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 1h
hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h
```
-The "AGE" column says that these `Pods` are about an hour old, which implies that
+The `-l run=hostnames` argument is a label selector - just like our Service
+has.
+
+The "AGE" column says that these Pods are about an hour old, which implies that
they are running fine and not crashing.
-The `-l app=hostnames` argument is a label selector - just like our `Service`
-has. Inside the Kubernetes system is a control loop which evaluates the
-selector of every `Service` and saves the results into an `Endpoints` object.
+The "RESTARTS" column says that these pods are not crashing frequently or being
+restarted. Frequent restarts could lead to intermittent connectivity issues.
+If the restart count is high, read more about how to [debug pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods).
+
+Inside the Kubernetes system is a control loop which evaluates the selector of
+every Service and saves the results into a corresponding Endpoints object.
```shell
kubectl get endpoints hostnames
+
NAME        ENDPOINTS
hostnames   10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376
```
-This confirms that the endpoints controller has found the correct `Pods` for
-your `Service`. If the `hostnames` row is blank, you should check that the
-`spec.selector` field of your `Service` actually selects for `metadata.labels`
-values on your `Pods`. A common mistake is to have a typo or other error, such
-as the `Service` selecting for `run=hostnames`, but the `Deployment` specifying
-`app=hostnames`.
+This confirms that the endpoints controller has found the correct Pods for
+your Service. If the `ENDPOINTS` column is `<none>`, you should check that
+the `spec.selector` field of your Service actually selects for
+`metadata.labels` values on your Pods. A common mistake is to have a typo or
+other error, such as the Service selecting for `app=hostnames`, but the
+Deployment specifying `run=hostnames`.
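+
+One quick way to compare the selector with the Pod labels (a sketch; `run` is
+the label key used in this walk-through):
+
+```shell
+kubectl get svc hostnames -o jsonpath='{.spec.selector}{"\n"}'
+kubectl get pods -l run=hostnames --show-labels
+```
+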
## Are the Pods working?
-At this point, we know that your `Service` exists and has selected your `Pods`.
-Let's check that the `Pods` are actually working - we can bypass the `Service`
-mechanism and go straight to the `Pods`.
+At this point, you know that your Service exists and has selected your Pods.
+At the beginning of this walk-through, you verified the Pods themselves.
+Let's check again that the Pods are actually working - you can bypass the
+Service mechanism and go straight to the Pods, as listed by the Endpoints
+above.
{{< note >}}
-These commands use the `Pod` port (9376), rather than the `Service` port (80).
+These commands use the Pod port (9376), rather than the Service port (80).
{{< /note >}}
+From within a Pod:
+
```shell
-u@pod$ wget -qO- 10.244.0.5:9376
-hostnames-0uton
+for ep in 10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376; do
+ wget -qO- $ep
+done
+```
-pod $ wget -qO- 10.244.0.6:9376
-hostnames-bvc05
+This should produce something like:
-u@pod$ wget -qO- 10.244.0.7:9376
+```
+hostnames-0uton
+hostnames-bvc05
hostnames-yp2kp
```
-We expect each `Pod` in the `Endpoints` list to return its own hostname. If
+You expect each Pod in the Endpoints list to return its own hostname. If
this is not what happens (or whatever the correct behavior is for your own
-`Pods`), you should investigate what's happening there. You might find
-`kubectl logs` to be useful or `kubectl exec` directly to your `Pods` and check
-service from there.
-
-Another thing to check is that your `Pods` are not crashing or being restarted.
-Frequent restarts could lead to intermittent connectivity issues.
-
-```shell
-kubectl get pods -l app=hostnames
-NAME READY STATUS RESTARTS AGE
-hostnames-632524106-bbpiw 1/1 Running 0 2m
-hostnames-632524106-ly40y 1/1 Running 0 2m
-hostnames-632524106-tlaok 1/1 Running 0 2m
-```
-
-If the restart count is high, read more about how to [debug
-pods](/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods).
+Pods), you should investigate what's happening there.
## Is the kube-proxy working?
-If you get here, your `Service` is running, has `Endpoints`, and your `Pods`
-are actually serving. At this point, the whole `Service` proxy mechanism is
+If you get here, your Service is running, has Endpoints, and your Pods
+are actually serving. At this point, the whole Service proxy mechanism is
suspect. Let's confirm it, piece by piece.
+The default implementation of Services, and the one used on most clusters, is
+kube-proxy. This is a program that runs on every node and configures one of a
+small set of mechanisms for providing the Service abstraction. If your
+cluster does not use kube-proxy, the following sections will not apply, and you
+will have to investigate whatever implementation of Services you are using.
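+
+One quick way to check whether your cluster runs kube-proxy at all (a sketch;
+the `k8s-app=kube-proxy` label is a common convention, for example on
+kubeadm-based clusters, not a guarantee):
+
+```shell
+kubectl get pods --namespace=kube-system -l k8s-app=kube-proxy -o wide
+```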
+
### Is kube-proxy running?
-Confirm that `kube-proxy` is running on your `Nodes`. You should get something
-like the below:
+Confirm that `kube-proxy` is running on your Nodes. Running directly on a
+Node, you should get something like the below:
```shell
-u@node$ ps auxw | grep kube-proxy
+ps auxw | grep kube-proxy
+```
+```none
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2
```
Next, confirm that it is not failing something obvious, like contacting the
master. To do this, you'll have to look at the logs. Accessing the logs
-depends on your `Node` OS. On some OSes it is a file, such as
+depends on your Node OS. On some OSes it is a file, such as
/var/log/kube-proxy.log, while other OSes use `journalctl` to access logs. You
should see something like:
@@ -463,7 +526,7 @@ I1027 22:14:54.040223 5063 proxier.go:294] Adding new service "kube-system/ku
```
If you see error messages about not being able to contact the master, you
-should double-check your `Node` configuration and installation steps.
+should double-check your Node configuration and installation steps.
One of the possible reasons that `kube-proxy` cannot run correctly is that the
required `conntrack` binary cannot be found. This may happen on some Linux
@@ -472,36 +535,19 @@ installing Kubernetes from scratch. If this is the case, you need to manually
install the `conntrack` package (e.g. `sudo apt install conntrack` on Ubuntu)
and then retry.
-### Is kube-proxy writing iptables rules?
+Kube-proxy can run in one of a few modes. In the log listed above, the
+line `Using iptables Proxier` indicates that kube-proxy is running in
+"iptables" mode. The most common other mode is "ipvs". The older "userspace"
+mode has largely been replaced by these.
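+
+If kube-proxy exposes its status port (10249 by default), you can also query
+the mode directly from a Node (a sketch; adjust the port if your cluster
+configures it differently):
+
+```shell
+curl localhost:10249/proxyMode
+```
+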
-One of the main responsibilities of `kube-proxy` is to write the `iptables`
-rules which implement `Services`. Let's check that those rules are getting
-written.
+#### Iptables mode
-The kube-proxy can run in "userspace" mode, "iptables" mode or "ipvs" mode.
-Hopefully you are using the "iptables" mode or "ipvs" mode. You
-should see one of the following cases.
-
-#### Userspace
+In "iptables" mode, you should see something like the following on a Node:
```shell
-u@node$ iptables-save | grep hostnames
--A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
--A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
+iptables-save | grep hostnames
```
-
-There should be 2 rules for each port on your `Service` (just one in this
-example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
-not see these, try restarting `kube-proxy` with the `-v` flag set to 4, and
-then look at the logs again.
-
-Almost nobody should be using the "userspace" mode any more, so we won't spend
-more time on it here.
-
-#### Iptables
-
-```shell
-u@node$ iptables-save | grep hostnames
+```none
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
@@ -514,15 +560,20 @@ u@node$ iptables-save | grep hostnames
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
```
-There should be 1 rule in `KUBE-SERVICES`, 1 or 2 rules per endpoint in
-`KUBE-SVC-(hash)` (depending on `SessionAffinity`), one `KUBE-SEP-(hash)` chain
-per endpoint, and a few rules in each `KUBE-SEP-(hash)` chain. The exact rules
-will vary based on your exact config (including node-ports and load-balancers).
+For each port of each Service, there should be 1 rule in `KUBE-SERVICES` and
+one `KUBE-SVC-<hash>` chain. For each Pod endpoint, there should be a small
+number of rules in that `KUBE-SVC-<hash>` and one `KUBE-SEP-<hash>` chain with
+a small number of rules in it. The exact rules will vary based on your exact
+config (including node-ports and load-balancers).
-#### IPVS
+#### IPVS mode
+
+In "ipvs" mode, you should see something like the following on a Node:
```shell
-u@node$ ipvsadm -ln
+ipvsadm -ln
+```
+```none
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
...
@@ -533,14 +584,39 @@ TCP 10.0.1.175:80 rr
...
```
-IPVS proxy will create a virtual server for each service address(e.g. Cluster IP, External IP, NodePort IP, Load Balancer IP etc.) and some corresponding real servers for endpoints of the service, if any. In this example, service hostnames(`10.0.1.175:80`) has 3 endpoints(`10.244.0.5:9376`, `10.244.0.6:9376`, `10.244.0.7:9376`) and you'll get results similar to above.
+For each port of each Service, plus any NodePorts, external IPs, and
+load-balancer IPs, kube-proxy will create a virtual server. For each Pod
+endpoint, it will create corresponding real servers. In this example, service
+hostnames (`10.0.1.175:80`) has 3 endpoints (`10.244.0.5:9376`,
+`10.244.0.6:9376`, `10.244.0.7:9376`).
+
+#### Userspace mode
+
+In rare cases, you may be using "userspace" mode. From your Node:
+
+```shell
+iptables-save | grep hostnames
+```
+```none
+-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j REDIRECT --to-ports 48577
+-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNAT --to-destination 10.240.115.247:48577
+```
+
+There should be 2 rules for each port of your Service (just one in this
+example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST".
+
+Almost nobody should be using the "userspace" mode any more, so you won't spend
+more time on it here.
### Is kube-proxy proxying?
-Assuming you do see the above rules, try again to access your `Service` by IP:
+Assuming you do see one of the above cases, try again to access your Service by
+IP from one of your Nodes:
```shell
-u@node$ curl 10.0.1.175:80
+curl 10.0.1.175:80
+```
+```none
hostnames-0uton
```
@@ -548,31 +624,36 @@ If this fails and you are using the userspace proxy, you can try accessing the
proxy directly. If you are using the iptables proxy, skip this section.
Look back at the `iptables-save` output above, and extract the
-port number that `kube-proxy` is using for your `Service`. In the above
+port number that `kube-proxy` is using for your Service. In the above
examples it is "48577". Now connect to that:
```shell
-u@node$ curl localhost:48577
+curl localhost:48577
+```
+```none
hostnames-yp2kp
```
If this still fails, look at the `kube-proxy` logs for specific lines like:
-```shell
+```none
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```
If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and
then look at the logs again.
-### A Pod cannot reach itself via Service IP
+### Edge case: A Pod fails to reach itself via the Service IP {#a-pod-fails-to-reach-itself-via-the-service-ip}
+
+This might sound unlikely, but it does happen and it is supposed to work.
This can happen when the network is not properly configured for "hairpin"
traffic, usually when `kube-proxy` is running in `iptables` mode and Pods
are connected with bridge network. The `Kubelet` exposes a `hairpin-mode`
-[flag](/docs/admin/kubelet/) that allows endpoints of a Service to loadbalance back to themselves
-if they try to access their own Service VIP. The `hairpin-mode` flag must either be
-set to `hairpin-veth` or `promiscuous-bridge`.
+[flag](/docs/admin/kubelet/) that allows endpoints of a Service to loadbalance
+back to themselves if they try to access their own Service VIP. The
+`hairpin-mode` flag must either be set to `hairpin-veth` or
+`promiscuous-bridge`.
The common steps to troubleshoot this are as follows:
@@ -581,9 +662,10 @@ You should see something like the below. `hairpin-mode` is set to
`promiscuous-bridge` in the following example.
```shell
-u@node$ ps auxw|grep kubelet
+ps auxw | grep kubelet
+```
+```none
root 3392 1.1 0.8 186804 65208 ? Sl 00:51 11:11 /usr/local/bin/kubelet --enable-debugging-handlers=true --config=/etc/kubernetes/manifests --allow-privileged=True --v=4 --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --hairpin-mode=promiscuous-bridge --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --babysit-daemons=true --max-pods=110 --serialize-image-pulls=false --outofdisk-transition-frequency=0
-
```
* Confirm the effective `hairpin-mode`. To do this, you'll have to look at
@@ -594,7 +676,7 @@ match `--hairpin-mode` flag due to compatibility. Check if there is any log
lines with key word `hairpin` in kubelet.log. There should be log lines
indicating the effective hairpin mode, like something below.
-```shell
+```none
I0629 00:51:43.648698 3252 kubelet.go:380] Hairpin mode set to "promiscuous-bridge"
```
@@ -604,6 +686,8 @@ you should see something like:
```shell
for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done
+```
+```none
1
1
1
@@ -615,20 +699,21 @@ has the permission to manipulate linux bridge on node. If `cbr0` bridge is
used and configured properly, you should see:
```shell
-u@node$ ifconfig cbr0 |grep PROMISC
+ifconfig cbr0 | grep PROMISC
+```
+```none
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
-
```
* Seek help if none of above works out.
## Seek help
-If you get this far, something very strange is happening. Your `Service` is
-running, has `Endpoints`, and your `Pods` are actually serving. You have DNS
-working, `iptables` rules installed, and `kube-proxy` does not seem to be
-misbehaving. And yet your `Service` is not working. You should probably let
-us know, so we can help investigate!
+If you get this far, something very strange is happening. Your Service is
+running, has Endpoints, and your Pods are actually serving. You have DNS
+working, and `kube-proxy` does not seem to be misbehaving. And yet your
+Service is not working. Please let us know what is going on, so we can help
+investigate!
Contact us on
[Slack](/docs/troubleshooting/#slack) or
diff --git a/content/en/docs/tasks/inject-data-application/podpreset.md b/content/en/docs/tasks/inject-data-application/podpreset.md
index beb57754c3661..de41c0f73a5d9 100644
--- a/content/en/docs/tasks/inject-data-application/podpreset.md
+++ b/content/en/docs/tasks/inject-data-application/podpreset.md
@@ -2,23 +2,19 @@
reviewers:
- jessfraz
title: Inject Information into Pods Using a PodPreset
+min-kubernetes-server-version: v1.10
content_template: templates/task
weight: 60
---
{{% capture overview %}}
-You can use a `PodPreset` object to inject information like secrets, volume
-mounts, and environment variables etc into pods at creation time.
-This task shows some examples on using the `PodPreset` resource.
+This page shows how to use PodPreset objects to inject information like {{< glossary_tooltip text="Secrets" term_id="secret" >}}, volume mounts, and {{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}} into Pods at creation time.
{{% /capture %}}
{{% capture prerequisites %}}
-Get an overview of PodPresets at
-[Understanding Pod Presets](/docs/concepts/workloads/pods/podpreset/).
-
{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
{{% /capture %}}
@@ -26,157 +22,298 @@ Get an overview of PodPresets at
{{% capture steps %}}
-## Simple Pod Spec Example
+## Use Pod presets to inject environment variables and volumes
-This is a simple example to show how a Pod spec is modified by the Pod
-Preset.
+In this step, you create a preset that has a volume mount and one environment variable.
+Here is the manifest for the PodPreset:
{{< codenew file="podpreset/preset.yaml" >}}
+The name of a PodPreset object must be a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
+
+In the manifest, you can see that the preset has an environment variable definition called `DB_PORT`
+and a volume mount definition called `cache-volume` which is mounted under `/cache`. The {{< glossary_tooltip text="selector" term_id="selector" >}} specifies that
+the preset will act upon any Pod that is labeled `role: frontend`.
+
Create the PodPreset:
```shell
kubectl apply -f https://k8s.io/examples/podpreset/preset.yaml
```
-Examine the created PodPreset:
+Verify that the PodPreset has been created:
```shell
kubectl get podpreset
```
```
-NAME AGE
-allow-database 1m
+NAME CREATED AT
+allow-database 2020-01-24T08:54:29Z
```
-The new PodPreset will act upon any pod that has label `role: frontend`.
+This manifest defines a Pod labelled `role: frontend` (matching the PodPreset's selector):
{{< codenew file="podpreset/pod.yaml" >}}
-Create a pod:
+Create the Pod:
```shell
kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
```
-List the running Pods:
+Verify that the Pod is running:
```shell
kubectl get pods
```
+
+The output shows that the Pod is running:
+
```
NAME READY STATUS RESTARTS AGE
website 1/1 Running 0 4m
```
-**Pod spec after admission controller:**
-
-{{< codenew file="podpreset/merged.yaml" >}}
-
-To see above output, run the following command:
+View the Pod spec altered by the admission controller in order to see the effects of the preset
+having been applied:
```shell
kubectl get pod website -o yaml
```
-## Pod Spec with ConfigMap Example
+{{< codenew file="podpreset/merged.yaml" >}}
-This is an example to show how a Pod spec is modified by the Pod Preset
-that defines a `ConfigMap` for Environment Variables.
+The `DB_PORT` environment variable, the `volumeMount` and the `podpreset.admission.kubernetes.io` annotation
+of the Pod verify that the preset has been applied.
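+
+A quick way to spot-check this (a sketch; `website` is the Pod name used in
+this task) is to print only the annotations and look for the
+`podpreset.admission.kubernetes.io` key:
+
+```shell
+kubectl get pod website -o jsonpath='{.metadata.annotations}{"\n"}'
+```
+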
-**User submitted pod spec:**
+## Pod spec with ConfigMap example
-{{< codenew file="podpreset/pod.yaml" >}}
+This is an example to show how a Pod spec is modified by a Pod preset
+that references a ConfigMap containing environment variables.
-**User submitted `ConfigMap`:**
+Here is the manifest containing the definition of the ConfigMap:
{{< codenew file="podpreset/configmap.yaml" >}}
-**Example Pod Preset:**
+Create the ConfigMap:
+
+```shell
+kubectl create -f https://k8s.io/examples/podpreset/configmap.yaml
+```
+
+Here is a PodPreset manifest referencing that ConfigMap:
{{< codenew file="podpreset/allow-db.yaml" >}}
-**Pod spec after admission controller:**
+Create the preset that references the ConfigMap:
+
+```shell
+kubectl create -f https://k8s.io/examples/podpreset/allow-db.yaml
+```
+
+The following manifest defines a Pod matching the PodPreset for this example:
+
+{{< codenew file="podpreset/pod.yaml" >}}
+
+Create the Pod:
+
+```shell
+kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
+```
+
+View the Pod spec altered by the admission controller in order to see the effects of the preset
+having been applied:
+
+```shell
+kubectl get pod website -o yaml
+```
{{< codenew file="podpreset/allow-db-merged.yaml" >}}
-## ReplicaSet with Pod Spec Example
+The `DB_PORT` environment variable and the `podpreset.admission.kubernetes.io` annotation of the Pod
+verify that the preset has been applied.
+
+## ReplicaSet with Pod spec example
+
+This is an example to show that only Pod specs are modified by Pod presets. Other workload types
+like ReplicaSets or Deployments are unaffected.
+
+Here is the manifest for the PodPreset for this example:
-The following example shows that only the pod spec is modified by the Pod
-Preset.
+{{< codenew file="podpreset/preset.yaml" >}}
+
+Create the preset:
+
+```shell
+kubectl apply -f https://k8s.io/examples/podpreset/preset.yaml
+```
-**User submitted ReplicaSet:**
+This manifest defines a ReplicaSet that manages three application Pods:
{{< codenew file="podpreset/replicaset.yaml" >}}
-**Example Pod Preset:**
+Create the ReplicaSet:
-{{< codenew file="podpreset/preset.yaml" >}}
+```shell
+kubectl create -f https://k8s.io/examples/podpreset/replicaset.yaml
+```
+
+Verify that the Pods created by the ReplicaSet are running:
+
+```shell
+kubectl get pods
+```
+
+The output shows that the Pods are running:
+
+```
+NAME READY STATUS RESTARTS AGE
+frontend-2l94q 1/1 Running 0 2m18s
+frontend-6vdgn 1/1 Running 0 2m18s
+frontend-jzt4p 1/1 Running 0 2m18s
+```
-**Pod spec after admission controller:**
+View the `spec` of the ReplicaSet:
-Note that the ReplicaSet spec was not changed, users have to check individual pods
-to validate that the PodPreset has been applied.
+```shell
+kubectl get replicasets frontend -o yaml
+```
+
+{{< note >}}
+The ReplicaSet object's `spec` was not changed, nor does the ReplicaSet contain a
+`podpreset.admission.kubernetes.io` annotation. This is because a PodPreset only
+applies to Pod objects.
+
+To see the effects of the preset having been applied, you need to look at individual Pods.
+{{< /note >}}
+
+The command to view the specs of the affected Pods is:
+
+```shell
+kubectl get pod --selector=role=frontend -o yaml
+```
{{< codenew file="podpreset/replicaset-merged.yaml" >}}
-## Multiple PodPreset Example
+Again the `podpreset.admission.kubernetes.io` annotation of the Pods
+verifies that the preset has been applied.
-This is an example to show how a Pod spec is modified by multiple Pod
-Injection Policies.
+## Multiple Pod presets example
-**User submitted pod spec:**
+This is an example to show how a Pod spec is modified by multiple Pod presets.
-{{< codenew file="podpreset/pod.yaml" >}}
-**Example Pod Preset:**
+Here is the manifest for the first PodPreset:
{{< codenew file="podpreset/preset.yaml" >}}
-**Another Pod Preset:**
+Create the first PodPreset for this example:
+
+```shell
+kubectl apply -f https://k8s.io/examples/podpreset/preset.yaml
+```
+
+Here is the manifest for the second PodPreset:
{{< codenew file="podpreset/proxy.yaml" >}}
-**Pod spec after admission controller:**
+Create the second preset:
+
+```shell
+kubectl apply -f https://k8s.io/examples/podpreset/proxy.yaml
+```
+
+Here's a manifest containing the definition of an applicable Pod (matched by two PodPresets):
+
+{{< codenew file="podpreset/pod.yaml" >}}
+
+Create the Pod:
+
+```shell
+kubectl create -f https://k8s.io/examples/podpreset/pod.yaml
+```
+
+View the Pod spec altered by the admission controller in order to see the effects of both presets
+having been applied:
+
+```shell
+kubectl get pod website -o yaml
+```
{{< codenew file="podpreset/multi-merged.yaml" >}}
-## Conflict Example
+The `DB_PORT` environment variable, the `proxy-volume` VolumeMount and the two `podpreset.admission.kubernetes.io`
+annotations of the Pod verify that both presets have been applied.
+
+## Conflict example
+
+This is an example to show how a Pod spec is not modified by a Pod preset when there is a conflict.
+The conflict in this example consists of a `VolumeMount` in the PodPreset conflicting with a Pod that defines the same `mountPath`.
+
+Here is the manifest for the PodPreset:
-This is an example to show how a Pod spec is not modified by the Pod Preset
-when there is a conflict.
+{{< codenew file="podpreset/conflict-preset.yaml" >}}
+
+Note the `mountPath` value of `/cache`.
+
+Create the preset:
+
+```shell
+kubectl apply -f https://k8s.io/examples/podpreset/conflict-preset.yaml
+```
-**User submitted pod spec:**
+Here is the manifest for the Pod:
{{< codenew file="podpreset/conflict-pod.yaml" >}}
-**Example Pod Preset:**
+Note the volumeMount element with the same path as in the PodPreset.
-{{< codenew file="podpreset/conflict-preset.yaml" >}}
+Create the Pod:
+
+```shell
+kubectl create -f https://k8s.io/examples/podpreset/conflict-pod.yaml
+```
+
+View the Pod spec:
-**Pod spec after admission controller will not change because of the conflict:**
+```shell
+kubectl get pod website -o yaml
+```
{{< codenew file="podpreset/conflict-pod.yaml" >}}
-**If we run `kubectl describe...` we can see the event:**
+You can see there is no preset annotation (`podpreset.admission.kubernetes.io`). The absence of the annotation tells you that no preset has been applied to the Pod.
+
+However, the
+[PodPreset admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podpreset)
+logs a warning containing details of the conflict.
+You can view the warning using `kubectl`:
```shell
-kubectl describe ...
+kubectl -n kube-system logs -l=component=kube-apiserver
```
+
+The output should look similar to:
+
```
-....
-Events:
- FirstSeen LastSeen Count From SubobjectPath Reason Message
- Tue, 07 Feb 2017 16:56:12 -0700 Tue, 07 Feb 2017 16:56:12 -0700 1 {podpreset.admission.kubernetes.io/podpreset-allow-database } conflict Conflict on pod preset. Duplicate mountPath /cache.
+W1214 13:00:12.987884 1 admission.go:147] conflict occurred while applying podpresets: allow-database on pod: err: merging volume mounts for allow-database has a conflict on mount path /cache:
+v1.VolumeMount{Name:"other-volume", ReadOnly:false, MountPath:"/cache", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}
+does not match
+core.VolumeMount{Name:"cache-volume", ReadOnly:false, MountPath:"/cache", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}
+ in container
```
-## Deleting a Pod Preset
+Note the conflict message on the path for the VolumeMount.
+
+## Deleting a PodPreset
-Once you don't need a pod preset anymore, you can delete it with `kubectl`:
+Once you don't need a PodPreset anymore, you can delete it with `kubectl`:
```shell
kubectl delete podpreset allow-database
```
+The output shows that the PodPreset was deleted:
```
podpreset "allow-database" deleted
```
diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md
index 673823feb7f87..d33dc24364f6d 100644
--- a/content/en/docs/tasks/run-application/configure-pdb.md
+++ b/content/en/docs/tasks/run-application/configure-pdb.md
@@ -180,8 +180,8 @@ then you'll see something like this:
kubectl get poddisruptionbudgets
```
```
-NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
-zk-pdb 2 0 7s
+NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
+zk-pdb   2               N/A               0                     7s
```
If there are matching pods (say, 3), then you would see something like this:
@@ -190,11 +190,11 @@ If there are matching pods (say, 3), then you would see something like this:
kubectl get poddisruptionbudgets
```
```
-NAME MIN-AVAILABLE ALLOWED-DISRUPTIONS AGE
-zk-pdb 2 1 7s
+NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
+zk-pdb   2               N/A               1                     7s
```
-The non-zero value for `ALLOWED-DISRUPTIONS` means that the disruption controller has seen the pods,
+The non-zero value for `ALLOWED DISRUPTIONS` means that the disruption controller has seen the pods,
counted the matching pods, and updated the status of the PDB.
You can get more information about the status of a PDB with this command:
@@ -206,14 +206,15 @@ kubectl get poddisruptionbudgets zk-pdb -o yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
- creationTimestamp: 2017-08-28T02:38:26Z
+ annotations:
+…
+ creationTimestamp: "2020-03-04T04:22:56Z"
generation: 1
name: zk-pdb
…
status:
currentHealthy: 3
- desiredHealthy: 3
- disruptedPods: null
+ desiredHealthy: 2
disruptionsAllowed: 1
expectedPods: 3
observedGeneration: 1
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 6a9fe4de324d5..800abfea69718 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -369,8 +369,8 @@ label, you can specify the following metric block to scale only on GET requests:
type: Object
object:
metric:
- name: `http_requests`
- selector: `verb=GET`
+ name: http_requests
+ selector: {matchLabels: {verb: GET}}
```
This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index e515206308ff4..3ed6fa0e41835 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -178,6 +178,8 @@ The beta version, which includes support for scaling on memory and custom metric
can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2`
are preserved as annotations when working with `autoscaling/v1`.
+When you create a HorizontalPodAutoscaler API object, make sure the name specified is a valid
+[DNS subdomain name](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names).
More details about the API object can be found at
[HorizontalPodAutoscaler Object](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object).
diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index 4a799e5f011b4..83c9d1761bca4 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -385,6 +385,27 @@ However, the kubectl completion script depends on [**bash-completion**](https://
there are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The kubectl completion script **doesn't work** correctly with bash-completion v1 and Bash 3.2. It requires **bash-completion v2** and **Bash 4.1+**. Thus, to be able to correctly use kubectl completion on macOS, you have to install and use Bash 4.1+ ([*instructions*](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).
{{< /warning >}}
+### Upgrade Bash
+
+The instructions here assume you use Bash 4.1+. You can check your Bash's version by running:
+
+```shell
+echo $BASH_VERSION
+```
+
+If it is too old, you can install/upgrade it using Homebrew:
+
+```shell
+brew install bash
+```
+
+Reload your shell and verify that the desired version is being used:
+
+```shell
+echo $BASH_VERSION $SHELL
+```
+
+Homebrew usually installs it at `/usr/local/bin/bash`.
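+
+If you also want the upgraded Bash to be your default shell (a sketch; this
+assumes Homebrew installed it at `/usr/local/bin/bash`):
+
+```shell
+# Register the new shell, then switch your account to it
+echo /usr/local/bin/bash | sudo tee -a /etc/shells
+chsh -s /usr/local/bin/bash
+```
+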
### Install bash-completion
diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md
index 03c3b07cd5cec..b106c23ef6081 100644
--- a/content/en/docs/tasks/tools/install-minikube.md
+++ b/content/en/docs/tasks/tools/install-minikube.md
@@ -86,6 +86,12 @@ The `none` VM driver can result in security and data loss issues.
Before using `--vm-driver=none`, consult [this documentation](https://minikube.sigs.k8s.io/docs/reference/drivers/none/) for more information.
{{< /caution >}}
+Minikube also supports a `vm-driver=podman` option, similar to the Docker driver. Running Podman with superuser privileges (root user) is the best way to ensure that your containers have full access to any feature available on your system.
+
+{{< caution >}}
+The `podman` driver requires running the containers as root because regular user accounts don’t have full access to all operating system features that their containers might need to run.
+{{< /caution >}}
+
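+For example (a sketch; this assumes Podman is already installed and, as noted
+above, that you run the command with root privileges):
+
+```shell
+sudo minikube start --vm-driver=podman
+```
+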
### Install Minikube using a package
There are *experimental* packages for Minikube available; you can find Linux (AMD64) packages
diff --git a/content/en/docs/tutorials/_index.md b/content/en/docs/tutorials/_index.md
index 04013216c3dcf..9f8de2129e658 100644
--- a/content/en/docs/tutorials/_index.md
+++ b/content/en/docs/tutorials/_index.md
@@ -22,8 +22,6 @@ Before walking through each tutorial, you may want to bookmark the
* [Kubernetes Basics](/docs/tutorials/kubernetes-basics/) is an in-depth interactive tutorial that helps you understand the Kubernetes system and try out some basic Kubernetes features.
-* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
-
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x#)
* [Hello Minikube](/docs/tutorials/hello-minikube/)
diff --git a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
index 2a6af0af4c953..13d3d99758338 100644
--- a/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
+++ b/content/en/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro.html
@@ -77,7 +77,7 @@
Cluster Diagram
-
Masters manage the cluster and the nodes are used to host the running applications.
+
Masters manage the cluster and the nodes that are used to host the running applications.
+ A Pod is the basic execution unit of a Kubernetes application. Each Pod represents a part of a workload that is running on your cluster. Learn more about Pods.
+
You can create a Service at the same time you create a Deployment by using --expose in kubectl.
-
+
diff --git a/content/en/docs/tutorials/online-training/_index.md b/content/en/docs/tutorials/online-training/_index.md
deleted file mode 100755
index 9b4b09f17f83e..0000000000000
--- a/content/en/docs/tutorials/online-training/_index.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-title: "Online Training Courses"
-weight: 20
----
-
diff --git a/content/en/docs/tutorials/online-training/overview.md b/content/en/docs/tutorials/online-training/overview.md
deleted file mode 100644
index e76b22481a975..0000000000000
--- a/content/en/docs/tutorials/online-training/overview.md
+++ /dev/null
@@ -1,71 +0,0 @@
----
-title: Overview of Kubernetes Online Training
-content_template: templates/concept
----
-
-{{% capture overview %}}
-
-Here are some of the sites that offer online training for Kubernetes:
-
-{{% /capture %}}
-
-{{% capture body %}}
-
-* [AIOps Essentials (Autoscaling Kubernetes with Prometheus Metrics) with Hands-On Labs (Linux Academy)](https://linuxacademy.com/devops/training/course/name/using-machine-learning-to-scale-kubernetes-clusters)
-
-* [Amazon EKS Deep Dive with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/amazon-web-services/training/course/name/amazon-eks-deep-dive)
-
-* [Cloud Native Certified Kubernetes Administrator (CKA) with Hands-On Labs & Practice Exams (Linux Academy)](https://linuxacademy.com/linux/training/course/name/cloud-native-certified-kubernetes-administrator-cka)
-
-* [Certified Kubernetes Administrator (CKA) Preparation Course (CloudYuga)](https://cloudyuga.guru/courses/cka-online-self-paced)
-
-* [Certified Kubernetes Administrator Preparation Course with Practice Tests (KodeKloud)](https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests)
-
-* [Certified Kubernetes Application Developer (CKAD) with Hands-On Labs & Practice Exams (Linux Academy)] (https://linuxacademy.com/containers/training/course/name/certified-kubernetes-application-developer-ckad/)
-
-* [Certified Kubernetes Application Developer (CKAD) Preparation Course (CloudYuga)](https://cloudyuga.guru/courses/ckad-online-self-paced)
-
-* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud)](https://kodekloud.com/p/kubernetes-certification-course)
-
-* [Getting Started with Google Kubernetes Engine (Coursera)](https://www.coursera.org/learn/google-kubernetes-engine)
-
-* [Getting Started with Kubernetes (Pluralsight)](https://www.pluralsight.com/courses/getting-started-kubernetes)
-
-* [Getting Started with Kubernetes Clusters on OCI Oracle Kubernetes Engine (OKE) (Learning Library)](https://apexapps.oracle.com/pls/apex/f?p=44785:50:0:::50:P50_EVENT_ID,P50_COURSE_ID:5935,256)
-
-* [Google Kubernetes Engine Deep Dive (Linux Academy)] (https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive)
-
-* [Helm Deep Dive with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/helm-deep-dive-part-1)
-
-* [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes)
-
-* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x)
-
-* [Kubernetes Essentials with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-essentials)
-
-* [Kubernetes for the Absolute Beginners with Hands-on Labs (KodeKloud)](https://kodekloud.com/p/kubernetes-for-the-absolute-beginners-hands-on)
-
-* [Kubernetes Fundamentals (LFS258) (The Linux Foundation)](https://training.linuxfoundation.org/training/kubernetes-fundamentals/)
-
-* [Kubernetes Quick Start with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-quick-start)
-
-* [Kubernetes the Hard Way with Hands-On Labs (Linux Academy)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way)
-
-* [Kubernetes Security with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-security)
-
-* [Launch Your First OpenShift Operator with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/containers/training/course/name/red-hat-open-shift)
-
-* [Learn Kubernetes by Doing - 100% Hands-On Experience (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/learn-kubernetes-by-doing)
-
-* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/)
-
-* [Microservice Applications in Kubernetes - 100% Hands-On Experience (Linux Academy)] (https://linuxacademy.com/devops/training/course/name/learn-microservices-by-doing)
-
-* [Monitoring Kubernetes With Prometheus with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-and-prometheus)
-
-* [Service Mesh with Istio with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/service-mesh-with-istio-part-1)
-
-* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
-
-* [Self-paced Kubernetes online course (Learnk8s Academy)](https://learnk8s.io/academy)
-{{% /capture %}}
diff --git a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
index af24c72999457..94008289ee502 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk.md
@@ -111,7 +111,7 @@ There are four files to edit to create a k8s secret when you are connecting to s
1. ELASTICSEARCH_USERNAME
1. KIBANA_HOST
-Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples
+Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples (also see [*this configuration*](https://stackoverflow.com/questions/59892896/how-to-connect-from-minikube-to-elasticsearch-installed-on-host-local-developme/59892897#59892897)):
#### `ELASTICSEARCH_HOSTS`
1. A nodeGroup from the Elastic Elasticsearch Helm Chart:
diff --git a/content/en/examples/admin/snowflake-deployment.yaml b/content/en/examples/admin/snowflake-deployment.yaml
new file mode 100644
index 0000000000000..2f4f267916823
--- /dev/null
+++ b/content/en/examples/admin/snowflake-deployment.yaml
@@ -0,0 +1,20 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: snowflake
+ name: snowflake
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: snowflake
+ template:
+ metadata:
+ labels:
+ app: snowflake
+ spec:
+ containers:
+ - image: k8s.gcr.io/serve_hostname
+ imagePullPolicy: Always
+ name: snowflake
diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go
index ee29c3a68b590..7c9664b64c168 100644
--- a/content/en/examples/examples_test.go
+++ b/content/en/examples/examples_test.go
@@ -34,8 +34,6 @@ import (
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/kubernetes/pkg/api/legacyscheme"
"k8s.io/kubernetes/pkg/api/testapi"
- "k8s.io/kubernetes/pkg/apis/admissionregistration"
- ar_validation "k8s.io/kubernetes/pkg/apis/admissionregistration/validation"
"k8s.io/kubernetes/pkg/apis/apps"
apps_validation "k8s.io/kubernetes/pkg/apis/apps/validation"
"k8s.io/kubernetes/pkg/apis/autoscaling"
@@ -434,12 +432,6 @@ func TestExampleObjectSchemas(t *testing.T) {
"node-problem-detector-configmap": {&apps.DaemonSet{}},
"termination": {&api.Pod{}},
},
- "federation": {
- "policy-engine-deployment": {&apps.Deployment{}},
- "policy-engine-service": {&api.Service{}},
- "replicaset-example-policy": {&apps.ReplicaSet{}},
- "scheduling-policy-admission": {&api.ConfigMap{}},
- },
"podpreset": {
"allow-db": {&settings.PodPreset{}},
"allow-db-merged": {&api.Pod{}},
@@ -525,9 +517,9 @@ func TestExampleObjectSchemas(t *testing.T) {
"redis": {&api.Pod{}},
},
"policy": {
- "privileged-psp": {&policy.PodSecurityPolicy{}},
- "restricted-psp": {&policy.PodSecurityPolicy{}},
- "example-psp": {&policy.PodSecurityPolicy{}},
+ "privileged-psp": {&policy.PodSecurityPolicy{}},
+ "restricted-psp": {&policy.PodSecurityPolicy{}},
+ "example-psp": {&policy.PodSecurityPolicy{}},
"zookeeper-pod-disruption-budget-maxunavailable": {&policy.PodDisruptionBudget{}},
"zookeeper-pod-disruption-budget-minunavailable": {&policy.PodDisruptionBudget{}},
},
diff --git a/content/en/examples/federation/policy-engine-deployment.yaml b/content/en/examples/federation/policy-engine-deployment.yaml
deleted file mode 100644
index 168af7ba4cf0f..0000000000000
--- a/content/en/examples/federation/policy-engine-deployment.yaml
+++ /dev/null
@@ -1,37 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- labels:
- app: opa
- name: opa
- namespace: federation-system
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: opa
- template:
- metadata:
- labels:
- app: opa
- name: opa
- spec:
- containers:
- - name: opa
- image: openpolicyagent/opa:0.4.10
- args:
- - "run"
- - "--server"
- - name: kube-mgmt
- image: openpolicyagent/kube-mgmt:0.2
- args:
- - "-kubeconfig=/srv/kubernetes/kubeconfig"
- - "-cluster=federation/v1beta1/clusters"
- volumeMounts:
- - name: federation-kubeconfig
- mountPath: /srv/kubernetes
- readOnly: true
- volumes:
- - name: federation-kubeconfig
- secret:
- secretName: federation-controller-manager-kubeconfig
diff --git a/content/en/examples/federation/policy-engine-service.yaml b/content/en/examples/federation/policy-engine-service.yaml
deleted file mode 100644
index 982870b06b218..0000000000000
--- a/content/en/examples/federation/policy-engine-service.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
- name: opa
- namespace: federation-system
-spec:
- selector:
- app: opa
- ports:
- - name: http
- protocol: TCP
- port: 8181
- targetPort: 8181
diff --git a/content/en/examples/federation/replicaset-example-policy.yaml b/content/en/examples/federation/replicaset-example-policy.yaml
deleted file mode 100644
index 43dc83b18b200..0000000000000
--- a/content/en/examples/federation/replicaset-example-policy.yaml
+++ /dev/null
@@ -1,21 +0,0 @@
-apiVersion: apps/v1
-kind: ReplicaSet
-metadata:
- labels:
- app: nginx-pci
- name: nginx-pci
- annotations:
- requires-pci: "true"
-spec:
- replicas: 3
- selector:
- matchLabels:
- app: nginx-pci
- template:
- metadata:
- labels:
- app: nginx-pci
- spec:
- containers:
- - image: nginx
- name: nginx-pci
diff --git a/content/en/examples/federation/scheduling-policy-admission.yaml b/content/en/examples/federation/scheduling-policy-admission.yaml
deleted file mode 100644
index a164722425555..0000000000000
--- a/content/en/examples/federation/scheduling-policy-admission.yaml
+++ /dev/null
@@ -1,29 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: admission
- namespace: federation-system
-data:
- config.yml: |
- apiVersion: apiserver.k8s.io/v1alpha1
- kind: AdmissionConfiguration
- plugins:
- - name: SchedulingPolicy
- path: /etc/kubernetes/admission/scheduling-policy-config.yml
- scheduling-policy-config.yml: |
- kubeconfig: /etc/kubernetes/admission/opa-kubeconfig
- opa-kubeconfig: |
- clusters:
- - name: opa-api
- cluster:
- server: http://opa.federation-system.svc.cluster.local:8181/v0/data/kubernetes/placement
- users:
- - name: scheduling-policy
- user:
- token: deadbeefsecret
- contexts:
- - name: default
- context:
- cluster: opa-api
- user: scheduling-policy
- current-context: default
diff --git a/content/en/examples/podpreset/allow-db-merged.yaml b/content/en/examples/podpreset/allow-db-merged.yaml
index 8a0ad101d7d64..7f52cc1fa49c1 100644
--- a/content/en/examples/podpreset/allow-db-merged.yaml
+++ b/content/en/examples/podpreset/allow-db-merged.yaml
@@ -14,9 +14,6 @@ spec:
volumeMounts:
- mountPath: /cache
name: cache-volume
- - mountPath: /etc/app/config.json
- readOnly: true
- name: secret-volume
ports:
- containerPort: 80
env:
@@ -32,6 +29,3 @@ spec:
volumes:
- name: cache-volume
emptyDir: {}
- - name: secret-volume
- secret:
- secretName: config-details
diff --git a/content/en/examples/podpreset/allow-db.yaml b/content/en/examples/podpreset/allow-db.yaml
index 0cca13bab2c3d..2c511e650d36e 100644
--- a/content/en/examples/podpreset/allow-db.yaml
+++ b/content/en/examples/podpreset/allow-db.yaml
@@ -19,12 +19,6 @@ spec:
volumeMounts:
- mountPath: /cache
name: cache-volume
- - mountPath: /etc/app/config.json
- readOnly: true
- name: secret-volume
volumes:
- name: cache-volume
emptyDir: {}
- - name: secret-volume
- secret:
- secretName: config-details
diff --git a/content/en/includes/federated-task-tutorial-prereqs.md b/content/en/includes/federated-task-tutorial-prereqs.md
deleted file mode 100644
index b254407a676a3..0000000000000
--- a/content/en/includes/federated-task-tutorial-prereqs.md
+++ /dev/null
@@ -1,5 +0,0 @@
-This guide assumes that you have a running Kubernetes Cluster Federation installation.
-If not, then head over to the [federation admin guide](/docs/tutorials/federation/set-up-cluster-federation-kubefed/) to learn how to
-bring up a cluster federation (or have your cluster administrator do this for you).
-Other tutorials, such as Kelsey Hightower's [Federated Kubernetes Tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation),
-might also help you create a Federated Kubernetes cluster.
diff --git a/content/es/docs/concepts/configuration/organize-cluster-access-kubeconfig.md b/content/es/docs/concepts/configuration/organize-cluster-access-kubeconfig.md
new file mode 100644
index 0000000000000..dc9f9e14a5015
--- /dev/null
+++ b/content/es/docs/concepts/configuration/organize-cluster-access-kubeconfig.md
@@ -0,0 +1,153 @@
+---
+title: Organizar el acceso a los clústeres utilizando archivos kubeconfig
+content_template: templates/concept
+weight: 60
+---
+
+{{% capture overview %}}
+
+Utilice los archivos kubeconfig para organizar la información acerca de los clústeres, los
+usuarios, los Namespaces y los mecanismos de autenticación. La herramienta de
+línea de comandos `kubectl` utiliza los archivos kubeconfig para hallar la información que
+necesita para escoger un clúster y comunicarse con el servidor API de un clúster.
+
+{{< note >}}
+Un archivo utilizado para configurar el acceso a los clústeres se denomina
+*archivo kubeconfig*. Esta es una forma genérica de referirse a los archivos de
+configuración. Esto no significa que exista un archivo llamado `kubeconfig`.
+{{< /note >}}
+
+Por defecto, `kubectl` busca un archivo llamado `config` en el directorio `$HOME/.kube`.
+Puedes especificar otros archivos kubeconfig mediante la configuración de la variable
+de entorno `KUBECONFIG` o mediante la configuración del flag
+[`--kubeconfig`](/docs/reference/generated/kubectl/kubectl/).
+
+Para obtener instrucciones paso a paso acerca de cómo crear y especificar los archivos kubeconfig,
+consulte el recurso
+[Configurar El Acceso A Múltiples Clústeres](/docs/tasks/access-application-cluster/configure-access-multiple-clusters).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Compatibilidad con múltiples clústeres, usuarios y mecanismos de autenticación
+
+Suponga que tiene diversos clústeres y que sus usuarios y componentes se autentican
+de diversas maneras. Por ejemplo:
+
+- Un kubelet en ejecución se podría autenticar usando certificados.
+- Un usuario se podría autenticar utilizando tokens.
+- Los administradores podrían tener un conjunto de certificados que sean suministrados a los usuarios individualmente.
+
+Con los archivos kubeconfig puedes organizar tus clústeres, usuarios y Namespaces.
+También puedes definir diferentes contextos para realizar de forma rápida y
+fácil cambios entre clústeres y Namespaces.
+
+## Contexto
+
+Un elemento *context* en un archivo kubeconfig se utiliza para agrupar los parámetros de
+acceso bajo un nombre apropiado. Cada contexto tiene tres parámetros: clúster, Namespace
+y usuario.
+Por defecto, la herramienta de línea de comandos `kubectl` utiliza los parámetros del
+*contexto actual* para comunicarse con el clúster.
+
+Para seleccionar el contexto actual:
+
+```shell
+kubectl config use-context
+```
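+
+A modo de ejemplo puramente ilustrativo (los nombres de clúster, usuario y contexto que se muestran a continuación son ficticios), un archivo kubeconfig con un único contexto podría tener la siguiente estructura:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+- name: desarrollo                 # nombre ilustrativo del clúster
+  cluster:
+    server: https://1.2.3.4:6443
+users:
+- name: desarrollador              # nombre ilustrativo del usuario
+  user:
+    token: token-de-ejemplo        # valor ilustrativo
+contexts:
+- name: dev-frontend               # agrupa clúster, Namespace y usuario
+  context:
+    cluster: desarrollo
+    namespace: frontend
+    user: desarrollador
+current-context: dev-frontend
+```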
+
+## Variable de entorno KUBECONFIG
+
+La variable de entorno `KUBECONFIG` contiene una lista de archivos kubeconfig.
+En el caso de Linux y Mac, la lista está delimitada por dos puntos. Si se trata
+de Windows, la lista está delimitada por punto y coma. La variable de entorno
+`KUBECONFIG` no es indispensable. Si la variable de entorno `KUBECONFIG` no existe,
+`kubectl` utiliza el archivo kubeconfig por defecto `$HOME/.kube/config`.
+
+Si la variable de entorno `KUBECONFIG` existe, `kubectl` utiliza una
+configuración efectiva que es el resultado de la fusión de los archivos
+listados en la variable de entorno `KUBECONFIG`.
+
+## Fusionando archivos kubeconfig
+
+Para poder ver su configuración, escriba el siguiente comando:
+
+```shell
+kubectl config view
+```
+
+Como se ha descrito anteriormente, la respuesta de este comando podría resultar a partir de un solo
+archivo kubeconfig, o podría ser el resultado de la fusión de varios archivos kubeconfig.
+
+A continuación se muestran las reglas que usa `kubectl` cuando fusiona archivos kubeconfig:
+
+1. Si el flag `--kubeconfig` está activado, usa solamente el archivo especificado. Sin fusionar.
+ Sólo se permite una instancia con este flag.
+
+   En caso contrario, si la variable de entorno `KUBECONFIG` está activada, será usada
+ como un listado de los archivos a ser fusionados.
+ Fusionar los archivos listados en la variable de entorno `KUBECONFIG` de acuerdo
+ con estas reglas:
+
+ * Ignorar nombres de archivo vacíos.
+ * Producir errores para archivos con contenido que no pueden ser deserializados.
+ * El primer archivo que establezca un valor particular o una clave se impone.
+ * Nunca cambie el valor o la clave.
+ Ejemplo: Conserva el contexto del primer archivo para configurar el `contexto actual`.
+ Ejemplo: Si dos archivos especifican un `red-user`, utilice sólo los valores del primer archivo.
+      Incluso si el `red-user` del segundo archivo tiene registros sin conflictos, descártelos.
+
+ Para obtener un ejemplo de configuración de la variable de entorno `KUBECONFIG`, consulte la sección
+ [Configuración de la variable de entorno KUBECONFIG](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable).
+
+ En caso contrario, utilice el archivo kubeconfig predeterminado `$HOME/.kube/config`, sin fusionar.
+
+2. Determinar el contexto a utilizar con base en el primer acierto en esta secuencia:
+
+   1. Si es que existe, utilice el flag `--context` de la línea de comandos.
+ 2. Utilice el `contexto actual` procedente de los archivos kubeconfig fusionados.
+
+ En este punto se permite un contexto vacío.
+
+3. Determinar el clúster y el usuario. En este caso, puede o no haber un contexto.
+ Determine el clúster y el usuario con base en el primer acierto que se ejecute dos veces en
+ esta secuencia: una para el usuario y otra para el clúster:
+
+ 1. Si es que existen, utilice el flag `--user` o `--cluster` de la línea de comandos.
+ 2. Si el contexto no está vacío, tome el usuario o clúster del contexto.
+
+ En este caso el usuario y el clúster pueden estar vacíos.
+
+4. Determinar la información del clúster a utilizar. En este caso, puede o no haber información del clúster.
+ Se construye cada pieza de la información del clúster con base en esta secuencia, el primer acierto se impone:
+
+ 1. Si es que existen, use el flag `--server`, `--certificate-authority`, `--insecure-skip-tls-verify` en la línea de comandos.
+ 2. Si existen atributos de información de clúster procedentes de los archivos kubeconfig fusionados, utilícelos.
+ 3. Falla si no existe la ubicación del servidor.
+
+5. Determinar la información del usuario a utilizar. Cree información de usuario utilizando las mismas reglas que
+ la información de clúster, con la excepción de permitir sólo un mecanismo de autenticación por usuario:
+
+ 1. Si es que existen, utilice el flag `--client-certificate`, `--client-key`, `--username`, `--password`, `--token` de la línea de comandos.
+ 2. Utilice los campos `user` de los archivos kubeconfig fusionados.
+ 3. Falla si hay dos mecanismos de autenticación contradictorios.
+
+6. Si todavía falta información, utilice los valores predeterminados y solicite
+ información de autenticación.
+
+## Referencias de archivos
+
+Las referencias y rutas de archivo dentro de un archivo kubeconfig son relativas a la ubicación del archivo kubeconfig.
+Las referencias de archivo en la línea de comandos son relativas al directorio actual de trabajo.
+Dentro de `$HOME/.kube/config`, las rutas relativas se almacenan de manera relativa a la ubicación del archivo kubeconfig,
+mientras que las rutas absolutas se almacenan de forma absoluta.
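+
+Por ejemplo, en el siguiente fragmento (puramente ilustrativo) las rutas `admin.crt` y `admin.key` se resuelven de forma relativa al directorio donde reside el archivo kubeconfig:
+
+```yaml
+users:
+- name: admin                      # nombre ilustrativo
+  user:
+    # Rutas relativas: se resuelven respecto a la ubicación del archivo kubeconfig
+    client-certificate: admin.crt
+    client-key: admin.key
+```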
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [Configurar el acceso a múltiples clústeres](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
+* [`kubectl config`](/docs/reference/generated/kubectl/kubectl-commands#config)
+
+{{% /capture %}}
diff --git a/content/fr/_index.html b/content/fr/_index.html
index 026a0f910f34b..16bfc5c82f18b 100644
--- a/content/fr/_index.html
+++ b/content/fr/_index.html
@@ -3,12 +3,12 @@
abstract: "Déploiement, mise à l'échelle et gestion automatisée des conteneurs"
cid: home
---
+{{< announcement >}}
{{< deprecationwarning >}}
{{< blocks/section id="oceanNodes" >}}
{{% blocks/feature image="flower" %}}
-
### [Kubernetes (K8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}}) est un système open-source permettant d'automatiser le déploiement, la mise à l'échelle et la gestion des applications conteneurisées.
Les conteneurs qui composent une application sont regroupés dans des unités logiques pour en faciliter la gestion et la découverte. Kubernetes s’appuie sur [15 années d’expérience dans la gestion de charges de travail de production (workloads) chez Google](http://queue.acm.org/detail.cfm?id=2898444), associé aux meilleures idées et pratiques de la communauté.
@@ -18,6 +18,7 @@
#### Quel que soit le nombre
Conçu selon les mêmes principes qui permettent à Google de gérer des milliards de conteneurs par semaine, Kubernetes peut évoluer sans augmenter votre équipe d'opérations.
+
{{% /blocks/feature %}}
{{% blocks/feature image="blocks" %}}
@@ -28,18 +29,15 @@
{{% /blocks/feature %}}
{{% blocks/feature image="suitcase" %}}
-
#### Quel que soit l'endroit
-Kubernetes est une solution open-source qui vous permet de tirer parti de vos infrastructures qu'elles soient sur site (on-premises), hybride ou en Cloud publique.
-Vous pourrez ainsi répartir sans effort vos workloads là où vous le souhaitez.
+Kubernetes est une solution open-source qui vous permet de tirer parti de vos infrastructures qu'elles soient sur site (on-premises), hybrides ou en Cloud public. Vous pourrez ainsi répartir sans effort vos workloads là où vous le souhaitez.
{{% /blocks/feature %}}
{{< /blocks/section >}}
{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
-
Les défis de la migration de plus de 150 microservices vers Kubernetes
Par Sarah Wells, directrice technique des opérations et de la fiabilité, Financial Times
@@ -47,12 +45,12 @@
Les défis de la migration de plus de 150 microservices vers Kubernetes
diff --git a/content/fr/docs/contribute/start.md b/content/fr/docs/contribute/start.md
index 59cb54cfc24dc..39eee2a39dff3 100644
--- a/content/fr/docs/contribute/start.md
+++ b/content/fr/docs/contribute/start.md
@@ -70,7 +70,7 @@ Pour plus d'informations sur la contribution à la documentation dans plusieurs
Si vous souhaitez démarrer une nouvelle traduction, voir ["Traduction"](/docs/contribute/localization/).
-## Créer des demander recevables
+## Créer des demandes recevables
Toute personne possédant un compte GitHub peut soumettre un problème (rapport de bogue) à la documentation de Kubernetes.
Si vous voyez quelque chose qui ne va pas, même si vous ne savez pas comment le réparer, [ouvrez un ticket](#how-to-file-an-issue).
diff --git a/content/fr/docs/reference/kubectl/overview.md b/content/fr/docs/reference/kubectl/overview.md
index 634bd83a4ece9..7ed5b714b114c 100644
--- a/content/fr/docs/reference/kubectl/overview.md
+++ b/content/fr/docs/reference/kubectl/overview.md
@@ -69,31 +69,31 @@ Le tableau suivant inclut une courte description et la syntaxe générale pour c
Opération | Syntaxe | Description
-------------------- | -------------------- | --------------------
-`annotate` | `kubectl annotate (-f FICHIER \| TYPE NOM \| TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Ajoute ou modifie les annotations d'une ou plusieurs ressources.
+`annotate` | kubectl annotate (-f FICHIER | TYPE NOM | TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Ajoute ou modifie les annotations d'une ou plusieurs ressources.
`api-versions` | `kubectl api-versions [flags]` | Liste les versions d'API disponibles.
`apply` | `kubectl apply -f FICHIER [flags]` | Applique un changement de configuration à une ressource depuis un fichier ou stdin.
`attach` | `kubectl attach POD -c CONTENEUR [-i] [-t] [flags]` | Attache à un conteneur en cours d'exécution soit pour voir la sortie standard soit pour interagir avec le conteneur (stdin).
-`autoscale` | `kubectl autoscale (-f FICHIER \| TYPE NOM \| TYPE/NOM) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]` | Scale automatiquement l'ensemble des pods gérés par un replication controller.
+`autoscale` | kubectl autoscale (-f FICHIER | TYPE NOM | TYPE/NOM) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags] | Scale automatiquement l'ensemble des pods gérés par un replication controller.
`cluster-info` | `kubectl cluster-info [flags]` | Affiche les informations des endpoints du master et des services du cluster.
`config` | `kubectl config SOUS-COMMANDE [flags]` | Modifie les fichiers kubeconfig. Voir les sous-commandes individuelles pour plus de détails.
`create` | `kubectl create -f FICHIER [flags]` | Crée une ou plusieurs ressources depuis un fichier ou stdin.
-`delete` | `kubectl delete (-f FICHIER \| TYPE [NOM \| /NOM \| -l label \| --all]) [flags]` | Supprime des ressources soit depuis un fichier ou stdin, ou en indiquant des sélecteurs de label, des noms, des sélecteurs de ressources ou des ressources.
-`describe` | `kubectl describe (-f FICHIER \| TYPE [PREFIXE_NOM \| /NOM \| -l label]) [flags]` | Affiche l'état détaillé d'une ou plusieurs ressources.
+`delete` | kubectl delete (-f FICHIER | TYPE [NOM | /NOM | -l label | --all]) [flags] | Supprime des ressources soit depuis un fichier ou stdin, ou en indiquant des sélecteurs de label, des noms, des sélecteurs de ressources ou des ressources.
+`describe` | kubectl describe (-f FICHIER | TYPE [PREFIXE_NOM | /NOM | -l label]) [flags] | Affiche l'état détaillé d'une ou plusieurs ressources.
`diff` | `kubectl diff -f FICHIER [flags]` | Diff un fichier ou stdin par rapport à la configuration en cours (**BETA**)
-`edit` | `kubectl edit (-f FICHIER \| TYPE NOM \| TYPE/NOM) [flags]` | Édite et met à jour la définition d'une ou plusieurs ressources sur le serveur en utilisant l'éditeur par défaut.
+`edit` | kubectl edit (-f FICHIER | TYPE NOM | TYPE/NOM) [flags] | Édite et met à jour la définition d'une ou plusieurs ressources sur le serveur en utilisant l'éditeur par défaut.
`exec` | `kubectl exec POD [-c CONTENEUR] [-i] [-t] [flags] [-- COMMANDE [args...]]` | Exécute une commande à l'intérieur d'un conteneur dans un pod.
`explain` | `kubectl explain [--recursive=false] [flags]` | Obtient des informations sur différentes ressources. Par exemple pods, nœuds, services, etc.
-`expose` | `kubectl expose (-f FICHIER \| TYPE NOM \| TYPE/NOM) [--port=port] [--protocol=TCP\|UDP] [--target-port=nombre-ou-nom] [--name=nom] [--external-ip=ip-externe-ou-service] [--type=type] [flags]` | Expose un replication controller, service ou pod comme un nouveau service Kubernetes.
-`get` | `kubectl get (-f FICHIER \| TYPE [NOM \| /NOM \| -l label]) [--watch] [--sort-by=CHAMP] [[-o \| --output]=FORMAT_AFFICHAGE] [flags]` | Liste une ou plusieurs ressources.
-`label` | `kubectl label (-f FICHIER \| TYPE NOM \| TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Ajoute ou met à jour les labels d'une ou plusieurs ressources.
+`expose` | kubectl expose (-f FICHIER | TYPE NOM | TYPE/NOM) [--port=port] [--protocol=TCP|UDP] [--target-port=nombre-ou-nom] [--name=nom] [--external-ip=ip-externe-ou-service] [--type=type] [flags] | Expose un replication controller, service ou pod comme un nouveau service Kubernetes.
+`get` | kubectl get (-f FICHIER | TYPE [NOM | /NOM | -l label]) [--watch] [--sort-by=CHAMP] [[-o | --output]=FORMAT_AFFICHAGE] [flags] | Liste une ou plusieurs ressources.
+`label` | kubectl label (-f FICHIER | TYPE NOM | TYPE/NOM) CLE_1=VAL_1 ... CLE_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Ajoute ou met à jour les labels d'une ou plusieurs ressources.
`logs` | `kubectl logs POD [-c CONTENEUR] [--follow] [flags]` | Affiche les logs d'un conteneur dans un pod.
-`patch` | `kubectl patch (-f FICHIER \| TYPE NOM \| TYPE/NOM) --patch PATCH [flags]` | Met à jour un ou plusieurs champs d'une resource en utilisant le processus de merge patch stratégique.
+`patch` | kubectl patch (-f FICHIER | TYPE NOM | TYPE/NOM) --patch PATCH [flags] | Met à jour un ou plusieurs champs d'une ressource en utilisant le processus de merge patch stratégique.
`port-forward` | `kubectl port-forward POD [PORT_LOCAL:]PORT_DISTANT [...[PORT_LOCAL_N:]PORT_DISTANT_N] [flags]` | Transfère un ou plusieurs ports locaux vers un pod.
`proxy` | `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix] [flags]` | Exécute un proxy vers un API server Kubernetes.
`replace` | `kubectl replace -f FICHIER` | Remplace une ressource depuis un fichier ou stdin.
-`rolling-update`| `kubectl rolling-update ANCIEN_NOM_CONTROLEUR ([NOUVEAU_NOM_CONTROLEUR] --image=NOUVELLE_IMAGE_CONTENEUR \| -f NOUVELLE_SPEC_CONTROLEUR) [flags]` | Exécute un rolling update en remplaçant graduellement le replication controller indiqué et ses pods.
+`rolling-update`| kubectl rolling-update ANCIEN_NOM_CONTROLEUR ([NOUVEAU_NOM_CONTROLEUR] --image=NOUVELLE_IMAGE_CONTENEUR | -f NOUVELLE_SPEC_CONTROLEUR) [flags] | Exécute un rolling update en remplaçant graduellement le replication controller indiqué et ses pods.
`run` | `kubectl run NOM --image=image [--env="cle=valeur"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]` | Exécute dans le cluster l'image indiquée.
-`scale` | `kubectl scale (-f FICHIER \| TYPE NOM \| TYPE/NOM) --replicas=QUANTITE [--resource-version=version] [--current-replicas=quantité] [flags]` | Met à jour la taille du replication controller indiqué.
+`scale` | kubectl scale (-f FICHIER | TYPE NOM | TYPE/NOM) --replicas=QUANTITE [--resource-version=version] [--current-replicas=quantité] [flags] | Met à jour la taille du replication controller indiqué.
`version` | `kubectl version [--client] [flags]` | Affiche la version de Kubernetes du serveur et du client.
Rappelez-vous : Pour tout savoir sur les opérations, voir la documentation de référence de [kubectl](/docs/user-guide/kubectl/).
diff --git a/content/fr/docs/tasks/tools/install-kubectl.md b/content/fr/docs/tasks/tools/install-kubectl.md
index 091cb63c30f58..2d1f50fccd55b 100644
--- a/content/fr/docs/tasks/tools/install-kubectl.md
+++ b/content/fr/docs/tasks/tools/install-kubectl.md
@@ -54,7 +54,7 @@ Vous devez utiliser une version de kubectl qui différe seulement d'une version
4. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
### Installation à l'aide des gestionnaires des paquets natifs
@@ -80,28 +80,33 @@ yum install -y kubectl
{{< /tab >}}
{{< /tabs >}}
+### Installation avec des gestionnaires de paquets alternatifs
-### Installer avec snap
-
+{{< tabs name="other_kubectl_install" >}}
+{{% tab name="Snap" %}}
Si vous êtes sur Ubuntu ou une autre distribution Linux qui supporte le gestionnaire de paquets [snap](https://snapcraft.io/docs/core/install), kubectl est disponible comme application [snap](https://snapcraft.io/).
-1. Passez à l'utilisateur snap et exécutez la commande d'installation :
-
- ```
- sudo snap install kubectl --classic
- ```
+```shell
+snap install kubectl --classic
-2. Testez pour vous assurer que la version que vous avez installée est à jour :
+kubectl version --client
+```
+{{% /tab %}}
+{{% tab name="Homebrew" %}}
+Si vous êtes sur Linux et que vous utilisez [Homebrew](https://docs.brew.sh/Homebrew-on-Linux) comme gestionnaire de paquets, kubectl est disponible à l'[installation](https://docs.brew.sh/Homebrew-on-Linux#install).
+```shell
+brew install kubectl
- ```
- kubectl version
- ```
+kubectl version --client
+```
+{{% /tab %}}
+{{< /tabs >}}
## Installer kubectl sur macOS
### Installer le binaire kubectl avec curl sur macOS
-1. Téléchargez la dernière release:
+1. Téléchargez la dernière version:
```
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
@@ -129,7 +134,7 @@ Si vous êtes sur Ubuntu ou une autre distribution Linux qui supporte le gestion
4. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
### Installer avec Homebrew sur macOS
@@ -138,6 +143,11 @@ Si vous êtes sur MacOS et que vous utilisez le gestionnaire de paquets [Homebre
1. Exécutez la commande d'installation:
+ ```
+ brew install kubectl
+ ```
+ ou
+
```
brew install kubernetes-cli
```
@@ -145,7 +155,7 @@ Si vous êtes sur MacOS et que vous utilisez le gestionnaire de paquets [Homebre
2. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
### Installer avec Macports sur macOS
@@ -162,14 +172,14 @@ Si vous êtes sur MacOS et que vous utilisez le gestionnaire de paquets [Macport
2. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
## Installer kubectl sur Windows
### Installer le binaire kubectl avec curl sur Windows
-1. Téléchargez la dernière release {{< param "fullversion" >}} depuis [ce lien](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
+1. Téléchargez la dernière version {{< param "fullversion" >}} depuis [ce lien](https://storage.googleapis.com/kubernetes-release/release/{{< param "fullversion" >}}/bin/windows/amd64/kubectl.exe).
Ou si vous avez `curl` installé, utilisez cette commande:
@@ -183,8 +193,12 @@ Si vous êtes sur MacOS et que vous utilisez le gestionnaire de paquets [Macport
3. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
+{{< note >}}
+[Docker Desktop pour Windows](https://docs.docker.com/docker-for-windows/#kubernetes) ajoute sa propre version de `kubectl` au $PATH.
+Si vous avez déjà installé Docker Desktop, vous devrez peut-être placer votre entrée PATH avant celle ajoutée par le programme d'installation de Docker Desktop ou supprimer le `kubectl` de Docker Desktop.
+{{< /note >}}
### Installer avec Powershell de PSGallery
@@ -204,7 +218,7 @@ Si vous êtes sous Windows et que vous utilisez le gestionnaire de paquets [Powe
2. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
{{< note >}}La mise à jour de l'installation s'effectue en réexécutant les deux commandes listées à l'étape 1.{{< /note >}}
@@ -227,7 +241,7 @@ Pour installer kubectl sur Windows, vous pouvez utiliser le gestionnaire de paqu
2. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
3. Accédez à votre répertoire personnel:
@@ -269,7 +283,7 @@ Vous pouvez installer kubectl en tant qu'élément du SDK Google Cloud.
3. Testez pour vous assurer que la version que vous avez installée est à jour:
```
- kubectl version
+ kubectl version --client
```
## Vérification de la configuration de kubectl
@@ -307,7 +321,7 @@ Vous trouverez ci-dessous les étapes à suivre pour configurer l'auto-compléti
{{< tabs name="kubectl_autocompletion" >}}
-{{% tab name="Bash on Linux" %}}
+{{% tab name="Bash sur Linux" %}}
### Introduction
@@ -344,6 +358,12 @@ Vous devez maintenant vérifier que le script de completion de kubectl est bien
```shell
kubectl completion bash >/etc/bash_completion.d/kubectl
```
+- Si vous avez un alias pour kubectl, vous pouvez étendre la completion de votre shell pour fonctionner avec cet alias:
+
+ ```shell
+ echo 'alias k=kubectl' >>~/.bashrc
+ echo 'complete -F __start_kubectl k' >>~/.bashrc
+ ```
{{< note >}}
bash-completion source tous les scripts de completion dans `/etc/bash_completion.d`.
@@ -354,49 +374,69 @@ Les deux approches sont équivalentes. Après avoir rechargé votre shell, l'aut
{{% /tab %}}
-{{% tab name="Bash on macOS" %}}
+{{% tab name="Bash sur macOS" %}}
-{{< warning>}}
-macOS inclut Bash 3.2 par défaut. Le script de complétion kubectl nécessite Bash 4.1+ et ne fonctionne pas avec Bash 3.2. Une des solutions possibles est d'installer une version plus récente de Bash sous macOS (voir instructions [ici](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). Les instructions ci-dessous ne fonctionnent que si vous utilisez Bash 4.1+.
-{{< /warning >}}
### Introduction
Le script de complétion kubectl pour Bash peut être généré avec la commande `kubectl completion bash`. Sourcer le script de completion dans votre shell permet l'auto-complétion de kubectl.
-En revanche, le script de complétion dépend de [**bash-completion**](https://github.com/scop/bash-completion), ce qui implique que vous devez d'abord installer ce logiciel (vous pouvez tester si vous avez déjà installé bash-completion en utilisant `type _init_completion`).
+En revanche, le script de complétion dépend de [**bash-completion**](https://github.com/scop/bash-completion), ce qui implique que vous devez d'abord installer ce logiciel.
+
+{{< warning>}}
+macOS inclut Bash 3.2 par défaut. Le script de complétion kubectl nécessite Bash 4.1+ et ne fonctionne pas avec Bash 3.2. Une des solutions possibles est d'installer une version plus récente de Bash sous macOS (voir instructions [ici](https://itnext.io/upgrading-bash-on-macos-7138bd1066ba)). Les instructions ci-dessous ne fonctionnent que si vous utilisez Bash 4.1+.
+{{< /warning >}}
+
### Installer bash-completion
-Vous pouvez installer bash-completion avec Homebrew:
+{{< note >}}
+Comme mentionné, ces instructions supposent que vous utilisez Bash 4.1+, ce qui signifie que vous installerez bash-completion v2 (contrairement à Bash 3.2 et bash-completion v1, auquel cas la complétion pour kubectl ne fonctionnera pas).
+{{< /note >}}
+
+Vous pouvez tester si vous avez déjà installé bash-completion en utilisant `type _init_completion`. S'il n'est pas installé, vous pouvez installer bash-completion avec Homebrew:
```shell
-brew install bash-completion
+brew install bash-completion@2
```
Comme indiqué dans la sortie de `brew install` (section "Caveats"), ajoutez les lignes suivantes à votre fichier `~/.bashrc` ou `~/.bash_profile` :
```shell
-[ -f /usr/local/etc/bash_completion ] && . /usr/local/etc/bash_completion
+export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
+[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"
```
-Rechargez votre shell.
+Rechargez votre shell et vérifiez que bash-completion v2 est correctement installé avec `type _init_completion`.
### Activer l'auto-complétion de kubectl
-Si vous avez installé kubectl avec Homebrew (comme expliqué [ici](#installer-avec-homebrew-sur-macos)), alors le script de complétion a été automatiquement installé dans `/usr/local/etc/bash_completion.d/kubectl`. Dans ce cas, vous n'avez rien à faire.
-
Si vous n'avez pas installé via Homebrew, vous devez maintenant vous assurer que le script de complétion kubectl est bien sourcé dans toutes vos sessions shell comme suit:
+- Sourcer le script de completion dans votre fichier `~/.bashrc`:
+
+ ```shell
+ echo 'source <(kubectl completion bash)' >>~/.bashrc
+
+ ```
+
- Ajoutez le script de complétion dans le répertoire `/usr/local/etc/bash_completion.d`:
```shell
kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl
```
+- Si vous avez un alias pour kubectl, vous pouvez étendre la completion de votre shell pour fonctionner avec cet alias:
+
+ ```shell
+ echo 'alias k=kubectl' >>~/.bashrc
+ echo 'complete -F __start_kubectl k' >>~/.bashrc
+ ```
+
+Si vous avez installé kubectl avec Homebrew (comme expliqué [ici](#installer-avec-homebrew-sur-macos)), alors le script de complétion a été automatiquement installé dans `/usr/local/etc/bash_completion.d/kubectl`. Dans ce cas, vous n'avez rien à faire.
{{< note >}}
-bash-completion (en cas d'installation avec Homebrew) source tous les scripts de complétion dans le répertoire.
+L'installation Homebrew de bash-completion v2 source tous les fichiers du répertoire `BASH_COMPLETION_COMPAT_DIR`, c'est pourquoi les deux dernières méthodes fonctionnent.
{{< /note >}}
Après avoir rechargé votre shell, l'auto-complétion de kubectl devrait fonctionner.
@@ -412,6 +452,13 @@ Pour faire ainsi dans toutes vos sessions shell, ajoutez ce qui suit à votre fi
source <(kubectl completion zsh)
```
+Si vous avez un alias pour kubectl, vous pouvez étendre la completion de votre shell pour fonctionner avec cet alias:
+
+```shell
+echo 'alias k=kubectl' >>~/.zshrc
+echo 'complete -F __start_kubectl k' >>~/.zshrc
+```
+
Après avoir rechargé votre shell, l'auto-complétion de kubectl devrait fonctionner.
Si vous rencontrez une erreur comme `complete:13: command not found: compdef`, alors ajoutez ce qui suit au début de votre fichier `~/.zshrc`:
diff --git a/content/ko/docs/concepts/_index.md b/content/ko/docs/concepts/_index.md
index 6d4e30b4cab69..c6e7556312130 100644
--- a/content/ko/docs/concepts/_index.md
+++ b/content/ko/docs/concepts/_index.md
@@ -32,7 +32,7 @@ weight: 40
* [파드](/ko/docs/concepts/workloads/pods/pod-overview/)
* [서비스](/ko/docs/concepts/services-networking/service/)
-* [볼륨](/docs/concepts/storage/volumes/)
+* [볼륨](/ko/docs/concepts/storage/volumes/)
* [네임스페이스](/ko/docs/concepts/overview/working-with-objects/namespaces/)
또한, 쿠버네티스에는 기초 오브젝트를 기반으로, 부가 기능 및 편의 기능을 제공하는 [컨트롤러](/ko/docs/concepts/architecture/controller/)에 의존하는 보다 높은 수준의 추상 개념도 포함되어 있다. 다음이 포함된다.
diff --git a/content/ko/docs/concepts/architecture/cloud-controller.md b/content/ko/docs/concepts/architecture/cloud-controller.md
index 5c872bf06a137..734a2cd1a059c 100644
--- a/content/ko/docs/concepts/architecture/cloud-controller.md
+++ b/content/ko/docs/concepts/architecture/cloud-controller.md
@@ -52,7 +52,7 @@ CCM은 쿠버네티스 컨트롤러 매니저(KCM)의 기능 일부를 독립시
볼륨 컨트롤러는 의도적으로 CCM의 일부가 되지 않도록 선택되었다. 연관된 복잡성 때문에 그리고 벤더 특유의 볼륨 로직 개념을 일반화 하기 위한 기존의 노력때문에, 볼륨 컨트롤러는 CCM으로 이전되지 않도록 결정되었다.
{{< /note >}}
-CCM을 이용하는 볼륨을 지원하기 위한 원래 계획은 플러그형 볼륨을 지원하기 위한 [Flex](/docs/concepts/storage/volumes/#flexVolume) 볼륨을 사용하기 위한 것이었다. 그러나, [CSI](/docs/concepts/storage/volumes/#csi)라 알려진 경쟁적인 노력이 Flex를 대체하도록 계획되고 있다.
+CCM을 이용하는 볼륨을 지원하기 위한 원래 계획은 플러그형 볼륨을 지원하기 위한 [Flex](/ko/docs/concepts/storage/volumes/#flexVolume) 볼륨을 사용하기 위한 것이었다. 그러나, [CSI](/ko/docs/concepts/storage/volumes/#csi)라 알려진 경쟁적인 노력이 Flex를 대체하도록 계획되고 있다.
이러한 역동성을 고려하여, CSI가 준비될 때까지 차이점에 대한 측정은 도중에 중지하기로 결정하였다.
diff --git a/content/ko/docs/concepts/architecture/nodes.md b/content/ko/docs/concepts/architecture/nodes.md
index f9c28cee9c135..a94a81a0cbdc9 100644
--- a/content/ko/docs/concepts/architecture/nodes.md
+++ b/content/ko/docs/concepts/architecture/nodes.md
@@ -184,7 +184,7 @@ kubelet은 `NodeStatus` 와 리스 오브젝트를 생성하고 업데이트 할
#### 안정성
쿠버네티스 1.4에서, 대량의 노드들이 마스터 접근에
-문제를 지닐 경우 (예를 들어 마스터에 네트워크 문제가 발생했기 때문에)
+문제를 지닐 경우 (예를 들어 마스터에 네트워크 문제들이 발생했기 때문에)
더 개선된 문제 해결을 하도록 노드 컨트롤러의 로직을 업데이트 했다. 1.4를 시작으로,
노드 컨트롤러는 파드 축출에 대한 결정을 내릴 경우 클러스터
내 모든 노드를 살핀다.
@@ -210,7 +210,7 @@ kubelet은 `NodeStatus` 와 리스 오브젝트를 생성하고 업데이트 할
노드가 가용성 영역들에 걸쳐 퍼져 있는 주된 이유는 하나의 전체 영역이
장애가 발생할 경우 워크로드가 상태 양호한 영역으로 이전되어질 수 있도록 하기 위해서이다.
그러므로, 하나의 영역 내 모든 노드들이 상태가 불량하면 노드 컨트롤러는
-정상 비율 `--node-eviction-rate`로 축출한다. 코너 케이스란 모든 영역이
+`--node-eviction-rate` 의 정상 비율로 축출한다. 코너 케이스란 모든 영역이
완전히 상태불량 (즉 클러스터 내 양호한 노드가 없는 경우) 한 경우이다.
이러한 경우, 노드 컨트롤러는 마스터 연결에 문제가 있어 일부 연결이
복원될 때까지 모든 축출을 중지하는 것으로 여긴다.
diff --git a/content/ko/docs/concepts/configuration/overview.md b/content/ko/docs/concepts/configuration/overview.md
index 794f46d079b60..1bcb7362d6593 100644
--- a/content/ko/docs/concepts/configuration/overview.md
+++ b/content/ko/docs/concepts/configuration/overview.md
@@ -28,7 +28,7 @@ weight: 10
- 더 나은 인트로스펙션(introspection)을 위해서, 어노테이션에 오브젝트의 설명을 넣는다.
-## "단독(Naked)" 파드 vs 레플리카 셋, 디플로이먼트, 그리고 잡
+## "단독(Naked)" 파드 vs 레플리카 셋, 디플로이먼트, 그리고 잡 {#naked-pods-vs-replicasets-deployments-and-jobs}
- 가능하다면 단독 파드(즉, [레플리카 셋](/ko/docs/concepts/workloads/controllers/replicaset/)이나 [디플로이먼트](/ko/docs/concepts/workloads/controllers/deployment/)에 연결되지 않은 파드)를 사용하지 않는다. 단독 파드는 노드 장애 이벤트가 발생해도 다시 스케줄링되지 않는다.
@@ -85,7 +85,7 @@ DNS 서버는 새로운 `서비스`를 위한 쿠버네티스 API를 Watch하며
- `imagePullPolicy: Never`: 이미지가 로컬에 존재한다고 가정한다. 이미지를 풀(Pull) 하기 위해 시도하지 않는다.
{{< note >}}
-컨테이너가 항상 같은 버전의 이미지를 사용하도록 만들기 위해, `sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`와 같은 이미지의 [다이제스트](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier)를 명시할 수 있다. 다이제스트는 특정 버전의 이미지를 고유하게 식별하며, 다이제스트 값을 변경하지 않는 한 쿠버네티스에 의해 절대로 변경되지 않는다.
+컨테이너가 항상 같은 버전의 이미지를 사용하도록 하기 위해, `<이미지 이름>:<태그>` 를 `<이미지 이름>@<다이제스트>` (예시 `image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2`)로 변경해서 이미지의 [다이제스트](https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier)를 명시할 수 있다. 다이제스트는 특정 버전의 이미지를 고유하게 식별하며, 다이제스트 값을 변경하지 않는 한 쿠버네티스에 의해 절대로 변경되지 않는다.
{{< /note >}}
{{< note >}}
diff --git a/content/ko/docs/concepts/containers/container-environment-variables.md b/content/ko/docs/concepts/containers/container-environment-variables.md
index 6100f4c28a367..b5cfaccbfcb63 100644
--- a/content/ko/docs/concepts/containers/container-environment-variables.md
+++ b/content/ko/docs/concepts/containers/container-environment-variables.md
@@ -17,7 +17,7 @@ weight: 20
쿠버네티스 컨테이너 환경은 컨테이너에 몇 가지 중요한 리소스를 제공한다.
-* 하나의 [이미지](/ko/docs/concepts/containers/images/)와 하나 이상의 [볼륨](/docs/concepts/storage/volumes/)이 결합된 파일 시스템.
+* 하나의 [이미지](/ko/docs/concepts/containers/images/)와 하나 이상의 [볼륨](/ko/docs/concepts/storage/volumes/)이 결합된 파일 시스템.
* 컨테이너 자신에 대한 정보.
* 클러스터 내의 다른 오브젝트에 대한 정보.
diff --git a/content/ko/docs/concepts/containers/images.md b/content/ko/docs/concepts/containers/images.md
index 3bc71c53c82f3..615e84dc40c38 100644
--- a/content/ko/docs/concepts/containers/images.md
+++ b/content/ko/docs/concepts/containers/images.md
@@ -64,6 +64,7 @@ Docker *18.06 또는 그 이상* 을 사용하길 바란다. 더 낮은 버전
- IAM 역할과 정책을 사용하여 OCIR 저장소에 접근을 제어함
- Azure 컨테이너 레지스트리(ACR) 사용
- IBM 클라우드 컨테이너 레지스트리 사용
+ - IAM 역할 및 정책을 사용하여 IBM 클라우드 컨테이너 레지스트리에 대한 접근 권한 부여
- 프라이빗 레지스트리에 대한 인증을 위한 노드 구성
- 모든 파드는 구성된 프라이빗 레지스트리를 읽을 수 있음
- 클러스터 관리자에 의한 노드 구성 필요
@@ -85,8 +86,9 @@ Docker *18.06 또는 그 이상* 을 사용하길 바란다. 더 낮은 버전
클러스터 내에서 모든 파드는 해당 레지스트리에 있는 이미지에 읽기 접근 권한을 가질 것이다.
-Kubelet은 해당 인스턴스의 Google 서비스 계정을 이용하여 GCR을 인증할 것이다.
-인스턴스의 서비스 계정은 `https://www.googleapis.com/auth/devstorage.read_only`라서,
+Kubelet은 해당 인스턴스의 Google 서비스 계정을 이용하여
+GCR을 인증할 것이다. 인스턴스의 서비스 계정은
+`https://www.googleapis.com/auth/devstorage.read_only`라서,
프로젝트의 GCR로부터 풀은 할 수 있지만 푸시는 할 수 없다.
### Amazon Elastic Container Registry 사용
@@ -144,12 +146,11 @@ kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다
[쿠버네티스 시크릿을 구성하고 그것을 파드 디플로이를 위해서 사용](/ko/docs/concepts/containers/images/#파드에-imagepullsecrets-명시)할 수 있다.
### IBM 클라우드 컨테이너 레지스트리 사용
-IBM 클라우드 컨테이너 레지스트리는 멀티-테넌트 프라이빗 이미지 레지스트리를 제공하여 사용자가 Docker 이미지를 안전하게 저장하고 공유할 수 있도록 한다. 기본적으로,
-프라이빗 레지스트리의 이미지는 통합된 취약점 조언기(Vulnerability Advisor)를 통해 조사되어 보안 이슈와 잠재적 취약성을 검출한다. IBM 클라우드 계정의 모든 사용자가 이미지에 접근할 수 있도록 하거나, 레지스트리 네임스페이스에 접근을 승인하는 토큰을 생성할 수 있다.
+IBM 클라우드 컨테이너 레지스트리는 멀티-테넌트 프라이빗 이미지 레지스트리를 제공하여 사용자가 이미지를 안전하게 저장하고 공유할 수 있도록 한다. 기본적으로, 프라이빗 레지스트리의 이미지는 통합된 취약점 조언기(Vulnerability Advisor)를 통해 조사되어 보안 이슈와 잠재적 취약성을 검출한다. IBM 클라우드 계정의 모든 사용자가 이미지에 접근할 수 있도록 하거나, IAM 역할과 정책으로 IBM 클라우드 컨테이너 레지스트리 네임스페이스의 접근 권한을 부여해서 사용할 수 있다.
-IBM 클라우드 컨테이너 레지스트리 CLI 플러그인을 설치하고 사용자 이미지를 위한 네임스페이스를 생성하기 위해서는, [IBM 클라우드 컨테이너 레지스트리 시작하기](https://cloud.ibm.com/docs/services/Registry?topic=registry-getting-started)를 참고한다.
+IBM 클라우드 컨테이너 레지스트리 CLI 플러그인을 설치하고 사용자 이미지를 위한 네임스페이스를 생성하기 위해서는, [IBM 클라우드 컨테이너 레지스트리 시작하기](https://cloud.ibm.com/docs/Registry?topic=registry-getting-started)를 참고한다.
-[IBM 클라우드 퍼블릭 이미지](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images) 및 사용자의 프라이빗 이미지로부터 컨테이너를 사용자의 IBM 클라우드 쿠버네티스 서비스 클러스터의 `default` 네임스페이스에 디플로이하기 위해서 IBM 클라우드 컨테이너 레지스트리를 사용하면 된다. 컨테이너를 다른 네임스페이스에 디플로이하거나, 다른 IBM 클라우드 컨테이너 레지스트리 지역 또는 IBM 클라우드 계정을 사용하기 위해서는, 쿠버네티스 `imagePullSecret`를 생성한다. 더 자세한 정보는, [이미지로부터 컨테이너 빌드하기](https://cloud.ibm.com/docs/containers?topic=containers-images)를 참고한다.
+다른 추가적인 구성이 없는 IBM 클라우드 쿠버네티스 서비스 클러스터의 IBM 클라우드 컨테이너 레지스트리 내 기본 네임스페이스에 저장되어 있는 배포된 이미지를 동일 계정과 동일 지역에서 사용하려면 [이미지로부터 컨테이너 빌드하기](https://cloud.ibm.com/docs/containers?topic=containers-images)를 본다. 다른 구성 옵션에 대한 것은 [레지스트리부터 클러스터에 이미지를 가져오도록 권한을 부여하는 방법 이해하기](https://cloud.ibm.com/docs/containers?topic=containers-registry#cluster_registry_auth)를 본다.
### 프라이빗 레지스트리에 대한 인증을 위한 노드 구성
@@ -239,6 +240,7 @@ kubectl describe pods/private-image-test-1 | grep 'Failed'
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
```
+
클러스터의 모든 노드가 반드시 동일한 `.docker/config.json`를 가져야 한다. 그렇지 않으면, 파드가
일부 노드에서만 실행되고 다른 노드에서는 실패할 것이다. 예를 들어, 노드 오토스케일링을 사용한다면, 각 인스턴스
템플릿은 `.docker/config.json`을 포함하거나 그것을 포함한 드라이브를 마운트해야 한다.
@@ -362,7 +364,6 @@ imagePullSecrets을 셋팅하여 자동화할 수 있다.
- 테넌트는 해당 시크릿을 각 네임스페이스의 imagePullSecrets에 추가한다.
-
다중 레지스트리에 접근해야 하는 경우, 각 레지스트리에 대해 하나의 시크릿을 생성할 수 있다.
Kubelet은 모든`imagePullSecrets` 파일을 하나의 가상`.docker / config.json` 파일로 병합한다.
diff --git a/content/ko/docs/concepts/overview/working-with-objects/names.md b/content/ko/docs/concepts/overview/working-with-objects/names.md
index 0804c0d425cd6..0ab2681a77022 100644
--- a/content/ko/docs/concepts/overview/working-with-objects/names.md
+++ b/content/ko/docs/concepts/overview/working-with-objects/names.md
@@ -1,5 +1,5 @@
---
-title: 이름(Name)
+title: 오브젝트 이름과 ID
content_template: templates/concept
weight: 20
---
@@ -22,7 +22,35 @@ weight: 20
{{< glossary_definition term_id="name" length="all" >}}
-관례에 따라, 쿠버네티스 리소스의 이름은 최대 253자까지 허용되고 소문자 알파벳과 숫자(alphanumeric), `-`, 그리고 `.`로 구성되며 특정 리소스는 보다 구체적인 제약을 갖는다.
+다음은 리소스에 일반적으로 사용되는 세 가지 유형의 이름 제한 조건이다.
+
+### DNS 서브도메인 이름
+
+대부분의 리소스 유형에는 [RFC 1123](https://tools.ietf.org/html/rfc1123)에 정의된 대로
+DNS 서브도메인 이름으로 사용할 수 있는 이름이 필요하다.
+이것은 이름이 다음을 충족해야 한다는 것을 의미한다.
+
+- 253자를 넘지 말아야 한다.
+- 소문자와 영숫자 `-` 또는 `.` 만 포함한다.
+- 영숫자로 시작한다.
+- 영숫자로 끝난다.
+
+### DNS 레이블 이름
+
+일부 리소스 유형은 [RFC 1123](https://tools.ietf.org/html/rfc1123)에
+정의된 대로 DNS 레이블 표준을 따라야 한다.
+이것은 이름이 다음을 충족해야 한다는 것을 의미한다.
+
+- 최대 63자이다.
+- 소문자와 영숫자 또는 `-` 만 포함한다.
+- 영숫자로 시작한다.
+- 영숫자로 끝난다.
+
+### 경로 세그먼트 이름
+
+일부 리소스 유형에서는 이름을 경로 세그먼트로 안전하게 인코딩할 수
+있어야 한다. 즉, 이름은 "." 또는 ".."이 될 수 없으며, 이름에
+"/" 또는 "%"를 포함할 수 없다.
여기 파드의 이름이 `nginx-demo`라는 매니페스트 예시가 있다.
@@ -39,6 +67,7 @@ spec:
- containerPort: 80
```
+
{{< note >}}
일부 리소스 유형은 이름에 추가적인 제약이 있다.
{{< /note >}}
diff --git a/content/ko/docs/concepts/security/_index.md b/content/ko/docs/concepts/security/_index.md
new file mode 100644
index 0000000000000..079e3dd8f88ef
--- /dev/null
+++ b/content/ko/docs/concepts/security/_index.md
@@ -0,0 +1,4 @@
+---
+title: "보안"
+weight: 81
+---
diff --git a/content/ko/docs/concepts/security/overview.md b/content/ko/docs/concepts/security/overview.md
new file mode 100644
index 0000000000000..a8f5f050a4b20
--- /dev/null
+++ b/content/ko/docs/concepts/security/overview.md
@@ -0,0 +1,161 @@
+---
+title: 클라우드 네이티브 보안 개요
+content_template: templates/concept
+weight: 1
+---
+
+{{< toc >}}
+
+{{% capture overview %}}
+쿠버네티스 보안(일반적인 보안)은 관련된 많은 부분이 상호작용하는
+방대한 주제다. 오늘날에는 웹 애플리케이션의 실행을 돕는
+수많은 시스템에 오픈소스 소프트웨어가 통합되어 있으며,
+전체적인 보안에 대하여 생각할 수 있는 방법에 대한 통찰력을 도울 수 있는
+몇 가지 중요한 개념이 있다. 이 가이드는 클라우드 네이티브 보안과 관련된
+몇 가지 일반적인 개념에 대한 멘탈 모델(mental model)을 정의한다. 멘탈 모델은 완전히 임의적이며
+소프트웨어 스택을 보호할 위치를 생각하는데 도움이되는 경우에만 사용해야
+한다.
+{{% /capture %}}
+
+{{% capture body %}}
+
+## 클라우드 네이티브 보안의 4C
+계층적인 보안에 대해서 어떻게 생각할 수 있는지 이해하는 데 도움이 될 수 있는 다이어그램부터 살펴보자.
+{{< note >}}
+이 계층화된 접근 방식은 보안에 대한 [심층 방어](https://en.wikipedia.org/wiki/Defense_in_depth_(computing))
+접근 방식을 강화하며, 소프트웨어 시스템의 보안을 위한 모범 사례로
+널리 알려져 있다. 4C는 클라우드(Cloud), 클러스터(Clusters), 컨테이너(Containers) 및 코드(Code)이다.
+{{< /note >}}
+
+{{< figure src="/images/docs/4c.png" title="클라우드 네이티브 보안의 4C" >}}
+
+
+위 그림에서 볼 수 있듯이,
+4C는 각각의 사각형의 보안에 따라 다르다. 코드
+수준의 보안만 처리하여 클라우드, 컨테이너 및 코드의 열악한 보안 표준으로부터
+보호하는 것은 거의 불가능하다. 그러나 이런 영역들의 보안이 적절하게
+처리되고, 코드에 보안을 추가한다면 이미 강력한 기반이 더욱
+강화될 것이다. 이러한 관심 분야는 아래에서 더 자세히 설명한다.
+
+## 클라우드
+
+여러 면에서 클라우드(또는 공동 위치 서버, 또는 기업의 데이터 센터)는 쿠버네티스 클러스터 구성을 위한
+[신뢰 컴퓨팅 기반(trusted computing base)](https://en.wikipedia.org/wiki/Trusted_computing_base)
+이다. 이러한 구성 요소 자체가 취약하거나(또는 취약한 방법으로 구성된)
+경우 이 기반 위에서 구축된 모든 구성 요소의 보안을
+실제로 보장할 방법이 없다. 각 클라우드 공급자는 그들의 환경에서 워크로드를
+안전하게 실행하는 방법에 대해 고객에게 광범위한 보안 권장 사항을
+제공한다. 모든 클라우드 공급자와 워크로드는 다르기 때문에
+클라우드 보안에 대한 권장 사항을 제공하는 것은 이 가이드의 범위를 벗어난다. 다음은
+알려진 클라우드 공급자의 보안 문서의 일부와
+쿠버네티스 클러스터를 구성하기 위한 인프라
+보안에 대한 일반적인 지침을 제공한다.
+
+### 클라우드 공급자 보안 표
+
+
+
+IaaS 공급자 | 링크 |
+-------------------- | ------------ |
+Alibaba Cloud | https://www.alibabacloud.com/trust-center |
+Amazon Web Services | https://aws.amazon.com/security/ |
+Google Cloud Platform | https://cloud.google.com/security/ |
+IBM Cloud | https://www.ibm.com/cloud/security |
+Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
+VMWare VSphere | https://www.vmware.com/security/hardening-guides.html |
+
+
+자체 하드웨어나 다른 클라우드 공급자를 사용하는 경우 보안에 대한
+모범 사례는 해당 문서를 참조한다.
+
+### 일반적인 인프라 지침 표
+
+쿠버네티스 인프라에서 고려할 영역 | 추천 |
+--------------------------------------------- | ------------ |
+API 서버에 대한 네트워크 접근(마스터) | 이상적으로는 인터넷에서 쿠버네티스 마스터에 대한 모든 접근을 공개적으로 허용하지 않으며 클러스터를 관리하는데 필요한 IP 주소 집합으로 제한된 네트워크 접근 제어 목록(ACL)에 의해 제어되어야 한다. |
+노드에 대한 네트워크 접근(워커 서버) | 노드는 마스터의 지정된 포트 연결_만_ 허용하고(네트워크 접근 제어 목록의 사용), NodePort와 LoadBalancer 유형의 쿠버네티스 서비스에 대한 연결을 허용하도록 구성해야 한다. 가능한 노드가 공용 인터넷에 완전히 노출되어서는 안된다.
+클라우드 공급자 API에 대한 쿠버네티스 접근 | 각 클라우드 공급자는 쿠버네티스 마스터 및 노드에 서로 다른 권한을 부여해야 함으로써, 이런 권장 사항이 더 일반적이다. 관리해야 하는 리소스에 대한 [최소 권한의 원칙](https://en.wikipedia.org/wiki/Principle_of_least_privilege)을 따르는 클라우드 공급자의 접근 권한을 클러스터에 구성하는 것이 가장 좋다. AWS의 Kops에 대한 예제: https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#iam-roles
+etcd에 대한 접근 | etcd (쿠버네티스의 데이터저장소)에 대한 접근은 마스터로만 제한되어야 한다. 구성에 따라 TLS를 통해 etcd를 사용해야 한다. 자세한 정보: https://github.com/etcd-io/etcd/tree/master/Documentation#security
+etcd 암호화 | 가능한 모든 드라이브를 유휴 상태에서 암호화 하는 것이 좋은 방법이지만, etcd는 전체 클러스터(시크릿 포함)의 상태를 유지하고 있기에 디스크의 암호화는 유휴 상태에서 암호화 되어야 한다.
+
+## 클러스터
+
+이 섹션에서는 쿠버네티스의 워크로드
+보안을 위한 링크를 제공한다. 쿠버네티스
+보안에 영향을 미치는 다음 두 가지 영역이 있다.
+
+* 클러스터를 구성하는 설정 가능한 컴포넌트의 보안
+* 클러스터에서 실행되는 컴포넌트의 보안
+
+### 클러스터_의_ 컴포넌트
+
+우발적이거나 악의적인 접근으로부터 클러스터를 보호하고,
+모범 사례에 대한 정보를 채택하기 위해서는
+[클러스터 보안](/docs/tasks/administer-cluster/securing-a-cluster/)에 대한 조언을 읽고 따른다.
+
+### 클러스터 _내_ 컴포넌트(애플리케이션)
+애플리케이션의 공격 영역에 따라, 보안의 특정 측면에
+중점을 둘 수 있다. 예를 들어, 다른 리소스 체인에 중요한 서비스(서비스 A)와
+리소스 소진 공격에 취약한 별도의 작업 부하(서비스 B)를 실행하는 경우,
+리소스 제한을 설정하지 않은 서비스 B에 의해
+서비스 A 또한 손상시킬 위험이 있다. 다음은 쿠버네티스에서
+실행 중인 워크로드를 보호할 때 고려해야 할 사항에 대한 링크 표이다.
+
+워크로드 보안에서 고려할 영역 | 추천 |
+------------------------------ | ------------ |
+RBAC 인증(쿠버네티스 API에 대한 접근) | https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+인증 | https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/
+애플리케이션 시크릿 관리(및 유휴 상태에서의 etcd 암호화 등) | https://kubernetes.io/docs/concepts/configuration/secret/ https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
+파드 보안 정책 | https://kubernetes.io/docs/concepts/policy/pod-security-policy/
+서비스 품질(및 클러스터 리소스 관리) | https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
+네트워크 정책 | https://kubernetes.io/ko/docs/concepts/services-networking/network-policies/
+쿠버네티스 인그레스를 위한 TLS | https://kubernetes.io/ko/docs/concepts/services-networking/ingress/#tls
+
+
+
+## 컨테이너
+
+쿠버네티스에서 소프트웨어를 실행하려면, 소프트웨어는 컨테이너에 있어야 한다. 이로 인해,
+쿠버네티스의 원시적인 워크로드 보안으로부터 이점을 얻기 위해서
+반드시 고려해야 할 보안 사항이 있다. 컨테이너 보안
+또한 이 가이드의 범위를 벗어나지만, 해당 주제에 대한 추가적인 설명을 위하여
+일반 권장사항 및 링크 표를 아래에 제공한다.
+
+컨테이너에서 고려할 영역 | 추천 |
+------------------------------ | ------------ |
+컨테이너 취약점 스캔 및 OS에 종속적인 보안 | 이미지 빌드 단계의 일부 또는 정기적으로 [CoreOS의 Clair](https://github.com/coreos/clair/)와 같은 도구를 사용해서 컨테이너에 알려진 취약점이 있는지 검사한다.
+이미지 서명 및 시행 | 두 개의 다른 CNCF 프로젝트(TUF 와 Notary)는 컨테이너 이미지에 서명하고 컨테이너 내용에 대한 신뢰 시스템을 유지하는데 유용한 도구이다. 도커를 사용하는 경우 도커 엔진에 [도커 컨텐츠 신뢰](https://docs.docker.com/engine/security/trust/content_trust/)가 내장되어 있다. 시행 부분에서의 [IBM의 Portieris](https://github.com/IBM/portieris) 프로젝트는 쿠버네티스 다이나믹 어드미션 컨트롤러로 실행되는 도구로, 클러스터에서 허가하기 전에 Notary를 통해 이미지가 적절하게 서명되었는지 확인한다.
+권한있는 사용자의 비허용 | 컨테이너를 구성할 때 컨테이너의 목적을 수행하는데 필요한 최소 권한을 가진 사용자를 컨테이너 내에 만드는 방법에 대해서는 설명서를 참조한다.
+
+## 코드
+
+마지막으로 애플리케이션의 코드 수준으로 내려가면, 가장 많은 제어를 할 수 있는
+주요 공격 영역 중 하나이다. 이런 코드 수준은 쿠버네티스의 범위
+밖이지만 몇가지 권장사항이 있다.
+
+### 일반적인 코드 보안 지침표
+
+코드에서 고려할 영역 | 추천 |
+--------------------------------------------- | ------------ |
+TLS를 통한 접근 | 코드가 TCP를 통해 통신해야 한다면, 클라이언트와 먼저 TLS 핸드 셰이크를 수행하는 것이 이상적이다. 몇 가지 경우를 제외하고, 기본 동작은 전송 중인 모든 것을 암호화하는 것이다. 한걸음 더 나아가, VPC의 "방화벽 뒤"에서도 서비스 간 네트워크 트래픽을 암호화하는 것이 좋다. 이것은 인증서를 가지고 있는 두 서비스의 양방향 검증을 [mTLS](https://en.wikipedia.org/wiki/Mutual_authentication)를 통해 수행할 수 있다. 이것을 수행하기 위해 쿠버네티스에는 [Linkerd](https://linkerd.io/) 및 [Istio](https://istio.io/)와 같은 수많은 도구가 있다. |
+통신 포트 범위 제한 | 이 권장사항은 당연할 수도 있지만, 가능하면 통신이나 메트릭 수집에 꼭 필요한 서비스의 포트만 노출시켜야 한다. |
+타사 종속성 보안 | 애플리케이션은 자체 코드베이스의 외부에 종속적인 경향이 있기 때문에, 코드의 종속성을 정기적으로 스캔하여 현재 알려진 취약점이 없는지 확인하는 것이 좋다. 각 언어에는 이런 검사를 자동으로 수행하는 도구를 가지고 있다. |
+정적 코드 분석 | 대부분 언어에는 잠재적으로 안전하지 않은 코딩 방법에 대해 코드 스니펫을 분석할 수 있는 방법을 제공한다. 가능한 언제든지 일반적인 보안 오류에 대해 코드베이스를 스캔할 수 있는 자동화된 도구를 사용하여 검사를 한다. 도구는 다음에서 찾을 수 있다: https://www.owasp.org/index.php/Source_Code_Analysis_Tools |
+동적 탐지 공격 | 일반적으로 서비스에서 발생할 수 있는 잘 알려진 공격 중 일부를 서비스에 테스트할 수 있는 자동화된 몇 가지 도구가 있다. 이런 잘 알려진 공격에는 SQL 인젝션, CSRF 및 XSS가 포함된다. 가장 널리 사용되는 동적 분석 도구는 OWASP Zed Attack 프록시다. https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project |
+
+
+## 강력한(robust) 자동화
+
+위에서 언급한 대부분의 제안사항은 실제로 일련의 보안 검사의 일부로 코드를
+전달하는 파이프라인에 의해 자동화 될 수 있다. 소프트웨어 전달을 위한
+"지속적인 해킹(Continuous Hacking)"에 대한 접근 방식에 대해 알아 보려면, 자세한 설명을 제공하는 [이 기사](https://thenewstack.io/beyond-ci-cd-how-continuous-hacking-of-docker-containers-and-pipeline-driven-security-keeps-ygrene-secure/)를 참고한다.
+
+{{% /capture %}}
+{{% capture whatsnext %}}
+* [파드에 대한 네트워크 정책](/docs/concepts/services-networking/network-policies/) 알아보기
+* [클러스터 보안](/docs/tasks/administer-cluster/securing-a-cluster/)에 대해 알아보기
+* [API 접근 통제](/docs/reference/access-authn-authz/controlling-access/)에 대해 알아보기
+* 컨트롤 플레인에 대한 [전송 데이터 암호화](/docs/tasks/tls/managing-tls-in-a-cluster/) 알아보기
+* [저장 데이터 암호화](/docs/tasks/administer-cluster/encrypt-data/) 알아보기
+* [쿠버네티스 시크릿](/docs/concepts/configuration/secret/)에 대해 알아보기
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/services-networking/ingress.md b/content/ko/docs/concepts/services-networking/ingress.md
index 793a1520e322a..64d610a49944b 100644
--- a/content/ko/docs/concepts/services-networking/ingress.md
+++ b/content/ko/docs/concepts/services-networking/ingress.md
@@ -334,7 +334,7 @@ spec:
{{< note >}}
TLS 기능을 제공하는 다양한 인그레스 컨트롤러간의 기능
차이가 있다. 사용자 환경에서의 TLS의 작동 방식을 이해하려면
-[nginx](https://git.k8s.io/ingress-nginx/README.md#https),
+[nginx](https://kubernetes.github.io/ingress-nginx/user-guide/tls/),
[GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https) 또는 기타
플랫폼의 특정 인그레스 컨트롤러에 대한 설명서를 참조한다.
{{< /note >}}
diff --git a/content/ko/docs/concepts/services-networking/service.md b/content/ko/docs/concepts/services-networking/service.md
index 1c33fb3008dc1..6d47a2c2aa6b0 100644
--- a/content/ko/docs/concepts/services-networking/service.md
+++ b/content/ko/docs/concepts/services-networking/service.md
@@ -1189,7 +1189,7 @@ kube-proxy는 유저스페이스 모드에 있을 때 SCTP 연결 관리를 지
{{% capture whatsnext %}}
-* [서비스와 애플리케이션 연결](/docs/concepts/services-networking/connect-applications-service/) 알아보기
+* [서비스와 애플리케이션 연결](/ko/docs/concepts/services-networking/connect-applications-service/) 알아보기
* [인그레스](/ko/docs/concepts/services-networking/ingress/)에 대해 알아보기
* [엔드포인트슬라이스](/ko/docs/concepts/services-networking/endpoint-slices/)에 대해 알아보기
diff --git a/content/ko/docs/concepts/storage/volume-pvc-datasource.md b/content/ko/docs/concepts/storage/volume-pvc-datasource.md
new file mode 100644
index 0000000000000..ab9f1db2ca4c5
--- /dev/null
+++ b/content/ko/docs/concepts/storage/volume-pvc-datasource.md
@@ -0,0 +1,65 @@
+---
+title: CSI 볼륨 복제하기
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.16" state="beta" >}}
+이 문서에서는 쿠버네티스의 기존 CSI 볼륨 복제의 개념을 설명한다. [볼륨](/ko/docs/concepts/storage/volumes)을 숙지하는 것을 추천한다.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 소개
+
+{{< glossary_tooltip text="CSI" term_id="csi" >}} 볼륨 복제 기능은 `dataSource` 필드에 기존 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}를 지정하는 지원을 추가해서 사용자가 {{< glossary_tooltip term_id="volume" >}}을 복제하려는 것을 나타낸다.
+
+복제는 표준 볼륨처럼 소비할 수 있는 쿠버네티스 볼륨의 복제본으로 정의된다. 유일한 차이점은 프로비저닝할 때 "새" 빈 볼륨을 생성하는 대신에 백엔드 장치가 지정된 볼륨의 정확한 복제본을 생성한다는 것이다.
+
+쿠버네티스 API의 관점에서 복제를 구현하면 새로운 PVC 생성 중에 기존 PVC를 데이터 소스로 지정할 수 있는 기능이 추가된다. 소스 PVC는 바인딩되어 있고, 사용 가능해야 한다(사용 중이 아니어야 함).
+
+사용자는 이 기능을 사용할 때 다음 사항을 알고 있어야 한다.
+
+* 복제 지원(`VolumePVCDataSource`)은 CSI 드라이버에서만 사용할 수 있다.
+* 복제 지원은 동적 프로비저너만 사용할 수 있다.
+* CSI 드라이버는 볼륨 복제 기능을 구현했거나 구현하지 않았을 수 있다.
+* PVC는 대상 PVC와 동일한 네임스페이스에 있는 경우에만 복제할 수 있다(소스와 대상은 동일한 네임스페이스에 있어야 함).
+* 복제는 동일한 스토리지 클래스 내에서만 지원된다.
+ - 대상 볼륨은 소스와 동일한 스토리지 클래스여야 한다.
+ - 기본 스토리지 클래스를 사용할 수 있으며, 사양에 storageClassName을 생략할 수 있다.
+
+
+## 프로비저닝
+
+동일한 네임스페이스에서 기존 PVC를 참조하는 dataSource를 추가하는 것을 제외하고는 다른 PVC와 마찬가지로 복제가 프로비전된다.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: clone-of-pvc-1
+ namespace: myns
+spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: cloning
+ resources:
+ requests:
+ storage: 5Gi
+ dataSource:
+ kind: PersistentVolumeClaim
+ name: pvc-1
+```
+
+그 결과로 지정된 소스 `pvc-1` 과 동일한 내용을 가진 `clone-of-pvc-1` 이라는 이름을 가지는 새로운 PVC가 생겨난다.
+
+## 사용
+
+새 PVC를 사용할 수 있게 되면, 복제된 PVC는 다른 PVC와 동일하게 소비된다. 또한, 이 시점에서 새롭게 생성된 PVC는 독립된 오브젝트이다. 원본 dataSource PVC와는 무관하게 독립적으로 소비하고, 복제하고, 스냅샷의 생성 또는 삭제를 할 수 있다. 이는 소스가 새롭게 생성된 복제본에 어떤 방식으로든 연결되어 있지 않으며, 새롭게 생성된 복제본에 영향 없이 수정하거나, 삭제할 수도 있는 것을 의미한다.
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/storage/volume-snapshot-classes.md b/content/ko/docs/concepts/storage/volume-snapshot-classes.md
new file mode 100644
index 0000000000000..f4d29912383f8
--- /dev/null
+++ b/content/ko/docs/concepts/storage/volume-snapshot-classes.md
@@ -0,0 +1,65 @@
+---
+title: 볼륨 스냅샷 클래스
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+이 문서는 쿠버네티스의 `VolumeSnapshotClass` 개요를 설명한다.
+[볼륨 스냅샷](/docs/concepts/storage/volume-snapshots/)과
+[스토리지 클래스](/docs/concepts/storage/storage-classes)의 숙지를 추천한다.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 소개
+
+`StorageClass` 는 관리자가 볼륨을 프로비저닝할 때 제공하는 스토리지의 "클래스"를
+설명하는 방법을 제공하는 것처럼, `VolumeSnapshotClass` 는 볼륨 스냅샷을
+프로비저닝할 때 스토리지의 "클래스"를 설명하는 방법을 제공한다.
+
+## VolumeSnapshotClass 리소스
+
+각 `VolumeSnapshotClass` 에는 클래스에 속하는 `VolumeSnapshot` 을
+동적으로 프로비전 할 때 사용되는 `driver`, `deletionPolicy` 그리고 `parameters`
+필드를 포함한다.
+
+`VolumeSnapshotClass` 오브젝트의 이름은 중요하며, 사용자가 특정
+클래스를 요청할 수 있는 방법이다. 관리자는 `VolumeSnapshotClass` 오브젝트를
+처음 생성할 때 클래스의 이름과 기타 파라미터를 설정하고, 오브젝트가
+생성된 이후에는 업데이트할 수 없다.
+
+관리자는 특정 클래스의 바인딩을 요청하지 않는 VolumeSnapshots에만
+기본 `VolumeSnapshotClass` 를 지정할 수 있다.
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1beta1
+kind: VolumeSnapshotClass
+metadata:
+ name: csi-hostpath-snapclass
+driver: hostpath.csi.k8s.io
+deletionPolicy: Delete
+parameters:
+```
+
+### 드라이버
+
+볼륨 스냅샷 클래스에는 VolumeSnapshots의 프로비저닝에 사용되는 CSI 볼륨 플러그인을
+결정하는 드라이버를 가지고 있다. 이 필드는 반드시 지정해야 한다.
+
+### 삭제정책(DeletionPolicy)
+
+볼륨 스냅샷 클래스는 삭제정책을 가지고 있다. 바인딩된 `VolumeSnapshot` 오브젝트를 삭제할 때 `VolumeSnapshotContent` 를 어떻게 처리할지 구성할 수 있다. 볼륨 스냅샷의 삭제정책은 `Retain` 또는 `Delete` 일 수 있다. 이 필드는 반드시 지정해야 한다.
+
+삭제정책이 `Delete` 인 경우 기본 스토리지 스냅샷이 `VolumeSnapshotContent` 오브젝트와 함께 삭제된다. 삭제정책이 `Retain` 인 경우 기본 스냅샷과 `VolumeSnapshotContent` 모두 유지된다.
+
+### 파라미터
+
+볼륨 스냅샷 클래스에는 볼륨 스냅샷 클래스에 속하는 볼륨 스냅샷을
+설명하는 파라미터를 가지고 있다. `driver` 에 따라 다른 파라미터를 사용할
+수 있다.
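+
+예를 들어, 아래는 가상의 파라미터를 사용하는 `VolumeSnapshotClass` 의 간단한 예시이다. `example.csi.k8s.io` 드라이버 이름과 `exampleOption` 파라미터는 설명을 위한 가상의 값이며, 실제 파라미터 이름과 값은 사용하는 CSI 드라이버의 문서를 따른다.
+
+```yaml
+apiVersion: snapshot.storage.k8s.io/v1beta1
+kind: VolumeSnapshotClass
+metadata:
+  name: csi-example-snapclass
+driver: example.csi.k8s.io    # 가상의 드라이버 이름
+deletionPolicy: Retain        # 기본 스냅샷과 VolumeSnapshotContent를 유지한다
+parameters:
+  exampleOption: "value"      # 드라이버별 파라미터(가상의 예시)
+```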
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md
index ff76d12f868bf..4e912069ec921 100644
--- a/content/ko/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/ko/docs/concepts/workloads/controllers/cron-jobs.md
@@ -13,9 +13,14 @@ _크론 잡은_ 시간 기반의 일정에 따라 [잡](/docs/concepts/workloads
하나의 크론잡 객체는 _크론탭_ (크론 테이블) 파일의 한 줄과 같다. 크론잡은 잡을 [크론](https://en.wikipedia.org/wiki/Cron)형식으로 쓰여진 주어진 일정에 따라 주기적으로 동작시킨다.
-{{< note >}}
-모든 **크론잡** `일정:` 시간은 잡이 처음 시작된 마스터의 시간대를 기반으로 한다.
-{{< /note >}}
+{{< caution >}}
+모든 **크론잡** `일정:` 시간은 {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
+의 시간대를 기준으로 한다.
+
+컨트롤 플레인이 파드 또는 베어 컨테이너에서 kube-controller-manager를
+실행하는 경우, kube-controller-manager 컨테이너에 설정된 시간대가 크론 잡
+컨트롤러가 사용하는 시간대가 된다.
+{{< /caution >}}
크론잡 리소스에 대한 매니페스트를 생성할때에는 제공하는 이름이
52자 이하인지 확인해야 한다. 이는 크론잡 컨트롤러는 제공된 잡 이름에
diff --git a/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md
new file mode 100644
index 0000000000000..939b6e8293277
--- /dev/null
+++ b/content/ko/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -0,0 +1,477 @@
+---
+title: 잡 - 실행부터 완료까지
+content_template: templates/concept
+feature:
+ title: 배치 실행
+ description: >
+ 쿠버네티스는 서비스 외에도 배치와 CI 워크로드를 관리할 수 있으며, 원하는 경우 실패한 컨테이너를 교체할 수 있다.
+weight: 70
+---
+
+{{% capture overview %}}
+
+잡에서 하나 이상의 파드를 생성하고 지정된 수의 파드가 성공적으로 종료되도록 한다.
+파드가 성공적으로 완료되면, 잡은 성공적으로 완료된 파드 수를 추적한다. 지정된 수의
+성공 완료에 도달하면, 작업(즉, 잡)이 완료된다. 잡을 삭제하면 잡이 생성한
+파드가 정리된다.
+
+간단한 사례는 잡 오브젝트를 하나 생성해서 파드 하나를 안정적으로 실행하고 완료하는 것이다.
+첫 번째 파드가 실패 또는 삭제된 경우(예로는 노드 하드웨어의 실패 또는
+노드 재부팅) 잡 오브젝트는 새로운 파드를 기동시킨다.
+
+잡을 사용하면 여러 파드를 병렬로 실행할 수도 있다.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 예시 잡 실행하기
+
+다음은 잡 설정 예시이다. 예시는 파이(π)의 2000 자리까지 계산해서 출력한다.
+이를 완료하는 데 약 10초가 소요된다.
+
+{{< codenew file="controllers/job.yaml" >}}
+
+이 명령으로 예시를 실행할 수 있다.
+
+```shell
+kubectl apply -f https://k8s.io/examples/controllers/job.yaml
+```
+```
+job.batch/pi created
+```
+
+`kubectl` 을 사용해서 잡 상태를 확인한다.
+
+```shell
+kubectl describe jobs/pi
+```
+```
+Name: pi
+Namespace: default
+Selector: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ job-name=pi
+Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"pi","namespace":"default"},"spec":{"backoffLimit":4,"template":...
+Parallelism: 1
+Completions: 1
+Start Time: Mon, 02 Dec 2019 15:20:11 +0200
+Completed At: Mon, 02 Dec 2019 15:21:16 +0200
+Duration: 65s
+Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
+Pod Template:
+ Labels: controller-uid=c9948307-e56d-4b5d-8302-ae2d7b7da67c
+ job-name=pi
+ Containers:
+ pi:
+ Image: perl
+ Port:
+ Host Port:
+ Command:
+ perl
+ -Mbignum=bpi
+ -wle
+ print bpi(2000)
+ Environment:
+ Mounts:
+ Volumes:
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal SuccessfulCreate 14m job-controller Created pod: pi-5rwd7
+```
+
+`kubectl get pods` 를 사용해서 잡의 완료된 파드를 본다.
+
+잡에 속하는 모든 파드를 기계적으로 읽을 수 있는 양식으로 나열하려면, 다음과 같은 명령을 사용할 수 있다.
+
+```shell
+pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
+echo $pods
+```
+```
+pi-5rwd7
+```
+
+여기서 셀렉터는 잡의 셀렉터와 동일하다. `--output=jsonpath` 옵션은 반환된 목록의
+각각의 파드에서 이름을 가져와서 표현하는 방식을 지정한다.
+
+파드 중 하나를 표준 출력으로 본다.
+
+```shell
+kubectl logs $pods
+```
+다음과 유사하게 출력된다.
+```shell
+3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303819644288109756659334461284756482337867831652712019091456485669234603486104543266482133936072602491412737245870066063155881748815209209628292540917153643678925903600113305305488204665213841469519415116094330572703657595919530921861173819326117931051185480744623799627495673518857527248912279381830119491298336733624406566430860213949463952247371907021798609437027705392171762931767523846748184676694051320005681271452635608277857713427577896091736371787214684409012249534301465495853710507922796892589235420199561121290219608640344181598136297747713099605187072113499999983729780499510597317328160963185950244594553469083026425223082533446850352619311881710100031378387528865875332083814206171776691473035982534904287554687311595628638823537875937519577818577805321712268066130019278766111959092164201989380952572010654858632788659361533818279682303019520353018529689957736225994138912497217752834791315155748572424541506959508295331168617278558890750983817546374649393192550604009277016711390098488240128583616035637076601047101819429555961989467678374494482553797747268471040475346462080466842590694912933136770289891521047521620569660240580381501935112533824300355876402474964732639141992726042699227967823547816360093417216412199245863150302861829745557067498385054945885869269956909272107975093029553211653449872027559602364806654991198818347977535663698074265425278625518184175746728909777727938000816470600161452491921732172147723501414419735685481613611573525521334757418494684385233239073941433345477624168625189835694855620992192221842725502542568876717904946016534668049886272327917860857843838279679766814541009538837863609506800642251252051173929848960841284886269456042419652850222106611863067442786220391949450471237137869609563643719172874677646575739624138908658326459958133904780275901
+```
+
+## 잡 사양 작성하기
+
+다른 쿠버네티스의 설정과 마찬가지로 잡에는 `apiVersion`, `kind` 그리고 `metadata` 필드가 필요하다.
+
+잡에는 [`.spec` 섹션](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)도 필요하다.
+
+### 파드 템플릿
+
+`.spec.template` 은 `.spec` 의 유일한 필수 필드이다.
+
+`.spec.template` 은 [파드 템플릿](/ko/docs/concepts/workloads/pods/pod-overview/#파드-템플릿)이다. 이것은 `apiVersion` 또는 `kind` 가 없다는 것을 제외한다면 [파드](/ko/docs/concepts/workloads/pods/pod/)와 정확하게 같은 스키마를 가지고 있다.
+
+파드의 필수 필드 외에도, 잡의 파드 템플릿은 적절한
+레이블([파드 셀렉터](#파드-셀렉터)를 본다)과 적절한 재시작 정책을 명시해야 한다.
+
+`Never` 또는 `OnFailure` 와 같은 [`RestartPolicy`](/ko/docs/concepts/workloads/pods/pod-lifecycle/#재시작-정책)만 허용된다.
+
+### 파드 셀렉터
+
+`.spec.selector` 필드는 선택 사항이다. 대부분의 케이스에서 지정해서는 안된다.
+[자신의 파드 셀렉터를 지정하기](#자신의-파드-셀렉터를-지정하기) 섹션을 참고한다.
+
+
+### 병렬 잡
+
+잡으로 실행하기에 적합한 작업 유형은 크게 세 가지가 있다.
+
+1. 비-병렬(Non-parallel) 잡:
+ - 일반적으로, 파드가 실패하지 않은 한, 하나의 파드만 시작된다.
+ - 파드가 성공적으로 종료하자마자 즉시 잡이 완료된다.
+1. *고정적(fixed)인 완료 횟수* 를 가진 병렬 잡:
+ - `.spec.completions` 에 0이 아닌 양수 값을 지정한다.
+ - 잡은 전체 작업을 나타내며 1에서 `.spec.completions` 까지의 범위의 각 값에 대해 한 개씩 성공한 파드가 있으면 완료된다.
+ - **아직 구현되지 않음:** 각 파드에게는 1부터 `.spec.completions` 까지의 범위 내의 서로 다른 인덱스가 전달된다.
+1. *작업 큐(queue)* 가 있는 병렬 잡:
+ - `.spec.completions` 를 지정하지 않고, `.spec.parallelism` 를 기본으로 한다.
+ - 파드는 각자 또는 외부 서비스 간에 조정을 통해 각각의 작업을 결정해야 한다. 예를 들어 파드는 작업 큐에서 최대 N 개의 항목을 일괄로 가져올(fetch) 수 있다.
+ - 각 파드는 모든 피어들의 작업이 완료되었는지 여부를 독립적으로 판단할 수 있으며, 결과적으로 전체 잡이 완료되게 한다.
+  - 잡의 파드 중 _어느_ 하나라도 성공적으로 종료되면, 새로운 파드는 생성되지 않는다.
+ - 하나 이상의 파드가 성공적으로 종료되고, 모든 파드가 종료되면 잡은 성공적으로 완료된다.
+ - 성공적으로 종료된 파드가 하나라도 생긴 경우, 다른 파드들은 해당 작업을 지속하지 않아야 하며 어떠한 출력도 작성하면 안 된다. 파드들은 모두 종료되는 과정에 있어야 한다.
+
+_비-병렬_ 잡은 `.spec.completions` 와 `.spec.parallelism` 모두를 설정하지 않은 채로 둘 수 있다. 이때 둘 다
+설정하지 않은 경우 1이 기본으로 설정된다.
+
+_고정적인 완료 횟수_ 잡은 `.spec.completions` 을 필요한 완료 횟수로 설정해야 한다.
+`.spec.parallelism` 을 설정할 수 있고, 설정하지 않으면 1이 기본으로 설정된다.
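+
+예를 들어 다음은 완료 횟수가 8로 고정되고 한 번에 최대 2개의 파드를 실행하는
+병렬 잡을 가정한 최소한의 스케치이다. 이름과 값은 설명을 위해 임의로 정한 것이다.
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: pi-parallel   # 설명을 위해 가정한 이름
+spec:
+  completions: 8      # 성공적으로 완료되어야 하는 파드의 총 수
+  parallelism: 2      # 동시에 실행할 파드의 수
+  template:
+    spec:
+      containers:
+      - name: pi
+        image: perl
+        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+      restartPolicy: Never
+```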
+
+_작업 큐_ 잡은 `.spec.completions` 를 설정하지 않은 상태로 두고, `.spec.parallelism` 을
+음수가 아닌 정수로 설정해야 한다.
+
+다른 유형의 잡을 사용하는 방법에 대한 더 자세한 정보는 [잡 패턴](#잡-패턴) 섹션을 본다.
+
+
+#### 병렬 처리 제어하기
+
+요청된 병렬 처리(`.spec.parallelism`)는 음수가 아닌 값으로 설정할 수 있다.
+만약 지정되지 않은 경우에는 1이 기본이 된다.
+만약 0으로 지정되면 병렬 처리가 증가할 때까지 사실상 일시 중지된다.
+
+실제 병렬 처리(모든 인스턴스에서 실행되는 파드의 수)는 여러가지 이유로 요청된
+병렬 처리보다 많거나 적을 수 있다.
+
+- _고정적인 완료 횟수(fixed completion count)_ 잡의 경우, 병렬로 실행 중인 파드의 수는 남은 완료 수를
+ 초과하지 않는다. `.spec.parallelism` 의 더 큰 값은 사실상 무시된다.
+- _작업 큐_ 잡은 파드가 성공한 이후에 새로운 파드가 시작되지 않는다. 그러나 나머지 파드는 완료될 수 있다.
+- 만약 잡 {{< glossary_tooltip text="컨트롤러" term_id="controller" >}} 가 반응할 시간이 없는 경우.
+- 만약 잡 컨트롤러가 어떤 이유(`리소스 쿼터` 의 부족, 권한 부족 등)로든 파드 생성에 실패한 경우,
+ 요청한 것보다 적은 수의 파드가 있을 수 있다.
+- 잡 컨트롤러는 동일한 잡에서 과도하게 실패한 이전 파드들로 인해 새로운 파드의 생성을 조절할 수 있다.
+- 파드가 정상적으로(gracefully) 종료되면, 중지하는데 시간이 소요된다.
+
+## 파드와 컨테이너 장애 처리하기
+
+파드내 컨테이너의 프로세스가 0이 아닌 종료 코드로 종료되었거나 컨테이너 메모리 제한을
+초과해서 죽는 등의 여러가지 이유로 실패할 수 있다. 만약 이런 일이
+발생하고 `.spec.template.spec.restartPolicy = "OnFailure"` 라면 파드는
+노드에 그대로 유지되지만, 컨테이너는 다시 실행된다. 따라서 프로그램은 로컬에서 재시작될 때의
+케이스를 다루거나 `.spec.template.spec.restartPolicy = "Never"` 로 지정해야 한다.
+더 자세한 정보는 [파드 라이프사이클](/ko/docs/concepts/workloads/pods/pod-lifecycle/#상태-예제)의 `restartPolicy` 를 본다.
+
+파드가 노드에서 내보내지는 경우(노드 업그레이드, 재부팅, 삭제 등) 또는 파드의 컨테이너가 실패
+되고 `.spec.template.spec.restartPolicy = "Never"` 로 설정됨과 같은 여러 이유로
+전체 파드가 실패할 수 있다. 파드가 실패하면 잡 컨트롤러는
+새 파드를 시작한다. 이 의미는 애플리케이션이 새 파드에서 재시작될 때 이 케이스를 처리해야
+한다는 점이다. 특히, 이전 실행으로 인한 임시파일, 잠금, 불완전한 출력 그리고 이와 유사한
+것들을 처리해야 한다.
+
+`.spec.parallelism = 1`, `.spec.completions = 1` 그리고
+`.spec.template.spec.restartPolicy = "Never"` 를 지정하더라도 같은 프로그램을
+두 번 시작하는 경우가 있다는 점을 참고한다.
+
+`.spec.parallelism` 그리고 `.spec.completions` 를 모두 1보다 크게 지정한다면 한번에
+여러개의 파드가 실행될 수 있다. 따라서 파드는 동시성에 대해서도 관대(tolerant)해야 한다.
+
+### 파드 백오프(backoff) 실패 정책
+
+구성 등의 논리적 오류로 인해 약간의 재시도 이후에
+잡을 실패하게 만들려는 경우가 있다.
+이렇게 하려면 `.spec.backoffLimit` 에 잡을 실패로 간주하기 이전에
+재시도할 횟수를 설정한다. 백오프 제한은 기본적으로 6으로 설정되어 있다. 잡과
+관련된 실패한 파드는 잡 컨트롤러에 의해 재생성되며, 이때 재생성 간격은
+지수적으로 증가하는 백오프 지연(10초, 20초, 40초 ...)을 따르고 최대 6분으로 제한된다.
+잡의 다음 상태 확인 이전에 새로 실패한 파드가 나타나지 않으면
+백오프 카운트는 재설정된다.
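+
+예를 들어 다음은 재시도 횟수를 3회로 제한하는 잡을 가정한 스케치이다.
+`backoffLimit` 값과 이름은 설명을 위한 것일 뿐이다.
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: pi-with-backoff-limit   # 설명을 위해 가정한 이름
+spec:
+  backoffLimit: 3   # 3회 재시도 후 잡을 실패로 간주한다
+  template:
+    spec:
+      containers:
+      - name: pi
+        image: perl
+        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+      restartPolicy: Never
+```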
+
+{{< note >}}
+쿠버네티스 1.12 이전 버전에는 여전히 [#54870](https://github.com/kubernetes/kubernetes/issues/54870) 이슈가 있다.
+{{< /note >}}
+{{< note >}}
+만약 잡에 `restartPolicy = "OnFailure"` 가 있는 경우 잡 백오프 한계에
+도달하면 잡을 실행 중인 컨테이너가 종료된다. 이로 인해 잡 실행 파일의 디버깅이
+더 어려워질 수 있다. 잡을 디버깅하거나 로깅 시스템을 사용하는 경우, 실패한 잡의 출력이
+실수로 손실되지 않도록 `restartPolicy = "Never"` 로 설정하는 것을 권장한다.
+{{< /note >}}
+
+## 잡의 종료와 정리
+
+잡이 완료되면 파드는 더 이상 생성되지 않지만, 삭제되지도 않는다. 이를 유지하면
+완료된 파드의 로그를 계속 보며 에러, 경고 또는 다른 기타 진단 출력을 확인할 수 있다.
+잡 오브젝트는 완료된 후에도 상태를 볼 수 있도록 남아 있다. 상태를 확인한 후 이전 잡을 삭제하는 것은 사용자의 몫이다.
+`kubectl` 로 잡을 삭제할 수 있다 (예: `kubectl delete jobs/pi` 또는 `kubectl delete -f ./job.yaml`). `kubectl` 을 사용해서 잡을 삭제하면 생성된 모든 파드도 함께 삭제된다.
+
+기본적으로 파드가 실패(`restartPolicy=Never`)하거나 컨테이너가 오류(`restartPolicy=OnFailure`)로 종료되지 않는 한, 잡은 중단 없이 실행되며,
+이러한 실패가 발생하면 위에서 설명한 `.spec.backoffLimit` 이 적용된다. `.spec.backoffLimit` 에 도달하면 잡은 실패로 표기되고 실행 중인 모든 파드는 종료된다.
+
+잡을 종료하는 또 다른 방법은 유효 데드라인을 설정하는 것이다.
+잡의 `.spec.activeDeadlineSeconds` 필드를 초 단위로 설정하면 된다.
+`activeDeadlineSeconds` 는 생성된 파드의 수에 관계 없이 잡의 기간에 적용된다.
+잡이 `activeDeadlineSeconds` 에 도달하면, 실행 중인 모든 파드가 종료되고 잡의 상태는 `reason: DeadlineExceeded` 와 함께 `type: Failed` 가 된다.
+
+잡의 `.spec.activeDeadlineSeconds` 는 `.spec.backoffLimit` 보다 우선한다는 점을 참고한다. 따라서 하나 이상 실패한 파드를 재시도하는 잡은 `backoffLimit` 에 도달하지 않은 경우에도 `activeDeadlineSeconds` 에 지정된 시간 제한에 도달하면 추가 파드를 배포하지 않는다.
+
+예시:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi-with-timeout
+spec:
+ backoffLimit: 5
+ activeDeadlineSeconds: 100
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+```
+
+잡의 사양과 잡의 [파드 템플릿 사양](/ko/docs/concepts/workloads/pods/init-containers/#자세한-동작)에는 모두 `activeDeadlineSeconds` 필드가 있다는 점을 참고한다. 이 필드를 적절한 레벨로 설정해야 한다.
+
+`restartPolicy` 는 잡 자체에 적용되는 것이 아니라 파드에 적용된다는 점을 유념한다. 잡의 상태가 `type: Failed` 이 되면, 잡의 자동 재시작은 없다.
+즉, `.spec.activeDeadlineSeconds` 와 `.spec.backoffLimit` 로 활성화된 잡의 종료 메커니즘은 영구적인 잡의 실패를 유발하며 이를 해결하기 위해 수동 개입이 필요하다.
+
+## 완료된 잡을 자동으로 정리
+
+완료된 잡은 일반적으로 시스템에서 더 이상 필요로 하지 않는다. 시스템 내에
+이를 유지한다면 API 서버에 부담이 된다.
+만약 [크론잡](/ko/docs/concepts/workloads/controllers/cron-jobs/)과
+같은 상위 레벨 컨트롤러가 잡을 직접 관리하는 경우,
+지정된 용량 기반 정리 정책에 따라 크론잡이 잡을 정리할 수 있다.
+
+### 완료된 잡을 위한 TTL 메커니즘
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+완료된 잡 (`Complete` 또는 `Failed`)을 자동으로 정리하는 또 다른 방법은
+잡의 `.spec.ttlSecondsAfterFinished` 필드를 지정해서 완료된 리소스에 대해
+[TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)에서
+제공하는 TTL 메커니즘을 사용하는
+것이다.
+
+TTL 컨트롤러가 잡을 정리할 때는 잡을 계단식으로 삭제한다.
+즉, 잡과 함께 파드와 같은 종속 오브젝트도 삭제한다. 잡이
+삭제될 때는 finalizer와 같은 라이프사이클 보증이 적용된다는 점을
+참고한다.
+
+예시:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi-with-ttl
+spec:
+ ttlSecondsAfterFinished: 100
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+```
+
+`pi-with-ttl` 잡은 완료 후 `100` 초 이후에
+자동으로 삭제될 수 있다.
+
+만약 필드를 `0` 으로 설정하면, 잡이 완료된 직후에 자동으로
+삭제되도록 할 수 있다. 만약 필드를 설정하지 않으면, 이 잡이 완료된
+후에 TTL 컨트롤러에 의해 정리되지 않는다.
+
+이 TTL 메커니즘은 기능 게이트 `TTLAfterFinished`와 함께 알파 단계이다. 더
+자세한 정보는 완료된 리소스를 위한
+[TTL 컨트롤러](/ko/docs/concepts/workloads/controllers/ttlafterfinished/)
+문서를 본다.
+
+## 잡 패턴
+
+잡 오브젝트를 사용해서 신뢰할 수 있는 파드의 병렬 실행을 지원할 수 있다. 잡 오브젝트는 과학
+컴퓨팅(scientific computing)에서 일반적으로 사용되는 밀접하게 통신하는 병렬 프로세스를 지원하도록
+설계되지 않았다. 잡 오브젝트는 독립적이지만 관련된 *작업 항목* 집합의 병렬 처리를 지원한다.
+여기에는 전송할 이메일들, 렌더링할 프레임, 코드 변환이 필요한 파일, NoSQL 데이터베이스에서의
+키 범위 스캔 등이 있다.
+
+복잡한 시스템에는 여러 개의 다른 작업 항목 집합이 있을 수 있다. 여기서는 사용자가
+함께 관리하려는 하나의 작업 항목 집합, 즉 *배치 잡* 만을 고려한다.
+
+병렬 계산에는 몇몇 다른 패턴이 있으며 각각의 장단점이 있다.
+트레이드오프는 다음과 같다.
+
+- 각 작업 항목에 대한 하나의 잡 오브젝트 vs 모든 작업 항목에 대한 단일 잡 오브젝트. 후자는
+ 작업 항목 수가 많은 경우 더 적합하다. 전자는 사용자와 시스템이 많은 수의 잡 오브젝트를
+ 관리해야 하는 약간의 오버헤드를 만든다.
+- 작업 항목과 동일한 개수의 파드 생성 vs 각 파드에서 다수의 작업 항목 처리.
+  전자는 일반적으로 기존 코드와 컨테이너를 거의 수정할 필요가 없다. 후자는
+  앞의 항목과 비슷한 이유로 많은 수의 작업 항목에 적합하다.
+- 여러 접근 방식이 작업 큐를 사용한다. 이를 위해서는 큐 서비스를 실행하고,
+ 작업 큐를 사용하도록 기존 프로그램이나 컨테이너를 수정해야 한다.
+ 다른 접근 방식들은 기존에 컨테이너화된 애플리케이션에 보다 쉽게 적용할 수 있다.
+
+
+여기에 트레이드오프가 요약되어 있고, 2열에서 4열까지가 위의 트레이드오프에 해당한다.
+패턴 이름은 예시와 더 자세한 설명을 위한 링크이다.
+
+| 패턴 | 단일 잡 오브젝트 | 작업 항목보다 파드가 적은가? | 수정하지 않은 앱을 사용하는가? | Kube 1.1에서 작동하는가? |
+| -------------------------------------------------------------------- |:-----------------:|:---------------------------:|:-------------------:|:-------------------:|
+| [잡 템플릿 확장](/docs/tasks/job/parallel-processing-expansion/) | | | ✓ | ✓ |
+| [작업 항목 당 파드가 있는 큐](/docs/tasks/job/coarse-parallel-processing-work-queue/) | ✓ | | 때때로 | ✓ |
+| [가변 파드 수를 가진 큐](/docs/tasks/job/fine-parallel-processing-work-queue/) | ✓ | ✓ | | ✓ |
+| 정적 작업이 할당된 단일 잡 | ✓ | | ✓ | |
+
+`.spec.completions` 로 완료를 지정할 때, 잡 컨트롤러에 의해 생성된 각 파드는
+동일한 [`사양`](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status)을 갖는다. 이 의미는
+작업의 모든 파드는 동일한 명령 줄과 동일한 이미지,
+동일한 볼륨, (거의) 동일한 환경 변수를 가진다는 점이다. 위의 패턴들은
+파드가 서로 다른 작업을 수행하도록 구성하는 서로 다른 방법이다.
+
+이 표는 각 패턴에 필요한 `.spec.parallelism` 그리고 `.spec.completions` 설정을 보여준다.
+여기서 `W` 는 작업 항목의 수이다.
+
+| 패턴 | `.spec.completions` | `.spec.parallelism` |
+| -------------------------------------------------------------------- |:-------------------:|:--------------------:|
+| [잡 템플릿 확장](/docs/tasks/job/parallel-processing-expansion/) | 1 | 1이어야 함 |
+| [작업 항목 당 파드가 있는 큐](/docs/tasks/job/coarse-parallel-processing-work-queue/) | W | any |
+| [가변 파드 수를 가진 큐](/docs/tasks/job/fine-parallel-processing-work-queue/) | 1 | any |
+| 정적 작업이 할당된 단일 잡 | W | any |
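+
+예를 들어 *잡 템플릿 확장* 패턴은 다음과 같이 작업 항목마다 잡 매니페스트를 하나씩
+생성하는 방식으로 구현할 수 있다. 아래는 `$ITEM` 자리 표시자를 사용하는 가정한
+템플릿의 스케치이며, 실제 예시는 위 표에 링크된 문서를 참고한다.
+
+```yaml
+# job-tmpl.yaml (가정한 템플릿 파일, $ITEM 을 실제 작업 항목으로 치환해서 사용한다)
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: process-item-$ITEM
+  labels:
+    jobgroup: jobexample
+spec:
+  template:
+    spec:
+      containers:
+      - name: worker
+        image: busybox
+        command: ["sh", "-c", "echo 작업 항목 $ITEM 처리 중 && sleep 5"]
+      restartPolicy: Never
+```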
+
+
+## 고급 사용법
+
+### 자신의 파드 셀렉터를 지정하기
+
+일반적으로 잡 오브젝트를 생성할 때 `.spec.selector` 를 지정하지 않는다.
+시스템의 기본적인 로직은 잡이 생성될 때 이 필드를 추가한다.
+이것은 다른 잡과 겹치지 않는 셀렉터 값을 선택한다.
+
+그러나, 일부 케이스에서는 이 자동화된 설정 셀렉터를 재정의해야 할 수도 있다.
+이를 위해 잡의 `.spec.selector` 를 설정할 수 있다.
+
+이것을 할 때는 매우 주의해야 한다. 만약 해당 잡의 파드에 고유하지
+않고 연관이 없는 파드와도 일치하는 레이블 셀렉터를 지정하면, 연관이 없는 잡의 파드가 삭제되거나,
+해당 잡이 연관이 없는 파드의 완료를 자신의 완료로 집계하거나, 하나 또는
+양쪽 잡 모두 파드 생성이나 실행 완료를 거부할 수도 있다. 만약 고유하지 않은 셀렉터가
+선택된 경우, 다른 컨트롤러(예: 레플리케이션 컨트롤러)와 해당 파드는
+예측할 수 없는 방식으로 작동할 수 있다. 쿠버네티스는 당신이 `.spec.selector` 를 지정할 때
+발생하는 실수를 막을 수 없을 것이다.
+
+다음은 이 기능을 사용하려는 경우의 예시이다.
+
+잡 `old` 가 이미 실행 중이다. 기존 파드가 계속
+실행되기를 원하지만, 잡이 생성한 나머지 파드에는 다른
+파드 템플릿을 사용하고 잡이 새 이름을 갖기를 원한다.
+그러나 관련된 필드들은 업데이트가 불가능하기 때문에 잡을 업데이트할 수 없다.
+따라서 `kubectl delete jobs/old --cascade=false` 를 사용해서
+잡 `old` 를 삭제하지만, _파드를 실행 상태로 둔다_.
+삭제하기 전에 어떤 셀렉터를 사용하는지 기록한다.
+
+```shell
+kubectl get job old -o yaml
+```
+```
+kind: Job
+metadata:
+ name: old
+ ...
+spec:
+ selector:
+ matchLabels:
+ controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+ ...
+```
+
+그런 이후에 이름이 `new` 인 새 잡을 생성하고, 동일한 셀렉터를 명시적으로 지정한다.
+기존 파드에는 `controller-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`
+레이블이 있기에 잡 `new` 에 의해서도 제어된다.
+
+시스템이 일반적으로 자동 생성하는 셀렉터를 사용하지 않도록 하기 위해
+새 잡에서 `manualSelector: true` 를 지정해야 한다.
+
+```
+kind: Job
+metadata:
+ name: new
+ ...
+spec:
+ manualSelector: true
+ selector:
+ matchLabels:
+ controller-uid: a8f3d00d-c6d2-11e5-9f87-42010af00002
+ ...
+```
+
+새 잡 자체는 `a8f3d00d-c6d2-11e5-9f87-42010af00002` 와 다른 uid 를 가지게 될 것이다.
+`manualSelector: true` 를 설정하면 시스템에게 사용자가 무엇을 하는지 알고 있음을 알리고, 이런
+불일치를 허용한다.
+
+## 대안
+
+### 베어(Bare) 파드
+
+파드가 실행 중인 노드가 재부팅되거나 실패하면 파드가 종료되고
+다시 시작되지 않는다. 그러나 잡은 종료된 항목을 대체하기 위해 새 파드를 생성한다.
+따라서, 애플리케이션에 단일 파드만 필요한 경우에도 베어 파드 대신
+잡을 사용하는 것을 권장한다.
+
+### 레플리케이션 컨트롤러
+
+잡은 [레플리케이션 컨트롤러](/ko/docs/concepts/workloads/controllers/replicationcontroller/)를 보완한다.
+레플리케이션 컨트롤러는 종료하지 않을 파드(예: 웹 서버)를 관리하고, 잡은 종료될 것으로
+예상되는 파드(예: 배치 작업)를 관리한다.
+
+[파드 라이프사이클](/ko/docs/concepts/workloads/pods/pod-lifecycle/)에서 설명한 것처럼, `잡` 은 *오직*
+`OnFailure` 또는 `Never` 와 같은 `RestartPolicy` 를 사용하는 파드에만 적절하다.
+(참고: `RestartPolicy` 가 설정되지 않은 경우에는 기본값은 `Always` 이다.)
+
+### 단일 잡으로 컨트롤러 파드 시작
+
+또 다른 패턴은 단일 잡이 파드를 하나 생성하고, 그 파드가 다시 다른 파드들을 생성해서 해당 파드들에 대한
+일종의 사용자 정의 컨트롤러 역할을 하는 것이다. 이를 통해 최대한의 유연성을 얻을 수 있지만,
+시작하기에는 다소 복잡할 수 있으며 쿠버네티스와의 통합성이 낮아진다.
+
+이 패턴의 한 예시는 파드를 시작하는 잡이다. 파드는 스크립트를 실행해서
+스파크(Spark) 마스터 컨트롤러 ([스파크 예시](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/spark/README.md)를 본다)를 시작하고,
+스파크 드라이버를 실행한 다음, 정리한다.
+
+이 접근 방식의 장점은 전체 프로세스가 잡 오브젝트의 완료를 보장하면서도,
+파드 생성과 작업 할당 방법을 완전히 제어할 수 있다는 점이다.
+
+## 크론 잡 {#cron-jobs}
+
+[`크론잡`](/ko/docs/concepts/workloads/controllers/cron-jobs/)을 사용해서 Unix 도구인 `cron`과 유사하게 지정된 시간/일자에 실행되는 잡을 생성할 수 있다.
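+
+다음은 1분마다 잡을 생성하는 크론잡을 가정한 최소한의 스케치이다. 이름과 스케줄은
+설명을 위한 것이며, 자세한 내용은 위에 링크된 크론잡 문서를 참고한다.
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: pi-cron   # 설명을 위해 가정한 이름
+spec:
+  schedule: "*/1 * * * *"   # 1분마다 실행
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          containers:
+          - name: pi
+            image: perl
+            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(1000)"]
+          restartPolicy: OnFailure
+```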
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/workloads/controllers/replicaset.md b/content/ko/docs/concepts/workloads/controllers/replicaset.md
index f5d199c746578..47b74587aa956 100644
--- a/content/ko/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/ko/docs/concepts/workloads/controllers/replicaset.md
@@ -71,53 +71,50 @@ kubectl describe rs/frontend
출력은 다음과 유사할 것이다.
```shell
-Name: frontend
-Namespace: default
-Selector: tier=frontend
-Labels: app=guestbook
- tier=frontend
-Annotations:
-Replicas: 3 current / 3 desired
-Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
+Name: frontend
+Namespace: default
+Selector: tier=frontend
+Labels: app=guestbook
+ tier=frontend
+Annotations: kubectl.kubernetes.io/last-applied-configuration:
+ {"apiVersion":"apps/v1","kind":"ReplicaSet","metadata":{"annotations":{},"labels":{"app":"guestbook","tier":"frontend"},"name":"frontend",...
+Replicas: 3 current / 3 desired
+Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
- Labels: app=guestbook
- tier=frontend
+ Labels: tier=frontend
Containers:
php-redis:
- Image: gcr.io/google_samples/gb-frontend:v3
- Port: 80/TCP
- Requests:
- cpu: 100m
- memory: 100Mi
- Environment:
- GET_HOSTS_FROM: dns
- Mounts:
- Volumes:
+ Image: gcr.io/google_samples/gb-frontend:v3
+ Port:
+ Host Port:
+ Environment:
+ Mounts:
+ Volumes:
Events:
- FirstSeen LastSeen Count From SubobjectPath Type Reason Message
- --------- -------- ----- ---- ------------- -------- ------ -------
- 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-qhloh
- 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-dnjpy
- 1m 1m 1 {replicaset-controller } Normal SuccessfulCreate Created pod: frontend-9si5l
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Normal SuccessfulCreate 117s replicaset-controller Created pod: frontend-wtsmm
+ Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-b2zdv
+ Normal SuccessfulCreate 116s replicaset-controller Created pod: frontend-vcmts
```
마지막으로 파드가 올라왔는지 확인할 수 있다.
```shell
-kubectl get Pods
+kubectl get pods
```
다음과 유사한 파드 정보를 볼 수 있다.
```shell
-NAME READY STATUS RESTARTS AGE
-frontend-9si5l 1/1 Running 0 1m
-frontend-dnjpy 1/1 Running 0 1m
-frontend-qhloh 1/1 Running 0 1m
+NAME READY STATUS RESTARTS AGE
+frontend-b2zdv 1/1 Running 0 6m36s
+frontend-vcmts 1/1 Running 0 6m36s
+frontend-wtsmm 1/1 Running 0 6m36s
```
또한 파드들의 소유자 참조 정보가 해당 프런트엔드 레플리카셋으로 설정되어 있는지 확인할 수 있다.
확인을 위해서는 실행 중인 파드 중 하나의 yaml을 확인한다.
```shell
-kubectl get pods frontend-9si5l -o yaml
+kubectl get pods frontend-b2zdv -o yaml
```
메타데이터의 ownerReferences 필드에 설정되어있는 프런트엔드 레플리카셋의 정보가 다음과 유사하게 나오는 것을 볼 수 있다.
@@ -125,11 +122,11 @@ kubectl get pods frontend-9si5l -o yaml
apiVersion: v1
kind: Pod
metadata:
- creationTimestamp: 2019-01-31T17:20:41Z
+ creationTimestamp: "2020-02-12T07:06:16Z"
generateName: frontend-
labels:
tier: frontend
- name: frontend-9si5l
+ name: frontend-b2zdv
namespace: default
ownerReferences:
- apiVersion: apps/v1
@@ -137,7 +134,7 @@ metadata:
controller: true
kind: ReplicaSet
name: frontend
- uid: 892a2330-257c-11e9-aecd-025000000001
+ uid: f391f6db-bb9b-4c09-ae74-6a1f77f3d5cf
...
```
@@ -166,16 +163,17 @@ kubectl apply -f https://kubernetes.io/examples/pods/pod-rs.yaml
파드를 가져온다.
```shell
-kubectl get Pods
+kubectl get pods
```
결과에는 새로운 파드가 이미 종료되었거나 종료가 진행 중인 것을 보여준다.
```shell
NAME READY STATUS RESTARTS AGE
-frontend-9si5l 1/1 Running 0 1m
-frontend-dnjpy 1/1 Running 0 1m
-frontend-qhloh 1/1 Running 0 1m
-pod2 0/1 Terminating 0 4s
+frontend-b2zdv 1/1 Running 0 10m
+frontend-vcmts 1/1 Running 0 10m
+frontend-wtsmm 1/1 Running 0 10m
+pod1 0/1 Terminating 0 1s
+pod2 0/1 Terminating 0 1s
```
파드를 먼저 생성한다.
@@ -191,15 +189,15 @@ kubectl apply -f https://kubernetes.io/examples/controllers/frontend.yaml
레플리카셋이 해당 파드를 소유한 것을 볼 수 있으며 새 파드 및 기존 파드의 수가
레플리카셋이 필요로 하는 수와 일치할 때까지 사양에 따라 신규 파드만 생성한다. 파드를 가져온다.
```shell
-kubectl get Pods
+kubectl get pods
```
다음 출력에서 볼 수 있다.
```shell
NAME READY STATUS RESTARTS AGE
-frontend-pxj4r 1/1 Running 0 5s
-pod1 1/1 Running 0 13s
-pod2 1/1 Running 0 13s
+frontend-hmmj2 1/1 Running 0 9s
+pod1 1/1 Running 0 36s
+pod2 1/1 Running 0 36s
```
이러한 방식으로 레플리카셋은 템플릿을 사용하지 않는 파드를 소유하게 된다.
diff --git a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md
index ff6cc0284f22d..48a8bdb303db2 100644
--- a/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md
+++ b/content/ko/docs/concepts/workloads/controllers/ttlafterfinished.md
@@ -33,8 +33,8 @@ TTL 컨트롤러는 실행이 완료된 리소스 오브젝트의 수명을
완료된 잡(`완료` 또는 `실패`)을 자동으로 정리하기 위해 이 기능을 사용할 수 있다.
리소스의 작업이 완료된 TTL 초(sec) 후 (다른 말로는, TTL이 만료되었을 때),
TTL 컨트롤러는 해당 리소스가 정리될 수 있다고 가정한다.
-TTL 컨트롤러가 리소스를 정리할때 리소스를 연속적으로 삭제한다. 즉,
-의존하는 오브젝트와 함께 삭제한다. 리소스가 삭제되면 완료자(finalizers)와
+TTL 컨트롤러가 리소스를 정리할때 리소스를 연속적으로 삭제한다. 이는
+의존하는 오브젝트도 해당 리소스와 함께 삭제되는 것을 의미한다. 리소스가 삭제되면 완료자(finalizers)와
같은 라이프 사이클 보증이 적용 된다.
TTL 초(sec)는 언제든지 설정이 가능하다. 여기에 잡 필드 중
diff --git a/content/ko/docs/concepts/workloads/pods/pod-overview.md b/content/ko/docs/concepts/workloads/pods/pod-overview.md
index 55126172feb7a..1902594c330d9 100644
--- a/content/ko/docs/concepts/workloads/pods/pod-overview.md
+++ b/content/ko/docs/concepts/workloads/pods/pod-overview.md
@@ -51,7 +51,7 @@ card:
#### 저장소
-파드는 공유 저장소 집합인 {{< glossary_tooltip text="Volumes" term_id="volume" >}} 을 명시할 수 있다. 파드 내부의 모든 컨테이너는 공유 볼륨에 접근할 수 있고, 그 컨테이너끼리 데이터를 공유하는 것을 허용한다. 또한 볼륨은 컨테이너가 재시작되어야 하는 상황에도 파드 안의 데이터가 영구적으로 유지될 수 있게 한다. 쿠버네티스가 어떻게 파드 안의 공유 저장소를 사용하는지 보려면 [볼륨](/docs/concepts/storage/volumes/)를 참고하길 바란다.
+파드는 공유 저장소 집합인 {{< glossary_tooltip text="Volumes" term_id="volume" >}} 을 명시할 수 있다. 파드 내부의 모든 컨테이너는 공유 볼륨에 접근할 수 있고, 그 컨테이너끼리 데이터를 공유하는 것을 허용한다. 또한 볼륨은 컨테이너가 재시작되어야 하는 상황에도 파드 안의 데이터가 영구적으로 유지될 수 있게 한다. 쿠버네티스가 어떻게 파드 안의 공유 저장소를 사용하는지 보려면 [볼륨](/ko/docs/concepts/storage/volumes/)를 참고하길 바란다.
## 파드 작업
diff --git a/content/ko/docs/concepts/workloads/pods/pod.md b/content/ko/docs/concepts/workloads/pods/pod.md
index 16f2e4149b795..4a1b3645e05d2 100644
--- a/content/ko/docs/concepts/workloads/pods/pod.md
+++ b/content/ko/docs/concepts/workloads/pods/pod.md
@@ -41,7 +41,7 @@ _파드_ 는 (고래 떼(pod of whales)나 콩꼬투리(pea pod)와 마찬가지
공유 볼륨에 엑세스 할 수 있다.
[도커](https://www.docker.com/)의 구조 관점에서 보면
-파드는 공유 네임스페이스와 공유 [볼륨](/docs/concepts/storage/volumes/)을 가진
+파드는 공유 네임스페이스와 공유 [볼륨](/ko/docs/concepts/storage/volumes/)을 가진
도커 컨테이너 그룹으로 모델링 된다.
개별 애플리케이션 컨테이너와 같이, 파드는 상대적으로 수명이 짧은 엔터티로 간주된다.
@@ -189,7 +189,7 @@ API에서 파드를 즉시 제거하므로 동일한 이름으로 새 파드를
Kubernetes v1.1부터, 파드의 모든 컨테이너는 컨테이너 스펙의 `SecurityContext`의 `privileged` 플래그를 사용하여 특권 모드를 사용할 수 있다. 이것은 네트워크 스택을 조작하고 장치에 액세스하는 것과 같은 Linux 기능을 사용하려는 컨테이너에 유용하다. 컨테이너 내의 프로세스는 컨테이너 외부의 프로세스에서 사용할 수 있는 거의 동일한 권한을 갖는다. 특권 모드를 사용하면 네트워크 및 볼륨 플러그인을 kubelet에 컴파일 할 필요가 없는 별도의 파드로 쉽게 만들 수 있다.
마스터가 Kubernetes v1.1 이상에서 실행 중이고, 노드가 v1.1 보다 낮은 버전을 실행중인 경우 새 권한이 부여 된 파드는 api-server에 의해 승인되지만 시작되지는 않는다. 이것들은 pending 상태가 될 것이다.
-사용자가 `kubectl describe pod FooPodName` 을 호출하면 사용자는 파드가 사용자가 `kubectl describe pod FooPodName` 을 호출하면 사용자는 파드가 pending 상태에 있는 이유를 볼 수 있다. describe 명령 출력의 이벤트 테이블은 다음과 같다.
+사용자가 `kubectl describe pod FooPodName` 을 호출하면 사용자는 파드가 pending 상태에 있는 이유를 볼 수 있다. describe 명령 출력의 이벤트 테이블은 다음과 같다.
`Error validating pod "FooPodName"."FooPodNamespace" from api, ignoring: spec.containers[0].securityContext.privileged: forbidden '<*>(0xc2089d3248)true'`
마스터가 v1.1보다 낮은 버전에서 실행중인 경우 특권을 갖는 파드를 만들 수 없다. 유저가 특권을 갖는 컨테이너가 있는 파드를 만들려고 하면 다음과 같은 오류가 발생한다.
diff --git a/content/ko/docs/contribute/participating.md b/content/ko/docs/contribute/participating.md
index 88db56aa340d0..f1abbfc27c9e7 100644
--- a/content/ko/docs/contribute/participating.md
+++ b/content/ko/docs/contribute/participating.md
@@ -24,6 +24,7 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다.
쿠버네티스 커뮤니티 내에서 멤버십이 운영되는 방식에 대한 보다 많은 정보를 확인하려면
[커뮤니티 멤버십](https://github.com/kubernetes/community/blob/master/community-membership.md)
문서를 확인한다.
+
문서의 나머지에서는 대외적으로 쿠버네티스를 가장 잘 드러내는 수단 중 하나인 쿠버네티스 웹사이트와
문서를 관리하는 책임을 가지는 SIG Docs에서,
이런 체계가 작동하는 특유의 방식에 대한 윤곽을 잡아보겠다.
@@ -52,7 +53,8 @@ SIG Docs는 모든 컨트리뷰터의 콘텐츠와 리뷰를 환영한다.
누구나 다음 작업을 할 수 있다.
- 문서를 포함한 쿠버네티스의 모든 부분에 대해 GitHub 이슈 열기.
-- 풀 리퀘스트/ 에 대한 구속력 없는 피드백 제공
+- 풀 리퀘스트에 대한 구속력 없는 피드백 제공
+- 기존 콘텐츠를 현지화하는 데 도움을 주는 것
- [슬랙](http://slack.k8s.io/) 또는 [SIG docs 메일링 리스트](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)에 개선할 아이디어를 제시한다.
- `/lgtm` Prow 명령 ("looks good to me" 의 줄임말)을 사용해서 병합을 위한 풀 리퀘스트의 변경을 추천한다.
{{< note >}}
@@ -120,7 +122,6 @@ GitHub 그룹의 멤버이다. 리뷰어는 문서 풀 리퀘스트를 리뷰하
- 이슈 해결 및 분류
- 풀 리퀘스트 리뷰와 구속력있는 피드백 제공
- 다이어그램, 그래픽 자산과 포함가능한 스크린샷과 비디오를 생성
-- 현지화
- 코드에서 사용자 화면 문자열 편집
- 코드 코멘트 개선
@@ -166,7 +167,7 @@ GitHub 그룹에 당신을 추가하기를 요청한다. `kubernetes-website-adm
승인자는
[@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers)
-GitHub 그룹의 멤버이다. [SIG Docs의 팀과 그룹](#teams-and-groups-within-sig-docs) 문서를 참조한다.
+GitHub 그룹의 멤버이다. [SIG Docs 팀과 자동화](#sig-docs-팀과-자동화) 문서를 참조한다.
승인자는 다음의 작업을 할 수 있다.
@@ -225,7 +226,7 @@ GitHub 그룹에 당신을 추가하기를 요청한다. `kubernetes-website-adm
것으로 기대한다. [일주일 간 PR Wrangler 되기](/docs/contribute/advanced#be-the-pr-wrangler-for-a-week)
문서를 참고한다.
-## SIG Docs chairperson
+## SIG Docs 의장
SIG Docs를 포함한 각 SIG는, 한 명 이상의 SIG 멤버가 의장 역할을 하도록 선정한다. 이들은 SIG Docs와
다른 쿠버네티스 조직 간 연락책(point of contact)이 된다. 이들은 쿠버네티스 프로젝트 전반의 조직과
@@ -297,7 +298,7 @@ PR 소유자에게 조언하는데 활용된다.
- 모든 쿠버네티스 맴버는 코멘트에 `/lgtm` 을 추가해서 `lgtm` 레이블을 추가할 수 있다.
- SIG Docs 승인자들만이 코멘트에 `/approve` 를
추가해서 풀 리퀘스트를 병합할 수 있다. 일부 승인자들은
- [PR Wrangler](#pr-wrangler) EHsms [SIG Docs 의장](#sig-docs-chairperson)과
+ [PR Wrangler](#pr-wrangler) 또는 [SIG Docs 의장](#sig-docs-의장)과
같은 특정 역할도 수행한다.
{{% /capture %}}
diff --git a/content/ko/docs/contribute/style/write-new-topic.md b/content/ko/docs/contribute/style/write-new-topic.md
index d6c21c7bc9774..c08248e7833bc 100644
--- a/content/ko/docs/contribute/style/write-new-topic.md
+++ b/content/ko/docs/contribute/style/write-new-topic.md
@@ -92,7 +92,7 @@ YAML 블록이다. 여기 예시가 있다.
- 이 코드는 `kubectl get deploy mydeployment -o json | jq '.status'`와 같은
명령어의 출력을 보여준다.
- 이 코드는 시도해보기에 적절하지 않다. 예를 들어
- 특정 [FlexVolume](/docs/concepts/storage/volumes#flexvolume) 구현에 따라
+ 특정 [FlexVolume](/ko/docs/concepts/storage/volumes#flexvolume) 구현에 따라
파드를 만들기 위해 YAML 파일을
포함할 수 있다.
- 이 코드의 목적은 더 큰 파일의 일부를 강조하는 것이기 때문에
diff --git a/content/ko/docs/reference/glossary/volume.md b/content/ko/docs/reference/glossary/volume.md
index 4f5c26c88a15d..8e2841e72cfcd 100755
--- a/content/ko/docs/reference/glossary/volume.md
+++ b/content/ko/docs/reference/glossary/volume.md
@@ -2,7 +2,7 @@
title: 볼륨(Volume)
id: volume
date: 2018-04-12
-full_link: /docs/concepts/storage/volumes/
+full_link: /ko/docs/concepts/storage/volumes/
short_description: >
데이터를 포함하고 있는 디렉토리이며, 파드의 컨테이너에서 접근 가능하다.
diff --git a/content/ko/docs/tasks/_index.md b/content/ko/docs/tasks/_index.md
index 4289d0544dbe4..9624c907d30a4 100644
--- a/content/ko/docs/tasks/_index.md
+++ b/content/ko/docs/tasks/_index.md
@@ -57,10 +57,6 @@ content_template: templates/concept
클러스터를 운영하기 위한 일반적인 태스크를 배운다.
-## 페더레이션(federation) 운영하기(administering)
-
-클러스터 페더레이션의 컴포넌트들을 구성한다.
-
## 스테이트풀 애플리케이션 관리하기
스테이트풀 셋의 스케일링, 삭제하기, 디버깅을 포함하는 스테이트풀 애플리케이션 관리를 위한 일반적인 태스크를 수행한다.
diff --git a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
index 0a3778288d355..0ab00f5d1c25b 100644
--- a/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
+++ b/content/ko/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
@@ -86,7 +86,9 @@ kubectl config --kubeconfig=config-demo set-credentials experimenter --username=
```
{{< note >}}
-`kubectl config unset users.`을 실행하여 사용자를 삭제할 수 있다.
+- 사용자를 삭제하려면 `kubectl --kubeconfig=config-demo config unset users.` 를 실행한다.
+- 클러스터를 제거하려면 `kubectl --kubeconfig=config-demo config unset clusters.` 를 실행한다.
+- 컨텍스트를 제거하려면 `kubectl --kubeconfig=config-demo config unset contexts.` 를 실행한다.
{{< /note >}}
컨텍스트 세부사항들을 구성 파일에 추가한다.
diff --git a/content/ko/docs/tutorials/online-training/overview.md b/content/ko/docs/tutorials/online-training/overview.md
index 9a9dfd794b490..44f1c69ddc358 100644
--- a/content/ko/docs/tutorials/online-training/overview.md
+++ b/content/ko/docs/tutorials/online-training/overview.md
@@ -39,8 +39,6 @@ content_template: templates/concept
* [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes)
-* [IBM Cloud: Deploying Microservices with Kubernetes (Coursera)](https://www.coursera.org/learn/deploy-micro-kube-ibm-cloud)
-
* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x)
* [Kubernetes Essentials with Hands-On Labs (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-essentials)
diff --git a/content/ko/docs/tutorials/stateful-application/zookeeper.md b/content/ko/docs/tutorials/stateful-application/zookeeper.md
index 7486a7fe71d4b..dd7a850134351 100644
--- a/content/ko/docs/tutorials/stateful-application/zookeeper.md
+++ b/content/ko/docs/tutorials/stateful-application/zookeeper.md
@@ -19,7 +19,7 @@ weight: 40
- [파드](/docs/user-guide/pods/single-container/)
- [클러스터 DNS](/ko/docs/concepts/services-networking/dns-pod-service/)
- [헤드리스 서비스](/ko/docs/concepts/services-networking/service/#헤드리스-headless-서비스)
-- [퍼시스턴트볼륨](/docs/concepts/storage/volumes/)
+- [퍼시스턴트볼륨](/ko/docs/concepts/storage/volumes/)
- [퍼시스턴트볼륨 프로비저닝](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/persistent-volume-provisioning/)
- [스테이트풀셋](/ko/docs/concepts/workloads/controllers/statefulset/)
- [파드디스룹션버짓](/ko/docs/concepts/workloads/pods/disruptions/#specifying-a-poddisruptionbudget)
diff --git a/content/ko/examples/controllers/job.yaml b/content/ko/examples/controllers/job.yaml
new file mode 100644
index 0000000000000..b448f2eb81daf
--- /dev/null
+++ b/content/ko/examples/controllers/job.yaml
@@ -0,0 +1,14 @@
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: pi
+spec:
+ template:
+ spec:
+ containers:
+ - name: pi
+ image: perl
+ command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+ restartPolicy: Never
+ backoffLimit: 4
+
diff --git a/content/pt/docs/concepts/cluster-administration/kubelet-garbage-collection.md b/content/pt/docs/concepts/cluster-administration/kubelet-garbage-collection.md
new file mode 100644
index 0000000000000..78270eedccfbc
--- /dev/null
+++ b/content/pt/docs/concepts/cluster-administration/kubelet-garbage-collection.md
@@ -0,0 +1,71 @@
+---
+reviewers:
+title: Configurando o Garbage Collection do kubelet
+content_template: templates/concept
+weight: 70
+---
+
+{{% capture overview %}}
+
+O Garbage collection (coleta de lixo) é uma função útil do kubelet que limpa imagens e contêineres não utilizados. O kubelet executará o garbage collection para contêineres a cada minuto e para imagens a cada cinco minutos.
+
+Ferramentas externas de garbage collection não são recomendadas, pois podem potencialmente interromper o comportamento do kubelet ao remover contêineres que deveriam existir.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Coleta de imagens
+
+O Kubernetes gerencia o ciclo de vida de todas as imagens através do imageManager, com a cooperação do cadvisor.
+
+A política para o garbage collection de imagens leva dois fatores em consideração:
+`HighThresholdPercent` e `LowThresholdPercent`. O uso de disco acima do limite superior acionará o garbage collection, que excluirá as imagens usadas menos recentemente até que o uso fique abaixo do limite inferior.
+
+## Coleta de contêineres
+
+A política para o garbage collection de contêineres considera três variáveis definidas pelo usuário. `MinAge` é a idade mínima em que um contêiner pode ser coletado. `MaxPerPodContainer` é o número máximo de contêineres mortos que todo par de pod (UID, container name) pode ter. `MaxContainers` é o número máximo de contêineres mortos totais. Essas variáveis podem ser desabilitadas individualmente, definindo `MinAge` como zero e definindo `MaxPerPodContainer` e `MaxContainers` respectivamente para menor que zero.
+
+O Kubelet atuará em contêineres não identificados, excluídos ou fora dos limites definidos pelos sinalizadores mencionados. Os contêineres mais antigos geralmente serão removidos primeiro. `MaxPerPodContainer` e `MaxContainer` podem potencialmente conflitar entre si em situações em que a retenção do número máximo de contêineres por pod (`MaxPerPodContainer`) estaria fora do intervalo permitido de contêineres globais mortos (`MaxContainers`). O `MaxPerPodContainer` seria ajustado nesta situação: O pior cenário seria fazer o downgrade do `MaxPerPodContainer` para 1 e remover os contêineres mais antigos. Além disso, os contêineres pertencentes a pods que foram excluídos são removidos assim que se tornem mais antigos que `MinAge`.
+
+Os contêineres que não são gerenciados pelo kubelet não estão sujeitos ao garbage collection de contêiner.
+
+## Configurações do usuário
+
+Os usuários podem ajustar os seguintes limites para ajustar o garbage collection da imagem com os seguintes sinalizadores do kubelet:
+
+1. `image-gc-high-threshold`, a porcentagem de uso de disco que aciona o garbage collection da imagem. O padrão é 85%.
+2. `image-gc-low-threshold`, a porcentagem de uso de disco até a qual o garbage collection da imagem tenta liberar espaço. O padrão é 80%. Veja o esboço de configuração logo abaixo.
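+
+Como referência, segue um esboço mínimo e hipotético de como esses limites poderiam ser expressos em um arquivo de configuração do kubelet (`KubeletConfiguration`), assumindo que o kubelet seja iniciado com a flag `--config` apontando para esse arquivo:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+# equivalente a --image-gc-high-threshold
+imageGCHighThresholdPercent: 85
+# equivalente a --image-gc-low-threshold
+imageGCLowThresholdPercent: 80
+```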
+
+Também permitimos que os usuários personalizem a política do garbage collection através dos seguintes sinalizadores do kubelet:
+
+1. `minimum-container-ttl-duration`, idade mínima para um contêiner finalizado antes de ser coletado. O padrão é 0 minuto, o que significa que todo contêiner finalizado será coletado como lixo.
+2. `maximum-dead-containers-per-container`, número máximo de instâncias antigas a serem retidas por contêiner. O padrão é 1.
+3. `maximum-dead-containers`, número máximo de instâncias antigas de contêineres para retenção global. O padrão é -1, o que significa que não há limite global.
+
+Os contêineres podem ser potencialmente coletados como lixo antes que sua utilidade expire. Esses contêineres podem conter logs e outros dados que podem ser úteis para solucionar problemas. Um valor suficientemente grande para `maximum-dead-containers-per-container` é altamente recomendado para permitir que pelo menos 1 contêiner morto seja retido por contêiner esperado. Um valor maior para `maximum-dead-containers` também é recomendado por um motivo semelhante.
+Consulte [esta issue](https://github.com/kubernetes/kubernetes/issues/13287) para obter mais detalhes.
+
+## Descontinuado
+
+Alguns recursos do Garbage Collection neste documento serão substituídos pelo kubelet eviction no futuro.
+
+Incluindo:
+
+| Flag Existente | Nova Flag | Fundamentação |
+| ----------------------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------ |
+| `--image-gc-high-threshold` | `--eviction-hard` ou `--eviction-soft` | os sinais existentes de despejo podem acionar o garbage collection da imagem |
+| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | as recuperações de despejo atingem o mesmo comportamento |
+| `--maximum-dead-containers` | | descontinuado quando os logs antigos forem armazenados fora do contexto do contêiner |
+| `--maximum-dead-containers-per-container` | | descontinuado quando os logs antigos forem armazenados fora do contexto do contêiner |
+| `--minimum-container-ttl-duration` | | descontinuado quando os logs antigos forem armazenados fora do contexto do contêiner |
+| `--low-diskspace-threshold-mb` | `--eviction-hard` ou `--eviction-soft` | O despejo generaliza os limites do disco para outros recursos |
+| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | O despejo generaliza a transição da pressão do disco para outros recursos |
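+
+A título de ilustração, um esboço hipotético de como limites baseados em despejo (eviction) poderiam ser configurados via `KubeletConfiguration`, em vez das flags descontinuadas acima; os valores são apenas exemplos:
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+evictionHard:
+  imagefs.available: "15%"   # despejo quando o sistema de arquivos de imagens estiver quase cheio
+  nodefs.available: "10%"
+evictionMinimumReclaim:
+  imagefs.available: "2Gi"   # quantidade mínima a recuperar ao despejar
+```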
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+Consulte [Configurando a Manipulação de Recursos Insuficientes](/docs/tasks/administer-cluster/out-of-resource/) para mais detalhes.
+
+{{% /capture %}}
diff --git a/content/pt/docs/concepts/cluster-administration/logging.md b/content/pt/docs/concepts/cluster-administration/logging.md
new file mode 100644
index 0000000000000..f605a3e8756ec
--- /dev/null
+++ b/content/pt/docs/concepts/cluster-administration/logging.md
@@ -0,0 +1,206 @@
+---
+reviewers:
+ - piosz
+ - x13n
+title: Arquitetura de Log
+content_template: templates/concept
+weight: 60
+---
+
+{{% capture overview %}}
+
+Os logs de aplicativos e sistemas podem ajudá-lo a entender o que está acontecendo dentro do seu cluster. Os logs são particularmente úteis para depurar problemas e monitorar a atividade do cluster. A maioria das aplicações modernas possui algum tipo de mecanismo de logs; como tal, a maioria dos mecanismos de contêineres também é projetada para suportar algum tipo de log. O método de log mais fácil e abrangente para aplicações em contêiner é gravar nos fluxos de saída e erro padrão.
+
+No entanto, a funcionalidade nativa fornecida por um mecanismo de contêiner ou tempo de execução geralmente não é suficiente para uma solução completa de log. Por exemplo, se um contêiner travar, um pod for despejado ou um nó morrer, geralmente você ainda desejará acessar os logs do aplicativo. Dessa forma, os logs devem ter armazenamento e ciclo de vida separados, independentemente de nós, pods ou contêineres. Este conceito é chamado _cluster-level-logging_. O log no nível de cluster requer um back-end separado para armazenar, analisar e consultar logs. O kubernetes não fornece uma solução de armazenamento nativa para dados de log, mas você pode integrar muitas soluções de log existentes no cluster do Kubernetes.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+As arquiteturas de log no nível de cluster são descritas no pressuposto de que um back-end de log esteja presente dentro ou fora do cluster. Se você não estiver interessado em ter o log no nível do cluster, ainda poderá encontrar a descrição de como os logs são armazenados e manipulados no nó para serem úteis.
+
+## Log básico no Kubernetes
+
+Nesta seção, você pode ver um exemplo de log básico no Kubernetes que gera dados para o fluxo de saída padrão (standard output stream). Esta demonstração usa uma [especificação de pod](/examples/debug/counter-pod.yaml) com um contêiner que grava algum texto na saída padrão uma vez por segundo.
+
+{{< codenew file="debug/counter-pod.yaml" >}}
+
+Para executar este pod, use o seguinte comando:
+
+```shell
+kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
+```
+
+A saída será:
+
+```
+pod/counter created
+```
+
+Para buscar os logs, use o comando `kubectl logs`, da seguinte maneira:
+
+```shell
+kubectl logs counter
+```
+
+A saída será:
+
+```
+0: Mon Jan 1 00:00:00 UTC 2001
+1: Mon Jan 1 00:00:01 UTC 2001
+2: Mon Jan 1 00:00:02 UTC 2001
+...
+```
+
+Você pode usar `kubectl logs` para recuperar logs de uma instanciação anterior de um contêiner com o sinalizador `--previous`, caso o contêiner tenha falhado. Se o seu pod tiver vários contêineres, você deverá especificar quais logs do contêiner você deseja acessar anexando um nome de contêiner ao comando. Veja a [documentação do `kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) para mais destalhes.
+
+## Logs no nível do Nó
+
+![Log no nível do nó](/images/docs/user-guide/logging/logging-node-level.png)
+
+Tudo o que um aplicativo em contêiner grava no `stdout` e `stderr` é tratado e redirecionado para algum lugar por dentro do mecanismo de contêiner. Por exemplo, o mecanismo de contêiner do Docker redireciona esses dois fluxos para [um driver de log](https://docs.docker.com/engine/admin/logging/overview), configurado no Kubernetes para gravar em um arquivo no formato json.
+
+{{< note >}}
+O driver de log json do Docker trata cada linha como uma mensagem separada. Ao usar o driver de log do Docker, não há suporte direto para mensagens de várias linhas. Você precisa lidar com mensagens de várias linhas no nível do agente de log ou superior.
+{{< /note >}}
+
+Por padrão, se um contêiner reiniciar, o kubelet manterá um contêiner terminado com seus logs. Se um pod for despejado do nó, todos os contêineres correspondentes também serão despejados, juntamente com seus logs.
+
+Uma consideração importante no log no nível do nó é implementar a rotação de logs, para que os logs não consumam todo o armazenamento disponível no nó. Atualmente, o Kubernetes não é responsável pela rotação de logs, mas uma ferramenta de deployment deve configurar uma solução para resolver isso.
+Por exemplo, nos clusters do Kubernetes implementados pelo script `kube-up.sh`, existe uma ferramenta [`logrotate`](https://linux.die.net/man/8/logrotate) configurada para executar a cada hora. Você também pode configurar o tempo de execução do contêiner para rotacionar os logs do aplicativo automaticamente, por exemplo, usando o `log-opt` do Docker.
+No script `kube-up.sh`, a última abordagem é usada para a imagem COS no GCP, e a primeira é usada em qualquer outro ambiente. Nos dois casos, por padrão, a rotação é configurada para ocorrer quando o arquivo de log exceder 10MB.
+
+Como exemplo, você pode encontrar informações detalhadas sobre como o `kube-up.sh` define o log da imagem COS no GCP no [script][cosconfigurehelper] correspondente.
+
+Quando você executa [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) como no exemplo de log básico acima, o kubelet no nó lida com a solicitação e lê diretamente do arquivo de log, retornando o conteúdo na resposta.
+
+{{< note >}}
+Atualmente, se algum sistema externo executou a rotação, apenas o conteúdo do arquivo de log mais recente estará disponível através de `kubectl logs`. Por exemplo, se houver um arquivo de 10MB, o `logrotate` executa a rotação e existem dois arquivos, um com 10MB de tamanho e um vazio, o `kubectl logs` retornará uma resposta vazia.
+{{< /note >}}
+
+[cosConfigureHelper]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh
+
+### Logs de componentes do sistema
+
+Existem dois tipos de componentes do sistema: aqueles que são executados em um contêiner e aqueles que não são executados em um contêiner. Por exemplo:
+
+- O scheduler Kubernetes e o kube-proxy são executados em um contêiner.
+- O kubelet e o tempo de execução do contêiner, por exemplo o Docker, não são executados em contêineres.
+
+Nas máquinas com systemd, o kubelet e o tempo de execução do contêiner gravam no journald. Se o systemd não estiver presente, eles gravam em arquivos `.log` no diretório `/var/log`.
+Os componentes do sistema dentro dos contêineres sempre gravam no diretório `/var/log`, ignorando o mecanismo de log padrão. Eles usam a biblioteca de logs [klog][klog]. Você pode encontrar as convenções para a gravidade do log desses componentes nos [documentos de desenvolvimento sobre log](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md).
+
+Da mesma forma que os logs de contêiner, os logs de componentes do sistema no diretório `/var/log` devem ser rotacionados. Nos clusters do Kubernetes criados pelo script `kube-up.sh`, esses logs são configurados para serem rotacionados pela ferramenta `logrotate` diariamente ou quando o tamanho exceder 100MB.
+
+[klog]: https://github.com/kubernetes/klog
+
+## Arquiteturas de log no nível de cluster
+
+Embora o Kubernetes não forneça uma solução nativa para o log em nível de cluster, há várias abordagens comuns que você pode considerar. Aqui estão algumas opções:
+
+- Use um agente de log no nível do nó que seja executado em todos os nós.
+- Inclua um contêiner sidecar dedicado para efetuar logging em um pod de aplicativo.
+- Envie logs diretamente para um back-end de dentro de um aplicativo.
+
+### Usando um agente de log de nó
+
+![Usando um agente de log no nível do nó](/images/docs/user-guide/logging/logging-with-node-agent.png)
+
+Você pode implementar o log em nível de cluster incluindo um _agente de log em nível de nó_ em cada nó. O agente de log é uma ferramenta dedicada que expõe logs ou envia logs para um back-end. Geralmente, o agente de log é um contêiner que tem acesso a um diretório com arquivos de log de todos os contêineres de aplicativos nesse nó.
+
+Como o agente de log deve ser executado em todos os nós, é comum implementá-lo como uma réplica do DaemonSet, um pod de manifesto ou um processo nativo dedicado no nó. No entanto, as duas últimas abordagens são obsoletas e altamente desencorajadas.
+
+O uso de um agente de log no nível do nó é a abordagem mais comum e incentivada para um cluster Kubernetes, porque ele cria apenas um agente por nó e não requer alterações nos aplicativos em execução no nó. No entanto, o log no nível do nó _funciona apenas para a saída padrão dos aplicativos e o erro padrão_.
+
+O Kubernetes não especifica um agente de log, mas dois agentes de log opcionais são fornecidos com a versão Kubernetes: [Stackdriver Logging](/docs/user-guide/logging/stackdriver) para uso com o Google Cloud Platform e [Elasticsearch](/docs/user-guide/logging/elasticsearch). Você pode encontrar mais informações e instruções nos documentos dedicados. Ambos usam [fluentd](http://www.fluentd.org/) com configuração customizada como um agente no nó.
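+
+Apenas como ilustração, segue um esboço hipotético de um DaemonSet que executa um agente de log em cada nó, montando o diretório de logs do nó; o nome e a imagem são apenas exemplos e não correspondem a um manifesto oficial:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: node-logging-agent   # nome hipotético
+  namespace: kube-system
+spec:
+  selector:
+    matchLabels:
+      name: node-logging-agent
+  template:
+    metadata:
+      labels:
+        name: node-logging-agent
+    spec:
+      containers:
+      - name: logging-agent
+        image: fluent/fluentd:v1.9-1   # imagem apenas ilustrativa
+        volumeMounts:
+        - name: varlog
+          mountPath: /var/log
+          readOnly: true
+      volumes:
+      - name: varlog
+        hostPath:
+          path: /var/log
+```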
+
+### Usando um contêiner sidecar com o agente de log
+
+Você pode usar um contêiner sidecar de uma das seguintes maneiras:
+
+- O container sidecar transmite os logs do aplicativo para seu próprio `stdout`.
+- O contêiner do sidecar executa um agente de log, configurado para selecionar logs de um contêiner de aplicativo.
+
+#### Contêiner sidecar de streaming
+
+![Contêiner sidecar de streaming](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)
+
+Fazendo com que seus contêineres de sidecar fluam para seus próprios `stdout` e `stderr`, você pode tirar proveito do kubelet e do agente de log que já executam em cada nó. Os contêineres sidecar lêem logs de um arquivo, socket ou journald. Cada contêiner sidecar individual imprime o log em seu próprio `stdout` ou `stderr` stream.
+
+Essa abordagem permite separar vários fluxos de logs de diferentes partes do seu aplicativo, algumas das quais podem não ter suporte para gravar em `stdout` ou `stderr`. A lógica por trás do redirecionamento de logs é mínima, portanto dificilmente representa uma sobrecarga significativa. Além disso, como `stdout` e `stderr` são manipulados pelo kubelet, você pode usar ferramentas internas como o `kubectl logs`.
+
+Considere o seguinte exemplo. Um pod executa um único contêiner e grava em dois arquivos de log diferentes, usando dois formatos diferentes. Aqui está um arquivo de configuração para o Pod:
+
+{{< codenew file="admin/logging/two-files-counter-pod.yaml" >}}
+
+Seria uma bagunça ter entradas de log de diferentes formatos no mesmo fluxo de logs, mesmo se você conseguisse redirecionar os dois componentes para o fluxo `stdout` do contêiner. Em vez disso, você pode introduzir dois contêineres sidecar. Cada contêiner sidecar pode direcionar um arquivo de log específico de um volume compartilhado e depois redirecionar os logs para seu próprio fluxo `stdout`.
+
+Aqui está um arquivo de configuração para um pod que possui dois contêineres sidecar:
+
+{{< codenew file="admin/logging/two-files-counter-pod-streaming-sidecar.yaml" >}}
+
+Agora, quando você executa este pod, é possível acessar cada fluxo de log separadamente, executando os seguintes comandos:
+
+```shell
+kubectl logs counter count-log-1
+```
+
+```
+0: Mon Jan 1 00:00:00 UTC 2001
+1: Mon Jan 1 00:00:01 UTC 2001
+2: Mon Jan 1 00:00:02 UTC 2001
+...
+```
+
+```shell
+kubectl logs counter count-log-2
+```
+
+```
+Mon Jan 1 00:00:00 UTC 2001 INFO 0
+Mon Jan 1 00:00:01 UTC 2001 INFO 1
+Mon Jan 1 00:00:02 UTC 2001 INFO 2
+...
+```
+
+O agente no nível do nó instalado em seu cluster coleta esses fluxos de logs automaticamente sem nenhuma configuração adicional. Se desejar, você pode configurar o agente para analisar as linhas de log, dependendo do contêiner de origem.
+
+Observe que, apesar do baixo uso da CPU e da memória (ordem de alguns milicores por CPU e ordem de vários megabytes de memória), gravar logs em um arquivo e depois transmiti-los para o `stdout` pode duplicar o uso do disco. Se você tem um aplicativo que grava em um único arquivo, geralmente é melhor definir `/dev/stdout` como destino, em vez de implementar a abordagem de contêiner de transmissão no sidecar.
+
+Os contêineres sidecar também podem ser usados para rotacionar arquivos de log que não podem ser rotacionados pelo próprio aplicativo. Um exemplo dessa abordagem é um pequeno contêiner executando `logrotate` periodicamente.
+No entanto, é recomendável usar o `stdout` e o `stderr` diretamente e deixar as políticas de rotação e retenção no kubelet.
+
+#### Contêiner sidecar com um agente de log
+
+![Contêiner sidecar com um agente de log](/images/docs/user-guide/logging/logging-with-sidecar-agent.png)
+
+Se o agente de log no nível do nó não for flexível o suficiente para sua situação, você poderá criar um contêiner secundário com um agente de log separado que você configurou especificamente para executar com seu aplicativo.
+
+{{< note >}}
+O uso de um agente de log em um contêiner sidecar pode levar a um consumo significativo de recursos. Além disso, você não poderá acessar esses logs usando o comando `kubectl logs`, porque eles não são controlados pelo kubelet.
+{{< /note >}}
+
+Como exemplo, você pode usar o [Stackdriver](/docs/tasks/debug-application-cluster/logging-stackdriver/), que usa fluentd como um agente de log. Aqui estão dois arquivos de configuração que você pode usar para implementar essa abordagem. O primeiro arquivo contém um [ConfigMap](/docs/tasks/configure-pod-container/configure-pod-configmap/) para configurar o fluentd.
+
+{{< codenew file="admin/logging/fluentd-sidecar-config.yaml" >}}
+
+{{< note >}}
+A configuração do fluentd está além do escopo deste artigo. Para obter informações sobre como configurar o fluentd, consulte a [documentação oficial do fluentd](http://docs.fluentd.org/).
+{{< /note >}}
+
+O segundo arquivo descreve um pod que possui um contêiner sidecar executando o fluentd.
+O pod monta um volume onde o fluentd pode coletar seus dados de configuração.
+
+{{< codenew file="admin/logging/two-files-counter-pod-agent-sidecar.yaml" >}}
+
+Depois de algum tempo, você pode encontrar mensagens de log na interface do Stackdriver.
+
+Lembre-se de que este é apenas um exemplo e você pode realmente substituir o fluentd por qualquer agente de log, lendo de qualquer fonte dentro de um contêiner de aplicativo.
+
+### Expondo logs diretamente do aplicativo
+
+![Expondo logs diretamente do aplicativo](/images/docs/user-guide/logging/logging-from-application.png)
+
+Você pode implementar o log no nível do cluster, expondo ou enviando logs diretamente de todos os aplicativos; no entanto, a implementação desse mecanismo de log está fora do escopo do Kubernetes.
+
+{{% /capture %}}
diff --git a/content/pt/examples/admin/logging/fluentd-sidecar-config.yaml b/content/pt/examples/admin/logging/fluentd-sidecar-config.yaml
new file mode 100644
index 0000000000000..eea1849b033fa
--- /dev/null
+++ b/content/pt/examples/admin/logging/fluentd-sidecar-config.yaml
@@ -0,0 +1,25 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: fluentd-config
+data:
+ fluentd.conf: |
+    <source>
+      type tail
+      format none
+      path /var/log/1.log
+      pos_file /var/log/1.log.pos
+      tag count.format1
+    </source>
+
+    <source>
+      type tail
+      format none
+      path /var/log/2.log
+      pos_file /var/log/2.log.pos
+      tag count.format2
+    </source>
+
+    <match **>
+      type google_cloud
+    </match>
diff --git a/content/pt/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml b/content/pt/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml
new file mode 100644
index 0000000000000..b37b616e6f7c7
--- /dev/null
+++ b/content/pt/examples/admin/logging/two-files-counter-pod-agent-sidecar.yaml
@@ -0,0 +1,39 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: counter
+spec:
+ containers:
+ - name: count
+ image: busybox
+ args:
+ - /bin/sh
+ - -c
+ - >
+ i=0;
+ while true;
+ do
+ echo "$i: $(date)" >> /var/log/1.log;
+ echo "$(date) INFO $i" >> /var/log/2.log;
+ i=$((i+1));
+ sleep 1;
+ done
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: count-agent
+ image: k8s.gcr.io/fluentd-gcp:1.30
+ env:
+ - name: FLUENTD_ARGS
+ value: -c /etc/fluentd-config/fluentd.conf
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: config-volume
+ mountPath: /etc/fluentd-config
+ volumes:
+ - name: varlog
+ emptyDir: {}
+ - name: config-volume
+ configMap:
+ name: fluentd-config
diff --git a/content/pt/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml b/content/pt/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml
new file mode 100644
index 0000000000000..87bd198cfdab7
--- /dev/null
+++ b/content/pt/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml
@@ -0,0 +1,38 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: counter
+spec:
+ containers:
+ - name: count
+ image: busybox
+ args:
+ - /bin/sh
+ - -c
+ - >
+ i=0;
+ while true;
+ do
+ echo "$i: $(date)" >> /var/log/1.log;
+ echo "$(date) INFO $i" >> /var/log/2.log;
+ i=$((i+1));
+ sleep 1;
+ done
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: count-log-1
+ image: busybox
+ args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: count-log-2
+ image: busybox
+ args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ volumes:
+ - name: varlog
+ emptyDir: {}
diff --git a/content/pt/examples/admin/logging/two-files-counter-pod.yaml b/content/pt/examples/admin/logging/two-files-counter-pod.yaml
new file mode 100644
index 0000000000000..6ebeb717a1892
--- /dev/null
+++ b/content/pt/examples/admin/logging/two-files-counter-pod.yaml
@@ -0,0 +1,26 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: counter
+spec:
+ containers:
+ - name: count
+ image: busybox
+ args:
+ - /bin/sh
+ - -c
+ - >
+ i=0;
+ while true;
+ do
+ echo "$i: $(date)" >> /var/log/1.log;
+ echo "$(date) INFO $i" >> /var/log/2.log;
+ i=$((i+1));
+ sleep 1;
+ done
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ volumes:
+ - name: varlog
+ emptyDir: {}
diff --git a/content/pt/examples/debug/counter-pod.yaml b/content/pt/examples/debug/counter-pod.yaml
new file mode 100644
index 0000000000000..f997886386258
--- /dev/null
+++ b/content/pt/examples/debug/counter-pod.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: counter
+spec:
+ containers:
+ - name: count
+ image: busybox
+ args: [/bin/sh, -c,
+ 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
diff --git a/content/ru/docs/contribute/style/_index.md b/content/ru/docs/contribute/style/_index.md
new file mode 100644
index 0000000000000..57686140c2beb
--- /dev/null
+++ b/content/ru/docs/contribute/style/_index.md
@@ -0,0 +1,7 @@
+---
+title: Обзор оформления документации
+main_menu: true
+weight: 80
+---
+
+Темы в этом разделе содержат рекомендации по написанию, форматированию и организации контента, а также охватывают настройку Hugo в контексте документации Kubernetes.
diff --git a/content/ru/docs/contribute/style/content-guide.md b/content/ru/docs/contribute/style/content-guide.md
new file mode 100644
index 0000000000000..567e3b6dceeed
--- /dev/null
+++ b/content/ru/docs/contribute/style/content-guide.md
@@ -0,0 +1,101 @@
+---
+title: Руководство по содержанию документации
+linktitle: Руководство по содержанию
+content_template: templates/concept
+weight: 10
+card:
+ name: contribute
+ weight: 20
+ title: Руководство по содержанию документации
+---
+
+{{% capture overview %}}
+
+Эта страница содержит рекомендации по добавлению контента в документацию Kubernetes.
+Если у вас есть вопросы по поводу допустимого контента, обратитесь к каналу #sig-docs в [Slack Kubernetes](http://slack.k8s.io/) и задайте свои вопросы! Действуйте по своему усмотрению и не стесняйтесь вносить изменения в этот документ через пулреквест.
+
+Для получения дополнительной информации о создании нового контента для документации Kubernetes следуйте инструкциям в [руководстве по оформлению](/ru/docs/contribute/style/style-guide).
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Участие в контенте
+
+Документация Kubernetes включает содержимое из оригинального репозитория [kubernetes/website](https://github.com/kubernetes/website). Документация Kubernetes находится в директории `kubernetes/website/content//docs`, большая часть которой относится к [проекту Kubernetes](https://github.com/kubernetes/kubernetes). Документация Kubernetes может также включать содержимое из проектов в GitHub-организациях [kubernetes](https://github.com/kubernetes) и [kubernetes-sigs](https://github.com/kubernetes-sigs), если у этих проектов нет собственной документации. Всегда можно ссылаться на действующие проекты kubernetes, kubernetes-sigs и {{< glossary_tooltip text="CNCF" term_id="cncf" >}} в документации Kubernetes, но перелинковка с продуктами определённого разработчика не допускается. Проверьте списки проектов CNCF ([Graduated/Incubating](https://www.cncf.io/projects/), [Sandbox](https://www.cncf.io/sandbox-projects/), [Archived](https://www.cncf.io/archived-projects/)), если вы не уверены в статусе проекта CNCF.
+
+### Контент, полученный из двух источников
+
+Документация Kubernetes не содержит дублированный контент, полученный из разных мест (так называемый **контент из двух источников**). Контент из двух источников требует дублирования работы со стороны мейнтейнеров проекта и к тому же быстро теряет актуальность.
+Перед добавлением контента, задайте себе вопрос:
+
+- Новая информация относится к действующему проекту CNCF ИЛИ проекту в организациях на GitHub kubernetes или kubernetes-sigs?
+ - Если да, то:
+ - У этого проекта есть собственная документация?
+ - если да, то укажите ссылку на документацию проекта в документации Kubernetes
+ - если нет, добавьте информацию в репозиторий проекта (если это возможно), а затем укажите ссылку на неё в документации Kubernetes
+ - Если нет, то:
+ - Остановитесь!
+ - Добавление информации по продуктам от других разработчиков не допускается
+ - Не разрешено ссылаться на документацию и сайты сторонних разработчиков.
+
+### Разрешенная и запрещённая информация
+
+Есть несколько условий, когда в документации Kubernetes может быть информация, относящиеся не к проектам Kubernetes.
+Ниже перечислены основные категории контента проектов, не относящихся к Kubernetes, а также приведены рекомендации о том, что разрешено, а что нет:
+
+1. Инструкции по установке или эксплуатации Kubernetes, которые не связаны с проектами Kubernetes
+ - Разрешено:
+ - Ссылаться на документацию на CNCF-проекта или на проект в GitHub-организациях kubernetes или kubernetes-sigs
+ - Пример: для установки Kubernetes в процессе обучения нужно обязательно установить и настроить minikube, а также сослаться на соответствующую документацию minikube
+ - Добавление инструкций для проектов в организации kubernetes или kubernetes-sigs, если по ним нет инструкций
+      - Пример: добавление инструкций по установке и решению неполадок [kubeadm](https://github.com/kubernetes/kubeadm)
+ - Запрещено:
+     - Добавление информации, которая повторяет документацию из другого репозитория
+ - Примеры:
+ - Добавление инструкций по установке и настройке minikube; Minikube имеет собственную [документацию](https://minikube.sigs.k8s.io/docs/), которая включают эти инструкции
+ - Добавление инструкций по установке Docker, CRI-O, containerd и других окружений для выполнения контейнеров в разных операционных системах
+ - Добавление инструкций по установке Kubernetes в промышленных окружениях, используя разные проекты:
+       - Kubernetes Rebar Integrated Bootstrap (KRIB) — это проект стороннего разработчика, поэтому всё содержимое находится в репозитории разработчика.
+ - У проекта [Kubernetes Operations (kops)](https://github.com/kubernetes/kops) есть инструкции по установке и руководства в GitHub-репозитории.
+ - У проекта [Kubespray](https://kubespray.io) есть собственная документация
+     - Добавление руководства, в котором объясняется, как выполнить задачу с использованием продукта определенного разработчика или проекта с открытым исходным кодом, не являющегося CNCF-проектом или проектом в GitHub-организациях kubernetes или kubernetes-sigs.
+     - Добавление руководства по использованию CNCF-проекта или проекта в GitHub-организациях kubernetes или kubernetes-sigs, если у проекта есть собственная документация
+1. Подробное описание технических аспектов по использованию стороннего проекта (не Kubernetes) или как этот проект разработан
+
+ Добавление такого типа информации в документацию Kubernetes не допускается.
+1. Информация о стороннем проекте
+ - Разрешено:
+ - Добавление краткого введения о CNCF-проекте или проекте в GitHub-организациях kubernetes или kubernetes-sigs; этот абзац может содержать ссылки на проект
+ - Запрещено:
+ - Добавление информации по продукту определённого разработчика
+      - Добавление информации по проекту с открытым исходным кодом, который не является CNCF-проектом или проектом в GitHub-организациях kubernetes или kubernetes-sigs
+      - Добавление информации, дублирующей документацию другого проекта, независимо от того, в каком репозитории находится оригинал
+ - Пример: добавление документации для проекта [Kubernetes in Docker (KinD)](https://kind.sigs.k8s.io) в документацию Kubernetes
+1. Только ссылки на сторонний проект
+ - Разрешено:
+ - Ссылаться на проекты в GitHub-организациях kubernetes и kubernetes-sigs
+ - Пример: добавление ссылок на [документацию](https://kind.sigs.k8s.io/docs/user/quick-start) проекта Kubernetes in Docker (KinD), который находится в GitHub-организации kubernetes-sigs
+ - Добавление ссылок на действующие CNCF-проекты
+ - Пример: добавление ссылок на [документацию](https://prometheus.io/docs/introduction/overview/) проекта Prometheus; Prometheus — это действующий проект CNCF
+ - Запрещено:
+ - Ссылаться на продукты стороннего разработчика
+ - Ссылаться на архивированные проекты CNCF
+      - Ссылаться на недействующие проекты в GitHub-организациях kubernetes и kubernetes-sigs
+ - Ссылаться на проекты с открытым исходным кодом, которые не являются проектами CNCF или не находятся в организациях GitHub kubernetes или kubernetes-sigs.
+1. Содержание учебных курсов
+ - Разрешено:
+      - Ссылаться на независимые от разработчиков учебные курсы Kubernetes, предлагаемые [CNCF](https://www.cncf.io/), [Linux Foundation](https://www.linuxfoundation.org/) и [Linux Academy](https://linuxacademy.com/) (партнер Linux Foundation)
+        - Пример: добавление ссылок на курсы Linux Academy, такие как [Kubernetes Quick Start](https://linuxacademy.com/course/kubernetes-quick-start/) и [Kubernetes Security](https://linuxacademy.com/course/kubernetes-security/)
+ - Запрещено:
+      - Ссылаться на учебные онлайн-курсы вне CNCF, Linux Foundation или Linux Academy; документация Kubernetes не содержит ссылок на сторонний контент
+ - Пример: добавление ссылок на учебные руководства или курсы Kubernetes на Medium, KodeKloud, Udacity, Coursera, learnk8s и т.д.
+ - Ссылаться на руководства определённых разработчиков вне зависимости от обучающей организации
+        - Пример: добавление ссылок на такие курсы Linux Academy, как [Google Kubernetes Engine Deep Dive](https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive) и [Amazon EKS Deep Dive](https://linuxacademy.com/course/amazon-eks-deep-dive/)
+
+Если у вас есть вопросы по поводу допустимого контента, присоединяйтесь к каналу #sig-docs в [Slack Kubernetes](http://slack.k8s.io/)!
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Прочитайте [руководство по оформлению](/ru/docs/contribute/style/style-guide).
+{{% /capture %}}
diff --git a/content/ru/docs/contribute/style/content-organization.md b/content/ru/docs/contribute/style/content-organization.md
new file mode 100644
index 0000000000000..5c2e00dba68d7
--- /dev/null
+++ b/content/ru/docs/contribute/style/content-organization.md
@@ -0,0 +1,131 @@
+---
+title: Организация контента
+content_template: templates/concept
+weight: 40
+---
+
+
+{{% capture overview %}}
+
+Этот сайт использует Hugo. В Hugo [организация контента](https://gohugo.io/content-management/organization/) — основная концепция.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+{{% note %}}
+**Подсказка:** при редактировании контента используйте команду `hugo server --navigateToChanged`, чтобы запустить Hugo.
+{{% /note %}}
+
+## Списки страниц
+
+### Порядок страницы
+
+Меню в сайдбаре и каталог страниц документации используют стандартный порядок перечисления Hugo, который сортирует элементы по весу (от 1), дате (начиная с самых новых) и затем по заголовку ссылки.
+
+Таким образом, если вам нужно поднять страницу или раздел, определите её вес в фронтальной части:
+
+```yaml
+title: Моя страница
+weight: 10
+```
+
+{{% note %}}
+Для значений веса страниц лучше не использовать привычный порядок 1, 2, 3..., а предпочесть другой интервал, например, 10, 20, 30... В будущем это позволит вставить последующие страницы в желаемую позицию.
+{{% /note %}}
+
+### Главное меню документации
+
+Главное меню `Документация` состоит из разделов по пути `docs/` с установленным флагом `main_menu` в фронтальной части файла раздела `_index.md`:
+
+```yaml
+main_menu: true
+```
+
+Обратите внимание, что текст ссылки берётся из переменной `linkTitle`, поэтому, если вы хотите, чтобы он отличался от заголовка страницы, измените его в файле:
+
+```yaml
+main_menu: true
+title: Название страницы
+linkTitle: Название, которое будет использоваться в ссылках
+```
+
+{{% note %}}
+Перечисленные выше переменные должны быть определены для каждого перевода. Если вы не видите созданный вами раздел в меню, скорее всего, это может быть связано с тем, что Hugo не определил его как раздел. Создайте файл `_index.md` в директории раздела.
+{{% /note %}}
+
+### Документация в боковом меню
+
+Меню сайдбара в документации собирается из _текущего дерева разделов_ по пути `docs/`.
+
+Оно отобразит все разделы и их страницы.
+
+Если вы хотите, чтобы раздел или страница не отображались в меню, установите для флага `toc_hide` значение `true` в фронтальной части файла:
+
+```yaml
+toc_hide: true
+```
+
+При переходе к непустому разделу будет отображаться указанный раздел или страница (например, `_index.md`). В противном случае выводится первая страница в этом разделе.
+
+### Каталог документации
+
+Каталог страниц на главной странице документации сгенерирован с учётом всех разделов и страниц документации.
+
+Если вы хотите скрыть раздел или страницу, установите для флага `toc_hide` значение `true` в фронтальной части файла:
+
+```yaml
+toc_hide: true
+```
+
+### Главное меню
+
+Ссылки сайта в верхнем правом меню, а также в футере, создаются посредством сканирования страниц. Этот процесс гарантирует, что страница действительно существует на сайте. Поэтому, если раздела `case-studies` на сайте (или в переводе) не существует, ссылка не появится.
+
+## Пакеты страниц
+
+В дополнение к отдельным страницам с контентом (Markdown-файлам), Hugo поддерживает [пакеты страниц (page bundles)](https://gohugo.io/content-management/page-bundles/).
+
+К примеру, [пользовательские макрокоды Hugo](/docs/contribute/style/hugo-shortcodes/) — узел пакета (`leaf bundle`). Все, что находится в директории, включая `index.md`, будет частью пакета. Сюда также относятся относительные ссылки на страницы, изображения, которые могут быть обработаны и т.д.:
+
+```bash
+en/docs/home/contribute/includes
+├── example1.md
+├── example2.md
+├── index.md
+└── podtemplate.json
+```
+
+Другой распространённый пример — это пакет `includes`. Он устанавливает переменную `headless: true`, которая означает, что файл не будет доступен по собственному URL-адресу. Вместо этого он будет использоваться на других страницах как вставляемый файл (минимальный набросок фронтальной части приведён после листинга ниже).
+
+```bash
+en/includes
+├── default-storage-class-prereqs.md
+├── federated-task-tutorial-prereqs.md
+├── index.md
+├── partner-script.js
+├── partner-style.css
+├── task-tutorial-prereqs.md
+├── user-guide-content-moved.md
+└── user-guide-migration-notice.md
+```
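+
+Если предположить, что такой headless-пакет оформлен стандартным для Hugo способом, его файл `index.md` содержит во фронтальной части минимум следующее (условный набросок, приведён только для иллюстрации):
+
+```yaml
+---
+# Страницы пакета не публикуются по собственным URL-адресам,
+# а используются только как вставляемые ресурсы.
+headless: true
+---
+```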
+
+Необходимо отметить следующие особенности файлов в пакетах:
+
+* Для переведенных пакетов любые отсутствующие файлы будут унаследованы от файлов на оригинальном (английском) языке. Это позволяет избежать дублирования.
+* Все файлы в пакете в Hugo называются ресурсами (`Resources`). Для каждого из них можно определить зависимые от языка метаданные, например параметры и заголовок, даже если сам формат файла не поддерживает фронтальную часть (YAML-файлы и т.д.) — см. набросок после этого списка. Смотрите [Метаданные ресурсов страницы](https://gohugo.io/content-management/page-resources/#page-resources-metadata) для получения дополнительной информации.
+* Значение, которое вы получаете через `.RelPermalink` в `Resource` будет отличаться в зависимости от страницы. Смотрите [Постоянные ссылки](https://gohugo.io/content-management/urls/#permalinks) для получения дополнительной информации.
+
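+Ниже приведён условный набросок того, как такие метаданные ресурсов можно задать во фронтальной части файла `index.md` пакета (заголовки и описание выбраны для примера и не взяты из репозитория):
+
+```yaml
+---
+title: Пользовательские макрокоды Hugo
+resources:
+- src: "example1.md"
+  title: "Пример 1"
+- src: "podtemplate.json"
+  title: "Шаблон Pod"
+  params:
+    description: "Файл с примером PodTemplate"
+---
+```
+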
+## Стилизация
+
+Исходные файлы стилей в формате [SASS](https://sass-lang.com/) находятся в директории `assets/sass` и автоматически собираются Hugo.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Подробнее про [пользовательские макрокоды Hugo](/ru/docs/contribute/style/hugo-shortcodes/)
+* Подробнее про [оформление документации](/ru/docs/contribute/style/style-guide)
+* Подробнее про [содержание документации](/ru/docs/contribute/style/content-guide)
+
+{{% /capture %}}
diff --git a/content/ru/docs/contribute/style/hugo-shortcodes/example1.md b/content/ru/docs/contribute/style/hugo-shortcodes/example1.md
new file mode 100644
index 0000000000000..30ffcf0235fce
--- /dev/null
+++ b/content/ru/docs/contribute/style/hugo-shortcodes/example1.md
@@ -0,0 +1,9 @@
+---
+title: "Пример #1"
+---
+
+Это **пример** содержимого в файле внутри пакета узла **includes**.
+
+{{< note >}}
+Подключаемые файлы также могут содержать макрокоды.
+{{< /note >}}
\ No newline at end of file
diff --git a/content/ru/docs/contribute/style/hugo-shortcodes/example2.md b/content/ru/docs/contribute/style/hugo-shortcodes/example2.md
new file mode 100644
index 0000000000000..68a9730617bed
--- /dev/null
+++ b/content/ru/docs/contribute/style/hugo-shortcodes/example2.md
@@ -0,0 +1,7 @@
+---
+title: "Пример #2"
+---
+
+Это другой **пример** содержимого в файле внутри пакета узла **includes**.
+
+
diff --git a/content/ru/docs/contribute/style/hugo-shortcodes/index.md b/content/ru/docs/contribute/style/hugo-shortcodes/index.md
new file mode 100644
index 0000000000000..b77cc77ec1b45
--- /dev/null
+++ b/content/ru/docs/contribute/style/hugo-shortcodes/index.md
@@ -0,0 +1,245 @@
+---
+approvers:
+- chenopis
+title: Пользовательские макрокоды Hugo
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+На этой странице объясняются пользовательские макрокоды Hugo, которые можно использовать в Markdown-файлах документации Kubernetes.
+
+Узнать подробнее про макрокоды можно в [документации Hugo](https://gohugo.io/content-management/shortcodes).
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Состояние функциональности
+
+На странице в формате Markdown (в файле с расширением `.md`) вы можете добавить макрокод, чтобы отобразить версию и состояние документированной функциональной возможности.
+
+### Демонстрация состояния функциональности
+
+Ниже показан фрагмент кода для вывода состояния функциональности, который сообщает о функциональности в стабильной версии Kubernetes 1.10.
+
+```
+{{</* feature-state for_k8s_version="v1.10" state="stable" */>}}
+```
+
+Результат:
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+Допустимые значения для `state`:
+
+* alpha
+* beta
+* deprecated
+* stable
+
+### Код состояния функциональности
+
+По умолчанию отображается версия Kubernetes, соответствующая версии страницы или сайта. Это значение можно переопределить, передав параметр макрокода `for_k8s_version`.
+
+```
+{{</* feature-state for_k8s_version="v1.10" state="stable" */>}}
+```
+
+Результат:
+
+{{< feature-state for_k8s_version="v1.10" state="stable" >}}
+
+#### Функциональность в альфа-версии
+
+```
+{{* feature-state state="alpha" */>}}
+```
+
+Результат:
+
+{{< feature-state state="alpha" >}}
+
+#### Функциональность в бета-версии
+
+```
+{{* feature-state state="beta" */>}}
+```
+
+Результат:
+
+{{< feature-state state="beta" >}}
+
+#### Функциональность в стабильной версии
+
+```
+{{* feature-state state="stable" */>}}
+```
+
+Результат:
+
+{{< feature-state state="stable" >}}
+
+#### Устаревшая функциональность
+
+```
+{{* feature-state state="deprecated" */>}}
+```
+
+Результат:
+
+{{< feature-state state="deprecated" >}}
+
+## Глоссарий
+
+Вы можете сослаться на термины из [глоссария](/docs/reference/glossary/) в виде всплывающей (при наведении мыши) подсказки, что удобно при чтении документации через интернет.
+
+Исходные файлы терминов глоссария хранятся в отдельной директории по URL-адресу [https://github.com/kubernetes/website/tree/master/content/en/docs/reference/glossary](https://github.com/kubernetes/website/tree/master/content/en/docs/reference/glossary).
+
+### Демонстрация глоссария
+
+Например, следующий фрагмент кода в Markdown будет отображен в виде всплывающей подсказки — {{< glossary_tooltip text="cluster" term_id="cluster" >}}:
+
+```liquid
+{{</* glossary_tooltip text="cluster" term_id="cluster" */>}}
+```
+
+## Заголовки таблиц
+
+Чтобы таблицы были доступнее для программ чтения с экрана, добавляйте к ним заголовок. Чтобы добавить [заголовок](https://www.w3schools.com/tags/tag_caption.asp) таблицы, поместите таблицу в макрокод `table` и определите значение заголовка в параметре `caption`.
+
+{{< note >}}
+Заголовки таблиц предназначены только для программ чтения с экрана, поэтому в браузере они не будут отображаться.
+{{< /note >}}
+
+Пример:
+
+```go-html-template
+{{</* table caption="Конфигурационные параметры" */>}}
+Параметр | Описание | Значение по умолчанию
+:---------|:------------|:-------
+`timeout` | Тайм-аут для запросов | `30s`
+`logLevel` | Уровень логирования | `INFO`
+{{</* /table */>}}
+```
+
+Результат:
+
+{{< table caption="Конфигурационные параметры" >}}
+Параметр | Описание | Значение по умолчанию
+:---------|:------------|:-------
+`timeout` | Тайм-аут для запросов | `30s`
+`logLevel` | Уровень логирования | `INFO`
+{{< /table >}}
+
+Если вы изучите HTML-код таблицы, вы заметите следующий элемент сразу после открывающего элемента `<table>`:
+
+```html
+<caption style="display: none;">Конфигурационные параметры</caption>
+```
+
+## Вкладки
+
+Страница в формате Markdown (файл с расширением `.md`) на этом сайте может содержать набор вкладок для отображения нескольких разновидностей определённого решения.
+
+Макрокод `tabs` принимает следующие параметры:
+
+* `name`: имя, отображаемое на вкладке.
+* `codelang`: если вы указываете встроенный контент для макрокода `tab`, вы можете сообщить Hugo, какой язык использовать для подсветки синтаксиса.
+* `include`: файл, включаемый во вкладку. Если вкладка находится в [узле пакета (leaf bundle)](https://gohugo.io/content-management/page-bundles/#leaf-bundles) Hugo, то файл (это может быть любой MIME-тип, который поддерживает Hugo) ищется в самом пакете. Если нет, то включаемое содержимое ищется относительно текущей страницы. Обратите внимание, что при использовании `include` вам следует использовать самозакрывающийся синтаксис. Например, {{</* tab name="Content File #1" include="example1" /*/>}}. Язык может быть указан в `codelang`, в противном случае язык определяется из имени файла.
+* Если содержимое вкладки это Markdown, вам нужно использовать символ `%`. Например, `{{%/* tab name="Вкладка 1" %}}This is **markdown**{{% /tab */%}}`
+* Вы можете совместно использовать перечисленные выше параметры.
+
+Ниже приведена демонстрация макрокода вкладок.
+
+{{< note >}}
+**Имя** вкладки в элементе `tabs` должно быть уникальным на странице.
+{{< /note >}}
+
+### Демонстрация вкладок: подсветка синтаксиса в блоках кода
+
+```go-text-template
+{{* tabs name="tab_with_code" >}}
+{{{< tab name="Вкладка 1" codelang="bash" >}}
+echo "Это вкладка 1."
+{{< /tab >}}
+{{< tab name="Вкладка 2" codelang="go" >}}
+println "Это вкладка 2."
+{{< /tab >}}}
+{{< /tabs */>}}
+```
+
+Результат:
+
+{{< tabs name="tab_with_code" >}}
+{{< tab name="Вкладка 1" codelang="bash" >}}
+echo "Это вкладка 1."
+{{< /tab >}}
+{{< tab name="Вкладка 2" codelang="go" >}}
+println "Это вкладка 2."
+{{< /tab >}}
+{{< /tabs >}}
+
+### Демонстрация вкладок: встроенный Markdown и HTML
+
+```go-html-template
+{{* tabs name="tab_with_md" >}}
+{{% tab name="Markdown" %}}
+Это **разметка Markdown.**
+{{< note >}}
+Также можно использовать макрокоды.
+{{< /note >}}
+{{% /tab %}}
+{{< tab name="HTML" >}}
+
+{{< /tab >}}
+{{< /tabs >}}
+
+### Демонстрация вкладок: включение файлов
+
+```go-text-template
+{{* tabs name="tab_with_file_include" >}}
+{{< tab name="Content File #1" include="example1" />}}
+{{< tab name="Content File #2" include="example2" />}}
+{{< tab name="JSON File" include="podtemplate" />}}
+{{< /tabs */>}}
+```
+
+Результат:
+
+{{< tabs name="tab_with_file_include" >}}
+{{< tab name="Content File #1" include="example1" />}}
+{{< tab name="Content File #2" include="example2" />}}
+{{< tab name="JSON File" include="podtemplate" />}}
+{{< /tabs >}}
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Подробнее про [Hugo](https://gohugo.io/).
+* Подробнее про [написание новой темы](/ru/docs/contribute/style/write-new-topic/).
+* Подробнее про [использование шаблонов страниц](/ru/docs/contribute/style/page-templates/).
+* Подробнее про [создание пулреквеста](/ru/docs/contribute/start/#отправка-пулреквеста).
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/ru/docs/contribute/style/hugo-shortcodes/podtemplate.json b/content/ru/docs/contribute/style/hugo-shortcodes/podtemplate.json
new file mode 100644
index 0000000000000..bd4327414a10a
--- /dev/null
+++ b/content/ru/docs/contribute/style/hugo-shortcodes/podtemplate.json
@@ -0,0 +1,22 @@
+ {
+ "apiVersion": "v1",
+ "kind": "PodTemplate",
+ "metadata": {
+ "name": "nginx"
+ },
+ "template": {
+ "metadata": {
+ "labels": {
+ "name": "nginx"
+ },
+ "generateName": "nginx-"
+ },
+ "spec": {
+ "containers": [{
+ "name": "nginx",
+ "image": "dockerfile/nginx",
+ "ports": [{"containerPort": 80}]
+ }]
+ }
+ }
+ }
diff --git a/content/ru/docs/contribute/style/page-templates.md b/content/ru/docs/contribute/style/page-templates.md
new file mode 100644
index 0000000000000..f49307b414102
--- /dev/null
+++ b/content/ru/docs/contribute/style/page-templates.md
@@ -0,0 +1,186 @@
+---
+title: Использование шаблонов страниц
+content_template: templates/concept
+weight: 30
+card:
+ name: contribute
+ weight: 30
+---
+
+{{% capture overview %}}
+При добавлении новых тем воспользуйтесь одним из перечисленных ниже шаблонов.
+Это обеспечивает единообразное восприятие страниц определённого типа.
+
+Шаблоны страниц находятся в директории [`layouts/partials/templates`](https://git.k8s.io/website/layouts/partials/templates) репозитория [`kubernetes/website`](https://github.com/kubernetes/website).
+
+{{< note >}}
+Каждая новая тема должна использовать шаблон. Если вы не уверены, какой шаблон использовать для новой темы, начните с [шаблона концепции](#шаблон-концепции).
+{{< /note >}}
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Шаблон концепции
+
+Страница концепции объясняет некоторые аспекты Kubernetes. Например, страница концепции может описывать объект Deployment в Kubernetes и разъяснять, какую роль он играет при развёртывании, масштабировании и обновлении приложения. Как правило, страницы концепций не включают последовательности шагов, и вместо этого содержат ссылки на задачи или руководства.
+
+Для написания новой страницы концепции в директории `/content/en/docs/concepts` создайте поддиректорию с Markdown-файлом, отвечающим следующим требованиям:
+
+- Во фронтальной части YAML этой страницы определите поле `content_template: templates/concept`.
+- В теле страницы укажите переменные `capture` и любые другие, которые вы хотите включить:
+
+ | Переменная | Обязательна? |
+ |------------|--------------|
+ | overview | да |
+ | body | да |
+ | whatsnext | нет |
+
+
+ Тело страницы будет выглядеть следующим образом (удалите все необязательные capture-блоки, если они вам не понадобятся):
+
+ ```
+ {{%/* capture overview */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture body */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture whatsnext */%}}
+
+ {{%/* /capture */%}}
+ ```
+
+- Заполните каждый раздел информацией. Следуйте этим рекомендациям:
+ - Структурируйте контент с помощью заголовков H2 и H3.
+ - В блоке `overview` одним абзацем сформируйте контекст темы.
+ - В блоке `body` объясните суть концепции.
+ - В блоке `whatsnext` сформируйте ненумерованный список тем (до 5), к которым нужно обратиться, чтобы получить дополнительную информацию о концепции.
+
+[Annotations](/docs/concepts/overview/working-with-objects/annotations/) — это готовый пример шаблона концепции. Кстати, текущая страница использует шаблон концепции.
+
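+Для наглядности ниже показан условный пример фронтальной части страницы концепции (значения `title` и `weight` выбраны произвольно):
+
+```yaml
+---
+title: Моя концепция
+content_template: templates/concept
+weight: 20
+---
+```
+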
+## Шаблон задачи
+
+На странице задачи показывается, как сделать что-то одно конкретное, главным образом с помощью короткой последовательности шагов. Страницы задач содержат минимум объяснений, но часто ссылаются на концептуальные темы, где уже можно найти соответствующую справочную информацию и ресурсы.
+
+Для написания новой страницы задачи в директории `/content/en/docs/tasks` создайте поддиректорию с Markdown-файлом, отвечающим следующим требованиям:
+
+- Во фронтальной части YAML этой страницы определите поле `content_template: templates/task`.
+- В теле страницы укажите переменные `capture` и любые другие, которые вы хотите включить:
+
+ | Переменная | Обязательна? |
+ |------------|--------------|
+ | overview | да |
+ | prerequisites | да |
+ | steps | нет |
+ | discussion | нет |
+ | whatsnext | нет |
+
+
+ Тело страницы будет выглядеть следующим образом (удалите все необязательные capture-блоки, если они вам не нужны):
+
+ ```
+ {{%/* capture overview */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture prerequisites */%}}
+
+  {{</* include "task-tutorial-prereqs.md" */>}} {{</* version-check */>}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture steps */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture discussion */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture whatsnext */%}}
+
+ {{%/* /capture */%}}
+ ```
+
+- Заполните каждый блок информацией. Следуйте этим рекомендациям:
+ - Используйте по минимуму заголовков H2 (с двумя ведущими символами `#`). У самих разделов заголовок формируется автоматически по заданному шаблону.
+ - В блоке `overview` обозначьте контекст для всей темы.
+ - В блоке `prerequisites` используйте ненумерованные списки, где это возможно. Добавьте дополнительные предварительные условия ниже `include`. Предварительные условия по умолчанию содержат пункт про наличие работающего кластера.
+ - В блоке `steps` используйте нумерованные списки.
+ - В блоке `discussion` подробно распишите информацию, описанную в разделе `steps`.
+ - В блоке `whatsnext` сформируйте ненумерованный список тем (до 5), которые могут быть интересны читателю в качестве дополнительного чтения.
+
+Пример готовой темы, в которой используется шаблон задачи — [Using an HTTP proxy to access the Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api).
+
+## Шаблон руководства
+
+На странице руководства показывается, как выполнить нечто большее, чем одна-единственная задача. Как правило, страница руководства поделена на несколько разделов, в каждом из которых есть последовательность шагов. Например, руководство может включать анализ примера кода, демонстрирующего определённую возможность Kubernetes. Руководства могут содержать поверхностные объяснения и одновременно включать ссылки на соответствующие концептуальные темы для получения углубленных знаний.
+
+Для написания новой страницы руководства в директории `/content/en/docs/tutorials` создайте поддиректорию с Markdown-файлом, отвечающим следующим требованиям:
+
+- Во фронтальной части YAML этой страницы определите поле `content_template: templates/tutorial`.
+- В теле страницы укажите переменные `capture` и любые другие, которые вы хотите включить:
+
+ | Переменная | Обязательна? |
+ |------------|--------------|
+ | overview | да |
+ | prerequisites | да |
+ | objectives | да |
+ | lessoncontent | да |
+ | cleanup | нет |
+ | whatsnext | нет |
+
+ Тело страницы будет выглядеть следующим образом (удалите все необязательные capture-блоки, если они вам не понадобятся):
+
+ ```
+ {{%/* capture overview */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture prerequisites */%}}
+
+  {{</* include "task-tutorial-prereqs.md" */>}} {{</* version-check */>}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture objectives */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture lessoncontent */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture cleanup */%}}
+
+ {{%/* /capture */%}}
+
+ {{%/* capture whatsnext */%}}
+
+ {{%/* /capture */%}}
+ ```
+
+- Заполните каждый блок информацией. Следуйте этим рекомендациям:
+ - Используйте по минимуму заголовков H2 (с двумя ведущими символами `#`). У самих разделов заголовок формируется автоматически по заданному шаблону.
+ - В блоке `overview` обозначьте контекст для всей темы.
+ - В блоке `prerequisites` используйте ненумерованные списки, где это возможно. Добавьте дополнительные предварительные условия ниже `include`. Предварительные условия по умолчанию содержат пункт про наличие работающего кластера.
+ - В блоке `objectives` используйте ненумерованные списки.
+  - В блоке `lessoncontent` сочетайте, где это уместно, нумерованные списки и повествовательное изложение.
+ - В блоке `cleanup` используйте нумерованные списки для описания шагов для очистки состояния кластера после выполнения задачи.
+ - В блоке `whatsnext` сформируйте ненумерованный список тем (до 5), которые могут быть интересны читателю в качестве дополнительного чтения.
+
+Пример завершенной темы, в которой используется шаблон руководства — [Running a Stateless Application Using a Deployment](/docs/tutorials/stateless-application/run-stateless-application-deployment/).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+- Подробнее про [оформление документации](/ru/docs/contribute/style/style-guide/)
+- Подробнее про [содержание документации](/ru/docs/contribute/style/content-guide/)
+- Подробнее про [организацию контента](/ru/docs/contribute/style/content-organization/)
+
+{{% /capture %}}
diff --git a/content/ru/docs/contribute/style/style-guide.md b/content/ru/docs/contribute/style/style-guide.md
new file mode 100644
index 0000000000000..612111c673daf
--- /dev/null
+++ b/content/ru/docs/contribute/style/style-guide.md
@@ -0,0 +1,570 @@
+---
+title: Руководство по оформлению документации
+linktitle: Руководство по оформлению
+content_template: templates/concept
+weight: 10
+card:
+ name: contribute
+ weight: 20
+ title: Руководство по оформлению документации
+---
+
+{{% capture overview %}}
+На этой странице вы найдёте рекомендации по оформлению документации Kubernetes. Это рекомендации, а не правила. Используйте здравый смысл и не стесняйтесь предлагать изменения в этот документ в виде пулреквеста.
+
+Для получения подробной информации о создании нового контента для документации Kubernetes посмотрите [руководство по содержанию документации](/ru/docs/contribute/style/content-guide/), а также следуйте инструкциям по [использованию шаблонов страниц](/ru/docs/contribute/style/page-templates/) и [открытию пулреквеста в документацию](/ru/docs/contribute/start/#улучшение-существующего-текста).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+{{< note >}}
+В документации Kubernetes используется [Blackfriday Markdown Renderer](https://github.com/russross/blackfriday) вместе с несколькими [макрокодами Hugo](/docs/home/contribute/includes/) для добавления поддержки записей глоссария, вкладок и отображения состояний функциональностей.
+{{< /note >}}
+
+## Язык
+
+Документация Kubernetes была переведена на несколько языков (см. [README-файлы локализаций](https://github.com/kubernetes/website/blob/master/README.md#localization-readmemds)).
+
+Процесс локализации документации на другие языки описан в [соответствующей странице по локализации](/ru/docs/contribute/localization/).
+
+## Правила форматирования документации
+
+### Используйте верблюжью нотацию для написания объектов API
+
+Когда вы указываете имя API-объекта, используйте те же самые прописные и строчные буквы так, как они записаны в имени объекта. Как правило, имена объектов API написаны с использованием [верблюжьей нотации](https://ru.wikipedia.org/wiki/Camel_case).
+
+Не разделяйте имя объекта API на отдельные слова. Например, пишите PodTemplateList, а не Pod Template List.
+
+Указывая имена API-объектов, не добавляйте к ним слово "объект", за исключением случаев, когда без него фраза читается хуже.
+
+{{< table caption = "Можно делать и нельзя - Объекты API" >}}
+Можно | Нельзя
+:--| :-----
+В Pod два контейнера. | В поде два контейнера.
+Deployment отвечает за ... | Объект Deployment отвечает за ...
+PodList — это список Pod. | Pod List — это список подов.
+Два ContainerPorts ... | Два объекта ContainerPort ...
+Два объекта ContainerStateTerminated ... | Два ContainerStateTerminated ...
+{{< /table >}}
+
+
+### Используйте угловые скобки для заполнителей
+
+Используйте угловые скобки для заполнителей. Сообщите читателю, что означает заполнитель.
+
+1. Отобразить информацию о Pod:
+
+    kubectl describe pod <имя-пода> -n <пространство-имён>
+
+    Если под находится в пространстве имён `default`, вы можете опустить параметр '-n'.
+
+### Используйте полужирное начертание для элементов пользовательского интерфейса
+
+{{< table caption = "Можно делать и нельзя - Элементы интерфейса в полужирном начертании" >}}
+Можно | Нельзя
+:--| :-----
+Нажмите на **Fork**. | Нажмите на "Fork".
+Выберите **Other**. | Выберите "Other".
+{{< /table >}}
+
+### Используйте курсивное начертание для определения или включения новых терминов
+
+{{< table caption = "Можно делать и нельзя - Используйте курсивное начертание для новых терминов" >}}
+Можно | Нельзя
+:--| :-----
+_Кластер_ — это набор узлов ... | "Кластер" — это набор узлов ...
+Эти компоненты формируют _панель управления_. | Эти компоненты формируют **панель управления**.
+{{< /table >}}
+
+### Оформляйте как код имена файлов, директории и пути
+
+{{< table caption = "Можно делать и нельзя - Оформляйте как код имена файлов, директории и пути" >}}
+Можно | Нельзя
+:--| :-----
+Откройте файл `envars.yaml`. | Откройте файл envars.yaml.
+Перейдите в директорию `/docs/tutorials`. | Перейдите в директорию /docs/tutorials.
+Откройте файл `/_data/concepts.yaml`. | Откройте файл /_data/concepts.yaml.
+{{< /table >}}
+
+### Используйте международные правила для пунктуации внутри кавычек
+
+{{< table caption = "Можно делать и нельзя - Используйте международные правила для пунктуации внутри кавычек" >}}
+Можно | Нельзя
+:--| :-----
+события записываются с соответствующей "стадией". | события записываются с соответствующей "стадией."
+Копия называется "fork". | Копия называется "fork."
+{{< /table >}}
+
+## Форматирование с использованием однострочного кода
+
+### Используйте оформление кода для встроенного кода и команд
+
+Для однострочного (встроенного) блока кода в HTML-документе используйте тег `<code>`. В файле Markdown используйте обратную кавычку (`).
+
+{{< table caption = "Можно делать и нельзя - Use code style for inline code and commands" >}}
+Можно | Нельзя
+:--| :-----
+Команда `kubectl run` создает Deployment. | Команда "kubectl run" создает Deployment.
+Для декларативного управления используйте `kubectl apply`. | Для декларативного управления используйте "kubectl apply".
+Заключите примеры кода в тройные обратные кавычки. `(```)` | Заключите примеры кода с использованием любого другого синтаксиса.
+Используйте одинарные обратные кавычки для выделения встроенного кода. Например, `var example = true`. | Используйте две звездочки (**) или подчёркивание (_) для выделения встроенного кода. Например, **var example = true**.
+Используйте тройные обратные кавычки до и после многострочного блока кода для отдельных блоков кода. | Используйте многострочные блоки кода для создания диаграмм, блок-схем или т.д.
+Используйте понятные имена переменных с соответствующим контекстом. | Используйте имена переменных, такие как 'foo', 'bar' и 'baz', которые не имеют смысла и которым не хватает контекста.
+Удаляйте завершающие пробелы в коде. | Добавляйте завершающие пробелы в код там, где они важны, поскольку программа для чтения с экрана также будет зачитывать пробелы.
+{{< /table >}}
+
+{{< note >}}
+На сайте включена подсветка синтаксиса для примеров кода, хотя указывать язык необязательно. Подсветка синтаксиса в блоке кода должна соответствовать [рекомендациям по контрастности](https://www.w3.org/WAI/WCAG21/quickref/?versions=2.0&showtechniques=141%2C143#contrast-minimum).
+{{< /note >}}
+
+### Имена полей объектов и пространства имён оформляйте как код
+
+{{< table caption = "Можно делать и нельзя - Имена полей объектов и пространства имён оформляйте как код" >}}
+Можно | Нельзя
+:--| :-----
+Задайте значение для поля `replicas` в конфигурационном файле. | Задайте значение для поля "replicas" в конфигурационном файле.
+Значением поля `exec` является объект ExecAction. | Значением поля "exec" является объект ExecAction.
+Запустите процесс как Daemonset в пространстве имен `kube-system`. | Запустите процесс как Daemonset в пространстве имен kube-system.
+{{< /table >}}
+
+### Имена компонентов и командного инструмента оформляйте как код
+
+{{< table caption = "Можно делать и нельзя - Имена компонентов и командного инструмента оформляйте как код" >}}
+Можно | Нельзя
+:--| :-----
+kubelet сохраняет стабильность узла. | `kubelet` сохраняет стабильность узла.
+`kubectl` обрабатывает поиск и аутентификацию на сервере API. | kubectl обрабатывает поиск и аутентификацию на apiserver.
+Запустите процесс с использованием сертификата `kube-apiserver --client-ca-file=FILENAME`. | Запустите процесс с использованием сертификата kube-apiserver --client-ca-file=FILENAME.
+{{< /table >}}
+
+### Начинайте предложение с имени инструмента или компонента
+
+{{< table caption = "Можно делать и нельзя - Начинайте предложение с имени инструмента или компонента" >}}
+Можно | Нельзя
+:--| :-----
+Инструмент `kubeadm` разворачивает и настраивает машины в кластере. | `kubeadm` разворачивает и настраивает машины в кластере.
+Планировщик kube-scheduler используется в Kubernetes по умолчанию. | kube-scheduler используется в Kubernetes по умолчанию.
+{{< /table >}}
+
+### Используйте общее описание вместо имени компонента
+
+{{< table caption = "Можно делать и нельзя - Используйте общее описание вместо имени компонента" >}}
+Можно | Нельзя
+:--| :-----
+Сервер Kubernetes API следует спецификации OpenAPI. | apiserver следует спецификации OpenAPI.
+Агрегированные API-интерфейсы — вспомогательные API-серверы. | Агрегированные API-интерфейсы — вспомогательные APIServers.
+{{< /table >}}
+
+### Cтроковые и целочисленные значения полей пишите в обычном стиле
+
+Для значений полей типа string или integer используйте обычный стиль без кавычек.
+
+{{< table caption = "Можно делать и нельзя - Cтроковые и целочисленные значения полей пишите в обычном стиле" >}}
+Можно | Нельзя
+:--| :-----
+Задайте значение для поля `imagePullPolicy` как Always. | Задайте значение для поля `imagePullPolicy` как "Always".
+Задайте значение для поля `image` как nginx:1.8. | Задайте значение для поля `image` как `nginx:1.8`.
+Задайте значение для поля `replicas` как 2. | Задайте значение для поля `replicas` как `2`.
+{{< /table >}}
+
+
+## Форматирование фрагментов кода
+
+### Не добавляйте символ приглашения командной строки
+
+{{< table caption = "Можно делать и нельзя - Не добавляйте символ приглашения командной строки" >}}
+Можно | Нельзя
+:--| :-----
+kubectl get pods | $ kubectl get pods
+{{< /table >}}
+
+
+### Отделяйте команды от вывода
+
+Убедитесь, что Pod работает на выбранном вами узле:
+
+ kubectl get pods --output=wide
+
+Вывод будет примерно таким:
+
+ NAME READY STATUS RESTARTS AGE IP NODE
+ nginx 1/1 Running 0 13s 10.200.0.4 worker0
+
+### Версионирование примеров Kubernetes
+
+Примеры кода и примеры конфигурации, которые включают информацию о версии, должны согласовываться с относящимся к ним текстом.
+
+Если информация зависит от версии, необходимо определить версию Kubernetes в секции `prerequisites` [шаблона задачи](/ru/docs/contribute/style/page-templates/#шаблон-задачи) или [шаблона руководства](/ru/docs/contribute/style/page-templates/#шаблон-руководства). После сохранения страницы секция `prerequisites` отобразится в отдельном блоке с заголовком **Подготовка к работе**.
+
+Для указания версии Kubernetes для страницы задачи или руководства в фронтальную часть файла добавьте поле `min-kubernetes-server-version`.
+
+Если YAML-пример определён в отдельном файле, поищите и просмотрите темы, которые ссылаются на него.
+Убедитесь, что темы с подключённым YAML-файлом содержат соответствующую информацию о версии.
+Если ни одна из тем не использует этот YAML-файл, подумайте о том, чтобы удалить его, а не обновлять.
+
+Например, если вы пишете руководство, предназначенное для использования с версией Kubernetes 1.8, фронтальная часть вашего Markdown-файла должна выглядеть примерно так:
+
+```yaml
+---
+title:
+min-kubernetes-server-version: v1.8
+---
+```
+
+В примерах кода и конфигурации не добавляйте комментарии про альтернативные версии.
+Убедитесь, что комментарии в ваших примерах не содержат некорректных сведений, как в примере ниже:
+
+```yaml
+apiVersion: v1 # в более ранних версиях...
+kind: Pod
+...
+```
+
+## Словарь Kubernetes.io
+
+Список специфичных для Kubernetes терминов и слов, которые будут регулярно встречаться по всему сайту.
+
+{{< table caption = "Словарь Kubernetes.io" >}}
+Термин | Пример использования
+:--- | :----
+Kubernetes | Kubernetes всегда должен начинаться с заглавной буквы.
+Docker | Docker всегда должен начинаться с заглавной буквы.
+SIG Docs | SIG Docs, а не SIG-DOCS или другие варианты.
+On-premises | On-premises или On-prem, а не On-premise или другие варианты.
+{{< /table >}}
+
+## Макрокоды
+
+[Макрокоды Hugo](https://gohugo.io/content-management/shortcodes) помогают создавать разного рода обращения к читателю. Наша документация поддерживает три таких макрокода: **примечание** {{</* note */>}}, **предостережение** {{</* caution */>}} и **предупреждение** {{</* warning */>}}.
+
+1. Заключите текст в открывающий и закрывающий макрокод.
+
+2. Используйте следующий синтаксис для определения стиля:
+
+ ```
+    {{</* note */>}}
+    Вам не нужно указывать надпись; макрокод автоматически добавит её ("Примечание:", "Предостережение:" и т.д.).
+    {{</* /note */>}}
+ ```
+
+Результат:
+
+{{< note >}}
+Надпись блока будет такой же, как и имя тега.
+{{< /note >}}
+
+### Примечание
+
+Используйте {{</* note */>}} для выделения подсказки или части информации, которая может быть полезна для ознакомления.
+
+Например:
+
+```
+{{</* note */>}}
+Вы _также_ можете использовать Markdown внутри этих выносок.
+{{</* /note */>}}
+```
+
+Результат:
+
+{{< note >}}
+Вы _также_ можете использовать Markdown внутри этих выносок.
+{{< /note >}}
+
+Вы можете использовать {{</* note */>}} в списке:
+
+```
+1. Используйте макрокод примечания в списке
+
+1. Второй пункт с добавленным блоком примечания
+
+   {{</* note */>}}
+   Макрокоды предупреждения, предостережения и примечания, добавленные в списки, должны содержать отступ в четыре пробела. Смотрите раздел [Распространённые проблемы с шорткодами](#распространённые-проблемы-с-шорткодами).
+   {{</* /note */>}}
+
+1. Третий пункт в списке
+
+1. Четвертый пункт в списке
+```
+
+Результат:
+
+1. Используйте макрокод примечания в списке
+
+1. Второй пункт с добавленным блоком примечания
+
+ {{< note >}}
+ Макрокоды предупреждения, предостережения и примечания, добавленные в списки, должны содержать отступ в четыре пробела. Смотрите раздел [Распространённые проблемы с шорткодами](#распространённые-проблемы-с-шорткодами).
+ {{< /note >}}
+
+1. Третий пункт в списке
+
+1. Четвертый пункт в списке
+
+### Предостережение
+
+Используйте {{</* caution */>}}, чтобы обратить внимание на важную информацию, которая поможет избежать подводных камней.
+
+Например:
+
+```
+{{</* caution */>}}
+Оформление выноски применяется только к строке, следующей после тега выше.
+{{</* /caution */>}}
+```
+
+Результат:
+
+{{< caution >}}
+Оформление выноски применяется только к строке, следующей после тега выше.
+{{< /caution >}}
+
+### Предупреждение
+
+Используйте {{</* warning */>}} для обозначения предупреждающей информации или такой, которую чрезвычайно важно соблюдать.
+
+Например:
+
+```
+{{</* warning */>}}
+Осторожно.
+{{</* /warning */>}}
+```
+
+Результат:
+
+{{< warning >}}
+Осторожно.
+{{< /warning >}}
+
+### Встраиваемая среда выполнения Katacoda
+
+С помощью этой кнопки пользователи могут запустить Minikube в своём браузере с помощью [терминала Katacoda](https://www.katacoda.com/embed/panel).
+Таким образом снижается порог входа, позволяя пользователям попробовать Minikube с помощью одного щелчка мыши, вместо того, чтобы устанавливать Minikube и Kubectl локально.
+
+Встроенная среда выполнения сконфигурирована для выполнения команды `minikube start` и позволяет пользователям пройти руководство в той же самой вкладке, что и документация.
+
+{{< caution >}}
+Сессия ограничена 15 минутами.
+{{< /caution >}}
+
+Например:
+
+```
+{{</* kat-button */>}}
+```
+
+Результат:
+
+{{< kat-button >}}
+
+## Распространённые проблемы с шорткодами
+
+### Упорядоченные списки
+
+Макрокоды сбросят нумерацию в нумерованных списках, если вы не добавите отступ в четыре пробела перед уведомлением и тегом.
+
+Например:
+
+ 1. Разогреть духовку до 350˚F
+
+    1. Подготовить тесто и вылить его в формочку для выпечки.
+    {{</* note */>}}Для лучшего результата смажьте формочку.{{</* /note */>}}
+
+    1. Выпекать 20-25 минут или пока тесто не зарумянится.
+
+Результат:
+
+1. Разогреть духовку до 350˚F
+
+1. Подготовить тесто и вылить его в формочку для выпечки.
+
+ {{< note >}}Для лучшего результата смажьте формочку.{{< /note >}}
+
+1. Выпекать 20-25 минут или пока тесто не зарумянится.
+
+### Выражения для вставок
+
+Макрокоды внутри include-выражений нарушают процесс сборки. Поэтому их нужно вставлять в родительский документ до и после вызова include. Например:
+
+```
+{{</* note */>}}
+{{</* include "task-tutorial-prereqs.md" */>}}
+{{</* /note */>}}
+```
+
+
+## Элементы Markdown
+
+### Переносы строк
+Добавляйте одну новую строку для разделения содержимого таких блоков, как заголовки, списки, изображения, многострочный код и т.д. Исключение составляют заголовки второго уровня, которые должны быть разделены двумя переводами строки. Заголовки второго уровня следуют за первым уровнем (или названием страницы). Две пустые строки помогают лучше наглядно представить общую структуру содержимого в редакторе кода.
+
+### Заголовки
+Люди, просматривающие документацию, могут использовать программу чтения с экрана или другую вспомогательную технологию (Assistive technologies, AT). [Программы чтения с экрана](https://ru.wikipedia.org/wiki/%D0%AD%D0%BA%D1%80%D0%B0%D0%BD%D0%BD%D0%BE%D0%B5_%D1%81%D1%87%D0%B8%D1%82%D1%8B%D0%B2%D0%B0%D1%8E%D1%89%D0%B5%D0%B5_%D1%83%D1%81%D1%82%D1%80%D0%BE%D0%B9%D1%81%D1%82%D0%B2%D0%BE) — устройства вывода, которые выводят элементы на странице по очереди. Если на странице много текста, вы можете использовать заголовки, чтобы придать странице внутреннюю структуру. Хорошая структура страницы помогает всем читателям легко перемещаться по странице или выбрать интересующие темы.
+
+{{< table caption = "Можно делать и нельзя - Заголовки" >}}
+Можно | Нельзя
+:--| :-----
+Обновите заголовок в фронтальной части страницы или записи блога. | Используйте заголовок первого уровня, так как Hugo автоматически преобразует название страницы в фронтальной части в заголовок первого уровня.
+Используйте упорядоченные заголовки, чтобы сформировать общее представление о содержании страницы. | Используйте заголовки с уровня 4 по 6 без крайней необходимости. Если текст настолько подробный, возможно, его нужно разделить на отдельные статьи.
+Используйте знак решётки или хеша (#) для всех видов контента, кроме записей блога. | Используйте подчеркивание (--- или ===) для обозначения заголовков первого уровня.
+Начинайте с большой буквы только первое слово в заголовке. Например, **Расширение kubectl с помощью плагинов** | Пишите с заглавной буквы все слова в заголовке. Например, **Расширение Kubectl С Помощью Плагинов**
+{{< /table >}}
+
+### Параграфы
+
+{{< table caption = "Можно делать и нельзя - Параграфы" >}}
+Можно | Нельзя
+:--| :-----
+Проследите за тем, чтобы в одном абзаце было не более 6 предложений. | Добавлять к первому абзацу отступ с пробелами. Например, ⋅⋅⋅Три пробела перед абзацем образуют отступ.
+Используйте три дефиса (---) для создания горизонтальной черты. Используйте горизонтальные линии для обозначения смыслового перехода в содержании. Например, смена места действия в истории или переход к другой теме внутри раздела. | Используйте горизонтальные линии для оформления.
+{{< /table >}}
+
+### Ссылки
+
+{{< table caption = "Можно делать и нельзя - Ссылки" >}}
+Можно | Нельзя
+:--| :-----
+Указывайте ссылки, текст которых передаёт суть содержания, на которое они ссылаются. Например: "Некоторые порты открыты на ваших машинах. Смотрите раздел Проверка необходимых портов, чтобы получить дополнительную информацию". | Используйте двусмысленные формулировки, такие как "нажмите сюда". Например: "Некоторые порты открыты на ваших машинах. Смотрите этот раздел, чтобы получить дополнительную информацию".
+Указывайте ссылки в стиле Markdown: `[текст ссылки](URL)`. Например: `[Макрокоды Hugo](/docs/contribute/style/hugo-shortcodes/#table-captions)` отобразится как [Макрокоды Hugo](/docs/contribute/style/hugo-shortcodes/#table-captions). | Указывайте ссылки в формате HTML: `<a href="...">Ознакомьтесь с нашим руководством!</a>`, или добавляйте ссылки, которые открываются в новых вкладках или окнах. Например: `[example website](https://example.com){target="_blank"}`
+{{< /table >}}
+
+
+### Списки
+Сгруппируйте пункты в списке так, чтобы они были связаны друг с другом и следовали в определённом порядке, либо чтобы они сохраняли взаимосвязь между несколькими элементами. Когда программа чтения с экрана встречает нумерованный или неупорядоченный список, пользователь будет проинформирован о том, что существует группа элементов списка. Затем пользователь может использовать клавиши-стрелки для перемещения между разными элементами в списке.
+Навигационные ссылки по сайту также могут быть помечены как элементы списка; в конечном счёте, все они просто группа связанных ссылок.
+
+ - Заканчивайте каждый элемент в списке точкой, если один или несколько элементов в списке являются законченными предложениями. Как правило, для согласованности либо все элементы должны быть целыми предложениями, либо ни один из них.
+
+ {{< note >}} Упорядоченные списки, которые являются частью неполного вступительного предложения, могут быть написаны в нижнем регистре и оканчиваться на точку, как если бы каждый элемент был составляющей вступительного предложения.{{< /note >}}
+
+ - Используйте цифру один (1.) для упорядоченных списков.
+
+ - Используйте (+), (*) или (-) для неупорядоченных списков.
+
+ - Добавьте пустую строку после каждого списка.
+
+ - Во вложенных списках добавьте отступ в четыре пробела (например, ⋅⋅⋅⋅).
+
+ - Элементы списка могут содержать несколько абзацев. Каждый последующий абзац в элементе списка должен иметь отступ в четыре пробела или один символ табуляции.
+
+### Таблицы
+
+Семантическая цель таблицы данных состоит в представлении данных в табличном виде. Пользователи с нормальным зрением могут бегло просмотреть таблицу, однако программа для чтения с экрана сканирует таблицу построчно. Заголовок таблицы используется для создания информативного заголовка для табличных данных. Инструменты вспомогательных технологий (Assistive technologies, AT) используют элемент заголовка HTML-таблицы, чтобы идентифицировать для пользователей, какие на странице есть таблицы.
+
+- Добавьте подписи к таблицам с помощью соответствующих [макрокодов Hugo](/docs/contribute/style/hugo-shortcodes/#table-captions).
+
+## Рекомендации по написанию контента
+
+В этом разделе перечислены рекомендации для написания ясного, лаконичного и единообразного текста документации.
+
+### Используйте настоящее время
+
+{{< table caption = "Можно делать и нельзя - Используйте настоящее время" >}}
+Можно | Нельзя
+:--| :-----
+Эта команда запускает прокси. | Эта команда запустит прокси.
+ {{< /table >}}
+
+Исключение: используйте будущее или прошедшее время, если требуется передать правильный смысл.
+
+### Используйте действительный залог
+
+{{< table caption = "Можно делать и нельзя - Используйте действительный залог" >}}
+Можно | Нельзя
+:--| :-----
+Вы можете изучить API с помощью браузера. | API можно изучить с помощью браузера.
+В файле YAML определяется количество реплик. | Количество реплик определяется в файле YAML.
+{{< /table >}}
+
+Исключение: используйте страдательный залог, если в действительном залоге получается неудачная формулировка.
+
+### Используйте простой и понятный язык
+
+Используйте простой и доступный язык. Избегайте использования ненужных фраз, например, "пожалуйста".
+
+{{< table caption = "Можно делать и нельзя - Используйте простой и понятный язык" >}}
+Можно | Нельзя
+:--| :-----
+Чтобы создать ReplicaSet, ... | Для того чтобы создать ReplicaSet, ...
+Смотрите конфигурационный файл. | Пожалуйста, смотрите конфигурационный файл.
+Посмотрите Pods. | С помощью следующей команды мы посмотрим Pods.
+{{< /table >}}
+
+### Обращайтесь к читателю на "вы"
+
+{{< table caption = "Можно делать и нельзя - Обращайтесь к читателю на вы" >}}
+Можно | Нельзя
+:--| :-----
+Вы можете создать Deployment с помощью ... | Мы создадим Deployment с помощью ...
+В предыдущем выводе вы можете увидеть ... | В предыдущем выводе мы можем увидеть ...
+{{< /table >}}
+
+
+### Избегайте использования латинских фраз
+
+Вместо латинских аббревиатур используйте соответствующие выражения на английском.
+
+{{< table caption = "Можно делать и нельзя - Избегайте использования латинских фраз" >}}
+Можно | Нельзя
+:--| :-----
+For example, ... | e.g., ...
+That is, ...| i.e., ...
+{{< /table >}}
+
+
+Исключение: используйте "etc." вместо "et cetera".
+
+## Ошибки, которых следует избегать
+
+### Избегайте использования "мы"
+
+Использование "мы" в предложении может сбить с толку, так так неясно, кто под этим "мы" подразумевается (имеется ли в виду сам читатель при этом).
+
+{{< table caption = "Можно делать и нельзя - Избегайте использования мы" >}}
+Можно | Нельзя
+:--| :-----
+Версия 1.4 включает в себя ... | В версии 1.4 мы добавили ...
+Kubernetes представляет новую возможность для ... | Мы представляем новую возможность ...
+На этой странице вы узнаете, как использовать Pods. | На этой странице мы познакомимся с Pods.
+{{< /table >}}
+
+
+### Избегайте жаргона и идиом
+
+Некоторые читатели говорят на английском как на втором языке. Избегайте жаргона и идиом, чтобы облегчить им понимание.
+
+{{< table caption = "Можно делать и нельзя - Избегайте жаргона и идиомы" >}}
+Можно | Нельзя
+:--| :-----
+Internally, ... | Under the hood, ...
+Create a new cluster. | Turn up a new cluster.
+{{< /table >}}
+
+
+### Избегайте выражений о будущем
+
+Не давайте обещаний или намёков на будущее. Если вам нужно рассказать про функциональность в альфа-версии, под соответствующим заголовком напишите поясняющий текст, что информация относится к альфа-версии.
+
+### Избегайте выражений, которые могут потерять актуальность
+
+Избегайте таких слов, как "в настоящее время" и "новый". Функциональность, которая является новой сегодня, через несколько месяцев уже не будет считаться новой.
+
+{{< table caption = "Можно делать и нельзя - Избегайте выражений, которые могут потерять актуальность" >}}
+Можно | Нельзя
+:--| :-----
+В версии 1.4 ... | В текущей версии ...
+Функциональность Federation предоставляет ... | Новая функциональность Federation предоставляет ...
+{{< /table >}}
+
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Подробнее про [написание новой темы](/ru/docs/contribute/style/write-new-topic/).
+* Подробнее про [использование шаблонов страниц](/ru/docs/contribute/style/page-templates/).
+* Подробнее про [создание пулреквеста](/ru/docs/contribute/start/#отправка-пулреквеста).
+
+{{% /capture %}}
diff --git a/content/ru/docs/contribute/style/write-new-topic.md b/content/ru/docs/contribute/style/write-new-topic.md
new file mode 100644
index 0000000000000..d13789e52921d
--- /dev/null
+++ b/content/ru/docs/contribute/style/write-new-topic.md
@@ -0,0 +1,119 @@
+---
+title: Написание новой темы
+content_template: templates/task
+weight: 20
+---
+
+{{% capture overview %}}
+На этой странице показано, как создать новую тему для документации Kubernetes.
+{{% /capture %}}
+
+
+{{% capture prerequisites %}}
+Создайте копию репозитория документации Kubernetes, как описано в разделе [Участие для начинающих](/ru/docs/contribute/start/).
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Выбор типа страницы
+
+Перед написанием новой темы, выберите тип страницы, который бы лучше всего подходил под ваш текст:
+
+{{< table caption = "Правила выбора типа страницы" >}}
+Тип | Описание
+:--- | :----------
+Концепция | Страница концепции объясняет некоторые аспекты Kubernetes. Например, страница концепции может описывать объект Deployment в Kubernetes и разъяснять, какую роль он играет при развёртывании, масштабировании и обновлении приложения. Как правило, страницы концепций не включают последовательности шагов, а вместо этого содержат ссылки на задачи или руководства. В качестве примера концептуальной темы посмотрите страницу Nodes.
+Задача | На странице задачи показывается, как сделать что-то одно конкретное, главным образом с помощью короткой последовательности шагов. Страница задачи может быть короткой или длинной, если она остаётся сконцентрированной на одном аспекте. На странице задач можно сочетать краткие объяснения с необходимыми шагами для выполнения, однако если вам нужно дать подробное пояснение, вам следует сделать это в концептуальной теме. Смежные задачи и концептуальные темы должны быть связаны друг с другом. В качестве примера короткой страницы задачи посмотрите Configure a Pod to Use a Volume for Storage. Пример длинной страницы задачи смотрите Configure Liveness and Readiness Probes
+Руководство | На странице руководства показано, как сделать нечто большее, чем одна-единственная задача. В руководстве может быть несколько последовательностей шагов, которые читатели могут реально выполнить по ходу чтения страницы. Либо на странице руководства могут быть приведены объяснения связанных частей кода. Например, руководство может содержать разбор примера кода. Руководство может включать в себя краткие объяснения связанной функциональности Kubernetes, но при этом оно должно ссылаться на сопутствующие концептуальные темы, где можно узнать подробнее про конкретные возможности.
+{{< /table >}}
+
+Используйте шаблон для каждой новой страницы. Каждый тип страницы использует определённый [шаблон](/docs/contribute/style/page-templates/), поэтому при написании собственных тем вам следует выбрать свой шаблон. Использование шаблонов помогает поддерживать единообразие в темах конкретного типа.
+
+## Выбор заголовка и имени файла
+
+Подберите заголовок, содержащий такие ключевые слова, по которым вы могли бы найти тему в поисковике.
+Имя файла должно создаваться из слов в заголовке, написанных через дефис.
+Например, для темы с заголовком [Using an HTTP Proxy to Access the Kubernetes API](/docs/tasks/access-kubernetes-api/http-proxy-access-api/) имя файла будет `http-proxy-access-api.md`. Вам не нужно указывать "kubernetes" в имени файла, потому что слово "kubernetes" уже есть в полном URL-адресе темы, например:
+
+ /docs/tasks/access-kubernetes-api/http-proxy-access-api/
+
+## Добавление заголовка темы в фронтальную часть
+
+Во [фронтальную часть](https://gohugo.io/content-management/front-matter/) файла вашей темы поместите поле заголовка `title`. Фронтальная часть — это YAML-блок, который расположен между строками из трёх дефисов в самом верху страницы. Например:
+
+ ---
+ title: Using an HTTP Proxy to Access the Kubernetes API
+ ---
+
+## Выбор директории
+
+В зависимости от типа вашей страницы поместите новый файл в одну из следующих поддиректорий:
+
+* /content/en/docs/tasks/
+* /content/en/docs/tutorials/
+* /content/en/docs/concepts/
+
+Вы можете поместить файл в имеющуюся поддиректорию либо создать новую.
+
+## Добавление темы в оглавление
+
+Оглавление динамически генерируется исходя из структуры директорий документации. Корневые директории в `/content/en/docs/` создают навигацию с основными ссылками, где у каждой поддиректории есть записи в оглавлении.
+
+В каждой поддиректории есть файл `_index.md`, представляющий собой "главную" страницу всего содержимого этой поддиректории. В файле `_index.md` не нужно применять шаблон. В нём находится обзор содержания тем в поддиректории.
+
+Другие файлы в директории по умолчанию сортируются в алфавитном порядке. Такой порядок сортировки редко устраивает. Для управления такой относительной сортировкой тем в поддиректории, определите ключ `weight:` с целым числом в фронтальной части файла. Как правило, мы используем значения, кратные 10, чтобы оставить про запас для будущих страниц. Например, тема с весом `10` будет отображаться перед темой с весом `20`.
+
+## Вставка кода в тему
+
+Если вы хотите добавить код в тему, вы можете встроить код из файла напрямую, используя синтаксис блока кода в Markdown. Такой способ рекомендуется использовать в следующих случаях (это не исчерпывающий список):
+
+- В вашем коде показывается вывод такой команды, как `kubectl get deploy mydeployment -o json | jq '.status'`.
+- Ваш код недостаточно универсален, чтобы пользователи могли его попробовать сами. Например, YAML-файл для создания Pod, который зависит от конкретной реализации [FlexVolume](/docs/concepts/storage/volumes#flexvolume).
+- Ваш код — это не готовый пример, потому что он предназначен для выделения части большего файла. Например, при описании способов настройки [PodSecurityPolicy](/docs/tasks/administer-cluster/sysctl-cluster/#podsecuritypolicy) по определённым причинам вы можете включить небольшой фрагмент напрямую в файл темы.
+- Ваш код по разным причинам не подходит для тестирования пользователями. Например, если вы описываете, как новый атрибут должен добавляться к ресурсу с помощью команды `kubectl edit`, то вы можете добавить короткий пример, показывающий только добавляемый атрибут.
+
+## Добавление кода из другого файла
+
+Другой способ добавить код в вашу тему — создать новый полноценный файл с примером (или группу файлов примеров), а затем из вашей темы подключить этот пример.
+Используйте этот метод, чтобы включить универсальный и повторно используемый пример YAML-файла, который читатель может проверить сам.
+
+При добавлении нового отдельного файла примера, например, в формате YAML, поместите код в одну из директорий `<LANG>/examples/`, где `<LANG>` — язык темы. В вашем файле темы используйте макрокод `codenew`:
+
+```none
+{{* codenew file="/my-example-yaml>" */>}}
+```
+
+где `<RELPATH>` — это путь к включаемому файлу относительно директории `examples`. Следующий макрокод Hugo ссылается на YAML-файл по пути `/content/en/examples/pods/storage/gce-volume.yaml`.
+
+```none
+{{* codenew file="pods/storage/gce-volume.yaml" */>}}
+```
+
+{{< note >}}
+Чтобы отобразить Hugo-макрокоды в исходном виде, как в приведенном выше примере, поместите их в комментарии в стиле языка Си между `<` и `>`. Для примера посмотрите исходный код этой страницы.
+{{< /note >}}
+
+## Демонстрация создания API-объекта из конфигурационного файла
+
+Если вам нужно показать, как создать объект API из файла конфигурации, поместите файл конфигурации в одну из директорий `<LANG>/examples`.
+
+В вашей теме укажите эту команду:
+
+```
+kubectl create -f https://k8s.io/examples/pods/storage/gce-volume.yaml
+```
+
+{{< note >}}
+При добавлении новых YAML-файлов в директорию `<LANG>/examples`, убедитесь, что этот файл перечислен в файле `<LANG>/examples_test.go`. Подключённый к сайту Travis CI автоматически выполнит этот тестовый сценарий при отправке PR, чтобы проверить все примеры.
+{{< /note >}}
+
+В качестве примера темы, в которой используется этот метод, смотрите [Running a Single-Instance Stateful Application](/docs/tutorials/stateful-application/run-stateful-application/).
+
+## Добавление изображений в тему
+
+Поместите файлы изображений в директорию `/images`. Предпочтительный формат изображения — SVG.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Подробнее про [использование шаблонов страниц](/ru/docs/contribute/style/page-templates/).
+* Подробнее про [создание пулреквеста](/ru/docs/contribute/start/#отправка-пулреквеста).
+{{% /capture %}}
diff --git a/content/vi/docs/reference/glossary/cluster.md b/content/vi/docs/reference/glossary/cluster.md
new file mode 100644
index 0000000000000..1aeafc54b258e
--- /dev/null
+++ b/content/vi/docs/reference/glossary/cluster.md
@@ -0,0 +1,18 @@
+---
+title: Cluster
+id: cluster
+date: 2020-02-26
+full_link:
+short_description: >
+  Một tập các worker machine, được gọi là node, dùng để chạy các ứng dụng được đóng gói (containerized application). Mỗi cụm (cluster) có ít nhất một worker node.
+
+aka:
+tags:
+- fundamental
+- operation
+---
+Một tập các worker machine, được gọi là node, dùng để chạy các containerized application. Mỗi cụm (cluster) có ít nhất một worker node.
+
+
+Các worker node chứa các pod (là những thành phần của ứng dụng). Control Plane quản lý các worker node và pod trong cluster.
+Trong môi trường sản xuất (production environment), Control Plane thường chạy trên nhiều máy tính và một cluster thường có nhiều node, nhằm cung cấp khả năng chịu lỗi (fault-tolerance) và tính sẵn sàng cao (high availability).
\ No newline at end of file
diff --git a/content/vi/docs/reference/glossary/containerd.md b/content/vi/docs/reference/glossary/containerd.md
new file mode 100644
index 0000000000000..b14971032a12a
--- /dev/null
+++ b/content/vi/docs/reference/glossary/containerd.md
@@ -0,0 +1,15 @@
+---
+title: containerd
+id: containerd
+date: 2020-03-04
+full_link: https://containerd.io/docs/
+short_description: >
+ Một container runtime tập trung vào sự đơn giản, mạnh mẽ và linh động.
+aka:
+tags:
+- tool
+---
+ Một container runtime tập trung vào sự đơn giản, mạnh mẽ và linh động.
+
+
+containerd là một {{< glossary_tooltip text="container" term_id="container" >}} runtime chạy như một daemon trên Linux hoặc Windows. containerd đảm nhiệm việc tải về và lưu trữ các container image, thực thi các container, cung cấp truy cập mạng, và nhiều hơn nữa.
\ No newline at end of file
diff --git a/content/vi/docs/reference/glossary/control-plane.md b/content/vi/docs/reference/glossary/control-plane.md
new file mode 100644
index 0000000000000..b8f9b0e7f8709
--- /dev/null
+++ b/content/vi/docs/reference/glossary/control-plane.md
@@ -0,0 +1,15 @@
+---
+title: Control Plane
+id: control-plane
+date: 2020-03-04
+full_link:
+short_description: >
+ Tầng điều khiển container, được dùng để đưa ra API và các interface để định nghĩa, triển khai, và quản lý vòng đời của các container.
+
+aka:
+tags:
+- fundamental
+---
+ Tầng điều khiển container, được dùng để đưa ra API và các interface để định nghĩa, triển khai, và quản lý vòng đời của các container.
+
+
diff --git a/content/vi/docs/reference/glossary/cri-o.md b/content/vi/docs/reference/glossary/cri-o.md
new file mode 100644
index 0000000000000..be8dde10ac92a
--- /dev/null
+++ b/content/vi/docs/reference/glossary/cri-o.md
@@ -0,0 +1,19 @@
+---
+title: CRI-O
+id: cri-o
+date: 2020-03-05
+full_link: https://cri-o.io/#what-is-cri-o
+short_description: >
+ Một container runtime nhẹ dành riêng cho Kubernetes
+aka:
+tags:
+- tool
+---
+Một công cụ giúp bạn sử dụng các OCI container runtime với Kubernetes CRI.
+
+
+
+CRI-O là một bản triển khai của {{< glossary_tooltip term_id="cri" >}} cho phép sử dụng các {{< glossary_tooltip text="container" term_id="container" >}} runtime tương thích với Open Container Initiative (OCI)
+[runtime spec](http://www.github.com/opencontainers/runtime-spec).
+
+Triển khai CRI-O cho phép Kubernetes sử dụng bất kỳ runtime nào tương thích OCI làm container runtime để chạy {{< glossary_tooltip text="Pods" term_id="pod" >}}, và để lấy các CRI container image từ các remote registry.
\ No newline at end of file
diff --git a/content/vi/docs/reference/glossary/daemonset.md b/content/vi/docs/reference/glossary/daemonset.md
new file mode 100644
index 0000000000000..1039514a62d63
--- /dev/null
+++ b/content/vi/docs/reference/glossary/daemonset.md
@@ -0,0 +1,19 @@
+---
+title: DaemonSet
+id: daemonset
+date: 2020-03-05
+full_link: /docs/concepts/workloads/controllers/daemonset
+short_description: >
+ Đảm bảo một bản sao của Pod đang chạy trên một tập các node của cluster.
+aka:
+tags:
+- fundamental
+- core-object
+- workload
+---
+ Đảm bảo một bản sao của {{< glossary_tooltip text="Pod" term_id="pod" >}} đang chạy trên một tập các node của {{< glossary_tooltip text="cluster" term_id="cluster" >}}.
+
+
+
+Được sử dụng để triển khai những system daemon như log collector hay monitoring agent, là những daemon thường phải chạy trên mọi {{< glossary_tooltip term_id="node" >}}.
+
diff --git a/content/vi/docs/reference/glossary/etcd.md b/content/vi/docs/reference/glossary/etcd.md
new file mode 100644
index 0000000000000..d52a91fdbd600
--- /dev/null
+++ b/content/vi/docs/reference/glossary/etcd.md
@@ -0,0 +1,20 @@
+---
+title: etcd
+id: etcd
+date: 2020-02-27
+full_link: /docs/tasks/administer-cluster/configure-upgrade-etcd/
+short_description: >
+ Key value store nhất quán (consistent) và sẵn sàng cao (highly-available) được sử dụng như một kho lưu trữ của Kubernetes cho tất cả dữ liệu của cluster.
+
+aka:
+tags:
+- architecture
+- storage
+---
+ Key value store nhất quán (consistent) và sẵn sàng cao (highly-available) được sử dụng như một kho lưu trữ của Kubernetes cho tất cả dữ liệu của cluster.
+
+
+
+Nếu Kubernetes cluster của bạn sử dụng etcd như kho lưu trữ của nó, chắc chắn bạn có một kế hoạch [back up](/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) cho những dữ liệu này.
+
+Bạn có thể tìm thêm thông tin chi tiết về etcd tại [documentation](https://etcd.io/docs/).
\ No newline at end of file
diff --git a/content/vi/docs/reference/glossary/kube-apiserver.md b/content/vi/docs/reference/glossary/kube-apiserver.md
new file mode 100644
index 0000000000000..94b0aa218301f
--- /dev/null
+++ b/content/vi/docs/reference/glossary/kube-apiserver.md
@@ -0,0 +1,22 @@
+---
+title: API server
+id: kube-apiserver
+date: 2020-02-26
+full_link: /docs/reference/generated/kube-apiserver/
+short_description: >
+ Thành phần tầng điểu khiển (control plane), được dùng để phục vụ Kubernetes API.
+
+aka:
+- kube-apiserver
+tags:
+- architecture
+- fundamental
+---
+ API server là một thành phần của Kubernetes {{< glossary_tooltip text="control plane" term_id="control-plane" >}}, được dùng để đưa ra Kubernetes API.
+API server là front end của Kubernetes control plane.
+
+
+
+Thực thi chính của API server là [kube-apiserver](/docs/reference/generated/kube-apiserver/).
+kube-apiserver được thiết kế để co giãn theo chiều ngang — có nghĩa là nó co giãn bằng cách triển khai thêm các thực thể.
+Bạn có thể chạy một vài thực thể của kube-apiserver và cân bằng lưu lượng giữa các thực thể này.
\ No newline at end of file
diff --git a/content/vi/docs/reference/glossary/kube-scheduler.md b/content/vi/docs/reference/glossary/kube-scheduler.md
new file mode 100644
index 0000000000000..dc7c63a31ff3f
--- /dev/null
+++ b/content/vi/docs/reference/glossary/kube-scheduler.md
@@ -0,0 +1,17 @@
+---
+title: kube-scheduler
+id: kube-scheduler
+date: 2020-03-05
+full_link: /docs/reference/generated/kube-scheduler/
+short_description: >
+ Thành phần của Control Plane, được dùng để giám sát việc tạo những pod mới mà chưa được chỉ định vào node nào, và chọn một node để chúng chạy trên đó.
+
+aka:
+tags:
+- architecture
+---
+ Thành phần của Control Plane, được dùng để giám sát việc tạo những pod mới mà chưa được chỉ định vào node nào, và chọn một node để chúng chạy trên đó.
+
+
+
+Những yếu tố trong những quyết định lập lịch bao gồm những yêu cầu về tài nguyên, những đòi hỏi về phần cứng/phần mềm/chính sách, những thông số về affinity và anti-affinity, dữ liệu tại chỗ (data locality), nhiễu inter-workload và thời hạn (deadline).
diff --git a/content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md b/content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md
index 2576296c644b6..c53ea685856d0 100644
--- a/content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md
+++ b/content/zh/blog/_posts/2015-03-00-Kubernetes-Gathering-Videos.md
@@ -3,7 +3,6 @@
title: " Kubernetes 采集视频 "
date: 2015-03-23
slug: kubernetes-gathering-videos
-url: /blog/2015/03/Kubernetes-Gathering-Videos
---
* kubectl exec -p $POD -- $CMD
@@ -116,7 +115,7 @@ Notes from meeting:
* want to inject a binary under control of the host, similar to pre-start hooks
* socat, nsenter, whatever the pre-start hook needs
-
+
-->
* 想要在主机的控制下注入二进制文件,类似于预启动钩子
diff --git a/content/zh/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md b/content/zh/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md
index 3e3b5f6d3dbd8..3b35b0b21ad1f 100644
--- a/content/zh/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md
+++ b/content/zh/blog/_posts/2015-03-00-Welcome-To-Kubernetes-Blog.md
@@ -2,12 +2,11 @@
title: 欢迎来到 Kubernetes 博客!
date: 2015-03-20
slug: welcome-to-kubernetes-blog
-url: /blog/2015/03/Welcome-To-Kubernetes-Blog
---
每个星期,Kubernetes 贡献者社区几乎都会在谷歌 Hangouts 上聚会。我们希望任何对此感兴趣的人都能了解这个论坛的讨论内容。
-议程
+议程
* Mesos 集成
* 高可用性(HA)
@@ -36,7 +35,7 @@ Agenda
* 客户端版本化
笔记
@@ -71,7 +70,7 @@ Notes
* Load-balance apiserver.
* Cold standby for controller manager and other master components.
-
+
-->
* HA
@@ -95,7 +94,7 @@ Notes
* See
* Justin working on multi-platform e2e dashboard
-
+
-->
* 向 e2e 添加性能和分析详细信息以跟踪回归
@@ -123,7 +122,7 @@ Notes
* Structured types are useful in the client. Versioned structs would be ok.
* If start with json/yaml (kubectl), shouldn’t convert to structured types. Use swagger.
-
+
-->
* 客户端版本化
diff --git a/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md b/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md
index c451022671a51..399d92679201d 100644
--- a/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md
+++ b/content/zh/blog/_posts/2015-04-00-Weekly-Kubernetes-Community-Hangout_29.md
@@ -2,7 +2,6 @@
title: " Kubernetes 社区每周聚会笔记- 2015年4月24日 "
date: 2015-04-30
slug: weekly-kubernetes-community-hangout_29
-url: /blog/2015/04/Weekly-Kubernetes-Community-Hangout_29
---
每个星期,Kubernetes 贡献者社区几乎都会在谷歌 Hangouts 上聚会。我们希望任何对此感兴趣的人都能了解这个论坛的讨论内容。
@@ -85,7 +84,7 @@ Notes:
* Brendan: 请求,它如何查找重复请求?Cassandra 希望在底层复制数据。向上和向下扩缩是有效的。根据负载动态地创建存储。它的步骤不仅仅是快照——通过编程使用预分配创建副本。
* Tim: 帮助自动配置。
-
+
+-->
* 简单的滚动更新 - Brendan
@@ -58,7 +57,7 @@ Every week the Kubernetes contributing community meet virtually over Google Hang
* Can run AppContainer and docker containers in same pod.
* Changes are close to merged.
-
+
-->
* Rocket 演示 - CoreOS 的伙计们
@@ -88,7 +87,7 @@ Every week the Kubernetes contributing community meet virtually over Google Hang
* * Can create new service account with ServiceAccountToken. Controller will create token for it.
* Can create a pod with service account, pods will have service account secret mounted at /var/run/secrets/kubernetes.io/…
-
+
-->
* 演示 service accounts 和 secrets 被添加到 pod - Jordan
@@ -106,16 +105,16 @@ Every week the Kubernetes contributing community meet virtually over Google Hang
* * 可以使用 ServiceAccountToken 创建新的 service account。控制器将为它创建令牌。
* 可以创建一个带有 service account 的 pod, pod 将在 /var/run/secrets/kubernets.io/…
-
-
-
+
* Kubelet 在容器中运行 - Paul
* Kubelet 成功地运行了带有 secret 的 pod。
-
+
diff --git a/content/zh/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md b/content/zh/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md
index 8e1c001d65373..e2937cd16ef10 100644
--- a/content/zh/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md
+++ b/content/zh/blog/_posts/2015-06-00-Slides-Cluster-Management-With.md
@@ -2,7 +2,6 @@
title: "幻灯片:Kubernetes 集群管理,爱丁堡大学演讲"
date: 2015-06-26
slug: slides-cluster-management-with
-url: /blog/2015/06/Slides-Cluster-Management-With
---
_今天的嘉宾帖子是由 IT 自动化领域的领导者 Puppet Labs 的高级软件工程师 Gareth Rushgrove 撰写的。Gareth告诉我们一个新的 Puppet 模块,它帮助管理 Kubernetes 中的资源。_
@@ -27,7 +26,7 @@ _今天的嘉宾帖子是由 IT 自动化领域的领导者 Puppet Labs 的高
### Puppet Kubernetes 模块
@@ -48,7 +47,7 @@ kubernetes_pod { 'sample-pod':
}]
},
```
-}
+}
-->
```
@@ -63,10 +62,10 @@ kubernetes_pod { 'sample-pod':
image => 'nginx',
}]
},
-}
+}
```
@@ -91,7 +90,7 @@ Kubernetes has several resources, from Pods and Services to Replication Controll
Kubernetes 有很多资源,来自 Pods、 Services、 Replication Controllers 和 Service Accounts。您可以在[Puppet 中的 kubernetes 留言簿示例](https://puppetlabs.com/blog/kubernetes-guestbook-example-puppet)文章中看到管理这些资源的模块示例。这演示了如何将规范的 hello-world 示例转换为使用 Puppet代码。
@@ -113,17 +112,17 @@ guestbook { 'myguestbook':
frontend_replicas => 3,
redis_master_image => 'redis',
redis_slave_image => 'gcr.io/google_samples/gb-redisslave:v1',
- frontend_image => 'gcr.io/google_samples/gb-frontend:v3',
+ frontend_image => 'gcr.io/google_samples/gb-frontend:v3',
}
```
您可以在Puppet博客文章[在 Puppet 中为 Kubernetes 构建自己的抽象](https://puppetlabs.com/blog/building-your-own-abstractions-kubernetes-puppet)中阅读更多关于使用 Puppet 定义的类型的信息,并看到更多的代码示例。
@@ -146,13 +145,13 @@ The advantages of using Puppet rather than just the standard YAML files and kube
- 能够针对 Kubernetes API 重复运行相同的代码,以检测任何更改或修正配置。
值得注意的是,大多数大型组织都将拥有非常异构的环境,运行各种各样的软件和操作系统。拥有统一这些离散系统的单一工具链可以使采用 Kubernetes 等新技术变得更加容易。
diff --git a/content/zh/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md b/content/zh/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md
index bcea1c026e4cc..a1c9bb3c3b8e5 100644
--- a/content/zh/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md
+++ b/content/zh/blog/_posts/2016-01-00-Simple-Leader-Election-With-Kubernetes.md
@@ -3,7 +3,6 @@
title: " Simple leader election with Kubernetes and Docker "
date: 2016-01-11
slug: simple-leader-election-with-kubernetes
-url: /blog/2016/01/Simple-Leader-Election-With-Kubernetes
---
#### Overview
@@ -59,13 +58,13 @@ Given these primitives, the code to use master election is relatively straightfo
给定这些原语,使用 master election 的代码相对简单,您可以在这里找到[here][1]。我们自己来做吧。
-```
+```
$ kubectl run leader-elector --image=gcr.io/google_containers/leader-elector:0.4 --replicas=3 -- --election=example
```
这将创建一个包含3个副本的 leader election 集合:
-```
+```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
leader-elector-inmr1 1/1 Running 0 13s
@@ -91,7 +90,7 @@ leader-elector-sgwcq 1/1 Running 0 13s
@@ -127,7 +126,7 @@ _'example' 是上面 kubectl run … 命令_中候选集的名称
$ kubectl get endpoints example -o yaml
```
现在,要验证 leader election 是否实际有效,请在另一个终端运行:
-```
+```
$ kubectl delete pods (leader-pod-name)
```
@@ -142,7 +141,7 @@ The leader-election container provides a simple webserver that can serve on any
Leader-election container 提供了一个简单的 web 服务器,可以服务于任何地址(e.g. http://localhost:4040)。您可以通过删除现有的 leader election 组并创建一个新的 leader elector 组来测试这一点,在该组中,您还可以向 leader elector 映像传递--http=(host):(port) 规范。这将导致集合中的每个成员通过 webhook 提供有关领导者的信息。
-```
+```
# delete the old leader elector group
$ kubectl delete rc leader-elector
@@ -174,7 +173,7 @@ http://localhost:8001/api/v1/proxy/namespaces/default/pods/(leader-pod-name):404
And you will see:
-```
+```
{"name":"(name-of-leader-here)"}
```
#### Leader election with sidecars
@@ -192,7 +191,7 @@ http://localhost:8001/api/v1/proxy/namespaces/default/pods/(leader-pod-name):404
你会看到:
-```
+```
{"name":"(name-of-leader-here)"}
```
#### 有副手的 leader election
@@ -209,7 +208,7 @@ Leader-election container 可以作为一个 sidecar,您可以从自己的应
-* “Kubernetes 硬件黑客:通过旋钮、推杆和滑块探索 Kubernetes API” 演讲者 Ian Lewis 和 Brian Dorsey,谷歌开发布道师* [http://sched.co/6Bl3](http://sched.co/6Bl3)
+* “Kubernetes 硬件黑客:通过旋钮、推杆和滑块探索 Kubernetes API” 演讲者 Ian Lewis 和 Brian Dorsey,谷歌开发布道师* [http://sched.co/6Bl3](http://sched.co/6Bl3)
* “rktnetes: 容器运行时和 Kubernetes 的新功能” 演讲者 Jonathan Boulle, CoreOS 的主程 -* [http://sched.co/6BY7](http://sched.co/6BY7)
* “Kubernetes 文档:贡献、修复问题、收集奖金” 作者:John Mulhausen,首席技术作家,谷歌 -* [http://sched.co/6BUP](http://sched.co/6BUP)
* “[OpenStack 在 Kubernetes 的世界中扮演什么角色?](https://kubeconeurope2016.sched.org/event/6BYC/what-is-openstacks-role-in-a-kubernetes-world?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” 作者:Thierry carez, OpenStack 基金会工程总监 -* http://sched.co/6BYC
-* “容器调度的实用指南” 作者:Mandy Waite,开发者倡导者,谷歌 -* [http://sched.co/6BZa](http://sched.co/6BZa)
+* “容器调度的实用指南” 作者:Mandy Waite,开发者倡导者,谷歌 -* [http://sched.co/6BZa](http://sched.co/6BZa)
* “[《纽约时报》编辑部正在制作 Kubernetes](https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” Eric Lewis,《纽约时报》网站开发人员 -* [http://sched.co/67f2](http://sched.co/67f2)
* “[使用 NGINX 为 Kubernetes 创建一个高级负载均衡解决方案](https://kubeconeurope2016.sched.org/event/6Bc9/creating-an-advanced-load-balancing-solution-for-kubernetes-with-nginx?iframe=yes&w=i:0;&sidebar=yes&bg=no#?iframe=yes&w=i:100;&sidebar=yes&bg=no)” 作者:Andrew Hutchings, NGINX 技术产品经理 -* http://sched.co/6Bc9
@@ -61,11 +60,11 @@ Get your KubeCon EU [tickets here](https://ti.to/kubecon/kubecon-eu-2016).
[在这里](https://ti.to/kubecon/kubecon-eu-2016)获取您的 KubeCon EU 门票。
会场地址:CodeNode * 英国伦敦南广场 10 号
酒店住宿:[酒店](https://skillsmatter.com/contact-us)
@@ -74,9 +73,9 @@ Google is a proud Diamond sponsor of KubeCon EU 2016. Come to London next month,
谷歌是 KubeCon EU 2016 的钻石赞助商。下个月 3 月 10 - 11 号来伦敦,参观 13 号展位,了解 Kubernetes,Google Container Engine(GKE),Google Cloud Platform 的所有信息!
_KubeCon 是由 KubeAcademy、LLC 组织的,这是一个由社区驱动的开发者团体,专注于开发人员的教育和 kubernet.com 的推广
diff --git a/content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md b/content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md
index 5d1037f747226..608a5bbd33b36 100644
--- a/content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md
+++ b/content/zh/blog/_posts/2016-02-00-Kubernetes-Community-Meeting-Notes.md
@@ -2,7 +2,6 @@
title: " Kubernetes 社区会议记录 - 20160204 "
date: 2016-02-09
slug: kubernetes-community-meeting-notes
-url: /blog/2016/02/Kubernetes-Community-Meeting-Notes
---
* 书记员:Rob Hirschfeld
* 演示视频(20分钟):CoreOS rkt + Kubernetes[Shaya Potter]
- * 期待在未来几个月内看到与rkt和k8s的整合(“rkt-netes”)。 还没有集成到 v1.2版本中。
+ * 期待在未来几个月内看到与rkt和k8s的整合(“rkt-netes”)。 还没有集成到 v1.2版本中。
* Shaya 做了一个演示(8分钟的会议视频参考)
* rkt的CLI显示了旋转容器
* [注意:音频在点数上是乱码]
@@ -46,11 +45,11 @@ Kubernetes 贡献社区在每周四 10:00 PT 开会,通过视频会议讨论项
* Dawn Chen:
* 将 rkt 与 kubernetes 集成的其余问题:1)cadivsor 2) DNS 3)与日志记录相关的错误
* 但是需要在 e2e 测试套件上做更多的工作
-
* 用例(10分钟):在 OpenStack 上的 eBay k8s 和 k8s 上的 OpenStack [Ashwin Raveendran]
@@ -90,7 +89,7 @@ Kubernetes 贡献社区在每周四 10:00 PT 开会,通过视频会议讨论项
* 我们希望在多个平台上进行测试的共识。
* 为测试报告提供一个全面转储会很有帮助
* 可以使用"phone-home"收集异常
-
+
要参与 Kubernetes 社区,请考虑加入我们的[Slack 频道][2],查看 GitHub上的 [Kubernetes 项目][3],或加入[Kubernetes-dev Google 小组][4]。如果你真的很兴奋,你可以完成上述所有工作并加入我们的下一次社区对话-2016年2月11日。请将您自己或您想要了解的主题添加到[议程][5]并通过加入[此组][6]来获取日历邀请。
diff --git a/content/zh/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md b/content/zh/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md
index b0a27044dcc1c..45ab59e3af0e8 100644
--- a/content/zh/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md
+++ b/content/zh/blog/_posts/2016-07-00-Citrix-Netscaler-And-Kubernetes.md
@@ -2,7 +2,6 @@
title: " Citrix + Kubernetes = 全垒打 "
date: 2016-07-14
slug: citrix-netscaler-and-kubernetes
-url: /blog/2016/07/Citrix-Netscaler-And-Kubernetes
---
-编者按:今天的客座文章来自 Citrix Systems 的产品管理总监 Mikko Disini,他分享了他们在 Kubernetes 集成上的合作经验。 _
+_编者按:今天的客座文章来自 Citrix Systems 的产品管理总监 Mikko Disini,他分享了他们在 Kubernetes 集成上的合作经验。_
技术合作就像体育运动。如果你能像一个团队一样合作,你就能在最后关头取得胜利。这就是我们对谷歌云平台团队的经验。
-最近,我们与 Google 云平台(GCP)联系,代表 Citrix 客户以及更广泛的企业市场,希望就工作负载的迁移进行协作。此迁移需要将 [NetScaler Docker 负载均衡器]https://www.citrix.com/blogs/2016/06/20/the-best-docker-load-balancer-at-dockercon-in-seattle-this-week/) CPX 包含到 Kubernetes 节点中,并解决将流量引入 CPX 代理的任何问题。
+最近,我们与 Google 云平台(GCP)联系,代表 Citrix 客户以及更广泛的企业市场,希望就工作负载的迁移进行协作。此迁移需要将 [NetScaler Docker 负载均衡器](https://www.citrix.com/blogs/2016/06/20/the-best-docker-load-balancer-at-dockercon-in-seattle-this-week/) CPX 包含到 Kubernetes 节点中,并解决将流量引入 CPX 代理的任何问题。
**为什么是 NetScaler 和 Kubernetes**
@@ -42,16 +41,16 @@ Recently, we approached Google Cloud Platform (GCP) to collaborate on behalf of
-->
1. Citrix 的客户希望他们开始使用 Kubernetes 部署他们的容器和微服务体系结构时,能够像当初迁移到云计算时一样,享有 NetScaler 所提供的第 4 层到第 7 层能力
-2. Kubernetes 提供了一套经过验证的基础设施,可用来运行容器和虚拟机,并自动交付工作负载;
+2. Kubernetes 提供了一套经过验证的基础设施,可用来运行容器和虚拟机,并自动交付工作负载;
3. NetScaler CPX 提供第 4 层到第 7 层的服务,并为日志和分析平台 [NetScaler 管理和分析系统](https://www.citrix.com/blogs/2016/05/24/introducing-the-next-generation-netscaler-management-and-analytics-system/) 提供高效的度量数据。
我希望我们所有与技术合作伙伴一起工作的经验都能像与 GCP 一起工作一样好。我们有一个列表,包含支持我们的用例所需要解决的问题。我们能够快速协作形成解决方案。为了解决这些问题,GCP 团队提供了深入的技术支持,与 Citrix 合作,从而使得 NetScaler CPX 能够在每台主机上作为客户端代理启动运行。
接下来,需要在 GCP 入口负载均衡器的数据路径中插入 NetScaler CPX,使 NetScaler CPX 能够将流量分散到前端 web 服务器。NetScaler 团队进行了修改,以便 NetScaler CPX 监听 API 服务器事件,并配置自己来创建 VIP、IP 表规则和服务器规则,以便跨前端应用程序接收流量和负载均衡。谷歌云平台团队提供反馈和帮助,验证为克服技术障碍所做的修改。完成了!
@@ -61,7 +60,7 @@ NetScaler CPX use case is supported in [Kubernetes 1.3](https://kubernetes.io/bl
NetScaler CPX 用例在 [Kubernetes 1.3](https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/) 中提供支持。Citrix 的客户和更广泛的企业市场将有机会基于 Kubernetes 享用 NetScaler 服务,从而降低将工作负载转移到云平台的阻力。
您可以在[此处](https://www.citrix.com/networking/microservices.html)了解有关 NetScaler CPX 的更多信息。
diff --git a/content/zh/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md b/content/zh/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md
index 935cbcf325033..dd0533def361c 100644
--- a/content/zh/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md
+++ b/content/zh/blog/_posts/2016-07-00-Dashboard-Web-Interface-For-Kubernetes.md
@@ -2,7 +2,6 @@
title: " Dashboard - Kubernetes 的全功能 Web 界面 "
date: 2016-07-15
slug: dashboard-web-interface-for-kubernetes
-url: /blog/2016/07/Dashboard-Web-Interface-For-Kubernetes
---
_编者按:这篇文章是[一系列深入的文章](https://kubernetes.io/blog/2016/07/five-days-of-kubernetes-1-3) 中关于Kubernetes 1.3的新内容的一部分_
[Kubernetes Dashboard](http://github.com/kubernetes/dashboard)是一个旨在为 Kubernetes 世界带来通用监控和操作 Web 界面的项目。三个月前,我们[发布](https://kubernetes.io/blog/2016/04/building-awesome-user-interfaces-for-kubernetes)第一个面向生产的版本,从那时起 dashboard 已经做了大量的改进。在一个 UI 中,您可以在不离开浏览器的情况下,与 Kubernetes 集群执行大多数可能的交互。这篇博客文章分解了最新版本中引入的新功能,并概述了未来的路线图。
-**全功能的 Dashboard**
+**全功能的 Dashboard**
由于社区和项目成员的大量贡献,我们能够为[Kubernetes 1.3发行版](https://kubernetes.io/blog/2016/07/kubernetes-1-3-bridging-cloud-native-and-enterprise-workloads/)提供许多新功能。我们一直在认真听取用户的反馈(参见[摘要信息图表](http://static.lwy.io/img/kubernetes_dashboard_infographic.png)),并解决了最高优先级的请求和难点。
-->
-_编者按,今天的嘉宾帖子来自一位独立的 kubernetes 撰稿人 Justin Santa Barbara,分享了他对项目从一开始到未来发展的思考。_
+_编者按,今天的嘉宾帖子来自一位独立的 kubernetes 撰稿人 Justin Santa Barbara,分享了他对项目从一开始到未来发展的思考。_
-**亲爱的 K8s,**
+**亲爱的 K8s,**
-_很难相信你是唯一的一个 - 成长这么快的。在你一岁生日的时候,我想我可以写一个小纸条,告诉你为什么我在你出生的时候那么兴奋,为什么我觉得很幸运能成为抚养你长大的一员,为什么我渴望看到你继续成长!_
+_很难相信你是唯一的一个 - 成长这么快的。在你一岁生日的时候,我想我可以写一个小纸条,告诉你为什么我在你出生的时候那么兴奋,为什么我觉得很幸运能成为抚养你长大的一员,为什么我渴望看到你继续成长!_
-_--Justin_
+_--Justin_
你从一个优秀的基础 - 良好的声明性功能开始,它是围绕一个具有良好定义的模式和机制的坚实的 API 构建的,这样我们就可以向前发展了。果然,在你的第一年里,你增长得如此之快:autoscaling、HTTP load-balancing support (Ingress)、support for persistent workloads including clustered databases (PetSets)。你已经和更多的云交了朋友(欢迎 azure 和 openstack 加入家庭),甚至开始跨越区域和集群(Federation)。这些只是一些最明显的变化 - 在你的大脑里发生了太多的变化!
我觉得你一直保持开放的态度真是太好了 - 你好像把所有的东西都写在 github 上 - 不管是好是坏。我想我们在这方面都学到了很多,比如让工程师做缩放声明的风险,然后在没有完全相同的精确性和严谨性框架的情况下,将这些声明与索赔进行权衡。但我很自豪你选择了不降低你的标准,而是上升到挑战,只是跑得更快 - 这可能不是最现实的办法,但这是唯一的方式能移动山!
然而,不知何故,你已经设法避免了许多其他开源软件陷入的共同死胡同,特别是当那些项目越来越大,开发人员最终要做的比直接使用它更多的时候。你是怎么做到的?有一个很可能是虚构的故事,讲的是 IBM 的一名员工犯了一个巨大的错误,被传唤去见大老板,希望被解雇,却被告知“我们刚刚花了几百万美元培训你。我们为什么要解雇你?“。尽管谷歌对你进行了大量的投资(包括 redhat 和其他公司),但我有时想知道,我们正在避免的错误是否更有价值。有一个非常开放的开发过程,但也有一个“oracle”,它有时会通过告诉我们两年后如果我们做一个特定的设计决策会发生什么来纠正错误。这是你应该听的父母!
所以,尽管你只有一岁,你真的有一个[旧灵魂](http://queue.acm.org/detail.cfm?ID=2898444)。我只是[很多人抚养你](https://kubernetes.io/blog/2016/07/happy-k8sbday-1)中的一员,但对我来说,能够与那些建立了这些令人难以置信的系统并拥有所有这些领域知识的人一起工作是一次极好的学习经历。然而,因为我们是白手起家(而不是采用现有的 Borg 代码),我们处于同一水平,仍然可以就如何培养你进行真正的讨论。好吧,至少和我们的水平一样接近,但值得称赞的是,他们都太好了,从来没提过!
@@ -67,9 +66,9 @@ If I would pick just two of the wise decisions those brilliant people made:
- 控制器是状态同步器:我们指定目标,您的控制器将不遗余力地工作,使系统达到该状态。它们工作在强类型 API 基础上,并且贯穿整个代码,因此 Kubernetes 比一个大的程序多一百个小程序。仅仅从技术上扩展到数千个节点是不够的;这个项目还必须扩展到数千个开发人员和特性;控制器帮助我们达到目的。
等等我们就走!我们将取代那些控制器,建立更多,API 基金会让我们构建任何我们可以用这种方式表达的东西 - 大多数东西只是标签或注释远离!但你的思想不会由语言来定义:有了第三方资源,你可以表达任何你选择的东西。现在我们可以不用在 Kubernetes 建造Kubernetes 了,创造出与其他任何东西一样感觉是 Kubernetes 的一部分的东西。最近添加的许多功能,如ingress、DNS integration、autoscaling and network policies ,都已经完成或可以通过这种方式完成。最终,在这些事情发生之前很难想象你会是怎样的一个人,但是明天的标准功能可以从今天开始,没有任何障碍或看门人,甚至对一个听众来说也是这样。
@@ -77,13 +76,13 @@ So I’m looking forward to seeing more and more growth happen further and furth
所以我期待着看到越来越多的增长发生在离 Kubernetes 核心越来越远的地方。我们必须通过这些阶段来工作;从需要在 kubernetes 内核中发生的事情开始——比如用部署替换复制控制器。现在我们开始构建不需要核心更改的东西。但我们仍然在讨论基础设施和应用程序。接下来真正有趣的是:当我们开始构建依赖于 kubernetes api 的应用程序时。我们一直有使用 kubernetes api 进行自组装的 cassandra 示例,但我们还没有真正开始更广泛地探讨这个问题。正如 S3 APIs 改变了我们构建记忆事物的方式一样,我认为 k8s APIs 也将改变我们构建思考事物的方式。
所以我很期待你的二岁生日:我可以试着预测你那时的样子,但我知道你会超越我所能想象的最大胆的东西。哦,这是你要去的地方!
-_-- Justin Santa Barbara, 独立的 Kubernetes 贡献者_
+_-- Justin Santa Barbara, 独立的 Kubernetes 贡献者_
diff --git a/content/zh/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md b/content/zh/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md
index 979d5cd931101..eee16d7a497b2 100644
--- a/content/zh/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md
+++ b/content/zh/blog/_posts/2017-10-00-Five-Days-Of-Kubernetes-18.md
@@ -2,7 +2,6 @@
title: " Kubernetes 1.8 的五天 "
date: 2017-10-24
slug: five-days-of-kubernetes-18
-url: /blog/2017/10/Five-Days-Of-Kubernetes-18
---
Kubernetes 允许开发人员根据当前的流量和负载自动调整集群大小和 pod 副本的数量。这些调整减少了未使用节点的数量,节省了资金和资源。
在这次演讲中,谷歌的 Marcin Wielgus 将带领您了解 Kubernetes 中 pod 和 node 自动调焦的当前状态:它是如何工作的,以及如何使用它,包括在生产应用程序中部署的最佳实践。
喜欢这个演讲吗? 12 月 6 日至 8 日,在 Austin 参加 KubeCon 关于扩展和自动化您的 Kubernetes 集群的更令人兴奋的会议。[现在注册](https://www.eventbrite.com/e/kubecon-cloudnativecon-north-america-registration-37824050754?_ga=2.9666039.317115486.1510003873-1623727562.1496428006)。
diff --git a/content/zh/docs/concepts/architecture/controller.md b/content/zh/docs/concepts/architecture/controller.md
index a28f2a577db44..ecbd1f372646a 100644
--- a/content/zh/docs/concepts/architecture/controller.md
+++ b/content/zh/docs/concepts/architecture/controller.md
@@ -51,7 +51,7 @@ detail.
-->
## 控制器模式 {#controller-pattern}
-一个控制器至少追踪一种类型的 Kubernetes 资源。这些[对象](/docs/concepts/overview/working-with-objects/kubernetes-objects/)有一个代表期望状态的指定字段。正对这种资源的控制器就是要使他的当前状态接近与期望状态。
+一个控制器至少追踪一种类型的 Kubernetes 资源。这些[对象](/docs/concepts/overview/working-with-objects/kubernetes-objects/)有一个代表期望状态的指定字段。控制器负责确保其追踪的资源对象的当前状态接近期望状态。
控制器可能会自行执行操作;在 Kubernetes 中更常见的是一个控制器会发送信息给 {{< glossary_tooltip text="API 服务器" term_id="kube-apiserver" >}},这会有副作用。看下面这个例子。
diff --git a/content/zh/docs/concepts/cluster-administration/monitoring.md b/content/zh/docs/concepts/cluster-administration/monitoring.md
new file mode 100644
index 0000000000000..b2ff727eac241
--- /dev/null
+++ b/content/zh/docs/concepts/cluster-administration/monitoring.md
@@ -0,0 +1,222 @@
+---
+title: Kubernetes 控制平面的指标
+content_template: templates/concept
+weight: 60
+---
+
+{{% capture overview %}}
+
+
+
+系统组件的指标可以让我们更好地看清系统内部究竟发生了什么,尤其对于构建仪表盘和告警非常有用。
+
+Kubernetes 控制平面中的指标以 [Prometheus](https://prometheus.io/docs/instrumenting/exposition_formats/) 格式发出,且易于阅读。
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+
+## Kubernetes 的指标
+
+在大多数情况下,指标通过 HTTP 服务器的 `/metrics` 端点暴露。对于默认情况下不暴露该端点的组件,可以使用 `--bind-address` 参数启用。
+
+
+
+举例下面这些组件:
+
+* {{< glossary_tooltip term_id="kube-controller-manager" text="kube-controller-manager" >}}
+* {{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}}
+* {{< glossary_tooltip term_id="kube-apiserver" text="kube-apiserver" >}}
+* {{< glossary_tooltip term_id="kube-scheduler" text="kube-scheduler" >}}
+* {{< glossary_tooltip term_id="kubelet" text="kubelet" >}}
+
+
+
+在生产环境中,你可能需要配置 [Prometheus Server](https://prometheus.io/) 或其他指标收集器来定期收集这些指标,并使它们在某种时间序列数据库中可用。
+
+请注意 {{< glossary_tooltip term_id="kubelet" text="kubelet" >}} 同样在 `/metrics/cadvisor`、`/metrics/resource` 和 `/metrics/probes` 等端点提供性能指标。这些指标的生命周期并不相同。
+
+如果你的集群还使用了 {{< glossary_tooltip term_id="rbac" text="RBAC" >}} ,那读取指标数据的时候,还需要通过具有 ClusterRole 的用户、组或者 ServiceAccount 来进行授权,才有权限访问 `/metrics` 。
+
+举例:
+
+```
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: prometheus
+rules:
+ - nonResourceURLs:
+ - "/metrics"
+ verbs:
+ - get
+```
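+
+在完成上述授权之后,下面给出一个假设性的 Prometheus 抓取配置草案,用来演示如何定期从 kube-apiserver 的 `/metrics` 端点收集指标(job 名称、证书与令牌路径均为示意,实际配置取决于你的部署方式):
+
+```yaml
+# 假设性示例:通过 Kubernetes 服务发现抓取 kube-apiserver 的 /metrics
+scrape_configs:
+  - job_name: kube-apiserver            # 假设的 job 名称
+    scheme: https
+    kubernetes_sd_configs:
+      - role: endpoints                 # 通过 endpoints 发现 apiserver
+    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+    tls_config:
+      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    relabel_configs:
+      # 只保留 default 命名空间中 kubernetes 服务的 https 端口
+      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
+        action: keep
+        regex: default;kubernetes;https
+```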
+
+
+
+## 指标的生命周期
+
+内测版指标 → 稳定版指标 → 弃用指标 → 隐藏指标 → 删除
+
+内测版指标没有任何稳定性保证,因此可能随时被修改或删除。
+
+稳定版指标可以保证不会改变,具体的说,稳定就意味着:
+
+
+
+* 这个指标自身不会被删除或者重命名。
+* 这个指标类型不会被更改
+
+
+
+弃用指标表明这个指标最终将会被删除,要想查找是哪个版本,你需要检查其注释,注释中包括该指标从哪个 kubernetes 版本被弃用。
+
+指标弃用前:
+
+```
+# HELP some_counter this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+
+
+指标弃用后:
+
+```
+# HELP some_counter (Deprecated since 1.15.0) this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+
+
+一个指标一旦被隐藏,默认情况下它就不会再被发布出来供抓取。如果你想要使用某个隐藏指标,需要覆盖相关集群组件的配置。
+
+一个指标一旦被删除,那这个指标就不会被发布,您也不可以通过覆盖配置来进行更改。
+
+
+
+## 显示隐藏指标
+
+如上所述,管理员可以在运行可执行文件时添加特定的参数来重新开启隐藏的指标。如果管理员没有及时迁移上一版本中已弃用的指标,这可以作为一种应急手段。
+
+`show-hidden-metrics-for-version` 参数可以指定一个版本,用来显示这个版本中被隐藏的指标。这个版本号形式是x.y,x 是主要版本号,y 是次要版本号。补丁版本并不是必须的,尽管在一些补丁版本中也会有一些指标会被弃用,因为指标弃用策略主要是针对次要版本。
+
+这个参数只能使用上一版本作为其值,如果管理员将上一版本设置为 `show-hidden-metrics-for-version` 的值,那么就会显示上一版本所有被隐藏的指标,太老的版本是不允许的,因为这不符合指标弃用策略。
+
+以指标 `A` 为例,这里假设 `A` 指标在 1.n 版本中被弃用,根据指标弃用策略,我们可以得出以下结论:
+
+
+
+* 在 `1.n` 版本中,这个指标被弃用,默认情况下仍然可以发出。
+* 在 `1.n+1` 版本中,这个指标默认被隐藏,你可以通过设置参数 `show-hidden-metrics-for-version=1.n` 来使它重新发出。
+* 在 `1.n+2` 版本中,这个指标就被从代码库中删除,不再有任何应急手段。
+
+如果你想要从 `1.12` 版本升级到 `1.13`,但仍然需要依赖指标 `A`,你可以通过命令行参数 `--show-hidden-metrics-for-version=1.12` 显示这个隐藏指标,但在升级到 `1.14` 之前就必须去掉对这个指标的依赖,因为那个版本中该指标已经被删除了。
+
+
+
+## 组件指标
+
+### kube-controller-manager 指标
+
+控制器管理器指标提供了有关控制器管理器性能和运行状况的重要见解。这些指标包括常见的一些 Go 语言运行时的重要指标(比如 go_routine 的数量)和一些控制器的特定指标(比如 etcd 的请求时延),还有一些云供应商(比如 AWS、GCE、OpenStack)的 API 请求延迟,用来评估集群的整体运行状况。
+
+从 Kubernetes 1.7 开始,详细的云供应商指标便可用于 GCE、 AWS、Vsphere 和 OpenStack 的存储操作,这些指标可用于监控持久卷运行时的健康状况。
+
+例如,GCE 的这类指标如下:
+
+```
+cloudprovider_gce_api_request_duration_seconds { request = "instance_list"}
+cloudprovider_gce_api_request_duration_seconds { request = "disk_insert"}
+cloudprovider_gce_api_request_duration_seconds { request = "disk_delete"}
+cloudprovider_gce_api_request_duration_seconds { request = "attach_disk"}
+cloudprovider_gce_api_request_duration_seconds { request = "detach_disk"}
+cloudprovider_gce_api_request_duration_seconds { request = "list_disk"}
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+
+* 了解有关 [Prometheus 指标相关的文本格式](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#text-based-format)
+* 查看 [Kubernetes 稳定版指标](https://github.com/kubernetes/kubernetes/blob/master/test/instrumentation/testdata/stable-metrics-list.yaml)列表
+* 了解有关 [Kubernetes 指标弃用策略](https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-a-feature-or-behavior )
+
+{{% /capture %}}
diff --git a/content/zh/docs/concepts/configuration/manage-compute-resources-container.md b/content/zh/docs/concepts/configuration/manage-compute-resources-container.md
index 627e27a2ad23d..c3e6b6af4d1a8 100644
--- a/content/zh/docs/concepts/configuration/manage-compute-resources-container.md
+++ b/content/zh/docs/concepts/configuration/manage-compute-resources-container.md
@@ -275,7 +275,7 @@ resource limits, see the
The resource usage of a Pod is reported as part of the Pod status.
-If [optional monitoring](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md)
+If [optional monitoring](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)
is configured for your cluster, then Pod resource usage can be retrieved from
the monitoring system.
-->
@@ -284,7 +284,7 @@ the monitoring system.
Pod 的资源使用情况被报告为 Pod 状态的一部分。
-如果为集群配置了 [可选监控](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/cluster-monitoring/README.md),则可以从监控系统检索 Pod 资源的使用情况。
+如果为集群配置了 [可选监控](/docs/tasks/debug-application-cluster/resource-metrics-pipeline/),则可以从监控系统检索 Pod 资源的使用情况。
+
+{{% capture overview %}}
+
+
+
+Operator 是 Kubernetes 的扩展软件,它利用[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)管理应用及其组件。
+Operator 遵循 Kubernetes 的理念,特别是在[控制环](/docs/concepts/#kubernetes-control-plane)方面。
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+
+
+## 初衷
+
+Operator 模式旨在捕获(正在管理一个或一组服务的)运维人员的关键目标。
+负责特定应用和 service 的运维人员,在系统应该如何运行、如何部署以及出现问题时如何处理等方面有深入的了解。
+
+在 Kubernetes 上运行工作负载的人们都喜欢通过自动化来处理重复的任务。Operator 模式会封装您编写的(Kubernetes 本身提供功能以外的)任务自动化代码。
+
+
+
+## Kubernetes 上的 Operator
+
+Kubernetes 为自动化而生。无需任何修改,您即可以从 Kubernetes 核心中获得许多内置的自动化功能。
+您可以使用 Kubernetes 自动化部署和运行工作负载, *甚至* 可以自动化 Kubernetes 自身。
+
+Kubernetes {{< glossary_tooltip text="控制器" term_id="controller" >}} 使您无需修改 Kubernetes 自身的代码,即可以扩展集群的行为。
+Operator 是 Kubernetes API 的客户端,充当[自定义资源](/docs/concepts/api-extension/custom-resources/)的控制器。
+
+
+
+## Operator 示例 {#example}
+
+使用 Operator 可以自动化的事情包括:
+
+* 按需部署应用
+* 获取/还原应用状态的备份
+* 处理应用代码的升级以及相关改动。例如,数据库 schema 或额外的配置设置
+* 发布一个 service,要求不支持 Kubernetes API 的应用也能发现它
+* 模拟整个或部分集群中的故障以测试其稳定性
+* 在没有内部成员选举程序的情况下,为分布式应用选择首领角色
+
+
+
+想要更详细的了解 Operator?这儿有一个详细的示例:
+
+1. 有一个名为 SampleDB 的自定义资源,您可以将其配置到集群中(参见列表之后的示例清单草案)。
+2. 一个包含 Operator 控制器部分的 Deployment,用来确保 Pod 处于运行状态。
+3. Operator 代码的容器镜像。
+4. 控制器代码,负责查询控制平面以找出已配置的 SampleDB 资源。
+5. Operator 的核心是告诉 API 服务器,如何使现实与代码里配置的资源匹配。
+ * 如果添加新的 SampleDB,Operator 将设置 PersistentVolumeClaims 以提供持久化的数据库存储,设置 StatefulSet 以运行 SampleDB,并设置 Job 来处理初始配置。
+ * 如果您删除它,Operator 将建立快照,然后确保 StatefulSet 和 Volume 已被删除。
+6. Operator 也可以管理常规数据库的备份。对于每个 SampleDB 资源,Operator 会确定何时创建(可以连接到数据库并进行备份的)Pod。这些 Pod 将依赖于 ConfigMap 和/或 具有数据库连接详细信息和凭据的 Secret。
+7. 由于 Operator 旨在为其管理的资源提供强大的自动化功能,因此它还需要一些额外的支持性代码。在这个示例中,代码将检查数据库是否正运行在旧版本上,如果是,则创建 Job 对象为您升级数据库。
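+
+作为补充,下面是一个与上述流程相对应、纯属虚构的 SampleDB 自定义资源清单草案(API 组、版本和字段名都是假设的,仅用于示意 Operator 所管理的期望状态):
+
+```yaml
+apiVersion: samples.example.com/v1      # 虚构的 API 组与版本
+kind: SampleDB
+metadata:
+  name: example-database
+spec:
+  storageGB: 10                         # Operator 据此创建 PersistentVolumeClaim
+  backupSchedule: "0 3 * * *"           # Operator 据此安排备份 Job
+```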
+
+
+
+## 部署 Operator
+
+部署 Operator 最常见的方法是将自定义资源及其关联的控制器添加到您的集群中。跟运行容器化应用一样,Controller 通常会运行在 {{< glossary_tooltip text="控制平面" term_id="control-plane" >}} 之外。例如,您可以在集群中将控制器作为 Deployment 运行。
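+
+例如,下面是一个把 Operator 控制器作为 Deployment 运行的最小化清单草案(命名空间、镜像名与标签均为假设,仅作示意):
+
+```yaml
+# 假设性示例:把 Operator 控制器作为 Deployment 部署到集群中
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: sampledb-operator               # 假设的名称
+  namespace: operators                  # 假设的命名空间
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: sampledb-operator
+  template:
+    metadata:
+      labels:
+        app: sampledb-operator
+    spec:
+      serviceAccountName: sampledb-operator        # 需要预先创建并授予相应的 RBAC 权限
+      containers:
+      - name: controller
+        image: example.com/sampledb-operator:v1.0.0   # 假设的镜像
+```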
+
+
+
+## 使用 Operator {#using-operators}
+
+部署 Operator 后,您可以对 Operator 所使用的资源执行添加、修改或删除操作。按照上面的示例,您将为 Operator 本身建立一个 Deployment,然后:
+
+```shell
+kubectl get SampleDB # 查找所配置的数据库
+
+kubectl edit SampleDB/example-database # 手动修改某些配置
+```
+
+
+
+可以了!Operator 会负责应用你所做的更改,并保持现有服务处于良好的状态。
+
+## 编写你自己的 Operator {#writing-operator}
+
+
+
+如果生态系统中没有可以实现您目标的 Operator,您可以自己编写代码。在[接下来](#what-s-next)一节中,您会找到编写自己的云原生 Operator 需要的库和工具的链接。
+
+您还可以使用任何支持 [Kubernetes API 客户端](/docs/reference/using-api/client-libraries/)的语言或运行时来实现 Operator(即控制器)。
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+
+
+* 详细了解[自定义资源](/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+* 在 [OperatorHub.io](https://operatorhub.io/) 上找到现成的、适合您的 Operator
+* 借助已有的工具来编写您自己的 Operator,例如:
+ * [KUDO](https://kudo.dev/) (Kubernetes 通用声明式 Operator)
+ * [kubebuilder](https://book.kubebuilder.io/)
+ * [Metacontroller](https://metacontroller.app/),可与 Webhook 结合使用,以实现自己的功能。
+ * [Operator 框架](https://github.com/operator-framework/getting-started)
+* [发布](https://operatorhub.io/)您的 Operator,让别人也可以使用
+* 阅读 [CoreOS 原文](https://coreos.com/blog/introducing-operators.html),该文章首次介绍了 Operator 模式
+* 阅读这篇来自谷歌云的关于构建 Operator 最佳实践的[文章](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)
+
+{{% /capture %}}
diff --git a/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md
index 0015553467ffd..156fa61d56096 100644
--- a/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md
+++ b/content/zh/docs/concepts/overview/working-with-objects/kubernetes-objects.md
@@ -1,12 +1,21 @@
---
title: 理解 Kubernetes 对象
-
-redirect_from:
-- "/docs/concepts/abstractions/overview/"
-- "/docs/concepts/abstractions/overview.html"
content_template: templates/concept
+weight: 10
+card:
+ name: 概念
+ weight: 40
---
+
+
{{% capture overview %}}
-也需要提供对象的 `spec` 字段。对象 `spec` 的精确格式对每个 Kubernetes 对象来说是不同的,包含了特定于该对象的嵌套字段。[Kubernetes API 参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)能够帮助我们找到任何我们想创建的对象的 spec 格式。
+您也需要提供对象的 `spec` 字段。对象 `spec` 的精确格式对每个 Kubernetes 对象来说是不同的,包含了特定于该对象的嵌套字段。[Kubernetes API 参考](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)能够帮助我们找到任何我们想创建的对象的 spec 格式。
例如,可以从
[这里](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core)
查看 `Pod` 的 `spec` 格式,
@@ -150,10 +159,13 @@ and the `spec` format for a `Deployment` can be found
{{% capture whatsnext %}}
-
-* 了解最重要的基本 Kubernetes 对象,例如 [Pod](/docs/concepts/workloads/pods/pod-overview/)。
+* [Kubernetes API 概述](/docs/reference/using-api/api-overview/) 提供关于 API 概念的进一步阐述
+* 了解最重要的 Kubernetes 基本对象,例如 [Pod](/docs/concepts/workloads/pods/pod-overview/)。
+* 了解 Kubernetes 中的[控制器](/docs/concepts/architecture/controller/)。
{{% /capture %}}
diff --git a/content/zh/docs/concepts/storage/storage-classes.md b/content/zh/docs/concepts/storage/storage-classes.md
index 3c7dc98bc8b32..b1d8becc60ce9 100644
--- a/content/zh/docs/concepts/storage/storage-classes.md
+++ b/content/zh/docs/concepts/storage/storage-classes.md
@@ -36,7 +36,7 @@ systems.
## 介绍
`StorageClass` 为管理员提供了描述存储 `"类"` 的方法。
-不同的`类型`可能会映射到不同的服务质量等级或备份策略,或是由群集管理员制定的任意策略。
+不同的`类型`可能会映射到不同的服务质量等级或备份策略,或是由集群管理员制定的任意策略。
Kubernetes 本身并不清楚各种`类`代表的什么。这个`类`的概念在其他存储系统中有时被称为"配置文件"。
-管理员可以为没有申请绑定到特定 `StorageClass` 的 PVC 指定一个默认的`类` :
-更多详情请参阅 [`PersistentVolumeClaim` 章节](#persistentvolumeclaims)。
+管理员可以为没有申请绑定到特定 `StorageClass` 的 PVC 指定一个默认的存储`类` :
+更多详情请参阅 [`PersistentVolumeClaim` 章节](/docs/concepts/storage/persistent-volumes/#class-1)。
```yaml
apiVersion: storage.k8s.io/v1
@@ -92,13 +92,13 @@ for provisioning PVs. This field must be specified.
-->
### 存储分配器
-`StorageClass` 有一个分配器,用来决定使用哪个`卷插件`分配`持久化卷申领`。该字段必须指定。
+`StorageClass` 有一个分配器,用来决定使用哪个`卷插件`分配`PV`。该字段必须指定。
-| 卷插件 | 提供厂商 | 配置例子 |
+| 卷插件 | 内置分配器 | 配置例子 |
| :--- | :---: | :---: |
| AWSElasticBlockStore | ✓ | [AWS EBS](#aws-ebs) |
| AzureFile | ✓ | [Azure File](#azure-file) |
diff --git a/content/zh/docs/concepts/storage/volume-pvc-datasource.md b/content/zh/docs/concepts/storage/volume-pvc-datasource.md
index c740f56a5ffae..a8e1a5117569c 100644
--- a/content/zh/docs/concepts/storage/volume-pvc-datasource.md
+++ b/content/zh/docs/concepts/storage/volume-pvc-datasource.md
@@ -24,18 +24,7 @@ weight: 30
This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with [Volumes](/docs/concepts/storage/volumes) is suggested.
-->
-本文档描述 Kubernetes 中克隆现有 CSI 卷的概念。建议先熟悉[卷](/docs/concepts/storage/volumes)。
-
-
-
-此功能需要启动 VolumePVCDataSource 功能门:
-
-```
---feature-gates=VolumePVCDataSource=true
-```
-
+本文档介绍 Kubernetes 中克隆现有 CSI 卷的概念。阅读前建议先熟悉[卷](/docs/concepts/storage/volumes)。
{{% /capture %}}
@@ -52,13 +41,13 @@ This feature requires VolumePVCDataSource feature gate to be enabled:
The {{< glossary_tooltip text="CSI" term_id="csi" >}} Volume Cloning feature adds support for specifying existing {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s in the `dataSource` field to indicate a user would like to clone a {{< glossary_tooltip term_id="volume" >}}.
-->
-{{< glossary_tooltip text="CSI" term_id="csi" >}} 卷克隆功能增加了在 `dataSource` 字段指定现有的 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s,来表示用户想要克隆的 {{< glossary_tooltip term_id="volume" >}}。
+{{< glossary_tooltip text="CSI" term_id="csi" >}} 卷克隆功能增加了通过在 `dataSource` 字段中指定存在的 {{< glossary_tooltip text="PVC" term_id="persistent-volume-claim" >}}s,来表示用户想要克隆的 {{< glossary_tooltip term_id="volume" >}}。
-克隆定义为已有 Kubernetes 卷的副本,可以像任何标准卷一样被使用。唯一的区别就是配置后,后端设备将创建指定卷的精确副本,而不是创建一个“新的”空卷。
+克隆,意思是为已有的 Kubernetes 卷创建副本,它可以像任何其它标准卷一样被使用。唯一的区别就是在制备后,后端设备会创建所指定卷的完全相同的副本,而不是创建一个“新的”空卷。
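+
+下面是一个演示 `dataSource` 用法的最小化 PVC 清单草案(其中的 PVC 名称、命名空间与 StorageClass 均为假设,且要求对应的 CSI 驱动支持克隆):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: clone-of-pvc-1                  # 新的克隆卷
+  namespace: myns
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: csi-storageclass    # 假设的、支持克隆的 StorageClass
+  resources:
+    requests:
+      storage: 5Gi                      # 不能小于源 PVC 的容量
+  dataSource:
+    kind: PersistentVolumeClaim
+    name: pvc-1                         # 要克隆的现有 PVC,必须位于同一命名空间
+```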
+
+{{% capture overview %}}
+
+基于角色(Role)的访问控制(RBAC)是一种根据企业中用户的角色来管控其对计算机或网络资源访问权限的方法。
+{{% /capture %}}
+
+{{% capture body %}}
+
+`RBAC` 使用 `rbac.authorization.k8s.io` {{< glossary_tooltip text="API 组" term_id="api-group" >}}
+来驱动鉴权操作,允许管理员通过 Kubernetes API 动态配置策略。
+
+在 1.8 版本中,RBAC 模式是稳定的并通过 rbac.authorization.k8s.io/v1 API 提供支持。
+
+要启用 RBAC,在启动 API 服务器时添加 `--authorization-mode=RBAC` 参数。
+
+
+
+## API 概述
+
+本节介绍 RBAC API 所声明的四种顶级类型。用户可以像与其他 API 资源交互一样,
+(通过 `kubectl`、API 调用等方式)与这些资源交互。例如,
+命令 `kubectl apply -f (resource).yml` 可以用在这里的任何一个例子之上。
+尽管如此,建议读者循序渐进阅读下面的章节,由浅入深。
+
+
+### Role 和 ClusterRole
+
+在 RBAC API 中,一个角色包含一组相关权限的规则。权限是纯粹累加的(不存在拒绝某操作的规则)。
+角色可以用 `Role` 来定义到某个命名空间上,
+或者用 `ClusterRole` 来定义到整个集群作用域。
+
+一个 `Role` 只可以用来对某一命名空间中的资源赋予访问权限。
+下面的 `Role` 示例定义到名称为 "default" 的命名空间,可以用来授予对该命名空间中的 Pods 的读取权限:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: default
+ name: pod-reader
+rules:
+- apiGroups: [""] # "" 指定核心 API 组
+ resources: ["pods"]
+ verbs: ["get", "watch", "list"]
+```
+
+`ClusterRole` 可以授予的权限和 `Role` 相同,
+但是因为 `ClusterRole` 属于集群范围,所以它也可以授予以下访问权限:
+
+* 集群范围资源 (比如 nodes)
+* 非资源端点(比如 "/healthz")
+* 跨命名空间访问的有名字空间作用域的资源(如 Pods),比如运行命令`kubectl get pods --all-namespaces` 时需要此能力
+
+下面的 `ClusterRole` 示例可用来对某特定命名空间下的 Secrets 的读取操作授权,
+或者跨所有命名空间执行授权(取决于它是如何[绑定](#rolebinding-and-clusterrolebinding)的):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ # 此处的 "namespace" 被省略掉是因为 ClusterRoles 是没有命名空间的。
+ name: secret-reader
+rules:
+- apiGroups: [""]
+ resources: ["secrets"]
+ verbs: ["get", "watch", "list"]
+```
+
+
+### RoleBinding 和 ClusterRoleBinding
+
+角色绑定(RoleBinding)是将角色中定义的权限赋予一个或者一组用户。
+它包含若干主体(用户,组和服务账户)的列表和对这些主体所获得的角色的引用。
+可以使用 `RoleBinding` 在指定的命名空间中执行授权,
+或者在集群范围的命名空间使用 `ClusterRoleBinding` 来执行授权。
+
+一个 `RoleBinding` 可以引用同一命名空间中的 `Role`。
+下面的例子 `RoleBinding` 将 "pod-reader" 角色授予在 "default" 命名空间中的用户 "jane";
+这样,用户 "jane" 就具有了读取 "default" 命名空间中 pods 的权限。
+
+`roleRef` 里的内容决定了实际创建绑定的方法。`kind` 可以是 `Role` 或 `ClusterRole`,
+`name` 将引用你要指定的 `Role` 或 `ClusterRole` 的名称。在下面的例子中,角色绑定使用
+`roleRef` 将用户 "jane" 绑定到前文创建的角色 `Role`,其名称是 `pod-reader`。
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+# 此角色绑定使得用户 "jane" 能够读取 "default" 命名空间中的 Pods
+kind: RoleBinding
+metadata:
+ name: read-pods
+ namespace: default
+subjects:
+- kind: User
+  name: jane # 名称区分大小写
+  apiGroup: rbac.authorization.k8s.io
+roleRef:
+  kind: Role # 此处必须是 Role 或 ClusterRole
+ name: pod-reader # 这里的名称必须与你想要绑定的 Role 或 ClusterRole 名称一致
+ apiGroup: rbac.authorization.k8s.io
+```
+
+
+`RoleBinding` 也可以引用 `ClusterRole`,对 `ClusterRole` 所定义的、位于 `RoleBinding` 命名空间内的资源授权。
+这可以允许管理者在
+整个集群中定义一组通用的角色,然后在多个命名空间中重用它们。
+
+例如下面的例子,`RoleBinding` 指定的是 `ClusterRole`,
+"dave" (主体,区分大小写)将只可以读取在"development"
+命名空间( `RoleBinding` 的命名空间)中的"secrets"。
+
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+# 这个角色绑定允许 "dave" 用户在 "development" 命名空间中有读取 secrets 的权限。
+kind: RoleBinding
+metadata:
+ name: read-secrets
+ namespace: development # 这里只授予 "development" 命名空间的权限。
+subjects:
+- kind: User
+ name: dave # 名称区分大小写
+ apiGroup: rbac.authorization.k8s.io
+roleRef:
+ kind: ClusterRole
+ name: secret-reader
+ apiGroup: rbac.authorization.k8s.io
+```
+
+
+
+最后,`ClusterRoleBinding` 可用来在集群级别或对所有命名空间执行授权。
+下面的例子允许 "manager" 组中的任何用户读取任意命名空间中 "secrets"。
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+# 这个集群角色绑定允许 "manager" 组中的任何用户读取任意命名空间中 "secrets"。
+kind: ClusterRoleBinding
+metadata:
+ name: read-secrets-global
+subjects:
+- kind: Group
+ name: manager # 名称区分大小写
+ apiGroup: rbac.authorization.k8s.io
+roleRef:
+ kind: ClusterRole
+ name: secret-reader
+ apiGroup: rbac.authorization.k8s.io
+```
+
+你不能修改绑定对象所引用的 `Role` 或 `ClusterRole` 。
+试图改变绑定对象的 `roleRef` 将导致验证错误。想要
+改变现有绑定对象中 `roleRef` 字段的内容,必须删除并
+重新创建绑定对象。这种限制有两个主要原因:
+
+1. 绑定到不同角色的绑定在本质上就是不同的绑定。要求通过删除并重建绑定来更改
+   `roleRef`,这样可以确保绑定中的完整主体列表都是有意被授予新角色的
+   (而不是在没有验证所有现有主体是否应获得新角色权限的情况下,
+   只是意外地修改了 `roleRef`)。
+
+2. 使 `roleRef` 不可变,可以允许把现有绑定对象的 `update` 权限授予用户,
+   这样他们可以管理主体列表,但不能更改授予这些主体的角色。
+
+命令 `kubectl auth reconcile` 可以创建或者更新包含 RBAC 对象的清单文件,
+并且在必要的情况下删除和重新创建绑定对象,以改变所引用的角色。
+更多相关信息请参照[命令用法和示例](#kubectl-auth-reconcile)
+
+
+### 对资源的引用
+
+大多数资源都是使用名称的字符串表示,例如在相关的 API 端点的 URL 之中出现的 "pods" 。
+然而有一些 Kubernetes API 涉及 "子资源(subresources)",例如 pod 的日志。Pod 日志相关的端点 URL 如下:
+
+```http
+GET /api/v1/namespaces/{namespace}/pods/{name}/log
+```
+
+在这种情况下,"pods" 是有命名空间的资源,而 "log" 是 pods 的子资源。在 RBAC 角色中,
+使用"/"分隔资源和子资源。允许一个主体要同时读取 pods 和 pod logs,你可以这么写:
+
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: default
+ name: pod-and-pod-logs-reader
+rules:
+- apiGroups: [""]
+ resources: ["pods", "pods/log"]
+ verbs: ["get", "list"]
+```
+
+
+对于某些请求,也可以通过 `resourceNames` 列表按名称引用资源。
+在指定后,请求会被限制到该资源的单个实例。要限制一个主体只能对单个 configmap
+执行 "get" 和 "update" 操作,你可以这样写:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ namespace: default
+ name: configmap-updater
+rules:
+- apiGroups: [""]
+ resources: ["configmaps"]
+ resourceNames: ["my-configmap"]
+ verbs: ["update", "get"]
+```
+
+需要注意的是,`create` 请求不能被 resourceName 限制,因为在鉴权时还不知道对象名称。
+另一个例外是 `deletecollection`。
+
+
+### Aggregated ClusterRoles
+
+从 1.9 开始,集群角色(ClusterRole)可以通过使用 `aggregationRule` 的方式并组合其他 ClusterRoles 来创建。
+聚合集群角色的权限由控制器管理:控制器会过滤与标签选择器匹配的 ClusterRoles,并将其中的权限组合起来。
+一个聚合集群角色的示例如下:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: monitoring
+aggregationRule:
+ clusterRoleSelectors:
+ - matchLabels:
+ rbac.example.com/aggregate-to-monitoring: "true"
+rules: [] # 具体规则由控制器管理器自动填写。
+```
+
+创建一个与标签选择器匹配的 ClusterRole 之后,其上定义的规则将成为聚合集群角色的一部分。在下面的例子中,
+通过创建一个新的、标签同样为 `rbac.example.com/aggregate-to-monitoring: true` 的
+ClusterRole,新的规则可被添加到 "monitoring" 集群角色中。
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: monitoring-endpoints
+ labels:
+ rbac.example.com/aggregate-to-monitoring: "true"
+# 这些规则将被添加到 "monitoring" 角色中。
+rules:
+- apiGroups: [""]
+ resources: ["services", "endpoints", "pods"]
+ verbs: ["get", "list", "watch"]
+```
+
+
+
+默认的面向用户的角色(如下所述)使用 ClusterRole 聚合。这使得集群管理者可以把自定义资源的规则
+(比如由 CustomResourceDefinitions 或聚合 API 服务器所提供的资源)包含到这些默认角色中。
+
+例如,在以下 ClusterRoles 中让 "admin" 和 "edit" 拥有管理自定义资源 "CronTabs" 的权限,
+ "view" 角色对资源有只读操作权限。
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: aggregate-cron-tabs-edit
+ labels:
+ # 将这些权限添加到默认角色 "admin" 和 "edit" 中。
+ rbac.authorization.k8s.io/aggregate-to-admin: "true"
+ rbac.authorization.k8s.io/aggregate-to-edit: "true"
+rules:
+- apiGroups: ["stable.example.com"]
+ resources: ["crontabs"]
+ verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: aggregate-cron-tabs-view
+ labels:
+ # 将这些权限添加到默认角色 "view" 中。
+ rbac.authorization.k8s.io/aggregate-to-view: "true"
+rules:
+- apiGroups: ["stable.example.com"]
+ resources: ["crontabs"]
+ verbs: ["get", "list", "watch"]
+```
+
+
+#### 角色示例
+
+在以下示例中,我们仅截取展示了 `rules` 对应部分,
+允许读取在核心 {{< glossary_tooltip text="API 组" term_id="api-group" >}}下的 Pods:
+
+```yaml
+rules:
+- apiGroups: [""]
+ resources: ["pods"]
+ verbs: ["get", "list", "watch"]
+```
+
+允许读/写在 "extensions" 和 "apps" API 组中的 "deployments" 资源:
+
+```yaml
+rules:
+- apiGroups: ["extensions", "apps"]
+ resources: ["deployments"]
+ verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+```
+
+允许读取 "pods" 和读/写 "jobs" :
+
+```yaml
+rules:
+- apiGroups: [""]
+ resources: ["pods"]
+ verbs: ["get", "list", "watch"]
+- apiGroups: ["batch", "extensions"]
+ resources: ["jobs"]
+ verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+```
+
+
+允许读取名称为 "my-config"的 `ConfigMap` (需要通过 `RoleBinding` 绑定带某名字空间中特定的 `ConfigMap`):
+
+```yaml
+rules:
+- apiGroups: [""]
+ resources: ["configmaps"]
+ resourceNames: ["my-config"]
+ verbs: ["get"]
+```
+
+允许读取在核心组中的 "nodes" 资源(因为 `Node` 是集群范围的,所以需要 `ClusterRole` 绑定到 `ClusterRoleBinding` 才生效)
+
+```yaml
+rules:
+- apiGroups: [""]
+ resources: ["nodes"]
+ verbs: ["get", "list", "watch"]
+```
+
+允许在非资源端点 "/healthz" 和其子路径上发起 "GET" 和 "POST" 请求(必须在 `ClusterRole` 绑定 `ClusterRoleBinding` 才生效)
+
+```yaml
+rules:
+- nonResourceURLs: ["/healthz", "/healthz/*"] # '*' 在 nonResourceURL 中的意思是后缀全局匹配。
+ verbs: ["get", "post"]
+```
+
+
+### 对主体的引用
+
+`RoleBinding` 或者 `ClusterRoleBinding` 需要绑定角色到 *主体*。
+主体可以是组,用户或者服务账户。
+
+用户由字符串表示,可以是普通的用户名,例如 "alice",也可以是
+邮件格式的名字,例如 "bob@example.com",或者是字符串形式的数字 ID。
+用户名的具体格式取决于 Kubernetes 管理员所配置的[身份认证模块](/docs/reference/access-authn-authz/authentication/)。
+RBAC 鉴权系统不对格式作任何要求,但是前缀 `system:` 是 Kubernetes 系统保留的,
+所以管理员要确保配置的用户名不会出现上述前缀。
+
+用户组信息由当前配置的身份认证模块提供。与用户一样,组也用字符串表示,对其格式没有要求,
+只是不能使用保留的前缀 `system:`。
+
+[服务账号](/docs/tasks/configure-pod-container/configure-service-account/) 的用户名前缀为`system:serviceaccount:`,
+属于前缀为 `system:serviceaccounts:` 的用户组。
+
+
+#### RoleBinding的示例
+
+下面的示例只是展示 `RoleBinding` 中 `subjects` 的部分。
+
+用户的名称为 "alice@example.com":
+
+```yaml
+subjects:
+- kind: User
+ name: "alice@example.com"
+ apiGroup: rbac.authorization.k8s.io
+```
+
+组的名称为 "frontend-admins":
+
+```yaml
+subjects:
+- kind: Group
+ name: "frontend-admins"
+ apiGroup: rbac.authorization.k8s.io
+```
+
+服务账号在 kube-system 命名空间中:
+
+```yaml
+subjects:
+- kind: ServiceAccount
+ name: default
+ namespace: kube-system
+```
+
+在名称为 "qa" 命名空间中所有的服务账号:
+
+```yaml
+subjects:
+- kind: Group
+ name: system:serviceaccounts:qa
+ apiGroup: rbac.authorization.k8s.io
+```
+
+
+
+所有的服务账号:
+
+```yaml
+subjects:
+- kind: Group
+ name: system:serviceaccounts
+ apiGroup: rbac.authorization.k8s.io
+```
+
+所有认证过的用户 (版本 1.5+):
+
+```yaml
+subjects:
+- kind: Group
+ name: system:authenticated
+ apiGroup: rbac.authorization.k8s.io
+```
+
+所有未认证的用户 (版本 1.5+):
+
+```yaml
+subjects:
+- kind: Group
+ name: system:unauthenticated
+ apiGroup: rbac.authorization.k8s.io
+```
+
+所有用户 (版本 1.5+):
+
+```yaml
+subjects:
+- kind: Group
+ name: system:authenticated
+ apiGroup: rbac.authorization.k8s.io
+- kind: Group
+ name: system:unauthenticated
+ apiGroup: rbac.authorization.k8s.io
+```
+
+
+## 默认 Roles 和 Role Bindings
+
+API server 会创建一组默认的 `ClusterRole` 和 `ClusterRoleBinding` 对象。
+其中许多对象以 `system:` 为前缀,表示这些资源由集群基础设施"拥有"。对这些资源的修改可能导致集群功能失效。
+例如,`system:node` 这个集群角色定义了 kubelet 相关的权限,如果这个角色被修改,将导致 kubelet 无法正常工作。
+
+所有默认的 ClusterRole 和 ClusterRoleBinding 对象都会被标记为 `kubernetes.io/bootstrapping=rbac-defaults`。
+
+
+### 自动更新
+
+在每次启动时,API Server 都会更新默认 ClusterRole 所缺少的各种权限,并更新默认 ClusterRoleBinding 所缺少的各个角色绑定主体。
+这种自动更新机制允许集群修复一些意外的修改。
+由于权限和角色绑定主体在新的 Kubernetes 版本中可能发生变化,这样也能够保证角色和角色绑定始终是最新的。
+
+如果要禁止此功能,请将默认 ClusterRole 以及 ClusterRoleBinding 的 `rbac.authorization.kubernetes.io/autoupdate` 注解设置成 `false`。
+
+注意,缺少默认权限和角色绑定主体可能会导致集群无法正常工作。
+
+在 Kubernetes 1.6+ 版本中,只要启用了 RBAC 鉴权,自动更新功能就默认开启。
+
+
+### Discovery Roles
+
+默认的角色绑定授权已认证和未认证的用户读取那些被认为可以安全公开访问的 API 信息(包括 CustomResourceDefinitions)。
+如果要禁用匿名的未认证访问,请在 API server 配置中添加 `--anonymous-auth=false` 选项。
+
+通过运行命令 `kubectl` 可以查看这些角色的配置信息:
+
+```
+kubectl get clusterroles system:discovery -o yaml
+```
+
+注意:不建议编辑这个角色,因为这些更改会在 API server 重启时被自动更新覆盖(见上文)。
+
+