diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index ecf6d21378fad..2ec17186422a3 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -137,6 +137,38 @@ aliases:
- stewart-yu
- xiangpengzhao
- zhangxiaoyu-zidif
+ sig-docs-fr-owners: #Team: Documentation; GH: sig-docs-fr-owners
+ - sieben
+ - perriea
+ - rekcah78
+ - lledru
+ - yastij
+ - smana
+ - rbenzair
+ - abuisine
+ - erickhun
+ - jygastaud
+ - awkif
+ sig-docs-fr-reviews: #Team: Documentation; GH: sig-docs-fr-reviews
+ - sieben
+ - perriea
+ - rekcah78
+ - lledru
+ - yastij
+ - smana
+ - rbenzair
+ - abuisine
+ - erickhun
+ - jygastaud
+ - awkif
+ sig-docs-it-owners: #Team: Italian docs localization; GH: sig-docs-it-owners
+ - rlenferink
+ - lledru
+ - micheleberardi
+ sig-docs-it-reviews: #Team: Italian docs PR reviews; GH:sig-docs-it-reviews
+ - rlenferink
+ - lledru
+ - micheleberardi
sig-docs-ja-owners: #Team: Japanese docs localization; GH: sig-docs-ja-owners
- cstoku
- nasa9084
diff --git a/README-fr.md b/README-fr.md
new file mode 100644
index 0000000000000..cca46595ad023
--- /dev/null
+++ b/README-fr.md
@@ -0,0 +1,83 @@
+# Documentation de Kubernetes
+
+[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website)
+[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest)
+
+Bienvenue !
+Ce dépôt contient toutes les informations nécessaires à la construction du site web et de la documentation de Kubernetes.
+Nous sommes très heureux que vous vouliez contribuer !
+
+## Contribuer à la rédaction des docs
+
+Vous pouvez cliquer sur le bouton **Fork** en haut à droite de l'écran pour créer une copie de ce dépôt dans votre compte GitHub.
+Cette copie s'appelle un *fork*.
+Faites tous les changements que vous voulez dans votre fork, et quand vous êtes prêt à nous envoyer ces changements, allez dans votre fork et créez une nouvelle pull request pour nous le faire savoir.
+
+Une fois votre pull request créée, un examinateur de Kubernetes se chargera de vous fournir une revue claire et exploitable.
+En tant que propriétaire de la pull request, **il est de votre responsabilité de modifier votre pull request pour tenir compte des commentaires qui vous ont été fournis par l'examinateur de Kubernetes.**
+Notez également que plusieurs examinateurs de Kubernetes peuvent vous fournir des commentaires, ou que vous pouvez recevoir des commentaires d'un examinateur différent de celui qui vous a été initialement affecté.
+De plus, dans certains cas, l'un de vos examinateurs peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers).
+Les examinateurs feront de leur mieux pour fournir une revue rapidement, mais le temps de réponse peut varier selon les circonstances.
+
+Pour plus d'informations sur la contribution à la documentation Kubernetes, voir :
+
+* [Commencez à contribuer](https://kubernetes.io/docs/contribute/start/)
+* [Aperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally)
+* [Utilisation des modèles de page](http://kubernetes.io/docs/contribute/style/page-templates/)
+* [Guide de style de la documentation](http://kubernetes.io/docs/contribute/style/style-guide/)
+* [Traduction de la documentation Kubernetes](https://kubernetes.io/docs/contribute/localization/)
+
+## Exécuter le site localement en utilisant Docker
+
+La façon recommandée d'exécuter le site web Kubernetes localement est d'utiliser une image [Docker](https://docker.com) spécialisée qui inclut le générateur de site statique [Hugo](https://gohugo.io).
+
+> Si vous êtes sous Windows, vous aurez besoin de quelques outils supplémentaires que vous pouvez installer avec [Chocolatey](https://chocolatey.org) : `choco install make`
+
+> Si vous préférez exécuter le site Web localement sans Docker, voir [Exécuter le site localement en utilisant Hugo](#exécuter-le-site-localement-en-utilisant-hugo) ci-dessous.
+
+Si Docker est [installé et opérationnel](https://www.docker.com/get-started), construisez l'image Docker `kubernetes-hugo` localement :
+
+```bash
+make docker-image
+```
+
+Une fois l'image construite, vous pouvez exécuter le site localement :
+
+```bash
+make docker-serve
+```
+
+Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site.
+Lorsque vous apportez des modifications aux fichiers sources, Hugo met à jour le site et force le navigateur à rafraîchir la page.
+
+## Exécuter le site localement en utilisant Hugo
+
+Voir la [documentation officielle de Hugo](https://gohugo.io/getting-started/installing/) pour les instructions d'installation de Hugo.
+Assurez-vous d'installer la version de Hugo spécifiée par la variable d'environnement `HUGO_VERSION` dans le fichier [`netlify.toml`](netlify.toml#L9).
+
+Pour exécuter le site localement lorsque vous avez Hugo installé :
+
+```bash
+make serve
+```
+
+Le serveur Hugo local démarrera sur le port 1313.
+Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site.
+Lorsque vous apportez des modifications aux fichiers sources, Hugo met à jour le site et force le navigateur à rafraîchir la page.
+
+## Communauté, discussion, contribution et assistance
+
+Apprenez comment vous engager avec la communauté Kubernetes sur la [page communauté](http://kubernetes.io/community/).
+
+Vous pouvez joindre les responsables de ce projet à l'adresse :
+
+- [Slack](https://kubernetes.slack.com/messages/sig-docs)
+- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
+
+### Code de conduite
+
+La participation à la communauté Kubernetes est régie par le [Code de conduite de Kubernetes](code-of-conduct.md).
+
+## Merci !
+
+Kubernetes prospère grâce à la participation de la communauté, et nous apprécions vraiment vos contributions à notre site et à notre documentation !
diff --git a/config.toml b/config.toml
index 98bf0d4a33d42..c8b6633e9df33 100644
--- a/config.toml
+++ b/config.toml
@@ -166,3 +166,26 @@ time_format_blog = "02.01.2006"
# A list of language codes to look for untranslated content, ordered from left to right.
language_alternatives = ["en"]
+[languages.fr]
+title = "Kubernetes"
+description = "Production-Grade Container Orchestration"
+languageName = "Français"
+weight = 5
+contentDir = "content/fr"
+
+[languages.fr.params]
+time_format_blog = "02.01.2006"
+# A list of language codes to look for untranslated content, ordered from left to right.
+language_alternatives = ["en"]
+
+[languages.it]
+title = "Kubernetes"
+description = "Production-Grade Container Orchestration"
+languageName = "Italiano"
+weight = 6
+contentDir = "content/it"
+
+[languages.it.params]
+time_format_blog = "02.01.2006"
+# A list of language codes to look for untranslated content, ordered from left to right.
+language_alternatives = ["en"]
diff --git a/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md b/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md
index 74bb85843ad28..28b9a2ccb77fe 100644
--- a/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md
+++ b/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md
@@ -4,6 +4,8 @@ date: 2016-08-31
slug: security-best-practices-kubernetes-deployment
url: /blog/2016/08/Security-Best-Practices-Kubernetes-Deployment
---
+_Note: some of the recommendations in this post are no longer current. Current cluster hardening options are described in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/)._
+
_Editor’s note: today’s post is by Amir Jerbi and Michael Cherny of Aqua Security, describing security best practices for Kubernetes deployments, based on data they’ve collected from various use-cases seen in both on-premises and cloud deployments._
Kubernetes provides many controls that can greatly improve your application security. Configuring them requires intimate knowledge with Kubernetes and the deployment’s security requirements. The best practices we highlight here are aligned to the container lifecycle: build, ship and run, and are specifically tailored to Kubernetes deployments. We adopted these best practices in [our own SaaS deployment](http://blog.aquasec.com/running-a-security-service-in-google-cloud-real-world-example) that runs Kubernetes on Google Cloud Platform.
diff --git a/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md b/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md
index 2917f55e79394..0d6e6481cd2cc 100644
--- a/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md
+++ b/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md
@@ -19,7 +19,7 @@ I was able to run more processes on a single physical server than I could using
-To orchestrate container deployment, we are using[Armada infrastructure](https://console.bluemix.net/containers-kubernetes/launch), a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.
+To orchestrate container deployment, we are using [IBM Cloud Kubernetes Service infrastructure](https://cloud.ibm.com/containers-kubernetes/landing), a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.
@@ -39,7 +39,7 @@ Here is a snapshot of Watson Care Manager, running inside a Kubernetes cluster:
-Before deploying an app, a user must create a worker node cluster. I can create a cluster using the kubectl cli commands or create it from[a Bluemix](http://bluemix.net/) dashboard.
+Before deploying an app, a user must create a worker node cluster. I can create a cluster using the kubectl cli commands or create it from the [IBM Cloud](https://cloud.ibm.com/) dashboard.
@@ -107,16 +107,16 @@ If needed, run a rolling update to update the existing pod.
-Deploying the application in Armada:
+Deploying the application in IBM Cloud Kubernetes Service:
-Provision a cluster in Armada with \ worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes, the Armada infrastructure pulls the Docker images from IBM Bluemix Docker registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of deployment of application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers.
+Provision a cluster in IBM Cloud Kubernetes Service with \ worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes, the IBM Cloud Kubernetes Service infrastructure pulls the Docker images from IBM Cloud Container Registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM Cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of deployment of application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers.
| ![](https://lh6.googleusercontent.com/iFKlbBX8rjWTuygIfjImdxP8R7xXuvaaoDwldEIC3VRL03XIehxagz8uePpXllYMSxoyai5a6N-0NB4aTGK9fwwd8leFyfypxtbmaWBK-b2Kh9awcA76-_82F7ZZl7lgbf0gyFN7) |
-| UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments. Armada: Kubernetes implementation of IBM. WH Docker Registry: Docker Private image registry. Common agent containers: We expect to configure our services to use the WHC mandatory agents. We deployed all ion containers. |
+| UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments. IBM Cloud Kubernetes Service: Kubernetes implementation of IBM. WH Docker Registry: Docker Private image registry. Common agent containers: We expect to configure our services to use the WHC mandatory agents. We deployed all ion containers. |
@@ -142,7 +142,7 @@ Exposing services with Ingress:
-To expose our services to outside the cluster, we used Ingress. In Armada, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path.
+To expose our services to outside the cluster, we used Ingress. In IBM Cloud Kubernetes Service, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path.
diff --git a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md
index 2b23ac523b961..9bc8ceb2c9cf7 100644
--- a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md
+++ b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md
@@ -94,7 +94,7 @@ If you’d like to try out Kubeflow, we have a number of options for you:
1. You can use sample walkthroughs hosted on [Katacoda](https://www.katacoda.com/kubeflow)
2. You can follow a guided tutorial with existing models from the [examples repository](https://github.com/kubeflow/examples). These include the [Github Issue Summarization](https://github.com/kubeflow/examples/tree/master/github_issue_summarization), [MNIST](https://github.com/kubeflow/examples/tree/master/mnist) and [Reinforcement Learning with Agents](https://github.com/kubeflow/examples/tree/master/agents).
-3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://console.bluemix.net/docs/containers/cs_tutorials.html#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/).
+3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/).
There were also a number of sessions at KubeCon + CloudNativeCon EU 2018 covering Kubeflow. The links to the talks are here; the associated videos will be posted in the coming days.
diff --git a/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md b/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md
new file mode 100644
index 0000000000000..b5414e8c5be3f
--- /dev/null
+++ b/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md
@@ -0,0 +1,92 @@
+---
+title: Runc and CVE-2019-5736
+date: 2019-02-11
+---
+
+This morning [a container escape vulnerability in runc was announced](https://www.openwall.com/lists/oss-security/2019/02/11/2). We wanted to provide some guidance to Kubernetes users to ensure everyone is safe and secure.
+
+## What Is Runc?
+
+Very briefly, runc is the low-level tool which does the heavy lifting of spawning a Linux container. Other tools like Docker, Containerd, and CRI-O sit on top of runc to deal with things like data formatting and serialization, but runc is at the heart of all of these systems.
+
+Kubernetes in turn sits on top of those tools, and so while no part of Kubernetes itself is vulnerable, most Kubernetes installations are using runc under the hood.
+
+### What Is The Vulnerability?
+
+While full details are still embargoed to give people time to patch, the rough version is that when running a process as root (UID 0) inside a container, that process can exploit a bug in runc to gain root privileges on the host running the container. This then gives the attacker unlimited access to the server, as well as to any other containers on that server.
+
+If the process inside the container is either trusted (something you know is not hostile) or is not running as UID 0, then the vulnerability does not apply. It can also be prevented by SELinux, if an appropriate policy has been applied. RedHat Enterprise Linux and CentOS both include appropriate SELinux permissions with their packages and so are believed to be unaffected if SELinux is enabled.
+
+The most common source of risk is attacker-controlled container images, such as unvetted images from public repositories.
+
+### What Should I Do?
+
+As with all security issues, the two main options are to mitigate the vulnerability or upgrade your version of runc to one that includes the fix.
+
+As the exploit requires UID 0 within the container, a direct mitigation is to ensure all your containers are running as a non-0 user. This can be set within the container image, or via your pod specification:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: run-as-uid-1000
+spec:
+ securityContext:
+ runAsUser: 1000
+ # ...
+```
+
+This can also be enforced globally using a PodSecurityPolicy:
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: non-root
+spec:
+ privileged: false
+ allowPrivilegeEscalation: false
+ runAsUser:
+ # Require the container to run without root privileges.
+ rule: 'MustRunAsNonRoot'
+```
+
+Setting a policy like this is highly encouraged given the overall risks of running as UID 0 inside a container.
+
+Another potential mitigation is to ensure all your container images are vetted and trusted. This can be accomplished by building all your images yourself, or by vetting the contents of an image and then pinning to the image version hash (`image: external/someimage@sha256:7832659873hacdef`).
+
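+As an illustration, here is a minimal sketch of how digest pinning might look in a Pod spec; the registry, image name, and digest below are placeholders rather than a real image:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pinned-image-example
+spec:
+  containers:
+  - name: app
+    # Pinning to a digest ensures the exact vetted image content is pulled,
+    # rather than whatever a mutable tag currently points to.
+    image: registry.example.com/myteam/app@sha256:8f4e6d3c2b1a0f9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b3a2f1e0d9c8b7a6f5e
+```
+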
+Upgrading runc can generally be accomplished by upgrading the package `runc` for your distribution or by upgrading your OS image if using immutable images. This is a list of known safe versions for various distributions and platforms:
+
+* Ubuntu - [`runc 1.0.0~rc4+dfsg1-6ubuntu0.18.10.1`](https://people.canonical.com/~ubuntu-security/cve/2019/CVE-2019-5736.html)
+* Debian - [`runc 1.0.0~rc6+dfsg1-2`](https://security-tracker.debian.org/tracker/CVE-2019-5736)
+* RedHat Enterprise Linux - [`docker 1.13.1-91.git07f3374.el7`](https://access.redhat.com/security/vulnerabilities/runcescape) (if SELinux is disabled)
+* Amazon Linux - [`docker 18.06.1ce-7.25.amzn1.x86_64`](https://alas.aws.amazon.com/ALAS-2019-1156.html)
+* CoreOS - Stable: [`1967.5.0`](https://coreos.com/releases/#1967.5.0) / Beta: [`2023.2.0`](https://coreos.com/releases/#2023.2.0) / Alpha: [`2051.0.0`](https://coreos.com/releases/#2051.0.0)
+* Kops Debian - [in progress](https://github.com/kubernetes/kops/pull/6460)
+* Docker - [`18.09.2`](https://github.com/docker/docker-ce/releases/tag/v18.09.2)
+
+Some platforms have also posted more specific instructions:
+
+#### Google Kubernetes Engine (GKE)
+
+Google has issued a [security bulletin](https://cloud.google.com/kubernetes-engine/docs/security-bulletins#february-11-2019-runc) with more detailed information but in short, if you are using the default GKE node image then you are safe. If you are using an Ubuntu node image then you will need to mitigate or upgrade to an image with a fixed version of runc.
+
+#### Amazon Elastic Container Service for Kubernetes (EKS)
+
+Amazon has also issued a [security bulletin](https://aws.amazon.com/security/security-bulletins/AWS-2019-002/) with more detailed information. All EKS users should mitigate the issue or upgrade to a new node image.
+
+#### Azure Kubernetes Service (AKS)
+
+Microsoft has issued a [security bulletin](https://azure.microsoft.com/en-us/updates/cve-2019-5736-and-runc-vulnerability/) with detailed information on mitigating the issue. Microsoft recommends that all AKS users upgrade their clusters to mitigate the issue.
+
+### Docker
+
+We don't have specific confirmation that Docker for Mac and Docker for Windows are vulnerable; however, it seems likely. Docker has released a fix in [version 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2) and it is recommended that you upgrade to it. This also applies to other deployment systems that use Docker under the hood.
+
+If you are unable to upgrade Docker, the Rancher team has provided backports of the fix for many older versions at [github.com/rancher/runc-cve](https://github.com/rancher/runc-cve).
+
+## Getting More Information
+
+If you have any further questions about how this vulnerability impacts Kubernetes, please join us at [discuss.kubernetes.io](https://discuss.kubernetes.io/).
+
+If you would like to get in contact with the [runc team](https://github.com/opencontainers/org/blob/master/README.md#communications), you can reach them on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev) or `#opencontainers` on Freenode IRC.
diff --git a/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md b/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md
new file mode 100644
index 0000000000000..7751712f0e703
--- /dev/null
+++ b/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md
@@ -0,0 +1,107 @@
+---
+title: Building a Kubernetes Edge (Ingress) Control Plane for Envoy v2
+date: 2019-02-12
+slug: building-a-kubernetes-edge-control-plane-for-envoy-v2
+---
+
+
+**Author:**
+Daniel Bryant, Product Architect, Datawire;
+Flynn, Ambassador Lead Developer, Datawire;
+Richard Li, CEO and Co-founder, Datawire
+
+
+Kubernetes has become the de facto runtime for container-based microservice applications, but this orchestration framework alone does not provide all of the infrastructure necessary for running a distributed system. Microservices typically communicate through Layer 7 protocols such as HTTP, gRPC, or WebSockets, and therefore having the ability to make routing decisions, manipulate protocol metadata, and observe at this layer is vital. However, traditional load balancers and edge proxies have predominantly focused on L3/4 traffic. This is where the [Envoy Proxy](https://www.envoyproxy.io/) comes into play.
+
+Envoy proxy was designed as a [universal data plane](https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a) from the ground up by the Lyft Engineering team for today's distributed, L7-centric world, with broad support for L7 protocols, a real-time API for managing its configuration, first-class observability, and high performance within a small memory footprint. However, Envoy's vast feature set and flexibility of operation also make its configuration highly complicated -- this is evident from looking at its rich but verbose [control plane](https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc) syntax.
+
+With the open source [Ambassador API Gateway](https://www.getambassador.io), we wanted to tackle the challenge of creating a new control plane that focuses on the use case of deploying Envoy as a forward-facing edge proxy within a Kubernetes cluster, in a way that is idiomatic to Kubernetes operators. In this article, we'll walk through two major iterations of the Ambassador design, and how we integrated Ambassador with Kubernetes.
+
+
+## Ambassador pre-2019: Envoy v1 APIs, Jinja Template Files, and Hot Restarts
+
+Ambassador itself is deployed within a container as a Kubernetes service, and uses annotations added to Kubernetes Services as its [core configuration model](https://www.getambassador.io/reference/configuration). This approach [enables application developers to manage routing](https://www.getambassador.io/concepts/developers) as part of the Kubernetes service definition. We explicitly decided to go down this route because of [limitations](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d) in the current [Ingress API spec](https://kubernetes.io/docs/concepts/services-networking/ingress/), and we liked the simplicity of extending Kubernetes services, rather than introducing another custom resource type. An example of an Ambassador annotation can be seen here:
+
+
+```
+kind: Service
+apiVersion: v1
+metadata:
+ name: my-service
+ annotations:
+ getambassador.io/config: |
+ ---
+ apiVersion: ambassador/v0
+ kind: Mapping
+ name: my_service_mapping
+ prefix: /my-service/
+ service: my-service
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+```
+
+
+Translating this simple Ambassador annotation config into valid [Envoy v1](https://www.envoyproxy.io/docs/envoy/v1.6.0/configuration/overview/v1_overview) config was not a trivial task. By design, Ambassador's configuration isn't based on the same conceptual model as Envoy's configuration -- we deliberately wanted to aggregate and simplify operations and config. Therefore, translating from one set of concepts to the other involves a fair amount of logic within Ambassador.
+
+In this first iteration of Ambassador we created a Python-based service that watched the Kubernetes API for changes to Service objects. When new or updated Ambassador annotations were detected, these were translated from the Ambassador syntax into an intermediate representation (IR) which embodied our core configuration model and concepts. Next, Ambassador translated this IR into a representative Envoy configuration which was saved as a file within pods associated with the running Ambassador k8s Service. Ambassador then "hot-restarted" the Envoy process running within the Ambassador pods, which triggered the loading of the new configuration.
+
+There were many benefits with this initial implementation. The mechanics involved were fundamentally simple, the transformation of Ambassador config into Envoy config was reliable, and the file-based hot restart integration with Envoy was dependable.
+
+However, there were also notable challenges with this version of Ambassador. First, although the hot restart was effective for the majority of our customers' use cases, it was not very fast, and some customers (particularly those with huge application deployments) found it was limiting the frequency with which they could change their configuration. Hot restart can also drop connections, especially long-lived connections like WebSockets or gRPC streams.
+
+More crucially, though, the first implementation of the IR allowed rapid prototyping but was primitive enough that it proved very difficult to make substantial changes. While this was a pain point from the beginning, it became a critical issue as Envoy shifted to the [Envoy v2 API](https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/v2_overview). It was clear that the v2 API would offer Ambassador many benefits -- as Matt Klein outlined in his blog post, "[The universal data plane API](https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a)" -- including access to new features and a solution to the connection-drop problem noted above, but it was also clear that the existing IR implementation was not capable of making the leap.
+
+
+## Ambassador >= v0.50: Envoy v2 APIs (ADS), Testing with KAT, and Golang
+
+In consultation with the [Ambassador community](http://d6e.co/slack), the [Datawire](https://www.datawire.io) team undertook a redesign of the internals of Ambassador in 2018. This was driven by two key goals. First, we wanted to integrate Envoy's v2 configuration format, which would enable the support of features such as [SNI](https://www.getambassador.io/user-guide/sni/), [rate limiting](https://www.getambassador.io/user-guide/rate-limiting) and [gRPC authentication APIs](https://www.getambassador.io/user-guide/auth-tutorial). Second, we wanted to do much more robust semantic validation of Envoy configuration due to its increasing complexity (particularly when operating with large-scale application deployments).
+
+
+### Initial stages
+
+We started by restructuring the Ambassador internals more along the lines of a multipass compiler. The class hierarchy was made to more closely mirror the separation of concerns between the Ambassador configuration resources, the IR, and the Envoy configuration resources. Core parts of Ambassador were also redesigned to facilitate contributions from the community outside Datawire. We decided to take this approach for several reasons. First, Envoy Proxy is a very fast moving project, and we realized that we needed an approach where a seemingly minor Envoy configuration change didn't result in days of reengineering within Ambassador. In addition, we wanted to be able to provide semantic verification of configuration.
+
+As we started working more closely with Envoy v2, a testing challenge was quickly identified. As more and more features were being supported in Ambassador, more and more bugs appeared in Ambassador's handling of less common but completely valid combinations of features. This drove the creation of a new testing requirement: Ambassador's test suite needed to be reworked to automatically manage many combinations of features, rather than relying on humans to write each test individually. Moreover, we wanted the test suite to be fast in order to maximize engineering productivity.
+
+Thus, as part of the Ambassador rearchitecture, we introduced the [Kubernetes Acceptance Test (KAT)](https://github.com/datawire/ambassador/tree/master/kat) framework. KAT is an extensible test framework that:
+
+1. Deploys a bunch of services (along with Ambassador) to a Kubernetes cluster
+1. Runs a series of verification queries against the spun-up APIs
+1. Performs a bunch of assertions on those query results
+
+KAT is designed for performance -- it batches test setup upfront, and then runs all the queries in step 3 asynchronously with a high performance client. The traffic driver in KAT runs locally using [Telepresence](https://www.telepresence.io), which makes it easier to debug issues.
+
+### Introducing Golang to the Ambassador Stack
+
+With the KAT test framework in place, we quickly ran into some issues with Envoy v2 configuration and hot restart, which presented the opportunity to switch to use Envoy’s Aggregated Discovery Service (ADS) APIs instead of hot restart. This completely eliminated the requirement for a restart on configuration changes, which we had found could lead to dropped connections under high load or with long-lived connections.
+
+However, we faced an interesting question as we considered the move to the ADS. The ADS is not as simple as one might expect: there are explicit ordering dependencies when sending updates to Envoy. The Envoy project has reference implementations of the ordering logic, but only in Go and Java, where Ambassador was primarily in Python. We agonized a bit, and decided that the simplest way forward was to accept the polyglot nature of our world, and do our ADS implementation in Go.
+
+We also found, with KAT, that our testing had reached the point where Python’s performance with many network connections was a limitation, so we took advantage of Go here, as well, writing KAT’s querying and backend services primarily in Go. After all, what’s another Golang dependency when you’ve already taken the plunge?
+
+With a new test framework, new IR generating valid Envoy v2 configuration, and the ADS, we thought we were done with the major architectural changes in Ambassador 0.50. Alas, we hit one more issue. On the Azure Kubernetes Service, Ambassador annotation changes were no longer being detected.
+
+Working with the highly responsive AKS engineering team, we were able to identify the issue -- namely, the Kubernetes API server in AKS is exposed through a chain of proxies, requiring clients to be updated to understand how to connect using the FQDN of the API server, which is provided through a mutating webhook in AKS. Unfortunately, support for this feature was not available in the official Kubernetes Python client, so this was the third spot where we chose to switch to Go instead of Python.
+
+This raises the interesting question of, “why not ditch all the Python code, and just rewrite Ambassador entirely in Go?” It’s a valid question. The main concern with a rewrite is that Ambassador and Envoy operate at different conceptual levels rather than simply expressing the same concepts with different syntax. Being certain that we’ve expressed the conceptual bridges in a new language is not a trivial challenge, and not something to undertake without already having really excellent test coverage in place.
+
+At this point, we use Go to cover very specific, well-contained functions that can be verified for correctness much more easily than we could verify a complete Golang rewrite. In the future, who knows? But for 0.50.0, this functional split let us take advantage of Golang’s strengths while retaining more confidence about all the changes already in 0.50.
+
+## Lessons Learned
+
+We've learned a lot in the process of building [Ambassador 0.50](https://blog.getambassador.io/ambassador-0-50-ga-release-notes-sni-new-authservice-and-envoy-v2-support-3b30a4d04c81). Some of our key takeaways:
+
+* Kubernetes and Envoy are very powerful frameworks, but they are also extremely fast moving targets -- there is sometimes no substitute for reading the source code and talking to the maintainers (who are fortunately all quite accessible!)
+* The best supported libraries in the Kubernetes / Envoy ecosystem are written in Go. While we love Python, we have had to adopt Go so that we're not forced to maintain too many components ourselves.
+* Redesigning a test harness is sometimes necessary to move your software forward.
+* The real cost in redesigning a test harness is often in porting your old tests to the new harness implementation.
+* Designing (and implementing) an effective control plane for the edge proxy use case has been challenging, and the feedback from the open source community around Kubernetes, Envoy and Ambassador has been extremely useful.
+
+Migrating Ambassador to the Envoy v2 configuration and ADS APIs was a long and difficult journey that required lots of architecture and design discussions and plenty of coding, but early feedback has been positive. [Ambassador 0.50 is available now](https://blog.getambassador.io/announcing-ambassador-0-50-8dffab5b05e0), so you can take it for a test run and share your feedback with the community on our [Slack channel](http://d6e.co/slack) or on [Twitter](https://www.twitter.com/getambassadorio).
diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md
index 68099874c8e4c..ff3df214b43ee 100644
--- a/content/en/docs/concepts/cluster-administration/cloud-providers.md
+++ b/content/en/docs/concepts/cluster-administration/cloud-providers.md
@@ -367,14 +367,21 @@ The `--hostname-override` parameter is ignored by the VSphere cloud provider.
## IBM Cloud Kubernetes Service
### Compute nodes
-By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://console.bluemix.net/docs/containers/cs_clusters_planning.html#plan_clusters).
+By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://cloud.ibm.com/docs/containers?topic=containers-plan_clusters#plan_clusters).
The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance.
### Networking
-The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://console.bluemix.net/docs/containers/cs_network_cluster.html#planning).
+The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_cluster#cs_network_cluster).
-To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Planning to expose your apps with external networking](https://console.bluemix.net/docs/containers/cs_network_planning.html#planning).
+To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Planning to expose your apps with external networking](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning#cs_network_planning).
### Storage
-The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. For more information, see [Planning highly available persistent storage](https://console.bluemix.net/docs/containers/cs_storage_planning.html#storage_planning).
+The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. For more information, see [Planning highly available persistent storage](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#storage_planning).
+
+## Baidu Cloud Container Engine
+
+### Node Name
+
+The Baidu cloud provider uses the private IP address of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object.
+Note that the Kubernetes Node name must match the Baidu VM private IP.
diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md
index d4fda1a89293d..bbd223e9def89 100644
--- a/content/en/docs/concepts/cluster-administration/networking.md
+++ b/content/en/docs/concepts/cluster-administration/networking.md
@@ -7,8 +7,9 @@ weight: 50
---
{{% capture overview %}}
-Kubernetes approaches networking somewhat differently than Docker does by
-default. There are 4 distinct networking problems to solve:
+Networking is a central part of Kubernetes, but it can be challenging to
+understand exactly how it is expected to work. There are 4 distinct networking
+problems to address:
1. Highly-coupled container-to-container communications: this is solved by
[pods](/docs/concepts/workloads/pods/pod/) and `localhost` communications.
@@ -21,80 +22,56 @@ default. There are 4 distinct networking problems to solve:
{{% capture body %}}
-Kubernetes assumes that pods can communicate with other pods, regardless of
-which host they land on. Every pod gets its own IP address so you do not
-need to explicitly create links between pods and you almost never need to deal
-with mapping container ports to host ports. This creates a clean,
-backwards-compatible model where pods can be treated much like VMs or physical
-hosts from the perspectives of port allocation, naming, service discovery, load
-balancing, application configuration, and migration.
-
-There are requirements imposed on how you set up your cluster networking to
-achieve this.
-
-## Docker model
-
-Before discussing the Kubernetes approach to networking, it is worthwhile to
-review the "normal" way that networking works with Docker. By default, Docker
-uses host-private networking. It creates a virtual bridge, called `docker0` by
-default, and allocates a subnet from one of the private address blocks defined
-in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each
-container that Docker creates, it allocates a virtual Ethernet device (called
-`veth`) which is attached to the bridge. The veth is mapped to appear as `eth0`
-in the container, using Linux namespaces. The in-container `eth0` interface is
-given an IP address from the bridge's address range.
-
-The result is that Docker containers can talk to other containers only if they
-are on the same machine (and thus the same virtual bridge). Containers on
-different machines can not reach each other - in fact they may end up with the
-exact same network ranges and IP addresses.
-
-In order for Docker containers to communicate across nodes, there must
-be allocated ports on the machine’s own IP address, which are then
-forwarded or proxied to the containers. This obviously means that
-containers must either coordinate which ports they use very carefully
-or ports must be allocated dynamically.
-
-## Kubernetes model
-
-Coordinating ports across multiple developers is very difficult to do at
-scale and exposes users to cluster-level issues outside of their control.
+Kubernetes is all about sharing machines between applications. Typically,
+sharing machines requires ensuring that two applications do not try to use the
+same ports. Coordinating ports across multiple developers is very difficult to
+do at scale and exposes users to cluster-level issues outside of their control.
+
Dynamic port allocation brings a lot of complications to the system - every
application has to take ports as flags, the API servers have to know how to
insert dynamic port numbers into configuration blocks, services have to know
how to find each other, etc. Rather than deal with this, Kubernetes takes a
different approach.
+## The Kubernetes network model
+
+Every `Pod` gets its own IP address. This means you do not need to explicitly
+create links between `Pods` and you almost never need to deal with mapping
+container ports to host ports. This creates a clean, backwards-compatible
+model where `Pods` can be treated much like VMs or physical hosts from the
+perspectives of port allocation, naming, service discovery, load balancing,
+application configuration, and migration.
+
Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
- * all containers can communicate with all other containers without NAT
- * all nodes can communicate with all containers (and vice-versa) without NAT
- * the IP that a container sees itself as is the same IP that others see it as
+ * pods on a node can communicate with all pods on all nodes without NAT
+ * agents on a node (e.g. system daemons, kubelet) can communicate with all
+ pods on that node
-What this means in practice is that you can not just take two computers
-running Docker and expect Kubernetes to work. You must ensure that the
-fundamental requirements are met.
+Note: For those platforms that support `Pods` running in the host network (e.g.
+Linux):
+
+ * pods in the host network of a node can communicate with all pods on all
+ nodes without NAT
This model is not only less complex overall, but it is principally compatible
with the desire for Kubernetes to enable low-friction porting of apps from VMs
to containers. If your job previously ran in a VM, your VM had an IP and could
talk to other VMs in your project. This is the same basic model.
-Until now this document has talked about containers. In reality, Kubernetes
-applies IP addresses at the `Pod` scope - containers within a `Pod` share their
-network namespaces - including their IP address. This means that containers
-within a `Pod` can all reach each other's ports on `localhost`. This does imply
-that containers within a `Pod` must coordinate port usage, but this is no
-different than processes in a VM. This is called the "IP-per-pod" model. This
-is implemented, using Docker, as a "pod container" which holds the network namespace
-open while "app containers" (the things the user specified) join that namespace
-with Docker's `--net=container:` function.
-
-As with Docker, it is possible to request host ports, but this is reduced to a
-very niche operation. In this case a port will be allocated on the host `Node`
-and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the
-existence or non-existence of host ports.
+Kubernetes IP addresses exist at the `Pod` scope - containers within a `Pod`
+share their network namespaces - including their IP address. This means that
+containers within a `Pod` can all reach each other's ports on `localhost`. This
+also means that containers within a `Pod` must coordinate port usage, but this
+is no different than processes in a VM. This is called the "IP-per-pod" model.
+
+How this is implemented is a detail of the particular container runtime in use.
+
+It is possible to request ports on the `Node` itself which forward to your `Pod`
+(called host ports), but this is a very niche operation. How that forwarding is
+implemented is also a detail of the container runtime. The `Pod` itself is
+blind to the existence or non-existence of host ports.
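+
+For illustration, a minimal sketch of what requesting a host port looks like in a Pod spec; the image and port numbers below are arbitrary examples:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hostport-example
+spec:
+  containers:
+  - name: web
+    image: nginx   # example image
+    ports:
+    - containerPort: 80
+      # Traffic arriving on port 8080 of the Node is forwarded to port 80 in the Pod.
+      hostPort: 8080
+```
+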
## How to implement the Kubernetes networking model
diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md
index cac58727d7b39..fa654f776e66c 100644
--- a/content/en/docs/concepts/configuration/secret.md
+++ b/content/en/docs/concepts/configuration/secret.md
@@ -13,10 +13,10 @@ weight: 50
{{% capture overview %}}
-Objects of type `secret` are intended to hold sensitive information, such as
-passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
-is safer and more flexible than putting it verbatim in a `pod` definition or in
-a docker image. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
+Kubernetes `secret` objects let you store and manage sensitive information, such
+as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
+is safer and more flexible than putting it verbatim in a
+{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information.
{{% /capture %}}
@@ -32,7 +32,8 @@ more control over how it is used, and reduces the risk of accidental exposure.
Users can create secrets, and the system also creates some secrets.
To use a secret, a pod needs to reference the secret.
-A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of
+A secret can be used with a pod in two ways: as files in a
+{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of
its containers, or used by kubelet when pulling images for the pod.
### Built-in Secrets
@@ -94,11 +95,14 @@ password.txt: 12 bytes
username.txt: 5 bytes
```
-Note that neither `get` nor `describe` shows the contents of the file by default.
-This is to protect the secret from being exposed accidentally to someone looking
+{{< note >}}
+`kubectl get` and `kubectl describe` avoid showing the contents of a secret by
+default.
+This is to protect the secret from being exposed accidentally to an onlooker,
or from being stored in a terminal log.
+{{< /note >}}
-See [decoding a secret](#decoding-a-secret) for how to see the contents.
+See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret.
#### Creating a Secret Manually
@@ -271,8 +275,9 @@ $ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode
### Using Secrets
-Secrets can be mounted as data volumes or be exposed as environment variables to
-be used by a container in a pod. They can also be used by other parts of the
+Secrets can be mounted as data volumes or be exposed as
+{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}}
+to be used by a container in a pod. They can also be used by other parts of the
system, without being directly exposed to the pod. For example, they can hold
credentials that other parts of the system should use to interact with external
systems on your behalf.
@@ -458,7 +463,8 @@ Secret updates.
#### Using Secrets as Environment Variables
-To use a secret in an environment variable in a pod:
+To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
+in a pod:
1. Create a secret or use an existing one. Multiple pods can reference the same secret.
1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`.
@@ -534,10 +540,10 @@ Secret volume sources are validated to ensure that the specified object
reference actually points to an object of type `Secret`. Therefore, a secret
needs to be created before any pods that depend on it.
-Secret API objects reside in a namespace. They can only be referenced by pods
-in that same namespace.
+Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}.
+They can only be referenced by pods in that same namespace.
-Individual secrets are limited to 1MB in size. This is to discourage creation
+Individual secrets are limited to 1MiB in size. This is to discourage creation
of very large secrets which would exhaust apiserver and kubelet memory.
However, creation of many smaller secrets could also exhaust memory. More
comprehensive limits on memory usage due to secrets is a planned feature.
@@ -549,8 +555,8 @@ controller. It does not include pods created via the kubelets
not common ways to create pods.)
Secrets must be created before they are consumed in pods as environment
-variables unless they are marked as optional. References to Secrets that do not exist will prevent
-the pod from starting.
+variables unless they are marked as optional. References to Secrets that do
+not exist will prevent the pod from starting.
References via `secretKeyRef` to keys that do not exist in a named Secret
will prevent the pod from starting.
@@ -821,6 +827,7 @@ be available in future releases of Kubernetes.
## Security Properties
+
### Protections
Because `secret` objects can be created independently of the `pods` that use
@@ -829,51 +836,52 @@ creating, viewing, and editing pods. The system can also take additional
precautions with `secret` objects, such as avoiding writing them to disk where
possible.
-A secret is only sent to a node if a pod on that node requires it. It is not
-written to disk. It is stored in a tmpfs. It is deleted once the pod that
-depends on it is deleted.
-
-On most Kubernetes-project-maintained distributions, communication between user
-to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS.
-Secrets are protected when transmitted over these channels.
-
-Secret data on nodes is stored in tmpfs volumes and thus does not come to rest
-on the node.
+A secret is only sent to a node if a pod on that node requires it.
+Kubelet stores the secret into a `tmpfs` so that the secret is not written
+to disk storage. Once the Pod that depends on the secret is deleted, kubelet
+will delete its local copy of the secret data as well.
There may be secrets for several pods on the same node. However, only the
secrets that a pod requests are potentially visible within its containers.
-Therefore, one Pod does not have access to the secrets of another pod.
+Therefore, one Pod does not have access to the secrets of another Pod.
There may be several containers in a pod. However, each container in a pod has
to request the secret volume in its `volumeMounts` for it to be visible within
the container. This can be used to construct useful [security partitions at the
Pod level](#use-case-secret-visible-to-one-container-in-a-pod).
+On most Kubernetes-project-maintained distributions, communication between user
+to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS.
+Secrets are protected when transmitted over these channels.
+
+{{< feature-state for_k8s_version="v1.13" state="beta" >}}
+
+You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/)
+for secret data, so that the secrets are not stored in the clear in {{< glossary_tooltip term_id="etcd" >}}.
+
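+For illustration, a minimal sketch of an `EncryptionConfiguration` that encrypts Secret resources with an AES-CBC key before they are written to etcd; the key name and the base64-encoded key below are placeholders that you generate yourself (see the linked task page for the full procedure):
+
+```yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      # Secrets are encrypted with the first provider when written to etcd.
+      - aescbc:
+          keys:
+            - name: key1
+              secret: c2VjcmV0IGlzIHNlY3VyZQ==   # placeholder; a 16, 24, or 32 byte key, base64-encoded
+      # identity allows existing, unencrypted secrets to still be read during migration.
+      - identity: {}
+```
+
+The file is then referenced via the kube-apiserver `--encryption-provider-config` flag.
+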
### Risks
- - In the API server secret data is stored as plaintext in etcd; therefore:
+ - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}};
+ therefore:
+ - Administrators should enable encryption at rest for cluster data (requires v1.13 or later)
- Administrators should limit access to etcd to admin users
- - Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks
- used by etcd when no longer in use
+ - Administrators may want to wipe/shred disks used by etcd when no longer in use
+ - If running etcd in a cluster, administrators should make sure to use SSL/TLS
+ for etcd peer-to-peer communication.
- If you configure the secret through a manifest (JSON or YAML) file which has
the secret data encoded as base64, sharing this file or checking it in to a
- source repository means the secret is compromised. Base64 encoding is not an
+ source repository means the secret is compromised. Base64 encoding is _not_ an
encryption method and is considered the same as plain text.
- Applications still need to protect the value of secret after reading it from the volume,
such as not accidentally logging it or transmitting it to an untrusted party.
- A user who can create a pod that uses a secret can also see the value of that secret. Even
if apiserver policy does not allow that user to read the secret object, the user could
run a pod which exposes the secret.
- - If multiple replicas of etcd are run, then the secrets will be shared between them.
- By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
- - Currently, anyone with root on any node can read any secret from the apiserver,
+ - Currently, anyone with root on any node can read _any_ secret from the apiserver,
by impersonating the kubelet. It is a planned feature to only send secrets to
nodes that actually require them, to restrict the impact of a root exploit on a
single node.
-{{< note >}}
-As of 1.7 [encryption of secret data at rest is supported](/docs/tasks/administer-cluster/encrypt-data/).
-{{< /note >}}
{{% capture whatsnext %}}
diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
index 3825868850c51..08d855732fabb 100644
--- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md
+++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md
@@ -36,7 +36,7 @@ No parameters are passed to the handler.
`PreStop`
-This hook is called immediately before a container is terminated.
+This hook is called immediately before a container is terminated due to an API request or a management event such as a liveness probe failure, preemption, or resource contention. A call to the preStop hook fails if the container is already in a terminated or completed state.
It is blocking, meaning it is synchronous,
so it must complete before the call to delete the container can be sent.
No parameters are passed to the handler.
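+
+As a brief sketch (the nginx image and quit command are illustrative), a `preStop` handler is
+declared in the container's `lifecycle` field:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: lifecycle-demo
+spec:
+  containers:
+  - name: lifecycle-demo-container
+    image: nginx
+    lifecycle:
+      preStop:
+        exec:
+          # let nginx finish serving in-flight requests before the container is removed
+          command: ["/usr/sbin/nginx", "-s", "quit"]
+```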
diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md
index 8885784e4c76f..acb3cd43fc892 100644
--- a/content/en/docs/concepts/containers/images.md
+++ b/content/en/docs/concepts/containers/images.md
@@ -149,9 +149,9 @@ Once you have those variables filled in you can
### Using IBM Cloud Container Registry
IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to safely store and share your Docker images. By default, images in your private registry are scanned by the integrated Vulnerability Advisor to detect security issues and potential vulnerabilities. Users in your IBM Cloud account can access your images, or you can create a token to grant access to registry namespaces.
-To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://console.bluemix.net/docs/services/Registry/index.html#index).
+To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-index#index).
-You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://console.bluemix.net/docs/services/RegistryImages/index.html#ibm_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://console.bluemix.net/docs/containers/cs_images.html#images).
+You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images#public_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images#images).
### Configuring Nodes to Authenticate to a Private Registry
@@ -318,7 +318,7 @@ type: kubernetes.io/dockerconfigjson
If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid.
If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, it means
-the data was successfully un-base64 encoded, but could not be parsed as a `.docker/config.json` file.
+the base64 encoded string in the data was successfully decoded, but could not be parsed as a `.docker/config.json` file.
#### Referring to an imagePullSecrets on a Pod
diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md
index c473a42df98a2..af38f156e486a 100644
--- a/content/en/docs/concepts/overview/components.md
+++ b/content/en/docs/concepts/overview/components.md
@@ -4,6 +4,9 @@ reviewers:
title: Kubernetes Components
content_template: templates/concept
weight: 20
+card:
+ name: concepts
+ weight: 20
---
{{% capture overview %}}
@@ -76,7 +79,8 @@ network rules on the host and performing connection forwarding.
### Container Runtime
-The container runtime is the software that is responsible for running containers. Kubernetes supports several runtimes: [Docker](http://www.docker.com), [rkt](https://coreos.com/rkt/), [runc](https://github.com/opencontainers/runc) and any OCI [runtime-spec](https://github.com/opencontainers/runtime-spec) implementation.
+The container runtime is the software that is responsible for running containers.
+Kubernetes supports several runtimes: [Docker](http://www.docker.com), [containerd](https://containerd.io), [cri-o](https://cri-o.io/), [rktlet](https://github.com/kubernetes-incubator/rktlet) and any implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md).
## Addons
diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md
index 179b471dcd891..2ef08ade264bb 100644
--- a/content/en/docs/concepts/overview/kubernetes-api.md
+++ b/content/en/docs/concepts/overview/kubernetes-api.md
@@ -4,6 +4,9 @@ reviewers:
title: The Kubernetes API
content_template: templates/concept
weight: 30
+card:
+ name: concepts
+ weight: 30
---
{{% capture overview %}}
diff --git a/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md b/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md
index 38b194d2a4fbd..bc83bd6b03e95 100644
--- a/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md
+++ b/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md
@@ -73,7 +73,7 @@ that must be set:
The `kubectl` command also supports update commands driven by an aspect of the object.
Setting this aspect may set different fields for different object types:
-- `set` : Set an aspect of an object.
+- `set`: Set an aspect of an object.
{{< note >}}
In Kubernetes version 1.5, not every verb-driven command has an associated aspect-driven command.
@@ -160,5 +160,3 @@ kubectl create --edit -f /tmp/srv.yaml
- [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
{{% /capture %}}
-
-
diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md
index 014d4945c03f4..6bfd4404f0cae 100644
--- a/content/en/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/en/docs/concepts/overview/what-is-kubernetes.md
@@ -5,6 +5,9 @@ reviewers:
title: What is Kubernetes?
content_template: templates/concept
weight: 10
+card:
+ name: concepts
+ weight: 10
---
{{% capture overview %}}
diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md
index ae529e39bcfe7..00d1cc65f8171 100644
--- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md
+++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md
@@ -2,6 +2,9 @@
title: Understanding Kubernetes Objects
content_template: templates/concept
weight: 10
+card:
+ name: concepts
+ weight: 40
---
{{% capture overview %}}
@@ -28,7 +31,7 @@ Every Kubernetes object includes two nested object fields that govern the object
For example, a Kubernetes Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction--in this case, starting a replacement instance.
-For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md).
+For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md).
### Describing a Kubernetes Object
diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md
index 4f1c658aa81d6..607d2c93e6db9 100644
--- a/content/en/docs/concepts/policy/pod-security-policy.md
+++ b/content/en/docs/concepts/policy/pod-security-policy.md
@@ -41,7 +41,7 @@ administrator to control the following:
| Restricting escalation to root privileges | [`allowPrivilegeEscalation`, `defaultAllowPrivilegeEscalation`](#privilege-escalation) |
| Linux capabilities | [`defaultAddCapabilities`, `requiredDropCapabilities`, `allowedCapabilities`](#capabilities) |
| The SELinux context of the container | [`seLinux`](#selinux) |
-| The Allowed Proc Mount types for the container | [`allowedProcMountTypes`](#allowedProcMountTypes) |
+| The Allowed Proc Mount types for the container | [`allowedProcMountTypes`](#allowedprocmounttypes) |
| The AppArmor profile used by containers | [annotations](#apparmor) |
| The seccomp profile used by containers | [annotations](#seccomp) |
| The sysctl profile used by containers | [annotations](#sysctl) |
@@ -336,7 +336,6 @@ pause-7774d79b5-qrgcb 0/1 Pending 0 1s
pause-7774d79b5-qrgcb 0/1 Pending 0 1s
pause-7774d79b5-qrgcb 0/1 ContainerCreating 0 1s
pause-7774d79b5-qrgcb 1/1 Running 0 2s
-^C
```
### Clean up
diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md
index ae0160ad9fb3b..796e66d30612d 100644
--- a/content/en/docs/concepts/services-networking/connect-applications-service.md
+++ b/content/en/docs/concepts/services-networking/connect-applications-service.md
@@ -17,7 +17,7 @@ Now that you have a continuously running, replicated application you can expose
By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically.
-Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or mapping container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
+Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model.
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes).
diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md
index da5162073b750..5c519d479a904 100644
--- a/content/en/docs/concepts/services-networking/dns-pod-service.md
+++ b/content/en/docs/concepts/services-networking/dns-pod-service.md
@@ -84,7 +84,7 @@ the hostname of the pod. For example, given a Pod with `hostname` set to
The Pod spec also has an optional `subdomain` field which can be used to specify
its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain`
set to "`bar`", in namespace "`my-namespace`", will have the fully qualified
-domain name (FQDN) "`foo.bar.my-namespace.pod.cluster.local`".
+domain name (FQDN) "`foo.bar.my-namespace.svc.cluster.local`".
Example:
@@ -141,7 +141,7 @@ record for the Pod's fully qualified hostname.
For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to
"`default-subdomain`", and a headless Service named "`default-subdomain`" in
the same namespace, the pod will see its own FQDN as
-"`busybox-1.default-subdomain.my-namespace.pod.cluster.local`". DNS serves an
+"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an
A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and
"`busybox2`" can have their distinct A records.
diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md
new file mode 100644
index 0000000000000..57af46a01a0c5
--- /dev/null
+++ b/content/en/docs/concepts/services-networking/ingress-controllers.md
@@ -0,0 +1,74 @@
+---
+title: Ingress Controllers
+reviewers:
+content_template: templates/concept
+weight: 40
+---
+
+{{% capture overview %}}
+
+In order for the Ingress resource to work, the cluster must have an ingress controller running.
+
+Unlike other types of controllers which run as part of the `kube-controller-manager` binary, Ingress controllers
+are not started automatically with a cluster. Use this page to choose the ingress controller implementation
+that best fits your cluster.
+
+Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and
+ [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Additional controllers
+
+* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress
+ controller with [community](https://www.getambassador.io/docs) or
+ [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/).
+* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager).
+* [Contour](https://github.com/heptio/contour) is an [Envoy](https://www.envoyproxy.io) based ingress controller
+ provided and supported by Heptio.
+* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments.
+* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508)
+ for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest).
+* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io).
+* [HAProxy](http://www.haproxy.org/) based ingress controller
+ [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress) which is mentioned on the blog post
+ [HAProxy Ingress Controller for Kubernetes](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/).
+ [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for HAProxy Enterprise and
+ the ingress controller [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress).
+* [Istio](https://istio.io/) based ingress controller
+ [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/).
+* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or
+ [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the
+ [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller).
+* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the
+ [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller).
+* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller
+ ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial
+ support by [Containous](https://containo.us/services).
+
+## Using multiple Ingress controllers
+
+You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers)
+within a cluster. When you create an ingress, you should annotate each ingress with the appropriate
+[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
+to indicate which ingress controller should be used if more than one exists within your cluster.
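+
+As a sketch (the Ingress name, backend service, and class value are illustrative), the
+annotation sits in the Ingress metadata:
+
+```yaml
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: example-ingress
+  annotations:
+    # handled only by the controller configured for the "nginx" class
+    kubernetes.io/ingress.class: "nginx"
+spec:
+  backend:
+    serviceName: example-service
+    servicePort: 80
+```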
+
+If you do not define a class, your cloud provider may use a default ingress provider.
+
+Ideally, all ingress controllers should fulfill this specification, but the various ingress
+controllers operate slightly differently.
+
+{{< note >}}
+Make sure you review your ingress controller's documentation to understand the caveats of choosing it.
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* Learn more about [Ingress](/docs/concepts/services-networking/ingress/).
+* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube).
+
+{{% /capture %}}
diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md
index a29ff81515d19..64b094ff21a7a 100644
--- a/content/en/docs/concepts/services-networking/ingress.md
+++ b/content/en/docs/concepts/services-networking/ingress.md
@@ -25,7 +25,7 @@ For the sake of clarity, this guide defines the following terms:
Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to
{{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster.
-Traffic routing is controlled by rules defined on the ingress resource.
+Traffic routing is controlled by rules defined on the Ingress resource.
```none
internet
@@ -35,9 +35,9 @@ Traffic routing is controlled by rules defined on the ingress resource.
[ Services ]
```
-An ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An [ingress controller](#ingress-controllers) is responsible for fulfilling the ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic.
+An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic.
-An ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
+An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically
uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or
[Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer).
@@ -45,52 +45,19 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin
{{< feature-state for_k8s_version="v1.1" state="beta" >}}
-Before you start using an ingress, there are a few things you should understand. The ingress is a beta resource. You will need an ingress controller to satisfy an ingress, simply creating the resource will have no effect.
+Before you start using an Ingress, there are a few things you should understand. The Ingress is a beta resource.
-GCE/Google Kubernetes Engine deploys an [ingress controller](#ingress-controllers) on the master. Review the
+{{< note >}}
+You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect.
+{{< /note >}}
+
+GCE/Google Kubernetes Engine deploys an Ingress controller on the master. Review the
[beta limitations](https://github.com/kubernetes/ingress-gce/blob/master/BETA_LIMITATIONS.md#glbc-beta-limitations)
of this controller if you are using GCE/GKE.
In environments other than GCE/Google Kubernetes Engine, you may need to
[deploy an ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/). There are a number of
-[ingress controller](#ingress-controllers) you may choose from.
-
-## Ingress controllers
-
-In order for the ingress resource to work, the cluster must have an ingress controller running. This is unlike other types of controllers, which run as part of the `kube-controller-manager` binary, and are typically started automatically with a cluster. Choose the ingress controller implementation that best fits your cluster.
-
-* Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and
- [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers.
-
-Additional controllers include:
-
-* [Contour](https://github.com/heptio/contour) is an [Envoy](https://www.envoyproxy.io) based ingress controller
- provided and supported by Heptio.
-* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments.
-* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508)
- for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest).
-* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io).
-* [HAProxy](http://www.haproxy.org/) based ingress controller
- [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress) which is mentioned on the blog post
- [HAProxy Ingress Controller for Kubernetes](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/).
- [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for HAProxy Enterprise and
- the ingress controller [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress).
-* [Istio](https://istio.io/) based ingress controller
- [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/).
-* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or
- [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the
- [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller).
-* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the
- [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller).
-* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller
- ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial
- support by [Containous](https://containo.us/services).
-
-You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) within a cluster.
-When you create an ingress, you should annotate each ingress with the appropriate
-[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) to indicate which ingress
-controller should be used if more than one exists within your cluster.
-If you do not define a class, your cloud provider may use a default ingress provider.
+[ingress controllers](/docs/concepts/services-networking/ingress-controllers) you may choose from.
### Before you begin
@@ -122,14 +89,14 @@ spec:
servicePort: 80
```
- As with all other Kubernetes resources, an ingress needs `apiVersion`, `kind`, and `metadata` fields.
+ As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields.
For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/).
- Ingress frequently uses annotations to configure some options depending on the ingress controller, an example of which
+ Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which
is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md).
- Different [ingress controller](#ingress-controllers) support different annotations. Review the documentation for
- your choice of ingress controller to learn which annotations are supported.
+ Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for
+ your choice of Ingress controller to learn which annotations are supported.
-The ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status)
+The Ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status)
has all the information needed to configure a loadbalancer or proxy server. Most importantly, it
contains a list of rules matched against all incoming requests. Ingress resource only supports rules
for directing HTTP traffic.
@@ -146,18 +113,17 @@ Each http rule contains the following information:
loadbalancer will direct traffic to the referenced service.
* A backend is a combination of service and port names as described in the
[services doc](/docs/concepts/services-networking/service/). HTTP (and HTTPS) requests to the
- ingress matching the host and path of the rule will be sent to the listed backend.
+ Ingress matching the host and path of the rule will be sent to the listed backend.
-A default backend is often configured in an ingress controller that will service any requests that do not
+A default backend is often configured in an Ingress controller that will service any requests that do not
match a path in the spec.
### Default Backend
-An ingress with no rules sends all traffic to a single default backend. The default
-backend is typically a configuration option of the [ingress controller](#ingress-controllers)
-and is not specified in your ingress resources.
+An Ingress with no rules sends all traffic to a single default backend. The default
+backend is typically a configuration option of the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) and is not specified in your Ingress resources.
-If none of the hosts or paths match the HTTP request in the ingress objects, the traffic is
+If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is
routed to your default backend.
## Types of Ingress
@@ -165,7 +131,7 @@ routed to your default backend.
### Single Service Ingress
There are existing Kubernetes concepts that allow you to expose a single Service
-(see [alternatives](#alternatives)). You can also do this with an ingress by specifying a
+(see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a
*default backend* with no rules.
{{< codenew file="service/networking/ingress.yaml" >}}
@@ -181,8 +147,8 @@ NAME HOSTS ADDRESS PORTS AGE
test-ingress * 107.178.254.228 80 59s
```
-Where `107.178.254.228` is the IP allocated by the ingress controller to satisfy
-this ingress.
+Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy
+this Ingress.
{{< note >}}
Ingress controllers and load balancers may take a minute or two to allocate an IP address.
@@ -192,7 +158,7 @@ Until that time you will often see the address listed as ``.
### Simple fanout
A fanout configuration routes traffic from a single IP address to more than one service,
-based on the HTTP URI being requested. An ingress allows you to keep the number of loadbalancers
+based on the HTTP URI being requested. An Ingress allows you to keep the number of loadbalancers
down to a minimum. For example, a setup like:
```shell
@@ -200,7 +166,7 @@ foo.bar.com -> 178.91.123.132 -> / foo service1:4200
/ bar service2:8080
```
-would require an ingress such as:
+would require an Ingress such as:
```yaml
apiVersion: extensions/v1beta1
@@ -224,7 +190,7 @@ spec:
servicePort: 8080
```
-When you create the ingress with `kubectl create -f`:
+When you create the Ingress with `kubectl create -f`:
```shell
kubectl describe ingress simple-fanout-example
@@ -249,13 +215,13 @@ Events:
Normal ADD 22s loadbalancer-controller default/test
```
-The ingress controller will provision an implementation specific loadbalancer
-that satisfies the ingress, as long as the services (`s1`, `s2`) exist.
-When it has done so, you will see the address of the loadbalancer at the
+The Ingress controller provisions an implementation specific loadbalancer
+that satisfies the Ingress, as long as the services (`s1`, `s2`) exist.
+When it has done so, you can see the address of the loadbalancer at the
Address field.
{{< note >}}
-Depending on the [ingress controller](#ingress-controllers) you are using, you may need to
+Depending on the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) you are using, you may need to
create a default-http-backend [Service](/docs/concepts/services-networking/service/).
{{< /note >}}
@@ -269,7 +235,7 @@ foo.bar.com --| |-> foo.bar.com s1:80
bar.foo.com --| |-> bar.foo.com s2:80
```
-The following ingress tells the backing loadbalancer to route requests based on
+The following Ingress tells the backing loadbalancer to route requests based on
the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4).
```yaml
@@ -293,9 +259,9 @@ spec:
servicePort: 80
```
-If you create an ingress resource without any hosts defined in the rules, then any
-web traffic to the IP address of your ingress controller can be matched without a name based
-virtual host being required. For example, the following ingress resource will route traffic
+If you create an Ingress resource without any hosts defined in the rules, then any
+web traffic to the IP address of your Ingress controller can be matched without a name based
+virtual host being required. For example, the following Ingress resource will route traffic
requested for `first.bar.com` to `service1`, `second.foo.com` to `service2`, and any traffic
to the IP address without a hostname defined in request (that is, without a request header being
presented) to `service3`.
@@ -328,12 +294,12 @@ spec:
### TLS
-You can secure an ingress by specifying a [secret](/docs/concepts/configuration/secret)
-that contains a TLS private key and certificate. Currently the ingress only
+You can secure an Ingress by specifying a [secret](/docs/concepts/configuration/secret)
+that contains a TLS private key and certificate. Currently the Ingress only
supports a single TLS port, 443, and assumes TLS termination. If the TLS
-configuration section in an ingress specifies different hosts, they will be
+configuration section in an Ingress specifies different hosts, they will be
multiplexed on the same port according to the hostname specified through the
-SNI TLS extension (provided the ingress controller supports SNI). The TLS secret
+SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret
must contain keys named `tls.crt` and `tls.key` that contain the certificate
and private key to use for TLS, e.g.:
@@ -346,10 +312,10 @@ kind: Secret
metadata:
name: testsecret-tls
namespace: default
-type: Opaque
+type: kubernetes.io/tls
```
-Referencing this secret in an ingress will tell the ingress controller to
+Referencing this secret in an Ingress will tell the Ingress controller to
secure the channel from the client to the loadbalancer using TLS. You need to make
sure the TLS secret you created came from a certificate that contains a CN
for `sslexample.foo.com`.
@@ -375,24 +341,24 @@ spec:
```
{{< note >}}
-There is a gap between TLS features supported by various ingress
+There is a gap between TLS features supported by various Ingress
controllers. Please refer to documentation on
[nginx](https://git.k8s.io/ingress-nginx/README.md#https),
[GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https), or any other
-platform specific ingress controller to understand how TLS works in your environment.
+platform specific Ingress controller to understand how TLS works in your environment.
{{< /note >}}
### Loadbalancing
-An ingress controller is bootstrapped with some load balancing policy settings
-that it applies to all ingress, such as the load balancing algorithm, backend
+An Ingress controller is bootstrapped with some load balancing policy settings
+that it applies to all Ingress, such as the load balancing algorithm, backend
weight scheme, and others. More advanced load balancing concepts
(e.g. persistent sessions, dynamic weights) are not yet exposed through the
-ingress. You can still get these features through the
+Ingress. You can still get these features through the
[service loadbalancer](https://github.com/kubernetes/ingress-nginx).
It's also worth noting that even though health checks are not exposed directly
-through the ingress, there exist parallel concepts in Kubernetes such as
+through the Ingress, there exist parallel concepts in Kubernetes such as
[readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
which allow you to achieve the same end result. Please review the controller
specific docs to see how they handle health checks (
@@ -401,7 +367,7 @@ specific docs to see how they handle health checks (
## Updating an Ingress
-To update an existing ingress to add a new Host, you can update it by editing the resource:
+To update an existing Ingress to add a new Host, you can update it by editing the resource:
```shell
kubectl describe ingress test
@@ -452,7 +418,7 @@ spec:
```
Saving the yaml will update the resource in the API server, which should tell the
-ingress controller to reconfigure the loadbalancer.
+Ingress controller to reconfigure the loadbalancer.
```shell
kubectl describe ingress test
@@ -478,25 +444,24 @@ Events:
Normal ADD 45s loadbalancer-controller default/test
```
-You can achieve the same by invoking `kubectl replace -f` on a modified ingress yaml file.
+You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file.
## Failing across availability zones
Techniques for spreading traffic across failure domains differs between cloud providers.
-Please check the documentation of the relevant [ingress controller](#ingress-controllers) for
-details. You can also refer to the [federation documentation](/docs/concepts/cluster-administration/federation/)
-for details on deploying ingress in a federated cluster.
+Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers) for details. You can also refer to the [federation documentation](/docs/concepts/cluster-administration/federation/)
+for details on deploying Ingress in a federated cluster.
## Future Work
Track [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network)
for more details on the evolution of the ingress and related resources. You may also track the
-[ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the
-evolution of various ingress controllers.
+[Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the
+evolution of various Ingress controllers.
## Alternatives
-You can expose a Service in multiple ways that don't directly involve the ingress resource:
+You can expose a Service in multiple ways that don't directly involve the Ingress resource:
* Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer)
* Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport)
@@ -505,6 +470,5 @@ You can expose a Service in multiple ways that don't directly involve the ingres
{{% /capture %}}
{{% capture whatsnext %}}
-
+* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube)
{{% /capture %}}
-
diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md
index 570ce7f65423c..cd18e6ecaad9f 100644
--- a/content/en/docs/concepts/services-networking/network-policies.md
+++ b/content/en/docs/concepts/services-networking/network-policies.md
@@ -92,11 +92,12 @@ __egress__: Each `NetworkPolicy` may include a list of whitelist `egress` rules.
So, the example NetworkPolicy:
1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated)
-2. allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from:
+2. (Ingress rules) allows connections to all pods in the "default" namespace with the label "role=db" on TCP port 6379 from:
+
* any pod in the "default" namespace with the label "role=frontend"
* any pod in a namespace with the label "project=myproject"
* IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24)
-3. allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978
+3. (Egress rules) allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978
See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples.
@@ -266,4 +267,3 @@ The CNI plugin has to support SCTP as `protocol` value in `NetworkPolicy`.
- See more [Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource.
{{% /capture %}}
-
diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md
index 4b5bbba41f087..b25bb0ec5f506 100644
--- a/content/en/docs/concepts/services-networking/service.md
+++ b/content/en/docs/concepts/services-networking/service.md
@@ -519,6 +519,16 @@ metadata:
[...]
```
{{% /tab %}}
+{{% tab name="Baidu Cloud" %}}
+```yaml
+[...]
+metadata:
+ name: my-service
+ annotations:
+ service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
+[...]
+```
+{{% /tab %}}
{{< /tabs >}}
diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md
index 48fb78be5f84d..1638ee981ea7f 100644
--- a/content/en/docs/concepts/storage/volumes.md
+++ b/content/en/docs/concepts/storage/volumes.md
@@ -1150,12 +1150,12 @@ CSI support was introduced as alpha in Kubernetes v1.9, moved to beta in
Kubernetes v1.10, and is GA in Kubernetes v1.13.
{{< note >}}
-**Note:** Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes
+Support for CSI spec versions 0.2 and 0.3 is deprecated in Kubernetes
v1.13 and will be removed in a future release.
{{< /note >}}
{{< note >}}
-**Note:** CSI drivers may not be compatible across all Kubernetes releases.
+CSI drivers may not be compatible across all Kubernetes releases.
Please check the specific CSI driver's documentation for supported
deployments steps for each Kubernetes release and a compatibility matrix.
{{< /note >}}
@@ -1306,8 +1306,8 @@ MountFlags=shared
```
Or, remove `MountFlags=slave` if present. Then restart the Docker daemon:
```shell
-$ sudo systemctl daemon-reload
-$ sudo systemctl restart docker
+sudo systemctl daemon-reload
+sudo systemctl restart docker
```
diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
index 413547e2cb151..339cb81f78855 100644
--- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md
+++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md
@@ -46,11 +46,13 @@ It is important to note that if the `startingDeadlineSeconds` field is set (not
A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.
-For example, suppose a cron job is set to start at exactly `08:30:00` and its
-`startingDeadlineSeconds` is set to 10, if the CronJob controller happens to
-be down from `08:29:00` to `08:42:00`, the job will not start.
-Set a longer `startingDeadlineSeconds` if starting later is better than not
-starting at all.
+For example, suppose a CronJob is set to schedule a new Job every minute beginning at `08:30:00`, and its
+`startingDeadlineSeconds` field is not set. In that case the CronJob controller tolerates at most 100 missed schedules. If the controller happens to
+be down from `08:29:00` to `10:21:00`, the Job will not start, because the number of missed schedules is greater than 100.
+
+To illustrate this concept further, suppose a CronJob is set to schedule a new Job every minute beginning at `08:30:00`, and its
+`startingDeadlineSeconds` is set to 200 seconds. If the CronJob controller happens to
+be down for the same period as the previous example (`08:29:00` to `10:21:00`), the Job will still start at `10:22:00`. This happens because the controller now checks how many missed schedules occurred in the last 200 seconds (that is, 3 missed schedules), rather than counting from the last scheduled time until now.
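+
+For reference, a sketch of a CronJob using these fields (the schedule and container are
+illustrative) might look like:
+
+```yaml
+apiVersion: batch/v1beta1
+kind: CronJob
+metadata:
+  name: every-minute
+spec:
+  schedule: "*/1 * * * *"
+  startingDeadlineSeconds: 200   # skip a run that cannot start within 200s of its schedule
+  concurrencyPolicy: Forbid      # never start a new Job while the previous one is still running
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+          - name: hello
+            image: busybox
+            args: ["/bin/sh", "-c", "date; echo hello"]
+```
+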
The Cronjob is only responsible for creating Jobs that match its schedule, and
the Job in turn is responsible for the management of the Pods it represents.
diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md
index e3ed2574ccd93..7ec2c97600a7e 100644
--- a/content/en/docs/concepts/workloads/controllers/deployment.md
+++ b/content/en/docs/concepts/workloads/controllers/deployment.md
@@ -55,9 +55,9 @@ In this example:
In this case, you simply select a label that is defined in the Pod template (`app: nginx`).
However, more sophisticated selection rules are possible,
as long as the Pod template itself satisfies the rule.
-
+
{{< note >}}
- `matchLabels` is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map
+ `matchLabels` is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map
is equivalent to an element of `matchExpressions`, whose key field is "key", the operator is "In",
and the values array contains only "value". The requirements are ANDed.
{{< /note >}}
@@ -128,18 +128,19 @@ To see the ReplicaSet (`rs`) created by the deployment, run `kubectl get rs`:
```shell
NAME DESIRED CURRENT READY AGE
-nginx-deployment-2035384211 3 3 3 18s
+nginx-deployment-75675f5897 3 3 3 18s
```
-Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]`. The hash value is automatically generated when the Deployment is created.
+Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string
+is generated using the pod-template-hash as a seed.
To see the labels automatically generated for each pod, run `kubectl get pods --show-labels`. The following output is returned:
```shell
NAME READY STATUS RESTARTS AGE LABELS
-nginx-deployment-2035384211-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211
-nginx-deployment-2035384211-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211
-nginx-deployment-2035384211-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211
+nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
+nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
```
The created ReplicaSet ensures that there are three `nginx` Pods running at all times.
@@ -171,8 +172,8 @@ Suppose that you now want to update the nginx Pods to use the `nginx:1.9.1` imag
instead of the `nginx:1.7.9` image.
```shell
-$ kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment
-nginx=nginx:1.9.1 image updated
+$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
+image updated
```
Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:
@@ -467,7 +468,7 @@ $ kubectl rollout undo deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
```
-Alternatively, you can rollback to a specific revision by specify that in `--to-revision`:
+Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
```shell
$ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
@@ -988,15 +989,12 @@ Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1
### Revision History Limit
-A Deployment's revision history is stored in the replica sets it controls.
+A Deployment's revision history is stored in the ReplicaSets it controls.
`.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain
-to allow rollback. Its ideal value depends on the frequency and stability of new Deployments. All old
-ReplicaSets will be kept by default, consuming resources in `etcd` and crowding the output of `kubectl get rs`,
-if this field is not set. The configuration of each Deployment revision is stored in its ReplicaSets;
-therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment.
+to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. By default, 10 old ReplicaSets are kept; however, the ideal value depends on the frequency and stability of new Deployments.
-More specifically, setting this field to zero means that all old ReplicaSets with 0 replica will be cleaned up.
+More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up.
In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.
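+
+As a small sketch (reusing the nginx Deployment from earlier on this page), the field sits at
+the top level of the Deployment `.spec`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-deployment
+spec:
+  revisionHistoryLimit: 5   # keep only the 5 most recent old ReplicaSets
+  replicas: 3
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.7.9
+```
+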
### Paused
@@ -1015,5 +1013,3 @@ in a similar fashion. But Deployments are recommended, since they are declarativ
additional features, such as rolling back to any previous revision even after the rolling update is done.
{{% /capture %}}
-
-
diff --git a/content/en/docs/concepts/workloads/controllers/garbage-collection.md b/content/en/docs/concepts/workloads/controllers/garbage-collection.md
index a2b8517afaa62..ee3fb1fcd5da8 100644
--- a/content/en/docs/concepts/workloads/controllers/garbage-collection.md
+++ b/content/en/docs/concepts/workloads/controllers/garbage-collection.md
@@ -60,6 +60,14 @@ metadata:
...
```
+{{< note >}}
+Cross-namespace owner references are disallowed by design. This means:
+1) Namespace-scoped dependents can only specify owners in the same namespace,
+and owners that are cluster-scoped.
+2) Cluster-scoped dependents can only specify cluster-scoped owners, but not
+namespace-scoped owners.
+{{< /note >}}
+
## Controlling how the garbage collector deletes dependents
When you delete an object, you can specify whether the object's dependents are
diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
index 214fe74b41a2b..6a6a2275011b6 100644
--- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
+++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -13,16 +13,16 @@ weight: 70
{{% capture overview %}}
-A _job_ creates one or more pods and ensures that a specified number of them successfully terminate.
-As pods successfully complete, the _job_ tracks the successful completions. When a specified number
-of successful completions is reached, the job itself is complete. Deleting a Job will cleanup the
-pods it created.
+A Job creates one or more Pods and ensures that a specified number of them successfully terminate.
+As pods successfully complete, the Job tracks the successful completions. When a specified number
+of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up
+the Pods it created.
A simple case is to create one Job object in order to reliably run one Pod to completion.
-The Job object will start a new Pod if the first pod fails or is deleted (for example
+The Job object will start a new Pod if the first Pod fails or is deleted (for example
due to a node hardware failure or a node reboot).
-A Job can also be used to run multiple pods in parallel.
+You can also use a Job to run multiple Pods in parallel.
{{% /capture %}}
@@ -36,14 +36,14 @@ It takes around 10s to complete.
{{< codenew file="controllers/job.yaml" >}}
-Run the example job by downloading the example file and then running this command:
+You can run the example with this command:
```shell
$ kubectl create -f https://k8s.io/examples/controllers/job.yaml
job "pi" created
```
-Check on the status of the job using this command:
+Check on the status of the Job with `kubectl`:
```shell
$ kubectl describe jobs/pi
@@ -78,18 +78,18 @@ Events:
1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q
```
-To view completed pods of a job, use `kubectl get pods`.
+To view completed Pods of a Job, use `kubectl get pods`.
-To list all the pods that belong to a job in a machine readable form, you can use a command like this:
+To list all the Pods that belong to a Job in a machine readable form, you can use a command like this:
```shell
-$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name})
+$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
$ echo $pods
pi-aiw0a
```
-Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression
-that just gets the name from each pod in the returned list.
+Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression
+that just gets the name from each Pod in the returned list.
View the standard output of one of the pods:
@@ -110,7 +110,7 @@ The `.spec.template` is the only required field of the `.spec`.
The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`.
-In addition to required fields for a Pod, a pod template in a job must specify appropriate
+In addition to required fields for a Pod, a pod template in a Job must specify appropriate
labels (see [pod selector](#pod-selector)) and an appropriate restart policy.
Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed.
@@ -123,31 +123,30 @@ See section [specifying your own pod selector](#specifying-your-own-pod-selector
### Parallel Jobs
-There are three main types of jobs:
+There are three main types of task suitable to run as a Job:
1. Non-parallel Jobs
- - normally only one pod is started, unless the pod fails.
- - job is complete as soon as Pod terminates successfully.
+ - normally, only one Pod is started, unless the Pod fails.
+ - the Job is complete as soon as its Pod terminates successfully.
1. Parallel Jobs with a *fixed completion count*:
- specify a non-zero positive value for `.spec.completions`.
- - the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`.
- - **not implemented yet:** Each pod passed a different index in the range 1 to `.spec.completions`.
+ - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`.
+ - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`.
1. Parallel Jobs with a *work queue*:
- do not specify `.spec.completions`, default to `.spec.parallelism`.
- - the pods must coordinate with themselves or an external service to determine what each should work on.
- - each pod is independently capable of determining whether or not all its peers are done, thus the entire Job is done.
- - when _any_ pod terminates with success, no new pods are created.
- - once at least one pod has terminated with success and all pods are terminated, then the job is completed with success.
- - once any pod has exited with success, no other pod should still be doing any work or writing any output. They should all be
- in the process of exiting.
-
-For a Non-parallel job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
+ - the Pods must coordinate amongst themselves or with an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue.
+ - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done.
+ - when _any_ Pod from the Job terminates with success, no new Pods are created.
+ - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success.
+ - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting.
+
+For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are
unset, both are defaulted to 1.
-For a Fixed Completion Count job, you should set `.spec.completions` to the number of completions needed.
+For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed.
You can set `.spec.parallelism`, or leave it unset and it will default to 1.
-For a Work Queue Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
+For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to
a non-negative integer.
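+
+As a quick sketch, you can adjust `.spec.parallelism` on an existing Job with a patch; the Job
+named `pi` below is the example Job used earlier on this page, and note that `.spec.completions`
+cannot be changed after creation:
+
+```shell
+# Raise the desired parallelism of the running Job to 2
+kubectl patch job/pi --type=merge --patch '{"spec":{"parallelism":2}}'
+```
+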
For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section.
@@ -162,28 +161,28 @@ If it is specified as 0, then the Job is effectively paused until it is increase
Actual parallelism (number of pods running at any instant) may be more or less than requested
parallelism, for a variety of reasons:
-- For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of
+- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of
remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
-- For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however.
+- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however.
- If the controller has not had time to react.
-- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.),
+- If the controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.),
then there may be fewer pods than requested.
-- The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
-- When a pod is gracefully shutdown, it takes time to stop.
+- The controller may throttle new Pod creation due to excessive previous pod failures in the same Job.
+- When a Pod is gracefully shut down, it takes time to stop.
## Handling Pod and Container Failures
-A Container in a Pod may fail for a number of reasons, such as because the process in it exited with
-a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this
+A container in a Pod may fail for a number of reasons, such as because the process in it exited with
+a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this
happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays
-on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is
+on the node, but the container is re-run. Therefore, your program needs to handle the case when it is
restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`.
-See [pods-states](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`.
+See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`.
An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node
(node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the
`.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller
-starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new
+starts a new Pod. This means that your application needs to handle the case when it is restarted in a new
pod. In particular, it needs to handle temporary files, locks, incomplete output and the like
caused by previous runs.
@@ -194,7 +193,7 @@ sometimes be started twice.
If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be
multiple pods running at once. Therefore, your pods must also be tolerant of concurrency.
-### Pod Backoff failure policy
+### Pod backoff failure policy
There are situations where you want to fail a Job after some amount of retries
due to a logical error in configuration etc.
@@ -244,7 +243,7 @@ spec:
restartPolicy: Never
```
-Note that both the Job Spec and the [Pod Template Spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.
+Note that both the Job spec and the [Pod template spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.
## Clean Up Finished Jobs Automatically
@@ -316,7 +315,7 @@ The tradeoffs are:
- One Job object for each work item, vs. a single Job object for all work items. The latter is
better for large numbers of work items. The former creates some overhead for the user and for the
system to manage large numbers of Job objects.
-- Number of pods created equals number of work items, vs. each pod can process multiple work items.
+- Number of pods created equals number of work items, vs. each Pod can process multiple work items.
The former typically requires less modification to existing code and containers. The latter
is better for large numbers of work items, for similar reasons to the previous bullet.
- Several approaches use a work queue. This requires running a queue service,
@@ -336,7 +335,7 @@ The pattern names are also links to examples and more detailed description.
When you specify completions with `.spec.completions`, each Pod created by the Job controller
has an identical [`spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). This means that
-all pods will have the same command line and the same
+all pods for a task will have the same command line and the same
image, the same volumes, and (almost) the same environment variables. These patterns
are different ways to arrange for pods to work on different things.
@@ -355,29 +354,29 @@ Here, `W` is the number of work items.
### Specifying your own pod selector
-Normally, when you create a job object, you do not specify `.spec.selector`.
-The system defaulting logic adds this field when the job is created.
+Normally, when you create a Job object, you do not specify `.spec.selector`.
+The system defaulting logic adds this field when the Job is created.
It picks a selector value that will not overlap with any other jobs.
However, in some cases, you might need to override this automatically set selector.
-To do this, you can specify the `.spec.selector` of the job.
+To do this, you can specify the `.spec.selector` of the Job.
Be very careful when doing this. If you specify a label selector which is not
-unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated
-job may be deleted, or this job may count other pods as completing it, or one or both
-of the jobs may refuse to create pods or run to completion. If a non-unique selector is
-chosen, then other controllers (e.g. ReplicationController) and their pods may behave
+unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated
+job may be deleted, or this Job may count other Pods as completing it, or one or both
+Jobs may refuse to create Pods or run to completion. If a non-unique selector is
+chosen, then other controllers (e.g. ReplicationController) and their Pods may behave
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
specifying `.spec.selector`.
Here is an example of a case when you might want to use this feature.
-Say job `old` is already running. You want existing pods
-to keep running, but you want the rest of the pods it creates
-to use a different pod template and for the job to have a new name.
-You cannot update the job because these fields are not updatable.
-Therefore, you delete job `old` but leave its pods
-running, using `kubectl delete jobs/old --cascade=false`.
+Say Job `old` is already running. You want existing Pods
+to keep running, but you want the rest of the Pods it creates
+to use a different pod template and for the Job to have a new name.
+You cannot update the Job because these fields are not updatable.
+Therefore, you delete Job `old` but _leave its pods
+running_, using `kubectl delete jobs/old --cascade=false`.
Before deleting it, you make a note of what selector it uses:
```
@@ -392,11 +391,11 @@ spec:
...
```
-Then you create a new job with name `new` and you explicitly specify the same selector.
-Since the existing pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
-they are controlled by job `new` as well.
+Then you create a new Job with name `new` and you explicitly specify the same selector.
+Since the existing Pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`,
+they are controlled by Job `new` as well.
-You need to specify `manualSelector: true` in the new job since you are not using
+You need to specify `manualSelector: true` in the new Job since you are not using
the selector that the system normally generates for you automatically.
```
@@ -420,25 +419,25 @@ mismatch.
### Bare Pods
-When the node that a pod is running on reboots or fails, the pod is terminated
-and will not be restarted. However, a Job will create new pods to replace terminated ones.
-For this reason, we recommend that you use a job rather than a bare pod, even if your application
-requires only a single pod.
+When the node that a Pod is running on reboots or fails, the pod is terminated
+and will not be restarted. However, a Job will create new Pods to replace terminated ones.
+For this reason, we recommend that you use a Job rather than a bare Pod, even if your application
+requires only a single Pod.
### Replication Controller
Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller).
-A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job
-manages pods that are expected to terminate (e.g. batch jobs).
+A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job
+manages Pods that are expected to terminate (e.g. batch tasks).
-As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate for pods with
-`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default
-value is `Always`.)
+As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate
+for pods with `RestartPolicy` equal to `OnFailure` or `Never`.
+(Note: If `RestartPolicy` is not set, the default value is `Always`.)
### Single Job starts Controller Pod
-Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort
-of custom controller for those pods. This allows the most flexibility, but may be somewhat
+Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort
+of custom controller for those Pods. This allows the most flexibility, but may be somewhat
complicated to get started with and offers less integration with Kubernetes.
One example of this pattern would be a Job which starts a Pod which runs a script that in turn
@@ -446,10 +445,10 @@ starts a Spark master controller (see [spark example](https://github.com/kuberne
driver, and then cleans up.
An advantage of this approach is that the overall process gets the completion guarantee of a Job
-object, but complete control over what pods are created and how work is assigned to them.
+object, but also complete control over what Pods are created and how work is assigned to them.
-## Cron Jobs
+## Cron Jobs {#cron-jobs}
-Support for creating Jobs at specified times/dates (i.e. cron) is available in Kubernetes [1.4](https://github.com/kubernetes/kubernetes/pull/11980). More information is available in the [cron job documents](/docs/concepts/workloads/controllers/cron-jobs/)
+You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`.
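+
+For example, here is a quick sketch of creating a CronJob from the command line (the name, image,
+and schedule are only illustrative, and newer kubectl versions provide `kubectl create cronjob`):
+
+```shell
+# Run a pi-computing Job every five minutes
+kubectl create cronjob pi --image=perl --schedule="*/5 * * * *" -- perl -Mbignum=bpi -wle 'print bpi(2000)'
+```
+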
{{% /capture %}}
diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md
index adeaadc469674..9ca82477fe7fd 100644
--- a/content/en/docs/concepts/workloads/controllers/replicaset.md
+++ b/content/en/docs/concepts/workloads/controllers/replicaset.md
@@ -239,6 +239,10 @@ matchLabels:
In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will
be rejected by the API.
+{{< note >}}
+When two ReplicaSets specify the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
+{{< /note >}}
+
### Replicas
You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete
diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md
index 0e3a4e85698e9..e83a4f3c8bea6 100644
--- a/content/en/docs/concepts/workloads/controllers/statefulset.md
+++ b/content/en/docs/concepts/workloads/controllers/statefulset.md
@@ -134,6 +134,10 @@ As each Pod is created, it gets a matching DNS subdomain, taking the form:
`$(podname).$(governing service domain)`, where the governing service is defined
by the `serviceName` field on the StatefulSet.
+As mentioned in the [limitations](#limitations) section, you are responsible for
+creating the [Headless Service](/docs/concepts/services-networking/service/#headless-services)
+that provides the network identity of the Pods.
+
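+A minimal sketch of creating such a headless Service imperatively (assuming the StatefulSet's
+`serviceName` is `nginx` and its Pods listen on port 80):
+
+```shell
+# ClusterIP "None" makes the Service headless; it only provides per-Pod DNS records
+kubectl create service clusterip nginx --clusterip="None" --tcp=80:80
+```
+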
Here are some examples of choices for Cluster Domain, Service name,
StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md
index 3a3ecb36cfc0d..6725c887f49cb 100644
--- a/content/en/docs/concepts/workloads/pods/disruptions.md
+++ b/content/en/docs/concepts/workloads/pods/disruptions.md
@@ -63,6 +63,11 @@ Ask your cluster administrator or consult your cloud provider or distribution do
to determine if any sources of voluntary disruptions are enabled for your cluster.
If none are enabled, you can skip creating Pod Disruption Budgets.
+{{< caution >}}
+Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example,
+deleting deployments or pods bypasses Pod Disruption Budgets.
+{{< /caution >}}
+
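+For example (the Pod and node names here are placeholders), a direct delete is not limited by any
+Pod Disruption Budget, while an eviction-based drain is:
+
+```shell
+kubectl delete pod my-app-0                 # bypasses any Pod Disruption Budget
+kubectl drain my-node --ignore-daemonsets   # uses the Eviction API and respects Pod Disruption Budgets
+```
+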
## Dealing with Disruptions
Here are some ways to mitigate involuntary disruptions:
@@ -102,7 +107,7 @@ percentage of the total.
Cluster managers and hosting providers should use tools which
respect Pod Disruption Budgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api)
-instead of directly deleting pods. Examples are the `kubectl drain` command
+instead of directly deleting pods or deployments. Examples are the `kubectl drain` command
and the Kubernetes-on-GCE cluster upgrade script (`cluster/gce/upgrade.sh`).
When a cluster administrator wants to drain a node
diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md
index 6ee6dd45ce7aa..a6c3e64b112d1 100644
--- a/content/en/docs/concepts/workloads/pods/init-containers.md
+++ b/content/en/docs/concepts/workloads/pods/init-containers.md
@@ -83,7 +83,7 @@ Here are some ideas for how to use Init Containers:
* Register this Pod with a remote server from the downward API with a command like:
- curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'
+ `curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'`
* Wait for some time before starting the app Container with a command like `sleep 60`.
* Clone a git repository into a volume.
diff --git a/content/en/docs/concepts/workloads/pods/pod-overview.md b/content/en/docs/concepts/workloads/pods/pod-overview.md
index 8bb3f68504655..4e5d8109a7ff1 100644
--- a/content/en/docs/concepts/workloads/pods/pod-overview.md
+++ b/content/en/docs/concepts/workloads/pods/pod-overview.md
@@ -4,6 +4,9 @@ reviewers:
title: Pod Overview
content_template: templates/concept
weight: 10
+card:
+ name: concepts
+ weight: 60
---
{{% capture overview %}}
@@ -101,5 +104,5 @@ Rather than specifying the current desired state of all replicas, pod templates
{{% capture whatsnext %}}
* Learn more about Pod behavior:
* [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods)
- * Other Pod Topics
+ * [Pod Lifecycle](../pod-lifecycle)
{{% /capture %}}
diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md
index a786fbb955cfe..2ef7278621798 100644
--- a/content/en/docs/contribute/intermediate.md
+++ b/content/en/docs/contribute/intermediate.md
@@ -3,6 +3,9 @@ title: Intermediate contributing
slug: intermediate
content_template: templates/concept
weight: 20
+card:
+ name: contribute
+ weight: 50
---
{{% capture overview %}}
diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md
index 6e73492a14cc7..db3ef4e9801d9 100644
--- a/content/en/docs/contribute/localization.md
+++ b/content/en/docs/contribute/localization.md
@@ -5,29 +5,25 @@ approvers:
- chenopis
- zacharysarah
- zparnold
+card:
+ name: contribute
+ weight: 30
+ title: Translating the docs
---
{{% capture overview %}}
-Documentation for Kubernetes is available in multiple languages:
-
-- English
-- Chinese
-- Japanese
-- Korean
-
-We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)!
+Documentation for Kubernetes is available in multiple languages. We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)!
{{% /capture %}}
-
{{% capture body %}}
## Getting started
-Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize).
+Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize) before publishing.
-To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)!
+To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)!
{{< note >}}
For an example localization-related [pull request](../create-pull-request), see [this pull request](https://github.com/kubernetes/website/pull/8636) to the [Kubernetes website repo](https://github.com/kubernetes/website) adding Korean localization to the Kubernetes docs.
@@ -209,7 +205,7 @@ SIG Docs welcomes [upstream contributions and corrections](/docs/contribute/inte
{{% capture whatsnext %}}
-Once a l10n meets requirements for workflow and minimum output, SIG docs will:
+Once a localization meets requirements for workflow and minimum output, SIG docs will:
- Enable language selection on the website
- Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/).
diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md
index 029f3bf7463ae..ca3be91d7d725 100644
--- a/content/en/docs/contribute/participating.md
+++ b/content/en/docs/contribute/participating.md
@@ -1,6 +1,9 @@
---
title: Participating in SIG Docs
content_template: templates/concept
+card:
+ name: contribute
+ weight: 40
---
{{% capture overview %}}
diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md
index f4ce637c65634..75fdfb5cb8904 100644
--- a/content/en/docs/contribute/start.md
+++ b/content/en/docs/contribute/start.md
@@ -3,6 +3,9 @@ title: Start contributing
slug: start
content_template: templates/concept
weight: 10
+card:
+ name: contribute
+ weight: 10
---
{{% capture overview %}}
@@ -133,10 +136,9 @@ The SIG Docs team communicates using the following mechanisms:
introduce yourself!
- [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs),
where broader discussions take place and official decisions are recorded.
-- Participate in the weekly SIG Docs video meeting, which is announced on the
- Slack channel and the mailing list. Currently, these meetings take place on
- Zoom, so you'll need to download the [Zoom client](https://zoom.us/download)
- or dial in using a phone.
+- Participate in the [weekly SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) video meeting, which is announced on the Slack channel and the mailing list. Currently, these meetings take place on Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone.
+
+{{< note >}}You can also check for the SIG Docs weekly meeting on the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles).{{< /note >}}
## Improve existing content
diff --git a/content/en/docs/contribute/style/page-templates.md b/content/en/docs/contribute/style/page-templates.md
index 1684ed3c2699c..033ddf6223eef 100644
--- a/content/en/docs/contribute/style/page-templates.md
+++ b/content/en/docs/contribute/style/page-templates.md
@@ -2,6 +2,9 @@
title: Using Page Templates
content_template: templates/concept
weight: 30
+card:
+ name: contribute
+ weight: 30
---
{{% capture overview %}}
diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md
index 8cd5e1433703e..8bae85f61d66c 100644
--- a/content/en/docs/contribute/style/style-guide.md
+++ b/content/en/docs/contribute/style/style-guide.md
@@ -3,6 +3,10 @@ title: Documentation Style Guide
linktitle: Style guide
content_template: templates/concept
weight: 10
+card:
+ name: contribute
+ weight: 20
+ title: Documentation Style Guide
---
{{% capture overview %}}
diff --git a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson
index 5d4573938be10..878ccc4ed73fc 100644
--- a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson
+++ b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson
@@ -116,7 +116,7 @@
'body': '{{< toc >}}'
'Insert code from file':
'prefix': 'codefile'
- 'body': '{{< code file="$1" >}}'
+ 'body': '{{< codenew file="$1" >}}'
'Insert feature state':
'prefix': 'fstate'
'body': '{{< feature-state for_k8s_version="$1" state="$2" >}}'
@@ -223,4 +223,4 @@
${7:"next-steps-or-delete"}
{{% /capture %}}
"""
-
\ No newline at end of file
+
diff --git a/content/en/docs/getting-started-guides/ubuntu/_index.md b/content/en/docs/getting-started-guides/ubuntu/_index.md
index 1b1eced18964a..ed60790b67388 100644
--- a/content/en/docs/getting-started-guides/ubuntu/_index.md
+++ b/content/en/docs/getting-started-guides/ubuntu/_index.md
@@ -51,7 +51,7 @@ These are more in-depth guides for users choosing to run Kubernetes in productio
- [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning/)
- [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/)
- [Glossary](/docs/getting-started-guides/ubuntu/glossary/)
-
+ - [Authenticating with LDAP](https://www.ubuntu.com/kubernetes/docs/ldap)
## Third-party Product Integrations
@@ -73,5 +73,3 @@ We're normally following the following Slack channels:
and we monitor the Kubernetes mailing lists.
{{% /capture %}}
-
-
diff --git a/content/en/docs/home/_index.md b/content/en/docs/home/_index.md
index f6f5f6150a22b..31f37880ff631 100644
--- a/content/en/docs/home/_index.md
+++ b/content/en/docs/home/_index.md
@@ -16,4 +16,43 @@ menu:
weight: 20
post: >
Learn how to use Kubernetes with conceptual, tutorial, and reference documentation. You can even help contribute to the docs!
+overview: >
+ Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF).
+cards:
+- name: concepts
+ title: "Understand the basics"
+ description: "Learn about Kubernetes and its fundamental concepts."
+ button: "Learn Concepts"
+ button_path: "/docs/concepts"
+- name: tutorials
+ title: "Try Kubernetes"
+ description: "Follow tutorials to learn how to deploy applications in Kubernetes."
+ button: "View Tutorials"
+ button_path: "/docs/tutorials"
+- name: setup
+ title: "Set up a cluster"
+ description: "Get Kubernetes running based on your resources and needs."
+ button: "Set up Kubernetes"
+ button_path: "/docs/setup"
+- name: tasks
+ title: "Learn how to use Kubernetes"
+ description: "Look up common tasks and how to perform them using a short sequence of steps."
+ button: "View Tasks"
+ button_path: "/docs/tasks"
+- name: reference
+ title: Look up reference information
+ description: Browse terminology, command line syntax, API resource types, and setup tool documentation.
+ button: View Reference
+ button_path: /docs/reference
+- name: contribute
+ title: Contribute to the docs
+ description: Anyone can contribute, whether you’re new to the project or you’ve been around a long time.
+ button: Contribute to the docs
+ button_path: /docs/contribute
+- name: download
+ title: Download Kubernetes
+ description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes.
+- name: about
+ title: About the documentation
+ description: This website contains documentation for the current and previous 4 versions of Kubernetes.
---
diff --git a/content/en/docs/home/supported-doc-versions.md b/content/en/docs/home/supported-doc-versions.md
index 7747ea2b76435..45a6012eaa148 100644
--- a/content/en/docs/home/supported-doc-versions.md
+++ b/content/en/docs/home/supported-doc-versions.md
@@ -1,6 +1,10 @@
---
title: Supported Versions of the Kubernetes Documentation
content_template: templates/concept
+card:
+ name: about
+ weight: 10
+ title: Supported Versions of the Documentation
---
{{% capture overview %}}
diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md
index 440a9a1d86307..17201656276fd 100644
--- a/content/en/docs/reference/access-authn-authz/admission-controllers.md
+++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md
@@ -142,20 +142,24 @@ if the pods don't already have toleration for taints
This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container.
-If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec
-commands in those containers, we strongly encourage enabling this admission controller.
-
This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec).
+The DenyExecOnPrivileged admission plugin is deprecated and will be removed in v1.18.
+
+Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin)
+which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods
+is recommended instead.
-### DenyEscalatingExec {#denyescalatingexec}
+### DenyEscalatingExec (deprecated) {#denyescalatingexec}
This admission controller will deny exec and attach commands to pods that run with escalated privileges that
allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and
have access to the host PID namespace.
-If your cluster supports containers that run with escalated privileges, and you want to
-restrict the ability of end-users to exec commands in those containers, we strongly encourage
-enabling this admission controller.
+The DenyEscalatingExec admission plugin is deprecated and will be removed in v1.18.
+
+Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin)
+which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods
+is recommended instead.
### EventRateLimit (alpha) {#eventratelimit}
diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md
index d7a96526929a0..ae42fb5e49ead 100644
--- a/content/en/docs/reference/access-authn-authz/authentication.md
+++ b/content/en/docs/reference/access-authn-authz/authentication.md
@@ -322,6 +322,7 @@ To enable the plugin, configure the following flags on the API server:
| `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-user-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No |
| `--oidc-groups-claim` | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No |
| `--oidc-groups-prefix` | Prefix prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra`. | `oidc:` | No |
+| `--oidc-required-claim` | A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | `claim=value` | No |
| `--oidc-ca-file` | The path to the certificate for the CA that signed your identity provider's web certificate. Defaults to the host's root CAs. | `/etc/kubernetes/ssl/kc-ca.pem` | No |
Importantly, the API server is not an OAuth2 client, rather it can only be
diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md
index 366fbefe21ff8..e955053c63374 100644
--- a/content/en/docs/reference/access-authn-authz/authorization.md
+++ b/content/en/docs/reference/access-authn-authz/authorization.md
@@ -47,7 +47,7 @@ Kubernetes reviews only the following API request attributes:
* **extra** - A map of arbitrary string keys to string values, provided by the authentication layer.
* **API** - Indicates whether the request is for an API resource.
* **Request path** - Path to miscellaneous non-resource endpoints like `/api` or `/healthz`.
- * **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied) below.
+ * **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-the-request-verb).
* **HTTP request verb** - HTTP verbs `get`, `post`, `put`, and `delete` are used for non-resource requests.
* **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name.
* **Subresource** - The subresource that is being accessed (for resource requests only).
diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md
index b90509b924341..ed51dcefb5cf7 100644
--- a/content/en/docs/reference/access-authn-authz/rbac.md
+++ b/content/en/docs/reference/access-authn-authz/rbac.md
@@ -148,6 +148,24 @@ roleRef:
apiGroup: rbac.authorization.k8s.io
```
+You cannot modify which `Role` or `ClusterRole` a binding object refers to.
+Attempts to change the `roleRef` field of a binding object will result in a validation error.
+To change the `roleRef` field on an existing binding object, the binding object must be deleted and recreated.
+There are two primary reasons for this restriction:
+
+1. A binding to a different role is a fundamentally different binding.
+Requiring a binding to be deleted/recreated in order to change the `roleRef`
+ensures the full list of subjects in the binding is intended to be granted
+the new role (as opposed to accidentally modifying just the roleRef
+without verifying that all of the existing subjects should be given the new role's permissions).
+2. Making `roleRef` immutable allows giving `update` permission on an existing binding object
+to a user, which lets them manage the list of subjects, without being able to change the
+role that is granted to those subjects.
+
+The `kubectl auth reconcile` command-line utility creates or updates RBAC API objects from a manifest file,
+and handles deleting and recreating binding objects if required to change the role they refer to.
+See [command usage and examples](#kubectl-auth-reconcile) for more information.
+
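+As an illustrative sketch (the binding, role, user, and namespace names below are placeholders),
+changing a binding's role by hand means deleting and recreating the binding:
+
+```
+# roleRef cannot be edited in place, so remove the old binding first...
+kubectl delete rolebinding bob-admin-binding --namespace=acme
+# ...then recreate it, referring to the new role
+kubectl create rolebinding bob-admin-binding --clusterrole=view --user=bob --namespace=acme
+```
+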
### Referring to Resources
Most resources are represented by a string representation of their name, such as "pods", just as it
@@ -471,14 +489,19 @@ NOTE: editing the role is not recommended as changes will be overwritten on API
system:basic-user
-
system:authenticated and system:unauthenticated groups
+
system:authenticated group
Allows a user read-only access to basic information about themselves.
system:discovery
-
system:authenticated and system:unauthenticated groups
+
system:authenticated group
Allows read-only access to API discovery endpoints needed to discover and negotiate an API level.
+
+
system:public-info-viewer
+
system:authenticated and system:unauthenticated groups
+
Allows read-only access to non-sensitive cluster information.
+
### User-facing Roles
@@ -738,46 +761,156 @@ To bootstrap initial roles and role bindings:
## Command-line Utilities
-Two `kubectl` commands exist to grant roles within a namespace or across the entire cluster.
+### `kubectl create role`
+
+Creates a `Role` object defining permissions within a single namespace. Examples:
+
+* Create a `Role` named "pod-reader" that allows users to perform "get", "watch" and "list" on pods:
+
+ ```
+ kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
+ ```
+
+* Create a `Role` named "pod-reader" with resourceNames specified:
+
+ ```
+ kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
+ ```
+
+* Create a `Role` named "foo" with apiGroups specified:
+
+ ```
+ kubectl create role foo --verb=get,list,watch --resource=replicasets.apps
+ ```
+
+* Create a `Role` named "foo" with subresource permissions:
+
+ ```
+ kubectl create role foo --verb=get,list,watch --resource=pods,pods/status
+ ```
+
+* Create a `Role` named "my-component-lease-holder" with permissions to get/update a resource with a specific name:
+
+ ```
+ kubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component
+ ```
+
+### `kubectl create clusterrole`
+
+Creates a `ClusterRole` object. Examples:
+
+* Create a `ClusterRole` named "pod-reader" that allows users to perform "get", "watch" and "list" on pods:
+
+ ```
+ kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
+ ```
+
+* Create a `ClusterRole` named "pod-reader" with resourceNames specified:
+
+ ```
+ kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod
+ ```
+
+* Create a `ClusterRole` named "foo" with apiGroups specified:
+
+ ```
+ kubectl create clusterrole foo --verb=get,list,watch --resource=replicasets.apps
+ ```
+
+* Create a `ClusterRole` named "foo" with subresource permissions:
+
+ ```
+ kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status
+ ```
+
+* Create a `ClusterRole` named "foo" with nonResourceURL specified:
+
+ ```
+ kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/*
+ ```
+
+* Create a `ClusterRole` named "monitoring" with aggregationRule specified:
+
+ ```
+ kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true"
+ ```
### `kubectl create rolebinding`
Grants a `Role` or `ClusterRole` within a specific namespace. Examples:
-* Grant the `admin` `ClusterRole` to a user named "bob" in the namespace "acme":
+* Within the namespace "acme", grant the permissions in the `admin` `ClusterRole` to a user named "bob":
```
kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme
```
-* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme":
+* Within the namespace "acme", grant the permissions in the `view` `ClusterRole` to the service account in the namespace "acme" named "myapp":
```
kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme
```
+* Within the namespace "acme", grant the permissions in the `view` `ClusterRole` to a service account in the namespace "myappnamespace" named "myapp":
+
+ ```
+ kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme
+ ```
+
### `kubectl create clusterrolebinding`
Grants a `ClusterRole` across the entire cluster, including all namespaces. Examples:
-* Grant the `cluster-admin` `ClusterRole` to a user named "root" across the entire cluster:
+* Across the entire cluster, grant the permissions in the `cluster-admin` `ClusterRole` to a user named "root":
```
kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root
```
-* Grant the `system:node` `ClusterRole` to a user named "kubelet" across the entire cluster:
+* Across the entire cluster, grant the permissions in the `system:node-proxier` `ClusterRole` to a user named "system:kube-proxy":
```
- kubectl create clusterrolebinding kubelet-node-binding --clusterrole=system:node --user=kubelet
+ kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy
```
-* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme" across the entire cluster:
+* Across the entire cluster, grant the permissions in the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme":
```
kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp
```
+### `kubectl auth reconcile` {#kubectl-auth-reconcile}
+
+Creates or updates `rbac.authorization.k8s.io/v1` API objects from a manifest file.
+
+Missing objects are created, and the containing namespace is created for namespaced objects, if required.
+
+Existing roles are updated to include the permissions in the input objects,
+and remove extra permissions if `--remove-extra-permissions` is specified.
+
+Existing bindings are updated to include the subjects in the input objects,
+and remove extra subjects if `--remove-extra-subjects` is specified.
+
+Examples:
+
+* Test applying a manifest file of RBAC objects, displaying changes that would be made:
+
+ ```
+ kubectl auth reconcile -f my-rbac-rules.yaml --dry-run
+ ```
+
+* Apply a manifest file of RBAC objects, preserving any extra permissions (in roles) and any extra subjects (in bindings):
+
+ ```
+ kubectl auth reconcile -f my-rbac-rules.yaml
+ ```
+
+* Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings):
+
+ ```
+ kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions
+ ```
+
See the CLI help for detailed usage.
## Service Account Permissions
diff --git a/content/en/docs/reference/glossary/container-lifecycle-hooks.md b/content/en/docs/reference/glossary/container-lifecycle-hooks.md
index 527e2f3e6efc8..5f19d0606f6f9 100644
--- a/content/en/docs/reference/glossary/container-lifecycle-hooks.md
+++ b/content/en/docs/reference/glossary/container-lifecycle-hooks.md
@@ -6,13 +6,12 @@ full_link: /docs/concepts/containers/container-lifecycle-hooks/
short_description: >
The lifecycle hooks expose events in the container management lifecycle and let the user run code when the events occur.
-aka:
+aka:
tags:
- extension
---
- The lifecycle hooks expose events in the {{< glossary_tooltip text="Container" term_id="container" >}}container management lifecycle and let the user run code when the events occur.
+ The lifecycle hooks expose events in the {{< glossary_tooltip text="Container" term_id="container" >}} management lifecycle and let the user run code when the events occur.
-
+
Two hooks are exposed to Containers: PostStart which executes immediately after a container is created and PreStop which is blocking and is called immediately before a container is terminated.
-
diff --git a/content/en/docs/reference/glossary/cronjob.md b/content/en/docs/reference/glossary/cronjob.md
index 3173740b5b6d8..d09dc8e0d4263 100755
--- a/content/en/docs/reference/glossary/cronjob.md
+++ b/content/en/docs/reference/glossary/cronjob.md
@@ -13,7 +13,7 @@ tags:
---
Manages a [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) that runs on a periodic schedule.
-
+
-Similar to a line in a *crontab* file, a Cronjob object specifies a schedule using the [Cron](https://en.wikipedia.org/wiki/Cron) format.
+Similar to a line in a *crontab* file, a CronJob object specifies a schedule using the [cron](https://en.wikipedia.org/wiki/Cron) format.
diff --git a/content/en/docs/reference/glossary/device-plugin.md b/content/en/docs/reference/glossary/device-plugin.md
new file mode 100644
index 0000000000000..02e5500677d45
--- /dev/null
+++ b/content/en/docs/reference/glossary/device-plugin.md
@@ -0,0 +1,17 @@
+---
+title: Device Plugin
+id: device-plugin
+date: 2019-02-02
+full_link: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/
+short_description: >
+ Device Plugins are containers running in Kubernetes that provide access to a vendor-specific resource.
+aka:
+tags:
+- fundamental
+- extension
+---
+ Device Plugins are containers running in Kubernetes that provide access to a vendor-specific resource.
+
+
+
+[Device Plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) are containers running in Kubernetes that provide access to a vendor-specific resource. Device Plugins advertise these resources to the kubelet and can be deployed manually or as a DaemonSet, rather than requiring custom Kubernetes code.
diff --git a/content/en/docs/reference/glossary/index.md b/content/en/docs/reference/glossary/index.md
index a8c229569a05a..1fb8799a16b51 100755
--- a/content/en/docs/reference/glossary/index.md
+++ b/content/en/docs/reference/glossary/index.md
@@ -7,5 +7,9 @@ layout: glossary
noedit: true
default_active_tag: fundamental
weight: 5
+card:
+ name: reference
+ weight: 10
+ title: Glossary
---
diff --git a/content/en/docs/reference/glossary/pod-disruption-budget.md b/content/en/docs/reference/glossary/pod-disruption-budget.md
new file mode 100644
index 0000000000000..1e3f2f7d6ea28
--- /dev/null
+++ b/content/en/docs/reference/glossary/pod-disruption-budget.md
@@ -0,0 +1,19 @@
+---
+id: pod-disruption-budget
+title: Pod Disruption Budget
+full_link: /docs/concepts/workloads/pods/disruptions/
+date: 2019-02-12
+short_description: >
+ An object that limits the number of {{< glossary_tooltip text="Pods" term_id="pod" >}} of a replicated application that are down simultaneously from voluntary disruptions.
+
+aka:
+ - PDB
+related:
+ - pod
+ - container
+tags:
+ - operation
+---
+
+ A [Pod Disruption Budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) allows an application owner to create an object for a replicated application that ensures a certain number or percentage of Pods with an assigned label will not be voluntarily evicted at any point in time. PDBs cannot prevent an involuntary disruption, but such disruptions do count against the budget.
+
diff --git a/content/en/docs/reference/glossary/pod-lifecycle.md b/content/en/docs/reference/glossary/pod-lifecycle.md
new file mode 100644
index 0000000000000..ec496de440ac7
--- /dev/null
+++ b/content/en/docs/reference/glossary/pod-lifecycle.md
@@ -0,0 +1,16 @@
+---
+title: Pod Lifecycle
+id: pod-lifecycle
+date: 2019-02-17
+full_link: /docs/concepts/workloads/pods/pod-lifecycle/
+related:
+ - pod
+ - container
+tags:
+ - fundamental
+short_description: >
+ A high-level summary of what phase the Pod is in within its lifecycle.
+
+---
+
+The [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) is a high-level summary of where a Pod is in its lifecycle. A Pod’s `status` field is a [PodStatus](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podstatus-v1-core) object, which has a `phase` field that is one of the following: Pending, Running, Succeeded, Failed, or Unknown.
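+
+For example, you can read a Pod's current phase directly (the Pod name `my-pod` is a placeholder):
+
+```shell
+kubectl get pod my-pod -o jsonpath='{.status.phase}'
+```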
diff --git a/content/en/docs/reference/glossary/Preemption.md b/content/en/docs/reference/glossary/preemption.md
similarity index 97%
rename from content/en/docs/reference/glossary/Preemption.md
rename to content/en/docs/reference/glossary/preemption.md
index c94d9a3a3e378..ac1334c9793a6 100644
--- a/content/en/docs/reference/glossary/Preemption.md
+++ b/content/en/docs/reference/glossary/preemption.md
@@ -1,6 +1,6 @@
---
title: Preemption
-id: Preemption
+id: preemption
date: 2019-01-31
full_link: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#preemption
short_description: >
diff --git a/content/en/docs/reference/glossary/rkt.md b/content/en/docs/reference/glossary/rkt.md
new file mode 100644
index 0000000000000..455484c8ca2de
--- /dev/null
+++ b/content/en/docs/reference/glossary/rkt.md
@@ -0,0 +1,18 @@
+---
+title: rkt
+id: rkt
+date: 2019-01-24
+full_link: https://coreos.com/rkt/
+short_description: >
+ A security-minded, standards-based container engine.
+
+aka:
+tags:
+- security
+- tool
+---
+ A security-minded, standards-based container engine.
+
+
+
+rkt is an application {{< glossary_tooltip text="container" term_id="container" >}} engine featuring a {{< glossary_tooltip text="pod" term_id="pod" >}}-native approach, a pluggable execution environment, and a well-defined surface area. rkt allows users to apply different configurations at both the pod and application level and each pod executes directly in the classic Unix process model, in a self-contained, isolated environment.
diff --git a/content/en/docs/reference/glossary/sysctl.md b/content/en/docs/reference/glossary/sysctl.md
new file mode 100755
index 0000000000000..7b73af4c56776
--- /dev/null
+++ b/content/en/docs/reference/glossary/sysctl.md
@@ -0,0 +1,23 @@
+---
+title: sysctl
+id: sysctl
+date: 2019-02-12
+full_link: /docs/tasks/administer-cluster/sysctl-cluster/
+short_description: >
+ An interface for getting and setting Unix kernel parameters.
+
+aka:
+tags:
+- tool
+---
+ `sysctl` is a semi-standardized interface for reading or changing the
+ attributes of the running Unix kernel.
+
+
+
+On Unix-like systems, `sysctl` is both the name of the tool that administrators
+use to view and modify these settings, and also the system call that the tool
+uses.
+
+{{< glossary_tooltip text="Container" term_id="container" >}} runtimes and
+network plugins may rely on `sysctl` values being set a certain way.
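+
+For example, on a Linux node an administrator might read or set a kernel parameter like this
+(the parameter is chosen only for illustration):
+
+```shell
+sysctl net.ipv4.ip_forward        # read the current value
+sysctl -w net.ipv4.ip_forward=1   # write a new value (requires privileges)
+```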
diff --git a/content/en/docs/reference/glossary/workload.md b/content/en/docs/reference/glossary/workload.md
new file mode 100644
index 0000000000000..1730e7b93f3ce
--- /dev/null
+++ b/content/en/docs/reference/glossary/workload.md
@@ -0,0 +1,28 @@
+---
+title: Workload
+id: workload
+date: 2019-02-12
+full_link: /docs/concepts/workloads/
+short_description: >
+ A set of applications for processing information to serve a purpose that is valuable to a single user or group of users.
+
+aka:
+tags:
+- workload
+---
+A workload consists of a system of services or applications that can run to fulfill a
+task or carry out a business process.
+
+
+
+Alongside the computer code that runs to carry out the task, a workload also entails
+the infrastructure resources that actually run that code.
+
+For example, a workload that has a web element and a database element might run the
+database in one {{< glossary_tooltip term_id="StatefulSet" >}} of
+{{< glossary_tooltip text="pods" term_id="pod" >}} and the webserver via
+a {{< glossary_tooltip term_id="Deployment" >}} that consists of many web app
+{{< glossary_tooltip text="pods" term_id="pod" >}}, all alike.
+
+The organization running this workload may well have other workloads that together
+provide a valuable outcome to its users.
diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md
index d46e6336bd056..47c9a505e50f9 100644
--- a/content/en/docs/reference/kubectl/cheatsheet.md
+++ b/content/en/docs/reference/kubectl/cheatsheet.md
@@ -6,6 +6,9 @@ reviewers:
- krousey
- clove
content_template: templates/concept
+card:
+ name: reference
+ weight: 30
---
{{% capture overview %}}
@@ -29,6 +32,13 @@ source <(kubectl completion bash) # setup autocomplete in bash into the current
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
```
+You can also use a shorthand alias for `kubectl` that also works with completion:
+
+```bash
+alias k=kubectl
+complete -F __start_kubectl k
+```
+
### ZSH
```bash
@@ -139,6 +149,10 @@ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
kubectl get pods --selector=app=cassandra rc -o \
jsonpath='{.items[*].metadata.labels.version}'
+# Get all worker nodes (use a selector to exclude results that have a label
+# named 'node-role.kubernetes.io/master')
+kubectl get node --selector='!node-role.kubernetes.io/master'
+
# Get all running pods in the namespace
kubectl get pods --field-selector=status.phase=Running
@@ -150,6 +164,10 @@ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP
sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
+# Show labels for all pods (or any other Kubernetes object that supports labelling)
+# Also uses "jq"
+for item in $( kubectl get pod --output=name); do printf "Labels for %s\n" "$item" | grep --color -E '[^/]+$' && kubectl get "$item" --output=json | jq -r -S '.metadata.labels | to_entries | .[] | " \(.key)=\(.value)"' 2>/dev/null; printf "\n"; done
+
# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md
index 3fa3836460726..71befbd9664b8 100644
--- a/content/en/docs/reference/kubectl/overview.md
+++ b/content/en/docs/reference/kubectl/overview.md
@@ -5,6 +5,9 @@ reviewers:
title: Overview of kubectl
content_template: templates/concept
weight: 20
+card:
+ name: reference
+ weight: 20
---
{{% capture overview %}}
@@ -258,52 +261,52 @@ Use the following set of examples to help you familiarize yourself with running
`kubectl create` - Create a resource from a file or stdin.
```shell
-// Create a service using the definition in example-service.yaml.
+# Create a service using the definition in example-service.yaml.
$ kubectl create -f example-service.yaml
-// Create a replication controller using the definition in example-controller.yaml.
+# Create a replication controller using the definition in example-controller.yaml.
$ kubectl create -f example-controller.yaml
-// Create the objects that are defined in any .yaml, .yml, or .json file within the directory.
+# Create the objects that are defined in any .yaml, .yml, or .json file within the directory.
$ kubectl create -f
```
`kubectl get` - List one or more resources.
```shell
-// List all pods in plain-text output format.
+# List all pods in plain-text output format.
$ kubectl get pods
-// List all pods in plain-text output format and include additional information (such as node name).
+# List all pods in plain-text output format and include additional information (such as node name).
$ kubectl get pods -o wide
-// List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.
+# List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'.
$ kubectl get replicationcontroller
-// List all replication controllers and services together in plain-text output format.
+# List all replication controllers and services together in plain-text output format.
$ kubectl get rc,services
-// List all daemon sets, including uninitialized ones, in plain-text output format.
+# List all daemon sets, including uninitialized ones, in plain-text output format.
$ kubectl get ds --include-uninitialized
-// List all pods running on node server01
+# List all pods running on node server01
$ kubectl get pods --field-selector=spec.nodeName=server01
```
`kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default.
```shell
-// Display the details of the node with name .
+# Display the details of the node with name .
$ kubectl describe nodes
-// Display the details of the pod with name .
+# Display the details of the pod with name .
$ kubectl describe pods/
-// Display the details of all the pods that are managed by the replication controller named .
-// Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.
+# Display the details of all the pods that are managed by the replication controller named .
+# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller.
$ kubectl describe pods
-// Describe all pods, not including uninitialized ones
+# Describe all pods, not including uninitialized ones
$ kubectl describe pods --include-uninitialized=false
```
@@ -322,39 +325,39 @@ the pods running on it, the events generated for the node etc.
`kubectl delete` - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources.
```shell
-// Delete a pod using the type and name specified in the pod.yaml file.
+# Delete a pod using the type and name specified in the pod.yaml file.
$ kubectl delete -f pod.yaml
-// Delete all the pods and services that have the label name=<label-name>.
+# Delete all the pods and services that have the label name=<label-name>.
$ kubectl delete pods,services -l name=<label-name>
-// Delete all the pods and services that have the label name=<label-name>, including uninitialized ones.
+# Delete all the pods and services that have the label name=<label-name>, including uninitialized ones.
$ kubectl delete pods,services -l name=<label-name> --include-uninitialized
-// Delete all pods, including uninitialized ones.
+# Delete all pods, including uninitialized ones.
$ kubectl delete pods --all
```
`kubectl exec` - Execute a command against a container in a pod.
```shell
-// Get output from running 'date' from pod <pod-name>. By default, output is from the first container.
+# Get output from running 'date' from pod <pod-name>. By default, output is from the first container.
$ kubectl exec <pod-name> date
-// Get output from running 'date' in container <container-name> of pod <pod-name>.
+# Get output from running 'date' in container <container-name> of pod <pod-name>.
$ kubectl exec <pod-name> -c <container-name> date
-// Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
+# Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
$ kubectl exec -ti <pod-name> /bin/bash
```
`kubectl logs` - Print the logs for a container in a pod.
```shell
-// Return a snapshot of the logs from pod <pod-name>.
+# Return a snapshot of the logs from pod <pod-name>.
$ kubectl logs <pod-name>
-// Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
+# Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
$ kubectl logs -f <pod-name>
```
@@ -363,26 +366,26 @@ $ kubectl logs -f
Use the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins:
```shell
-// create a simple plugin in any language and name the resulting executable file
-// so that it begins with the prefix "kubectl-"
+# create a simple plugin in any language and name the resulting executable file
+# so that it begins with the prefix "kubectl-"
$ cat ./kubectl-hello
#!/bin/bash
# this plugin prints the words "hello world"
echo "hello world"
-// with our plugin written, let's make it executable
+# with our plugin written, let's make it executable
$ sudo chmod +x ./kubectl-hello
-// and move it to a location in our PATH
+# and move it to a location in our PATH
$ sudo mv ./kubectl-hello /usr/local/bin
-// we have now created and "installed" a kubectl plugin.
-// we can begin using our plugin by invoking it from kubectl as if it were a regular command
+# we have now created and "installed" a kubectl plugin.
+# we can begin using our plugin by invoking it from kubectl as if it were a regular command
$ kubectl hello
hello world
-// we can "uninstall" a plugin, by simply removing it from our PATH
+# we can "uninstall" a plugin, by simply removing it from our PATH
$ sudo rm /usr/local/bin/kubectl-hello
```
@@ -397,9 +400,9 @@ The following kubectl-compatible plugins are available:
/usr/local/bin/kubectl-foo
/usr/local/bin/kubectl-bar
-// this command can also warn us about plugins that are
-// not executable, or that are overshadowed by other
-// plugins, for example
+# this command can also warn us about plugins that are
+# not executable, or that are overshadowed by other
+# plugins, for example
$ sudo chmod -x /usr/local/bin/kubectl-foo
$ kubectl plugin list
The following kubectl-compatible plugins are available:
@@ -428,10 +431,10 @@ Running the above plugin gives us an output containing the user for the currentl
context in our KUBECONFIG file:
```shell
-// make the file executable
+# make the file executable
$ sudo chmod +x ./kubectl-whoami
-// and move it into our PATH
+# and move it into our PATH
$ sudo mv ./kubectl-whoami /usr/local/bin
$ kubectl whoami
diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md
index eccf635588e72..80d4dff5b3e61 100644
--- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md
+++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md
@@ -5,6 +5,9 @@ reviewers:
- jbeda
title: Overview of kubeadm
weight: 10
+card:
+ name: reference
+ weight: 40
---
Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice “fast paths” for creating Kubernetes clusters.
diff --git a/content/en/docs/reference/using-api/api-overview.md b/content/en/docs/reference/using-api/api-overview.md
index 38baa5aa92ac8..f74a204f38116 100644
--- a/content/en/docs/reference/using-api/api-overview.md
+++ b/content/en/docs/reference/using-api/api-overview.md
@@ -7,6 +7,10 @@ reviewers:
- jbeda
content_template: templates/concept
weight: 10
+card:
+ name: reference
+ weight: 50
+ title: Overview of API
---
{{% capture overview %}}
diff --git a/content/en/docs/setup/cri.md b/content/en/docs/setup/cri.md
index 6f16f6a8f6644..dfe484aa2b220 100644
--- a/content/en/docs/setup/cri.md
+++ b/content/en/docs/setup/cri.md
@@ -7,20 +7,56 @@ content_template: templates/concept
weight: 100
---
{{% capture overview %}}
-Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default.
-This page contains installation instruction for various runtimes.
+{{< feature-state for_k8s_version="v1.6" state="stable" >}}
+To run containers in Pods, Kubernetes uses a container runtime. Here are
+the installation instructions for various runtimes.
{{% /capture %}}
{{% capture body %}}
-Please proceed with executing the following commands based on your OS as root.
-You may become the root user by executing `sudo -i` after SSH-ing to each host.
+
+{{< caution >}}
+A flaw was found in the way runc handled system file descriptors when running containers.
+A malicious container could use this flaw to overwrite contents of the runc binary and
+consequently run arbitrary commands on the container host system.
+
+Refer to [CVE-2019-5736: runc vulnerability](https://access.redhat.com/security/cve/cve-2019-5736)
+for more information about this issue.
+{{< /caution >}}
+
+### Applicability
+
+{{< note >}}
+This document is written for users installing CRI onto Linux. For other operating
+systems, look for documentation specific to your platform.
+{{< /note >}}
+
+You should execute all the commands in this guide as `root`. For example, prefix commands
+with `sudo `, or become `root` and run the commands as that user.
+
+### Cgroup drivers
+
+When systemd is chosen as the init system for a Linux distribution, the init process generates
+and consumes a root control group (`cgroup`) and acts as a cgroup manager. Systemd has a tight
+integration with cgroups and will allocate cgroups per process. It's possible to configure your
+container runtime and the kubelet to use `cgroupfs`. Using `cgroupfs` alongside systemd means
+that there will then be two different cgroup managers.
+
+Control groups are used to constrain resources that are allocated to processes.
+A single cgroup manager will simplify the view of what resources are being allocated
+and will by default have a more consistent view of the available and in-use resources. When we have
+two managers, we end up with two views of those resources. We have seen cases in the field
+where nodes that are configured to use `cgroupfs` for the kubelet and Docker, and `systemd`
+for the rest of the processes running on the node, become unstable under resource pressure.
+
+Changing the settings such that your container runtime and kubelet use `systemd` as the cgroup driver
+stabilized the system. Please note the `native.cgroupdriver=systemd` option in the Docker setup below.
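+
+For example, with Docker as the container runtime, a minimal sketch of switching the cgroup
+driver to `systemd` (assuming a systemd-based Linux distribution; the full installation
+sections below add further logging and storage options) could look like this:
+
+```shell
+# Tell the Docker daemon to use the systemd cgroup driver, then restart it.
+cat > /etc/docker/daemon.json <<EOF
+{
+  "exec-opts": ["native.cgroupdriver=systemd"]
+}
+EOF
+systemctl restart docker
+```
+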
## Docker
On each of your machines, install Docker.
-Version 18.06 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well.
+Version 18.06.2 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well.
Keep track of the latest verified Docker version in the Kubernetes release notes.
Use the following commands to install Docker on your system:
@@ -45,7 +81,7 @@ Use the following commands to install Docker on your system:
stable"
## Install docker ce.
-apt-get update && apt-get install docker-ce=18.06.0~ce~3-0~ubuntu
+apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu
# Setup daemon.
cat > /etc/docker/daemon.json < node-role.kubernetes.io/master:NoSchedule-
+```
-# Generate and deploy etcd certificates
-export CLUSTER_DOMAIN=$(kubectl get ConfigMap --namespace kube-system coredns -o yaml | awk '/kubernetes/ {print $2}')
-tls/certs/gen-cert.sh $CLUSTER_DOMAIN
-tls/deploy-certs.sh
+To deploy Cilium you just need to run:
-# Label kube-dns with fixed identity label
-kubectl label -n kube-system pod $(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{range .items[]}{.metadata.name}{" "}{end}') io.cilium.fixed-identity=kube-dns
+```shell
+kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium.yaml
+```
-kubectl create -f ./
+Once all Cilium pods are marked as `READY`, you can start using your cluster.
-# Wait several minutes for Cilium, coredns and etcd pods to converge to a working state
+```shell
+$ kubectl get pods -n kube-system --selector=k8s-app=cilium
+NAME READY STATUS RESTARTS AGE
+cilium-drxkl 1/1 Running 0 18m
```
-
{{% /tab %}}
{{% tab name="Flannel" %}}
@@ -337,10 +336,11 @@ Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net
to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information
please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements).
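
To make the setting persistent across reboots as well, a small sketch (assuming a distribution
that reads `/etc/sysctl.d/` at boot; the file name is arbitrary):

```shell
# Apply the setting immediately.
sysctl net.bridge.bridge-nf-call-iptables=1

# Persist it for future boots (hypothetical file name).
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/99-bridge-nf-call-iptables.conf
```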
-Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`.
+Note that `flannel` works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` under Linux.
+Windows (`amd64`) is claimed as supported in v0.11.0 but the usage is undocumented.
```shell
-kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
+kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
```
For more information about `flannel`, see [the CoreOS flannel repository on GitHub
@@ -398,6 +398,16 @@ There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI.
Kindly refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/)
{{% /tab %}}
+
+{{% tab name="Contiv-VPP" %}}
+[Contiv-VPP](https://contivpp.io/) employs a programmable CNF vSwitch based on [FD.io VPP](https://fd.io/),
+offering feature-rich & high-performance cloud-native networking and services.
+
+It implements k8s services and network policies in the user space (on VPP).
+
+Please refer to this installation guide: [Contiv-VPP Manual Installation](https://github.com/contiv/vpp/blob/master/docs/setup/MANUAL_INSTALL.md)
+{{% /tab %}}
+
{{< /tabs >}}
diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md
index 10e0e2b32ce37..1ae27f0236e64 100644
--- a/content/en/docs/setup/independent/high-availability.md
+++ b/content/en/docs/setup/independent/high-availability.md
@@ -69,7 +69,7 @@ networking provider, make sure to replace any default values as needed.
## First steps for both methods
{{< note >}}
-**Note**: All commands on any control plane or etcd node should be
+All commands on any control plane or etcd node should be
run as root.
{{< /note >}}
diff --git a/content/en/docs/setup/independent/install-kubeadm.md b/content/en/docs/setup/independent/install-kubeadm.md
index afc92e220f0f6..16570b2a36e82 100644
--- a/content/en/docs/setup/independent/install-kubeadm.md
+++ b/content/en/docs/setup/independent/install-kubeadm.md
@@ -2,6 +2,10 @@
title: Installing kubeadm
content_template: templates/task
weight: 20
+card:
+ name: setup
+ weight: 20
+ title: Install the kubeadm setup tool
---
{{% capture overview %}}
@@ -105,7 +109,7 @@ You will install these packages on all of your machines:
* `kubectl`: the command line util to talk to your cluster.
kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will
-need to ensure they match the version of the Kubernetes control panel you want
+need to ensure they match the version of the Kubernetes control plane you want
kubeadm to install for you. If you do not, there is a risk of a version skew occurring that
can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the
kubelet and the control plane is supported, but the kubelet version may never exceed the API
diff --git a/content/en/docs/setup/independent/kubelet-integration.md b/content/en/docs/setup/independent/kubelet-integration.md
index 03feb7cf4dcf2..d5cc7d31326a8 100644
--- a/content/en/docs/setup/independent/kubelet-integration.md
+++ b/content/en/docs/setup/independent/kubelet-integration.md
@@ -193,10 +193,10 @@ The DEB and RPM packages shipped with the Kubernetes releases are:
| Package name | Description |
|--------------|-------------|
-| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and [The kubelet drop-in file(#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
+| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. |
| `kubelet` | Installs the `/usr/bin/kubelet` binary. |
| `kubectl` | Installs the `/usr/bin/kubectl` binary. |
| `kubernetes-cni` | Installs the official CNI binaries into the `/opt/cni/bin` directory. |
-| `cri-tools` | Installs the `/usr/bin/crictl` binary from [https://github.com/kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools). |
+| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-incubator/cri-tools). |
{{% /capture %}}
diff --git a/content/en/docs/setup/independent/troubleshooting-kubeadm.md b/content/en/docs/setup/independent/troubleshooting-kubeadm.md
index cf59d2c191480..edc0f2e9ef3c0 100644
--- a/content/en/docs/setup/independent/troubleshooting-kubeadm.md
+++ b/content/en/docs/setup/independent/troubleshooting-kubeadm.md
@@ -56,7 +56,7 @@ This may be caused by a number of problems. The most common are:
```
There are two common ways to fix the cgroup driver problem:
-
+
1. Install Docker again following instructions
[here](/docs/setup/independent/install-kubeadm/#installing-docker).
1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to
@@ -100,9 +100,8 @@ Right after `kubeadm init` there should not be any pods in these states.
until you have deployed the network solution.
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state
after deploying the network solution and nothing happens to `coredns` (or `kube-dns`),
- it's very likely that the Pod Network solution and nothing happens to the DNS server, it's very
- likely that the Pod Network solution that you installed is somehow broken. You
- might have to grant it more RBAC privileges or use a newer version. Please file
+ it's very likely that the Pod Network solution that you installed is somehow broken.
+ You might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network providers' issue tracker and get the issue triaged there.
- If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option
when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`.
@@ -155,6 +154,18 @@ Unable to connect to the server: x509: certificate signed by unknown authority (
regenerate a certificate if necessary. The certificates in a kubeconfig file
are base64 encoded. The `base64 -d` command can be used to decode the certificate
and `openssl x509 -text -noout` can be used for viewing the certificate information.
+- Unset the `KUBECONFIG` environment variable using:
+
+ ```sh
+ unset KUBECONFIG
+ ```
+
+ Or set it to the default `KUBECONFIG` location:
+
+ ```sh
+ export KUBECONFIG=/etc/kubernetes/admin.conf
+ ```
+
- Another workaround is to overwrite the existing `kubeconfig` for the "admin" user:
```sh
diff --git a/content/en/docs/setup/minikube.md b/content/en/docs/setup/minikube.md
index 122ce8b598841..1b148756c430f 100644
--- a/content/en/docs/setup/minikube.md
+++ b/content/en/docs/setup/minikube.md
@@ -407,8 +407,9 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu
* **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md).
* **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests.
* **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md).
-* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md).
-* **Adding a New Addon**: For instruction on how to add a new addon for Minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md).
+* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube, see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md).
+* **Adding a New Addon**: For instructions on how to add a new addon for Minikube, see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md).
+* **MicroK8s**: Linux users wishing to avoid running a virtual machine may consider [MicroK8s](https://microk8s.io/) as an alternative.
## Community
diff --git a/content/en/docs/setup/on-premises-metal/krib.md b/content/en/docs/setup/on-premises-metal/krib.md
index 3762068ccddd1..4ee90777e29b7 100644
--- a/content/en/docs/setup/on-premises-metal/krib.md
+++ b/content/en/docs/setup/on-premises-metal/krib.md
@@ -8,7 +8,7 @@ author: Rob Hirschfeld (zehicle)
This guide helps to install a Kubernetes cluster hosted on bare metal with [Digital Rebar Provision](https://github.com/digitalrebar/provision) using only its Content packages and *kubeadm*.
-Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](../kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib).
+Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](/docs/setup/custom-cloud/kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib).
{{< note >}}
KRIB is not a _stand-alone_ installer: Digital Rebar templates drive a standard *[kubeadm](/docs/admin/kubeadm/)* configuration that manages the Kubernetes installation with the [Digital Rebar cluster pattern](https://provision.readthedocs.io/en/tip/doc/arch/cluster.html#rs-cluster-pattern) to elect leaders _without external supervision_.
diff --git a/content/en/docs/setup/pick-right-solution.md b/content/en/docs/setup/pick-right-solution.md
index e32cbc0d4fd91..96514bfca5d40 100644
--- a/content/en/docs/setup/pick-right-solution.md
+++ b/content/en/docs/setup/pick-right-solution.md
@@ -6,6 +6,20 @@ reviewers:
title: Picking the Right Solution
weight: 10
content_template: templates/concept
+card:
+ name: setup
+ weight: 20
+ anchors:
+ - anchor: "#hosted-solutions"
+ title: Hosted Solutions
+ - anchor: "#turnkey-cloud-solutions"
+ title: Turnkey Cloud Solutions
+ - anchor: "#on-premises-turnkey-cloud-solutions"
+ title: On-Premises Solutions
+ - anchor: "#custom-solutions"
+ title: Custom Solutions
+ - anchor: "#local-machine-solutions"
+ title: Local Machine
---
{{% capture overview %}}
@@ -34,12 +48,12 @@ a Kubernetes cluster from scratch.
* [Minikube](/docs/setup/minikube/) is a method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account.
-* [Docker Desktop](https://www.docker.com/products/docker-desktop) is an
-easy-to-install application for your Mac or Windows environment that enables you to
+* [Docker Desktop](https://www.docker.com/products/docker-desktop) is an
+easy-to-install application for your Mac or Windows environment that enables you to
start coding and deploying in containers in minutes on a single-node Kubernetes
cluster.
-* [Minishift](https://docs.okd.io/latest/minishift/) installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an All-In-One VM (`minishift start`) for Windows, macOS and Linux and the containeriz based `oc cluster up` (Linux only) and [comes with some easy to install Add Ons](https://github.com/minishift/minishift-addons/tree/master/add-ons).
+* [Minishift](https://docs.okd.io/latest/minishift/) installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an all-in-one VM (`minishift start`) for Windows, macOS, and Linux. A container-based start (`oc cluster up`) is available on Linux only. You can also install [the included add-ons](https://github.com/minishift/minishift-addons/tree/master/add-ons).
* [MicroK8s](https://microk8s.io/) provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command.
@@ -69,12 +83,14 @@ cluster.
* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters.
-* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data.
+* [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data.
* [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration.
* [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting.
+* [KubeSail](https://kubesail.com) is an easy, free way to try Kubernetes.
+
* [Madcore.Ai](https://madcore.ai) is devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. Master, auto-scaling group nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana.
* [Nutanix Karbon](https://www.nutanix.com/products/karbon/) is a multi-cluster, highly available Kubernetes management and operational platform that simplifies the provisioning, operations, and lifecycle management of Kubernetes.
@@ -121,6 +137,7 @@ few commands. These solutions are actively developed and have active community s
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
* [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
+* [Supergiant.io](https://supergiant.io/)
* [Tectonic by CoreOS](https://coreos.com/tectonic)
* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks)
@@ -155,13 +172,10 @@ it will be easier than starting from scratch. If you do want to start from scrat
have special requirements, or just because you want to understand what is underneath a Kubernetes
cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide.
-If you are interested in supporting Kubernetes on a new platform, see
-[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md).
-
### Universal
If you already have a way to configure hosting resources, use
-[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster
+[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to bring up a cluster
with a single command per machine.
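
A rough sketch of that flow (placeholders such as `<control-plane-ip>`, `<token>`, and `<hash>`
come from the kubeadm guide and from the output of `kubeadm init`):

```shell
# On the machine that will host the control plane:
kubeadm init

# On each additional machine, join it to the cluster using the command
# printed by 'kubeadm init':
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```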
### Cloud
@@ -216,6 +230,7 @@ IaaS Provider | Config. Mgmt. | OS | Networking | Docs
any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial
Docker Enterprise | custom | [multi-support](https://success.docker.com/article/compatibility-matrix) | [multi-support](https://docs.docker.com/ee/ucp/kubernetes/install-cni-plugin/) | [docs](https://docs.docker.com/ee/) | Commercial
+IBM Cloud Private | Ansible | multi-support | multi-support | [docs](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html) | [Commercial](https://www.ibm.com/mysupport/s/topic/0TO500000001o0fGAA/ibm-cloud-private?language=en_US&productId=01t50000004X1PWAA0) and [Community](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/troubleshoot/support_types.html) |
Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | [multi-support](https://docs.openshift.com/container-platform/3.11/architecture/networking/network_plugins.html) | [docs](https://docs.openshift.com/container-platform/3.11/welcome/index.html) | Commercial
Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
@@ -223,7 +238,7 @@ Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madc
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial
Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial
Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial
-IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://console.bluemix.net/docs/containers/) | Commercial
+IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial
Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project
Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial
@@ -257,7 +272,7 @@ any | RKE | multi-support | flannel or canal
any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/)
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial
Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial
-IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial
+IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial
Digital Rebar | kubeadm | any | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar))
VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial
Mirantis Cloud Platform | Salt | Ubuntu | multi-support | [docs](https://docs.mirantis.com/mcp/) | Commercial
diff --git a/content/en/docs/setup/release/building-from-source.md b/content/en/docs/setup/release/building-from-source.md
index 866d3d7b23a90..973ea3b3944d8 100644
--- a/content/en/docs/setup/release/building-from-source.md
+++ b/content/en/docs/setup/release/building-from-source.md
@@ -2,13 +2,19 @@
reviewers:
- david-mcmahon
- jbeda
-title: Building from Source
+title: Building a release
+card:
+ name: download
+ weight: 20
+ title: Building a release
---
-
+{{% capture overview %}}
You can either build a release from source or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest using a pre-built version of the current release, which can be found in the [Release Notes](/docs/setup/release/notes/).
The Kubernetes source code can be downloaded from the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo.
+{{% /capture %}}
+{{% capture body %}}
## Building from source
If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container.
@@ -22,3 +28,5 @@ make release
```
For more details on the release process see the kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) directory.
+
+{{% /capture %}}
diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md
index 336579e5a8ca8..c024e441356c2 100644
--- a/content/en/docs/setup/release/notes.md
+++ b/content/en/docs/setup/release/notes.md
@@ -1,5 +1,13 @@
---
title: v1.13 Release Notes
+card:
+ name: download
+ weight: 10
+ anchors:
+ - anchor: "#"
+ title: Current Release Notes
+ - anchor: "#urgent-upgrade-notes"
+ title: Urgent Upgrade Notes
---
diff --git a/content/en/docs/setup/turnkey/icp.md b/content/en/docs/setup/turnkey/icp.md
index 7fdf2e7ecf884..df2c835b2a3a0 100644
--- a/content/en/docs/setup/turnkey/icp.md
+++ b/content/en/docs/setup/turnkey/icp.md
@@ -6,7 +6,7 @@ title: Running Kubernetes on Multiple Clouds with IBM Cloud Private
IBM® Cloud Private is a turnkey cloud solution and an on-premises turnkey cloud solution. IBM Cloud Private delivers pure upstream Kubernetes with the typical management components that are required to run real enterprise workloads. These workloads include health management, log management, audit trails, and metering for tracking usage of workloads on the platform.
-IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from Docker Hub. The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform.
+IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started).
For more information, explore the following resources:
@@ -18,37 +18,26 @@ For more information, explore the following resources:
The following modules are available where you can deploy IBM Cloud Private by using Terraform:
-* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
-* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
-* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
* AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws)
-* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
* Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure)
+* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud)
+* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack)
+* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy)
+* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware)
-## IBM Cloud Private on Azure
-
-You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [Configuring settings to enable Azure Cloud Provider](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/manage_cluster/azure_conf_settings.html).
-
-## IBM Cloud Private on VMware
-
-You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects:
-
-* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
-* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
-
-The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.
+## IBM Cloud Private on AWS
-For more information, see [IBM Cloud Private Hosted service](https://console.bluemix.net/docs/services/vmwaresolutions/services/icp_overview.html#ibm-cloud-private-hosted-overview).
+You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform.
-## IBM Cloud Private on AWS
+IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide.
-IBM Cloud Private can run on the AWS cloud platform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md).
+This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/).
-Stay tuned for the IBM Cloud Private on AWS Quick Start Guide.
+IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md).
-## IBM Cloud Private on VirtualBox
+## IBM Cloud Private on Azure
-To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox).
+You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/supported_environments/azure_overview.html).
## IBM Cloud Private on Red Hat OpenShift
@@ -62,4 +51,19 @@ Integration capabilities:
* Integrated core platform services, such as monitoring, metering, and logging
* IBM Cloud Private uses the OpenShift image registry
-For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/supported_environments/openshift/overview.html).
+For more information, see [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/supported_environments/openshift/overview.html).
+
+## IBM Cloud Private on VirtualBox
+
+To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox).
+
+## IBM Cloud Private on VMware
+
+You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects:
+
+* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md)
+* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel)
+
+The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud.
+
+For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/services/vmwaresolutions/vmonic?topic=vmware-solutions-prod_overview#ibm-cloud-private-hosted).
diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
index cb8b67e557cd0..1b47eee140269 100644
--- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
+++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md
@@ -2,6 +2,9 @@
title: Configure Access to Multiple Clusters
content_template: templates/task
weight: 30
+card:
+ name: tasks
+ weight: 40
---
@@ -251,22 +254,31 @@ The preceding configuration file defines a new context named `dev-ramp-up`.
See whether you have an environment variable named `KUBECONFIG`. If so, save the
current value of your `KUBECONFIG` environment variable, so you can restore it later.
-For example, on Linux:
+For example:
+### Linux
```shell
export KUBECONFIG_SAVED=$KUBECONFIG
```
-
+### Windows PowerShell
+```shell
+ $Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG
+ ```
The `KUBECONFIG` environment variable is a list of paths to configuration files. The list is
colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have
a `KUBECONFIG` environment variable, familiarize yourself with the configuration files
in the list.
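
For example, on Linux you can print each file currently on the path with a short loop
(a sketch; on Windows the list is semicolon-delimited instead):

```shell
# Print each kubeconfig file listed in the colon-delimited KUBECONFIG variable.
for kubeconfig_file in $(echo "$KUBECONFIG" | tr ':' '\n'); do
  echo "$kubeconfig_file"
done
```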
-Temporarily append two paths to your `KUBECONFIG` environment variable. For example, on Linux:
+Temporarily append two paths to your `KUBECONFIG` environment variable. For example:
+### Linux
```shell
export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2
```
+### Windows PowerShell
+```shell
+$Env:KUBECONFIG="$Env:KUBECONFIG;config-demo;config-demo-2"
+```
In your `config-exercise` directory, enter this command:
@@ -320,11 +332,16 @@ familiarize yourself with the contents of these files.
If you have a `$HOME/.kube/config` file, and it's not already listed in your
`KUBECONFIG` environment variable, append it to your `KUBECONFIG` environment variable now.
-For example, on Linux:
+For example:
+### Linux
```shell
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config
```
+### Windows PowerShell
+```shell
+$Env:KUBECONFIG="$Env:KUBECONFIG;$HOME/.kube/config"
+```
View configuration information merged from all the files that are now listed
in your `KUBECONFIG` environment variable. In your config-exercise directory, enter:
@@ -335,11 +352,15 @@ kubectl config view
## Clean up
-Return your `KUBECONFIG` environment variable to its original value. For example, on Linux:
-
+Return your `KUBECONFIG` environment variable to its original value. For example:
+### Linux
```shell
export KUBECONFIG=$KUBECONFIG_SAVED
```
+### Windows PowerShell
+```shell
+$Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED
+```
{{% /capture %}}
diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
index a447e85d16d10..0033f51c9b28c 100644
--- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
+++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md
@@ -169,16 +169,16 @@ This displays the configuration for the `frontend` Service and watches for
changes. Initially, the external IP is listed as `<pending>`:
```
-NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
-frontend   ClusterIP   10.51.252.116   <pending>     80/TCP    10s
+NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
+frontend   LoadBalancer   10.51.252.116   <pending>     80/TCP    10s
```
As soon as an external IP is provisioned, however, the configuration updates
to include the new IP under the `EXTERNAL-IP` heading:
```
-NAME       TYPE        CLUSTER-IP      EXTERNAL-IP       PORT(S)   AGE
-frontend   ClusterIP   10.51.252.116   XXX.XXX.XXX.XXX   80/TCP    1m
+NAME       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)   AGE
+frontend   LoadBalancer   10.51.252.116   XXX.XXX.XXX.XXX   80/TCP    1m
```
That IP can now be used to interact with the `frontend` service from outside the
diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
new file mode 100644
index 0000000000000..a810603f19aa2
--- /dev/null
+++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md
@@ -0,0 +1,292 @@
+---
+title: Set up Ingress on Minikube with the NGINX Ingress Controller
+content_template: templates/task
+weight: 100
+---
+
+{{% capture overview %}}
+
+An [Ingress](/docs/concepts/services-networking/ingress/) is an API object that defines rules which allow external access
+to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress.
+
+{{< caution >}}
+For the Ingress resource to work, the cluster **must** also have an Ingress controller running.
+{{< /caution >}}
+
+This page shows you how to set up a simple Ingress which routes requests to Service web or web2 depending on the HTTP URI.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
+
+{{% /capture %}}
+
+{{% capture steps %}}
+
+## Create a Minikube cluster
+
+1. Click **Launch Terminal**
+
+ {{< kat-button >}}
+
+1. (Optional) If you installed Minikube locally, run the following command:
+
+ ```shell
+ minikube start
+ ```
+
+## Enable the Ingress controller
+
+1. To enable the NGINX Ingress controller, run the following command:
+
+ ```shell
+ minikube addons enable ingress
+ ```
+
+1. Verify that the NGINX Ingress controller is running
+
+ ```shell
+ kubectl get pods -n kube-system
+ ```
+
+ {{< note >}}This can take up to a minute.{{< /note >}}
+
+ Output:
+
+ ```shell
+ NAME                                        READY     STATUS    RESTARTS   AGE
+ default-http-backend-59868b7dd6-xb8tq       1/1       Running   0          1m
+ kube-addon-manager-minikube                 1/1       Running   0          3m
+ kube-dns-6dcb57bcc8-n4xd4                   3/3       Running   0          2m
+ kubernetes-dashboard-5498ccf677-b8p5h       1/1       Running   0          2m
+ nginx-ingress-controller-5984b97644-rnkrg   1/1       Running   0          1m
+ storage-provisioner                         1/1       Running   0          2m
+ ```
+
+## Deploy a hello, world app
+
+1. Create a Deployment using the following command:
+
+ ```shell
+ kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
+ ```
+
+ Output:
+
+ ```shell
+ deployment.apps/web created
+ ```
+
+1. Expose the Deployment:
+
+ ```shell
+ kubectl expose deployment web --target-port=8080 --type=NodePort
+ ```
+
+ Output:
+
+ ```shell
+ service/web exposed
+ ```
+
+1. Verify the Service is created and is available on a node port:
+
+ ```shell
+ kubectl get service web
+ ```
+
+ Output:
+
+ ```shell
+ NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
+ web       NodePort   10.104.133.249   <none>        8080:31637/TCP   12m
+ ```
+
+1. Visit the service via NodePort:
+
+ ```shell
+ minikube service web --url
+ ```
+
+ Output:
+
+ ```shell
+ http://172.17.0.15:31637
+ ```
+
+ {{< note >}}Katacoda environment only: at the top of the terminal panel, click the plus sign, and then click **Select port to view on Host 1**. Enter the NodePort, in this case `31637`, and then click **Display Port**.{{< /note >}}
+
+ Output:
+
+ ```shell
+ Hello, world!
+ Version: 1.0.0
+ Hostname: web-55b8c6998d-8k564
+ ```
+
+ You can now access the sample app via the Minikube IP address and NodePort. The next step lets you access
+ the app using the Ingress resource.
+
+## Create an Ingress resource
+
+The following file is an Ingress resource that sends traffic to your Service via hello-world.info.
+
+1. Create `example-ingress.yaml` from the following file:
+
+ ```yaml
+ ---
+ apiVersion: extensions/v1beta1
+ kind: Ingress
+ metadata:
+ name: example-ingress
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /
+ spec:
+ rules:
+ - host: hello-world.info
+ http:
+ paths:
+ - path: /*
+ backend:
+ serviceName: web
+ servicePort: 8080
+ ```
+
+1. Create the Ingress resource by running the following command:
+
+ ```shell
+ kubectl apply -f example-ingress.yaml
+ ```
+
+ Output:
+
+ ```shell
+ ingress.extensions/example-ingress created
+ ```
+
+1. Verify the IP address is set:
+
+ ```shell
+ kubectl get ingress
+ ```
+
+ {{< note >}}This can take a couple of minutes.{{< /note >}}
+
+ ```shell
+ NAME              HOSTS              ADDRESS       PORTS     AGE
+ example-ingress   hello-world.info   172.17.0.15   80        38s
+ ```
+
+1. Add the following line to the bottom of the `/etc/hosts` file.
+
+ ```
+ 172.17.0.15 hello-world.info
+ ```
+
+ This sends requests from hello-world.info to Minikube.
+
+1. Verify that the Ingress controller is directing traffic:
+
+ ```shell
+ curl hello-world.info
+ ```
+
+ Output:
+
+ ```shell
+ Hello, world!
+ Version: 1.0.0
+ Hostname: web-55b8c6998d-8k564
+ ```
+
+ {{< note >}}If you are running Minikube locally, you can visit hello-world.info from your browser.{{< /note >}}
+
+## Create Second Deployment
+
+1. Create a v2 Deployment using the following command:
+
+ ```shell
+ kubectl run web2 --image=gcr.io/google-samples/hello-app:2.0 --port=8080
+ ```
+ Output:
+
+ ```shell
+ deployment.apps/web2 created
+ ```
+
+1. Expose the Deployment:
+
+ ```shell
+ kubectl expose deployment web2 --target-port=8080 --type=NodePort
+ ```
+
+ Output:
+
+ ```shell
+ service/web2 exposed
+ ```
+
+## Edit Ingress
+
+1. Edit the existing `example-ingress.yaml` and add the following lines:
+
+ ```yaml
+ - path: /v2/*
+ backend:
+ serviceName: web2
+ servicePort: 8080
+ ```
+
+1. Apply the changes:
+
+ ```shell
+ kubectl apply -f example-ingress.yaml
+ ```
+
+ Output:
+ ```shell
+ ingress.extensions/example-ingress configured
+ ```
+
+## Test Your Ingress
+
+1. Access the 1st version of the Hello World app.
+
+ ```shell
+ curl hello-world.info
+ ```
+
+ Output:
+ ```shell
+ Hello, world!
+ Version: 1.0.0
+ Hostname: web-55b8c6998d-8k564
+ ```
+
+1. Access the 2nd version of the Hello World app.
+
+ ```shell
+ curl hello-world.info/v2
+ ```
+
+ Output:
+ ```shell
+ Hello, world!
+ Version: 2.0.0
+ Hostname: web2-75cd47646f-t8cjk
+ ```
+
+ {{< note >}}If you are running Minikube locally, you can visit hello-world.info and hello-world.info/v2 from your browser.{{< /note >}}
+
+{{% /capture %}}
+
+
+{{% capture whatsnext %}}
+* Read more about [Ingress](/docs/concepts/services-networking/ingress/)
+* Read more about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/)
+* Read more about [Services](/docs/concepts/services-networking/service/)
+
+{{% /capture %}}
+
diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
index ceea70165fec8..e62aeca24e7c2 100644
--- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
+++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md
@@ -6,6 +6,10 @@ reviewers:
title: Web UI (Dashboard)
content_template: templates/concept
weight: 10
+card:
+ name: tasks
+ weight: 30
+ title: Use the Web UI Dashboard
---
{{% capture overview %}}
diff --git a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md
index 225951dc64145..a66be5a1add79 100644
--- a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md
+++ b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md
@@ -22,13 +22,173 @@ Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extensi
There are a few setup requirements for getting the aggregation layer working in your environment to support mutual TLS auth between the proxy and extension apiservers. Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is signed by the aggregation layer CA and not by something else, like the master CA.
{{< /note >}}
+{{% /capture %}}
+
+{{% capture authflow %}}
+
+## Authentication Flow
+
+Unlike Custom Resource Definitions (CRDs), the Aggregation API involves another server - your Extension apiserver - in addition to the standard Kubernetes apiserver. The Kubernetes apiserver will need to communicate with your extension apiserver, and your extension apiserver will need to communicate with the Kubernetes apiserver. In order for this communication to be secured, the Kubernetes apiserver uses x509 certificates to authenticate itself to the extension apiserver.
+
+This section describes how the authentication and authorization flows work, and how to configure them.
+
+The high-level flow is as follows:
+
+1. Kubernetes apiserver: authenticate the requesting user and authorize their rights to the requested API path.
+2. Kubernetes apiserver: proxy the request to the extension apiserver
+3. Extension apiserver: authenticate the request from the Kubernetes apiserver
+4. Extension apiserver: authorize the request from the original user
+5. Extension apiserver: execute
+
+The rest of this section describes these steps in detail.
+
+The flow can be seen in the following diagram.
+
+![aggregation auth flows](/images/docs/aggregation-api-auth-flow.png).
+
+The source for the above swimlanes can be found in the source of this document.
+
+
+
+### Kubernetes Apiserver Authentication and Authorization
+
+A request to an API path that is served by an extension apiserver begins the same way as all API requests: communication to the Kubernetes apiserver. This path already has been registered with the Kubernetes apiserver by the extension apiserver.
+
+The user communicates with the Kubernetes apiserver, requesting access to the path. The Kubernetes apiserver uses standard authentication and authorization configured with the Kubernetes apiserver to authenticate the user and authorize access to the specific path.
+
+For an overview of authenticating to a Kubernetes cluster, see ["Authenticating to a Cluster"](/docs/reference/access-authn-authz/authentication/). For an overview of authorization of access to Kubernetes cluster resources, see ["Authorization Overview"](/docs/reference/access-authn-authz/authorization/).
+
+Everything to this point has been standard Kubernetes API requests, authentication and authorization.
+
+The Kubernetes apiserver now is prepared to send the request to the extension apiserver.
+
+### Kubernetes Apiserver Proxies the Request
+
+The Kubernetes apiserver now will send, or proxy, the request to the extension apiserver that registered to handle the request. In order to do so, it needs to know several things:
+
+1. How should the Kubernetes apiserver authenticate to the extension apiserver, informing the extension apiserver that the request, which comes over the network, is coming from a valid Kubernetes apiserver?
+2. How should the Kubernetes apiserver inform the extension apiserver of the username and group for which the original request was authenticated?
+
+In order to provide for these two, you must configure the Kubernetes apiserver using several flags.
+
+#### Kubernetes Apiserver Client Authentication
+
+The Kubernetes apiserver connects to the extension apiserver over TLS, authenticating itself using a client certificate. You must provide the following to the Kubernetes apiserver upon startup, using the provided flags:
+
+* private key file via `--proxy-client-key-file`
+* signed client certificate file via `--proxy-client-cert-file`
+* certificate of the CA that signed the client certificate file via `--requestheader-client-ca-file`
+* valid Common Names (CN) in the signed client certificate via `--requestheader-allowed-names`
+
+The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file` to authenticate to the extension apiserver. In order for the request to be considered valid by a compliant extension apiserver, the following conditions must be met:
+
+1. The connection must be made using a client certificate that is signed by the CA whose certificate is in `--requestheader-client-ca-file`.
+2. The connection must be made using a client certificate whose CN is one of those listed in `--requestheader-allowed-names`. **Note:** You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable.
+
+When started with these options, the Kubernetes apiserver will:
+
+1. Use them to authenticate to the extension apiserver.
+2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`, in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved by extension apiservers to validate requests.
+
+Note that the same client certificate is used by the Kubernetes apiserver to authenticate against _all_ extension apiservers. It does not create a client certificate per extension apiserver, but rather a single one to authenticate as the Kubernetes apiserver. This same one is reused for all extension apiserver requests.
+
+#### Original Request Username and Group
+
+When the Kubernetes apiserver proxies the request to the extension apiserver, it informs the extension apiserver of the username and group with which the original request successfully authenticated. It provides these in http headers of its proxied request. You must inform the Kubernetes apiserver of the names of the headers to be used.
+
+* the header in which to store the username via `--requestheader-username-headers`
+* the header in which to store the group via `--requestheader-group-headers`
+* the prefix to append to all extra headers via `--requestheader-extra-headers-prefix`
+
+These header names are also placed in the `extension-apiserver-authentication` configmap, so they can be retrieved and used by extension apiservers.
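+
+As a quick check, you can inspect what the Kubernetes apiserver published there, for example:
+
+```shell
+# Show the client CA certificate, allowed CNs, and request header names
+# that extension apiservers read to validate proxied requests.
+kubectl get configmap extension-apiserver-authentication -n kube-system -o yaml
+```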
+
+### Extension Apiserver Authenticates the Request
+
+The extension apiserver, upon receiving a proxied request from the Kubernetes apiserver, must validate that the request actually did come from a valid authenticating proxy, a role that the Kubernetes apiserver fulfills. The extension apiserver validates this as follows:
+
+1. Retrieve the following from the configmap in `kube-system`, as described above:
+ * Client CA certificate
+ * List of allowed names (CNs)
+ * Header names for username, group and extra info
+2. Check that the TLS connection was authenticated using a client certificate which:
+   * Was signed by the CA whose certificate matches the retrieved CA certificate.
+   * Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed.
+3. Extract the username and group from the appropriate request headers.
+
+If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver.
+
+Note that it is the responsibility of the extension apiserver implementation to provide the above. Many do it by default, leveraging the `k8s.io/apiserver/` package. Others may allow you to override it using command-line flags.
+
+In order to have permission to retrieve the configmap, an extension apiserver requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader` in the `kube-system` namespace which can be assigned.
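+
+A minimal sketch of granting that role to an extension apiserver's service account; the namespace and service account name below are hypothetical:
+
+```shell
+# allow the extension apiserver's service account to read the authentication configmap
+kubectl create rolebinding my-extension-authentication-reader \
+  --role=extension-apiserver-authentication-reader \
+  --serviceaccount=my-extension-namespace:my-extension-apiserver \
+  -n kube-system
+```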
+
+### Extension Apiserver Authorizes the Request
+
+The extension apiserver can now validate that the user and group retrieved from the headers are authorized to execute the given request. It does so by sending a standard [SubjectAccessReview](/docs/reference/access-authn-authz/authorization/) request to the Kubernetes apiserver.
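+
+To get a feel for what such a check contains, here is a hand-written `SubjectAccessReview`; the user, group, and resource are hypothetical, and an extension apiserver would submit the equivalent object programmatically:
+
+```shell
+# ask the Kubernetes apiserver whether "jane" may get the hypothetical resource
+kubectl create -o yaml -f - <<EOF
+apiVersion: authorization.k8s.io/v1
+kind: SubjectAccessReview
+spec:
+  user: jane
+  groups:
+  - developers
+  resourceAttributes:
+    namespace: default
+    verb: get
+    group: example.com
+    resource: mycustomresources
+EOF
+```
+
+The decision is returned in the object's `status.allowed` field.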
+
+In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account.
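+
+Granting it could look like this; again, the service account reference is hypothetical:
+
+```shell
+# let the extension apiserver delegate authentication and authorization checks
+kubectl create clusterrolebinding my-extension-auth-delegator \
+  --clusterrole=system:auth-delegator \
+  --serviceaccount=my-extension-namespace:my-extension-apiserver
+```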
+
+### Extension Apiserver Executes
+
+If the `SubjectAccessReview` passes, the extension apiserver executes the request.
+
+
{{% /capture %}}
{{% capture steps %}}
-## Enable apiserver flags
+## Enable Kubernetes Apiserver flags
-Enable the aggregation layer via the following kube-apiserver flags. They may have already been taken care of by your provider.
+Enable the aggregation layer via the following `kube-apiserver` flags. They may have already been taken care of by your provider.
--requestheader-client-ca-file=
--requestheader-allowed-names=front-proxy-client
@@ -42,7 +202,7 @@ Enable the aggregation layer via the following kube-apiserver flags. They may ha
Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage.
{{< /warning >}}
-If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following apiserver flag:
+If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following `kube-apiserver` flag:
--enable-aggregator-routing=true
@@ -56,5 +216,3 @@ If you are not running kube-proxy on a host running the API server, then you mus
{{% /capture %}}
-
-
diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md
index 034200a1e46e9..275955c8f5238 100644
--- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md
+++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md
@@ -110,7 +110,7 @@ same way that the Kubernetes project sorts Kubernetes versions. Versions start w
`v` followed by a number, an optional `beta` or `alpha` designation, and
optional additional numeric versioning information. Broadly, a version string might look
like `v2` or `v2beta1`. Versions are sorted using the following algorithm:
-
+
- Entries that follow Kubernetes version patterns are sorted before those that
do not.
- For entries that follow Kubernetes version patterns, the numeric portions of
@@ -185,7 +185,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible-
### Deploy the conversion webhook service
Documentation for deploying the conversion webhook is the same as for the [admission webhook example service](/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service).
-The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace.
+The following sections assume that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in the `default` namespace and that it serves traffic on the path `/crdconvert`.
{{< note >}}
When the webhook server is deployed into the Kubernetes cluster as a
@@ -242,6 +242,8 @@ spec:
service:
namespace: default
name: example-conversion-webhook-server
+ # path is the URL path the API server will call. It should match the path the webhook serves on. The default is '/'.
+ path: /crdconvert
caBundle:
# either Namespaced or Cluster
scope: Namespaced
diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
index eaded418130ea..362a455ab1c10 100644
--- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
+++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md
@@ -73,14 +73,14 @@ If you have a DNS Deployment, your scale target is:
Deployment/<name>
-where <name> is the name of your DNS Deployment. For example, if
+where `<name>` is the name of your DNS Deployment. For example, if
your DNS Deployment name is coredns, your scale target is Deployment/coredns.
If you have a DNS ReplicationController, your scale target is:
ReplicationController/<name>
-where <name> is the name of your DNS ReplicationController. For example,
+where `<name>` is the name of your DNS ReplicationController. For example,
if your DNS ReplicationController name is kube-dns-v20, your scale target is
ReplicationController/kube-dns-v20.
@@ -238,6 +238,3 @@ is under consideration as a future development.
Learn more about the
[implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler).
{{% /capture %}}
-
-
-
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
index 436b6ccfbd302..c3a4444f35da5 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md
@@ -23,7 +23,9 @@ You should be familiar with [PKI certificates and requirements in Kubernetes](/d
## Renew certificates with the certificates API
-Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands.
+Kubernetes certificates normally expire after one year.
+
+Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands; you should run these commands on control-plane nodes only.
Typically this is done by loading on-disk CA certificates and keys and using them to issue new certificates.
This approach works well if your certificate tree is self-contained. However, if your certificates are externally
@@ -89,16 +91,16 @@ To better integrate with external CAs, kubeadm can also produce certificate sign
A CSR represents a request to a CA for a signed certificate for a client.
In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR.
-You can create an individual CSR with `kubeadm init phase certs apiserver --use-csr`.
-The `--use-csr` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`.
+You can create an individual CSR with `kubeadm init phase certs apiserver --csr-only`.
+The `--csr-only` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`.
You can pass in a directory with `--csr-dir` to output the CSRs to the specified location.
-If `--csr-dire` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used.
+If `--csr-dir` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used.
Both the CSR and the accompanying private key are given in the output. After a certificate is signed, the certificate and the private key must be copied to the PKI directory (by default `/etc/kubernetes/pki`).
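+
+Putting those pieces together, the CSR-based flow described above might look like this; the output directory and the exact file names are illustrative assumptions:
+
+```shell
+# emit a CSR and private key for the API server certificate instead of a signed certificate
+kubeadm init phase certs apiserver --csr-only --csr-dir=/custom/csr-dir
+
+# sign the CSR with your external CA, then copy the signed certificate and key back
+cp /custom/csr-dir/apiserver.crt /custom/csr-dir/apiserver.key /etc/kubernetes/pki/
+```
+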
### Renew certificates
-Certificates can be renewed with `kubeadm alpha certs renew --use-csr`.
+Certificates can be renewed with `kubeadm alpha certs renew --csr-only`.
As with `kubeadm init`, an output directory can be specified with the `--csr-dir` flag.
To use the new certificates, copy the signed certificate and private key into the PKI directory (by default `/etc/kubernetes/pki`)
diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md
index 314b5cde7065b..fa8831a92ca05 100644
--- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md
+++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md
@@ -39,12 +39,14 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr
{{< tabs name="k8s_install" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
+ # replace "x" with the latest patch version
apt-mark unhold kubeadm && \
- apt-get update && apt-get upgrade -y kubeadm && \
+ apt-get update && apt-get upgrade -y kubeadm=1.12.x-00 && \
apt-mark hold kubeadm
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
- yum upgrade -y kubeadm --disableexcludes=kubernetes
+ # replace "x" with the latest patch version
+ yum upgrade -y kubeadm-1.12.x --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
@@ -230,11 +232,13 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr
{{< tabs name="k8s_upgrade" >}}
{{% tab name="Ubuntu, Debian or HypriotOS" %}}
+ # replace "x" with the latest patch version
apt-get update
- apt-get upgrade -y kubelet kubeadm
+ apt-get upgrade -y kubelet=1.12.x-00 kubeadm=1.12.x-00
{{% /tab %}}
{{% tab name="CentOS, RHEL or Fedora" %}}
- yum upgrade -y kubelet kubeadm --disableexcludes=kubernetes
+ # replace "x" with the latest patch version
+ yum upgrade -y kubelet-1.12.x kubeadm-1.12.x --disableexcludes=kubernetes
{{% /tab %}}
{{< /tabs >}}
diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
index 4cd6eba7461c2..5ae36d2a6d88c 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md
@@ -7,7 +7,8 @@ content_template: templates/task
---
{{% capture overview %}}
-Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster.
+Kubernetes {{< glossary_tooltip text="namespaces" term_id="namespace" >}}
+help different projects, teams, or customers to share a Kubernetes cluster.
It does this by providing the following:
@@ -62,25 +63,25 @@ are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
Pods, Services, and Deployments that run the production site.
-One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
+One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`.
Let's create two new namespaces to hold our work.
-Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace:
+Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace:
{{< codenew language="json" file="admin/namespace-dev.json" >}}
-Create the development namespace using kubectl.
+Create the `development` namespace using kubectl.
```shell
$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```
-Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a production namespace:
+Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace:
{{< codenew language="json" file="admin/namespace-prod.json" >}}
-And then let's create the production namespace using kubectl.
+And then let's create the `production` namespace using kubectl.
```shell
$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
@@ -102,7 +103,7 @@ A Kubernetes namespace provides the scope for Pods, Services, and Deployments in
Users interacting with one namespace do not see the content in another namespace.
-To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace.
+To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace.
We first check what is the current context:
@@ -192,7 +193,7 @@ users:
username: admin
```
-Let's switch to operate in the development namespace.
+Let's switch to operate in the `development` namespace.
```shell
$ kubectl config use-context dev
@@ -205,14 +206,14 @@ $ kubectl config current-context
dev
```
-At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
+At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace.
Let's create some contents.
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
-We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname.
+We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname.
Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details.
@@ -227,15 +228,15 @@ snowflake-3968820950-9dgr8 1/1 Running 0 2m
snowflake-3968820950-vgc4n 1/1 Running 0 2m
```
-And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
+And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace.
-Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
+Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other.
```shell
$ kubectl config use-context prod
```
-The production namespace should be empty, and the following commands should return nothing.
+The `production` namespace should be empty, and the following commands should return nothing.
```shell
$ kubectl get deployment
diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md
index ce1ebbce3df85..6311ba8b64194 100644
--- a/content/en/docs/tasks/administer-cluster/namespaces.md
+++ b/content/en/docs/tasks/administer-cluster/namespaces.md
@@ -7,7 +7,7 @@ content_template: templates/task
---
{{% capture overview %}}
-This page shows how to view, work in, and delete namespaces. The page also shows how to use Kubernetes namespaces to subdivide your cluster.
+This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}. The page also shows how to use Kubernetes namespaces to subdivide your cluster.
{{% /capture %}}
{{% capture prerequisites %}}
@@ -140,21 +140,21 @@ are relaxed to enable agile development.
The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of
Pods, Services, and Deployments that run the production site.
-One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production.
+One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`.
Let's create two new namespaces to hold our work.
-Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace:
+Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace:
{{< codenew language="json" file="admin/namespace-dev.json" >}}
-Create the development namespace using kubectl.
+Create the `development` namespace using kubectl.
```shell
$ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json
```
-And then let's create the production namespace using kubectl.
+And then let's create the `production` namespace using kubectl.
```shell
$ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json
@@ -176,7 +176,7 @@ A Kubernetes namespace provides the scope for Pods, Services, and Deployments in
Users interacting with one namespace do not see the content in another namespace.
-To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace.
+To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace.
We first check what is the current context:
@@ -221,7 +221,7 @@ $ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-9
The above commands provided two request contexts you can alternate against depending on what namespace you
wish to work against.
-Let's switch to operate in the development namespace.
+Let's switch to operate in the `development` namespace.
```shell
$ kubectl config use-context dev
@@ -234,14 +234,14 @@ $ kubectl config current-context
dev
```
-At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace.
+At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace.
Let's create some contents.
```shell
$ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2
```
-We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname.
+We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname.
Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead.
If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details.
@@ -256,15 +256,15 @@ snowflake-3968820950-9dgr8 1/1 Running 0 2m
snowflake-3968820950-vgc4n 1/1 Running 0 2m
```
-And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
+And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace.
-Let's switch to the production namespace and show how resources in one namespace are hidden from the other.
+Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other.
```shell
$ kubectl config use-context prod
```
-The production namespace should be empty, and the following commands should return nothing.
+The `production` namespace should be empty, and the following commands should return nothing.
```shell
$ kubectl get deployment
diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
index ed3fd0f7157c8..4c9f1b07df801 100644
--- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
+++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md
@@ -26,20 +26,22 @@ To get familiar with Cilium easily you can follow the
[Cilium Kubernetes Getting Started Guide](https://cilium.readthedocs.io/en/stable/gettingstarted/minikube/)
to perform a basic DaemonSet installation of Cilium in minikube.
-As Cilium requires a standalone etcd instance, for minikube you can deploy it
-by running:
+To start minikube (the minimal version required is v0.33.1 or later), run it with the
+following arguments:
```shell
-kubectl create -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/v1.3/examples/kubernetes/addons/etcd/standalone-etcd.yaml
+$ minikube version
+minikube version: v0.33.1
+$
+$ minikube start --network-plugin=cni --memory=4096
```
-After etcd is up and running you can deploy Cilium Kubernetes descriptor which
-is a simple ''all-in-one'' YAML file that includes DaemonSet configurations for
-Cilium, to connect to the etcd instance previously deployed as well as
-appropriate RBAC settings:
+For minikube you can deploy this simple "all-in-one" YAML file that includes
+the DaemonSet configuration for Cilium, the configuration needed to connect
+to the etcd instance deployed in minikube, and the appropriate RBAC settings:
```shell
-$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.3/examples/kubernetes/1.12/cilium.yaml
+$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml
configmap/cilium-config created
daemonset.apps/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
@@ -54,7 +56,7 @@ policies using an example application.
## Deploying Cilium for Production Use
For detailed instructions around deploying Cilium for production, see:
-[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/kubernetes/install/)
+[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/stable/kubernetes/intro/)
This documentation includes detailed requirements, instructions and example
production DaemonSet files.
@@ -83,7 +85,7 @@ There are two main components to be aware of:
- One `cilium` Pod runs on each node in your cluster and enforces network policy
on the traffic to/from Pods on that node using Linux BPF.
- For production deployments, Cilium should leverage a key-value store
-(e.g., etcd). The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/kubernetes/install/)
+(e.g., etcd). The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/stable/kubernetes/intro/)
will provide the necessary steps on how to install this required key-value
store as well how to configure it in Cilium.
diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
index 2110c98385ede..83c24f639efee 100644
--- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
+++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md
@@ -36,11 +36,6 @@ Successfully running cloud-controller-manager requires some changes to your clus
* `kube-apiserver` and `kube-controller-manager` MUST NOT specify the `--cloud-provider` flag. This ensures that it does not run any cloud specific loops that would be run by cloud controller manager. In the future, this flag will be deprecated and removed.
* `kubelet` must run with `--cloud-provider=external`. This is to ensure that the kubelet is aware that it must be initialized by the cloud controller manager before it is scheduled any work.
-* `kube-apiserver` SHOULD NOT run the `PersistentVolumeLabel` admission controller
- since the cloud controller manager takes over labeling persistent volumes.
-* For the `cloud-controller-manager` to label persistent volumes, initializers will need to be enabled and an InitializerConfiguration needs to be added to the system. Follow [these instructions](/docs/reference/access-authn-authz/extensible-admission-controllers/#enable-initializers-alpha-feature) to enable initializers. Use the following YAML to create the InitializerConfiguration:
-
-{{< codenew file="admin/cloud/pvl-initializer-config.yaml" >}}
Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:
@@ -53,7 +48,6 @@ As of v1.8, cloud controller manager can implement:
* node controller - responsible for updating kubernetes nodes using cloud APIs and deleting kubernetes nodes that were deleted on your cloud.
* service controller - responsible for loadbalancers on your cloud against services of type LoadBalancer.
* route controller - responsible for setting up network routes on your cloud
-* persistent volume labels controller - responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds.
* any other features you would like to implement if you are running an out-of-tree provider.
diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
index fd8e610a212f9..c7cc8c46bb162 100644
--- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
+++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md
@@ -223,7 +223,7 @@ kubectl describe nodes
The output includes a record of the Container being killed because of an out-of-memory condition:
```
-Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child
+Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child
```
Delete your Pod:
diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
index a34de6eb45bcb..9203476fe40d7 100644
--- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
+++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md
@@ -2,6 +2,9 @@
title: Configure a Pod to Use a ConfigMap
content_template: templates/task
weight: 150
+card:
+ name: tasks
+ weight: 50
---
{{% capture overview %}}
diff --git a/content/en/docs/tasks/debug-application-cluster/audit.md b/content/en/docs/tasks/debug-application-cluster/audit.md
index ec099f595f415..6d7735cb499ee 100644
--- a/content/en/docs/tasks/debug-application-cluster/audit.md
+++ b/content/en/docs/tasks/debug-application-cluster/audit.md
@@ -207,13 +207,13 @@ By default truncate is disabled in both `webhook` and `log`, a cluster administr
{{< feature-state for_k8s_version="v1.13" state="alpha" >}}
-In Kubernetes version 1.13, you can configure dynamic audit webhook backends AuditSink API objects.
+In Kubernetes version 1.13, you can configure dynamic audit webhook backends using AuditSink API objects.
To enable dynamic auditing you must set the following apiserver flags:
-- `--audit-dynamic-configuration`: the primary switch. When the feature is at GA, the only required flag.
-- `--feature-gates=DynamicAuditing=true`: feature gate at alpha and beta.
-- `--runtime-config=auditregistration.k8s.io/v1alpha1=true`: enable API.
+- `--audit-dynamic-configuration`: the primary switch. When the feature is at GA, the only required flag.
+- `--feature-gates=DynamicAuditing=true`: feature gate at alpha and beta.
+- `--runtime-config=auditregistration.k8s.io/v1alpha1=true`: enable API.
When enabled, an AuditSink object can be provisioned:
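+
+As a rough sketch, assuming the `auditregistration.k8s.io/v1alpha1` schema, such an object might look like this; the sink name and webhook URL are placeholders:
+
+```shell
+kubectl apply -f - <<EOF
+apiVersion: auditregistration.k8s.io/v1alpha1
+kind: AuditSink
+metadata:
+  name: mysink
+spec:
+  policy:
+    level: Metadata
+    stages:
+    - ResponseComplete
+  webhook:
+    throttle:
+      qps: 10
+      burst: 15
+    clientConfig:
+      url: "https://audit.example.com/audit"
+EOF
+```
+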
@@ -301,7 +301,11 @@ Fluent-plugin-forest and fluent-plugin-rewrite-tag-filter are plugins for fluent
# route audit according to namespace element in context
@type rewrite_tag_filter
- rewriterule1 namespace ^(.+) ${tag}.$1
+ <rule>
+   key namespace
+   pattern /^(.+)/
+   tag ${tag}.$1
+ </rule>
@@ -420,8 +424,8 @@ plugin which supports full-text search and analytics.
[gce-audit-profile]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh#L735
[kubeconfig]: /docs/tasks/access-application-cluster/configure-access-multiple-clusters/
[fluentd]: http://www.fluentd.org/
-[fluentd_install_doc]: http://docs.fluentd.org/v0.12/articles/quickstart#step1-installing-fluentd
-[fluentd_plugin_management_doc]: https://docs.fluentd.org/v0.12/articles/plugin-management
+[fluentd_install_doc]: https://docs.fluentd.org/v1.0/articles/quickstart#step-1:-installing-fluentd
+[fluentd_plugin_management_doc]: https://docs.fluentd.org/v1.0/articles/plugin-management
[logstash]: https://www.elastic.co/products/logstash
[logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
[kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation
diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md
index 29a3cb047bb05..872d956e4e5c6 100644
--- a/content/en/docs/tasks/debug-application-cluster/debug-service.md
+++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md
@@ -489,7 +489,7 @@ u@node$ iptables-save | grep hostnames
There should be 2 rules for each port on your `Service` (just one in this
example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do
-not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and
+not see these, try restarting `kube-proxy` with the `-v` flag set to 4, and
then look at the logs again.
Almost nobody should be using the "userspace" mode any more, so we won't spend
@@ -559,7 +559,7 @@ If this still fails, look at the `kube-proxy` logs for specific lines like:
Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]
```
-If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and
+If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and
then look at the logs again.
### A Pod cannot reach itself via Service IP
diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md
index 91ae88bb8e17e..c751d00261cae 100644
--- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md
+++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md
@@ -142,9 +142,9 @@ Report issues with this device plugin and installation method to [GoogleCloudPla
Instructions for using NVIDIA GPUs on GKE are
[here](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus)
-## Clusters containing different types of NVIDIA GPUs
+## Clusters containing different types of GPUs
-If different nodes in your cluster have different types of NVIDIA GPUs, then you
+If different nodes in your cluster have different types of GPUs, then you
can use [Node Labels and Node Selectors](/docs/tasks/configure-pod-container/assign-pods-nodes/)
to schedule pods to appropriate nodes.
@@ -156,6 +156,39 @@ kubectl label nodes accelerator=nvidia-tesla-k80
kubectl label nodes accelerator=nvidia-tesla-p100
```
+For AMD GPUs, you can deploy [Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller), which automatically labels your nodes with GPU properties. Currently supported properties:
+
+* Device ID (-device-id)
+* VRAM Size (-vram)
+* Number of SIMD (-simd-count)
+* Number of Compute Unit (-cu-count)
+* Firmware and Feature Versions (-firmware)
+* GPU Family, in two letters acronym (-family)
+ * SI - Southern Islands
+ * CI - Sea Islands
+ * KV - Kaveri
+ * VI - Volcanic Islands
+ * CZ - Carrizo
+ * AI - Arctic Islands
+ * RV - Raven
+
+Example result:
+
+ $ kubectl describe node cluster-node-23
+ Name: cluster-node-23
+ Roles:
+ Labels: beta.amd.com/gpu.cu-count.64=1
+ beta.amd.com/gpu.device-id.6860=1
+ beta.amd.com/gpu.family.AI=1
+ beta.amd.com/gpu.simd-count.256=1
+ beta.amd.com/gpu.vram.16G=1
+ beta.kubernetes.io/arch=amd64
+ beta.kubernetes.io/os=linux
+ kubernetes.io/hostname=cluster-node-23
+ Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
+ node.alpha.kubernetes.io/ttl: 0
+ ......
+
Specify the GPU type in the pod spec:
```yaml
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 8cdf150782167..010511f8069f4 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -288,7 +288,7 @@ spec:
resource:
name: cpu
target:
- kind: AverageUtilization
+ type: AverageUtilization
averageUtilization: 50
- type: Pods
pods:
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index 2e3df51fd9eab..38d615293d7a9 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -249,7 +249,7 @@ Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal P
You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API.
Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics.
-See [Support for metrics APIs](#support-for-metrics-APIs) for the requirements.
+See [Support for metrics APIs](#support-for-metrics-apis) for the requirements.
## Support for metrics APIs
diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
index 728d7a8950854..a265c91974538 100644
--- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
+++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md
@@ -53,12 +53,20 @@ svc-cat/catalog 0.0.1 service-catalog API server and controller-manag...
Your Kubernetes cluster must have RBAC enabled, which requires your Tiller Pod(s) to have `cluster-admin` access.
-If you are using Minikube, run the `minikube start` command with the following flag:
+When using Minikube v0.25 or older, you must run Minikube with RBAC explicitly enabled:
```shell
minikube start --extra-config=apiserver.Authorization.Mode=RBAC
```
+When using Minikube v0.26+, run:
+
+```shell
+minikube start
+```
+
+With Minikube v0.26+, do not specify `--extra-config`. The flag has since been changed to `--extra-config=apiserver.authorization-mode`, and Minikube now uses RBAC by default. Specifying the older flag may cause the start command to hang.
+
If you are using `hack/local-up-cluster.sh`, set the `AUTHORIZATION_MODE` environment variable with the following values:
```
diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md
index 7a0af0190c101..28b46d9178013 100644
--- a/content/en/docs/tasks/tools/install-kubectl.md
+++ b/content/en/docs/tasks/tools/install-kubectl.md
@@ -5,6 +5,10 @@ reviewers:
title: Install and Set Up kubectl
content_template: templates/task
weight: 10
+card:
+ name: tasks
+ weight: 20
+ title: Install kubectl
---
{{% capture overview %}}
@@ -122,32 +126,39 @@ If you are on Windows and using [Powershell Gallery](https://www.powershellgalle
Updating the installation is performed by rerunning the two commands listed in step 1.
{{< /note >}}
-## Install with Chocolatey on Windows
+## Install on Windows using Chocolatey or scoop
-If you are on Windows and using [Chocolatey](https://chocolatey.org) package manager, you can install kubectl with Chocolatey.
+To install kubectl on Windows you can use either [Chocolatey](https://chocolatey.org) package manager or [scoop](https://scoop.sh) command-line installer.
+{{< tabs name="kubectl_win_install" >}}
+{{% tab name="choco" %}}
-1. Run the installation command:
-
- ```
choco install kubernetes-cli
- ```
-
+
+{{% /tab %}}
+{{% tab name="scoop" %}}
+
+ scoop install kubectl
+
+{{% /tab %}}
+{{< /tabs >}}
2. Test to ensure the version you installed is sufficiently up-to-date:
```
kubectl version
```
-3. Change to your %HOME% directory:
- For example: `cd C:\users\yourusername`
+3. Navigate to your home directory:
-4. Create the .kube directory:
+ ```
+ cd %USERPROFILE%
+ ```
+4. Create the `.kube` directory:
```
mkdir .kube
```
-5. Change to the .kube directory you just created:
+5. Change to the `.kube` directory you just created:
```
cd .kube
diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md
index 32edccfd2c00a..3bb8609e838a0 100644
--- a/content/en/docs/tasks/tools/install-minikube.md
+++ b/content/en/docs/tasks/tools/install-minikube.md
@@ -2,6 +2,9 @@
title: Install Minikube
content_template: templates/task
weight: 20
+card:
+ name: tasks
+ weight: 10
---
{{% capture overview %}}
@@ -59,7 +62,7 @@ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/miniku
Here's an easy way to add the Minikube executable to your path:
```shell
-sudo cp minikube /usr/local/bin && rm minikube
+sudo mv minikube /usr/local/bin
```
### Linux
diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md
index 34d85179fbbf1..bfb453b226d99 100644
--- a/content/en/docs/tutorials/clusters/apparmor.md
+++ b/content/en/docs/tutorials/clusters/apparmor.md
@@ -390,7 +390,7 @@ Pod is running.
To debug problems with AppArmor, you can check the system logs to see what, specifically, was
denied. AppArmor logs verbose messages to `dmesg`, and errors can usually be found in the system
logs or through `journalctl`. More information is provided in
-[AppArmor failures](http://wiki.apparmor.net/index.php/AppArmor_Failures).
+[AppArmor failures](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Failures).
## API Reference
@@ -414,7 +414,7 @@ Specifying the profile a container will run with:
containers, and unconfined (no profile) for privileged containers.
- `localhost/`: Refers to a profile loaded on the node (localhost) by name.
- The possible profile names are detailed in the
- [core policy reference](http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference#Profile_names_and_attachment_specifications).
+ [core policy reference](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Core_Policy_Reference#profile-names-and-attachment-specifications).
- `unconfined`: This effectively disables AppArmor on the container.
Any other profile reference format is invalid.
diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md
index 8a5de37cbba8e..5099beadf1725 100644
--- a/content/en/docs/tutorials/hello-minikube.md
+++ b/content/en/docs/tutorials/hello-minikube.md
@@ -8,6 +8,9 @@ menu:
weight: 10
post: >
Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.
+card:
+ name: tutorials
+ weight: 10
---
{{% capture overview %}}
@@ -161,7 +164,6 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/).
4. Katacoda environment only: Click the plus sign, and then click **Select port to view on Host 1**.
5. Katacoda environment only: Type `30369` (see port opposite to `8080` in services output), and then click
-**Display Port**.
This opens up a browser window that serves your app and shows the "Hello World" message.
diff --git a/content/en/docs/tutorials/kubernetes-basics/_index.html b/content/en/docs/tutorials/kubernetes-basics/_index.html
index 6830dca167a58..342cf2cdd7c16 100644
--- a/content/en/docs/tutorials/kubernetes-basics/_index.html
+++ b/content/en/docs/tutorials/kubernetes-basics/_index.html
@@ -2,6 +2,10 @@
title: Learn Kubernetes Basics
linkTitle: Learn Kubernetes Basics
weight: 10
+card:
+ name: tutorials
+ weight: 20
+ title: Walk through the basics
---
diff --git a/content/en/docs/tutorials/online-training/overview.md b/content/en/docs/tutorials/online-training/overview.md
index e52c100556774..7521e2a28a252 100644
--- a/content/en/docs/tutorials/online-training/overview.md
+++ b/content/en/docs/tutorials/online-training/overview.md
@@ -11,26 +11,36 @@ Here are some of the sites that offer online training for Kubernetes:
{{% capture body %}}
-* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
+* [Certified Kubernetes Administrator Preparation Course (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/certified-kubernetes-administrator-preparation-course)
-* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x)
+* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud.com)](https://kodekloud.com/p/kubernetes-certification-course)
+
+* [Getting Started with Google Kubernetes Engine (Coursera)](https://www.coursera.org/learn/google-kubernetes-engine)
* [Getting Started with Kubernetes (Pluralsight)](https://www.pluralsight.com/courses/getting-started-kubernetes)
+* [Google Kubernetes Engine Deep Dive (Linux Academy)](https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive)
+
* [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes)
-* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/)
+* [IBM Cloud: Deploying Microservices with Kubernetes (Coursera)](https://www.coursera.org/learn/deploy-micro-kube-ibm-cloud)
-* [Certified Kubernetes Administrator Preparation Course (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/certified-kubernetes-administrator-preparation-course)
+* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x)
-* [Kubernetes the Hard Way (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way)
+* [Kubernetes Essentials (Linux Academy)](https://linuxacademy.com/linux/training/course/name/kubernetes-essentials)
* [Kubernetes for the Absolute Beginners with Hands-on Labs (KodeKloud.com)](https://kodekloud.com/p/kubernetes-for-the-absolute-beginners-hands-on)
-* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud.com)](https://kodekloud.com/p/kubernetes-certification-course)
+* [Kubernetes Quick Start (Linux Academy)](https://linuxacademy.com/linux/training/course/name/kubernetes-quick-start)
-{{% /capture %}}
+* [Kubernetes the Hard Way (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way)
+* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/)
+
+* [Monitoring Kubernetes With Prometheus (Linux Academy)](https://linuxacademy.com/linux/training/course/name/kubernetes-and-prometheus)
+* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)
+* [Self-paced Kubernetes online course (Learnk8s Academy)](https://learnk8s.io/academy)
+{{% /capture %}}
diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
index 63b6b032db012..f518c4bb0ddc6 100644
--- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
+++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md
@@ -4,6 +4,10 @@ reviewers:
- ahmetb
content_template: templates/tutorial
weight: 20
+card:
+ name: tutorials
+ weight: 40
+ title: "Stateful Example: Wordpress with Persistent Volumes"
---
{{% capture overview %}}
@@ -104,12 +108,15 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c
kubectl create -f https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
```
-2. Verify that a PersistentVolume got dynamically provisioned. Note that it can
- It can take up to a few minutes for the PVs to be provisioned and bound.
-
+2. Verify that a PersistentVolume got dynamically provisioned.
+
```shell
kubectl get pvc
```
+
+ {{< note >}}
+ It can take up to a few minutes for the PVs to be provisioned and bound.
+ {{< /note >}}
The response should be like this:
diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md
index 2d82a7a045d1c..b8d7045e325ef 100644
--- a/content/en/docs/tutorials/stateless-application/guestbook.md
+++ b/content/en/docs/tutorials/stateless-application/guestbook.md
@@ -4,6 +4,10 @@ reviewers:
- ahmetb
content_template: templates/tutorial
weight: 20
+card:
+ name: tutorials
+ weight: 30
+ title: "Stateless Example: PHP Guestbook with Redis"
---
{{% capture overview %}}
diff --git a/content/en/examples/admin/cloud/pvl-initializer-config.yaml b/content/en/examples/admin/cloud/pvl-initializer-config.yaml
deleted file mode 100644
index 4a2576cc2a55e..0000000000000
--- a/content/en/examples/admin/cloud/pvl-initializer-config.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-kind: InitializerConfiguration
-apiVersion: admissionregistration.k8s.io/v1alpha1
-metadata:
- name: pvlabel.kubernetes.io
-initializers:
- - name: pvlabel.kubernetes.io
- rules:
- - apiGroups:
- - ""
- apiVersions:
- - "*"
- resources:
- - persistentvolumes
diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go
index 3d0fefdc25580..08cceb5fd174a 100644
--- a/content/en/examples/examples_test.go
+++ b/content/en/examples/examples_test.go
@@ -298,8 +298,7 @@ func TestExampleObjectSchemas(t *testing.T) {
"namespace-prod": {&api.Namespace{}},
},
"admin/cloud": {
- "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}},
- "pvl-initializer-config": {&admissionregistration.InitializerConfiguration{}},
+ "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}},
},
"admin/dns": {
"busybox": {&api.Pod{}},
diff --git a/content/en/examples/pods/lifecycle-events.yaml b/content/en/examples/pods/lifecycle-events.yaml
index e5fcffcc9e755..4b79d7289c568 100644
--- a/content/en/examples/pods/lifecycle-events.yaml
+++ b/content/en/examples/pods/lifecycle-events.yaml
@@ -12,5 +12,5 @@ spec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
preStop:
exec:
- command: ["/usr/sbin/nginx","-s","quit"]
+ command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"]
diff --git a/content/en/examples/pods/probe/http-liveness.yaml b/content/en/examples/pods/probe/http-liveness.yaml
index 23d37b480a06e..670af18399e20 100644
--- a/content/en/examples/pods/probe/http-liveness.yaml
+++ b/content/en/examples/pods/probe/http-liveness.yaml
@@ -15,7 +15,7 @@ spec:
path: /healthz
port: 8080
httpHeaders:
- - name: X-Custom-Header
+ - name: Custom-Header
value: Awesome
initialDelaySeconds: 3
periodSeconds: 3
diff --git a/content/fr/OWNERS b/content/fr/OWNERS
new file mode 100644
index 0000000000000..c91ec02821e6f
--- /dev/null
+++ b/content/fr/OWNERS
@@ -0,0 +1,13 @@
+# See the OWNERS docs at https://go.k8s.io/owners
+
+# This is the localization project for French.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-fr-reviews
+
+approvers:
+- sig-docs-fr-owners
+
+labels:
+- language/fr
diff --git a/content/fr/_common-resources/index.md b/content/fr/_common-resources/index.md
new file mode 100644
index 0000000000000..3d65eaa0ff97e
--- /dev/null
+++ b/content/fr/_common-resources/index.md
@@ -0,0 +1,3 @@
+---
+headless: true
+---
\ No newline at end of file
diff --git a/content/fr/docs/_index.md b/content/fr/docs/_index.md
new file mode 100644
index 0000000000000..05e96e2901631
--- /dev/null
+++ b/content/fr/docs/_index.md
@@ -0,0 +1,3 @@
+---
+title: Documentation
+---
diff --git a/content/fr/docs/concepts/_index.md b/content/fr/docs/concepts/_index.md
new file mode 100644
index 0000000000000..cd1ea84780bee
--- /dev/null
+++ b/content/fr/docs/concepts/_index.md
@@ -0,0 +1,91 @@
+---
+title: Concepts
+main_menu: true
+content_template: templates/concept
+weight: 40
+---
+
+{{% capture overview %}}
+
+La section Concepts vous aide à mieux comprendre les composants du système Kubernetes et les abstractions que Kubernetes utilise pour représenter votre cluster.
+Elle vous aide également à mieux comprendre le fonctionnement de Kubernetes en général.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Vue d'ensemble
+
+Pour utiliser Kubernetes, vous utilisez *les objets de l'API Kubernetes* pour décrire *l'état souhaité* de votre cluster: quelles applications ou autres processus vous souhaitez exécuter, quelles images de conteneur elles utilisent, le nombre de réplicas, les ressources réseau et disque que vous mettez à disposition, et plus encore.
+Vous définissez l'état souhaité en créant des objets à l'aide de l'API Kubernetes, généralement via l'interface en ligne de commande, `kubectl`.
+Vous pouvez également utiliser l'API Kubernetes directement pour interagir avec le cluster et définir ou modifier l'état souhaité.
+
+Une fois que vous avez défini l'état souhaité, le *plan de contrôle Kubernetes* (control plane en anglais) permet de faire en sorte que l'état actuel du cluster corresponde à l'état souhaité.
+Pour ce faire, Kubernetes effectue automatiquement diverses tâches, telles que le démarrage ou le redémarrage de conteneurs, la mise à jour du nombre de réplicas d'une application donnée, etc.
+Le control plane Kubernetes comprend un ensemble de processus en cours d'exécution sur votre cluster:
+
+* Le **maître Kubernetes** (Kubernetes master en anglais) qui est un ensemble de trois processus qui s'exécutent sur un seul nœud de votre cluster, désigné comme nœud maître (master node en anglais). Ces processus sont: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) et [kube-scheduler](/docs/admin/kube-scheduler/).
+* Chaque nœud non maître de votre cluster exécute deux processus:
+ * **[kubelet](/docs/admin/kubelet/)**, qui communique avec le Kubernetes master.
+ * **[kube-proxy](/docs/admin/kube-proxy/)**, un proxy réseau reflétant les services réseau Kubernetes sur chaque nœud.
+
+## Objets Kubernetes
+
+Kubernetes contient un certain nombre d'abstractions représentant l'état de votre système: applications et processus conteneurisés déployés, leurs ressources réseau et disque associées, ainsi que d'autres informations sur les activités de votre cluster.
+Ces abstractions sont représentées par des objets de l'API Kubernetes; consultez [Vue d'ensemble des objets Kubernetes](/docs/concepts/abstractions/overview/) pour plus d'informations.
+
+Les objets de base de Kubernetes incluent:
+
+* [Pod](/docs/concepts/workloads/pods/pod-overview/)
+* [Service](/docs/concepts/services-networking/service/)
+* [Volume](/docs/concepts/storage/volumes/)
+* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/)
+
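+À titre d'illustration, ces objets peuvent être listés avec `kubectl` ; exemple indicatif, qui suppose un accès au cluster :
+
+```shell
+# liste les objets de base du cluster
+kubectl get pods --all-namespaces
+kubectl get services
+kubectl get namespaces
+```
+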
+En outre, Kubernetes contient un certain nombre d'abstractions de niveau supérieur appelées Contrôleurs.
+Les contrôleurs s'appuient sur les objets de base et fournissent des fonctionnalités supplémentaires.
+
+Voici quelques exemples:
+
+* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/)
+* [Deployment](/docs/concepts/workloads/controllers/deployment/)
+* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/)
+* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/)
+* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/)
+
+## Kubernetes control plane
+
+Les différentes parties du control plane Kubernetes, telles que les processus Kubernetes master et kubelet, déterminent la manière dont Kubernetes communique avec votre cluster.
+Le control plane conserve un enregistrement de tous les objets Kubernetes du système et exécute des boucles de contrôle continues pour gérer l'état de ces objets.
+À tout moment, les boucles de contrôle du control plane répondent aux modifications du cluster et permettent de faire en sorte que l'état réel de tous les objets du système corresponde à l'état souhaité que vous avez fourni.
+
+Par exemple, lorsque vous utilisez l'API Kubernetes pour créer un objet Deployment, vous fournissez un nouvel état souhaité pour le système.
+Le control plane Kubernetes enregistre la création de cet objet et exécute vos instructions en lançant les applications requises et en les planifiant vers des nœuds de cluster, afin que l'état actuel du cluster corresponde à l'état souhaité.
+
+### Kubernetes master
+
+Le Kubernetes master est responsable du maintien de l'état souhaité pour votre cluster.
+Lorsque vous interagissez avec Kubernetes, par exemple en utilisant l'interface en ligne de commande `kubectl`, vous communiquez avec le master Kubernetes de votre cluster.
+
+> Le "master" fait référence à un ensemble de processus gérant l'état du cluster.
+En règle générale, tous les processus sont exécutés sur un seul nœud du cluster.
+Ce nœud est également appelé master.
+Le master peut également être répliqué pour la disponibilité et la redondance.
+
+### Noeuds Kubernetes
+
+Les nœuds d’un cluster sont les machines (serveurs physiques, machines virtuelles, etc.) qui exécutent vos applications et vos workflows.
+Le master node Kubernetes contrôle chaque noeud; vous interagirez rarement directement avec les nœuds.
+
+#### Metadonnées des objets Kubernetes
+
+* [Annotations](/docs/concepts/overview/working-with-objects/annotations/)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+Si vous souhaitez écrire une page de concept, consultez
+[Utilisation de modèles de page](/docs/home/contribute/page-templates/)
+pour plus d'informations sur le type de page pour la documentation d'un concept.
+
+{{% /capture %}}
diff --git a/content/fr/docs/concepts/containers/_index.md b/content/fr/docs/concepts/containers/_index.md
new file mode 100644
index 0000000000000..9a86e2af74afe
--- /dev/null
+++ b/content/fr/docs/concepts/containers/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Les conteneurs"
+weight: 40
+---
\ No newline at end of file
diff --git a/content/fr/docs/concepts/containers/container-environment-variables.md b/content/fr/docs/concepts/containers/container-environment-variables.md
new file mode 100644
index 0000000000000..efe686422bf0e
--- /dev/null
+++ b/content/fr/docs/concepts/containers/container-environment-variables.md
@@ -0,0 +1,69 @@
+---
+reviewers:
+- sieben
+- perriea
+- lledru
+- awkif
+- yastij
+- rbenzair
+- oussemos
+title: Les variables d’environnement du conteneur
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+Cette page décrit les ressources disponibles pour les conteneurs dans l'environnement de conteneur.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## L'environnement du conteneur
+
+L’environnement de conteneur Kubernetes fournit plusieurs ressources importantes aux conteneurs:
+
+* Un système de fichiers, qui est une combinaison d'une [image](/docs/concepts/containers/images/) et d'un ou plusieurs [volumes](/docs/concepts/storage/volumes/).
+* Informations sur le conteneur lui-même.
+* Informations sur les autres objets du cluster.
+
+### Informations sur le conteneur
+
+Le nom d'*hôte* d'un conteneur est le nom du pod dans lequel le conteneur est en cours d'exécution.
+Il est disponible via la commande `hostname` ou
+[`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html)
+dans libc.
+
+Le nom du pod et le namespace sont disponibles en tant que variables d'environnement via
+[l'API downward](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/).
+
+Les variables d'environnement définies par l'utilisateur à partir de la définition de pod sont également disponibles pour le conteneur,
+de même que toutes les variables d'environnement spécifiées de manière statique dans l'image Docker.
+
+### Informations sur le cluster
+
+Une liste de tous les services en cours d'exécution lors de la création d'un conteneur est disponible pour ce conteneur en tant que variables d'environnement.
+Ces variables d'environnement correspondent à la syntaxe des liens Docker.
+
+Pour un service nommé *foo* qui correspond à un conteneur *bar*,
+les variables suivantes sont définies:
+
+```shell
+FOO_SERVICE_HOST=<l'hôte sur lequel le service s'exécute>
+FOO_SERVICE_PORT=<le port sur lequel le service s'exécute>
+```
+
+Les services ont des adresses IP dédiées et sont disponibles pour le conteneur avec le DNS,
+si le [module DNS](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) est activé.
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* En savoir plus sur [les hooks du cycle de vie d'un conteneur](/docs/concepts/containers/container-lifecycle-hooks/).
+* Acquérir une expérience pratique
+ [en attachant les handlers aux événements du cycle de vie du conteneur](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
+
+{{% /capture %}}
diff --git a/content/fr/docs/concepts/overview/_index.md b/content/fr/docs/concepts/overview/_index.md
new file mode 100644
index 0000000000000..df9dc83e3d831
--- /dev/null
+++ b/content/fr/docs/concepts/overview/_index.md
@@ -0,0 +1,4 @@
+---
+title: "Vue d'ensemble"
+weight: 20
+---
diff --git a/content/fr/docs/concepts/overview/what-is-kubernetes.md b/content/fr/docs/concepts/overview/what-is-kubernetes.md
new file mode 100644
index 0000000000000..a6166f73a9e7e
--- /dev/null
+++ b/content/fr/docs/concepts/overview/what-is-kubernetes.md
@@ -0,0 +1,136 @@
+---
+reviewers:
+ - jygastaud
+ - lledru
+ - sieben
+title: Qu'est-ce-que Kubernetes ?
+content_template: templates/concept
+weight: 10
+card:
+ name: concepts
+ weight: 10
+---
+
+{{% capture overview %}}
+Cette page est une vue d'ensemble de Kubernetes.
+{{% /capture %}}
+
+{{% capture body %}}
+Kubernetes est une plate-forme open-source extensible et portable pour la gestion de charges de travail (workloads) et des services conteneurisés.
+Elle favorise à la fois l'écriture de configuration déclarative (declarative configuration) et l'automatisation.
+Kubernetes s'appuie sur un large écosystème en rapide expansion.
+Les services, le support et les outils Kubernetes sont largement disponibles.
+
+Google a rendu open-source le projet Kubernetes en 2014.
+Le développement de Kubernetes est basé sur une [décennie et demie d’expérience de Google avec la gestion de la charge et de la mise à l'échelle (scale) en production](https://research.google.com/pubs/pub43438.html), associé aux meilleures idées et pratiques de la communauté.
+
+## Pourquoi ai-je besoin de Kubernetes et que peut-il faire ?
+
+Kubernetes a un certain nombre de fonctionnalités. Il peut être considéré comme:
+
+- une plate-forme de conteneur
+- une plate-forme de microservices
+- une plate-forme cloud portable
+et beaucoup plus.
+
+Kubernetes fournit un environnement de gestion **focalisé sur le conteneur** (container-centric).
+Il orchestre les ressources de calcul (computing), le réseau et l'infrastructure de stockage pour le compte des workloads des utilisateurs.
+Cela permet de se rapprocher de la simplicité des Platform as a Service (PaaS) avec la flexibilité des solutions d'Infrastructure as a Service (IaaS), tout en gardant de la portabilité entre les différents fournisseurs d'infrastructures (providers).
+
+## Comment Kubernetes est-il une plate-forme ?
+
+Même si Kubernetes fournit de nombreuses fonctionnalités, il existe toujours de nouveaux scénarios qui bénéficieraient de fonctionnalités complémentaires.
+Les workflows propres à une application peuvent être rationalisés afin d'accélérer la vitesse de développement.
+Si l'orchestration fournie de base est acceptable pour commencer, il est souvent nécessaire d'avoir une automatisation robuste lorsque l'on doit la faire évoluer.
+C'est pourquoi Kubernetes a également été conçu pour servir de plate-forme et favoriser la construction d’un écosystème de composants et d’outils facilitant le déploiement, la mise à l’échelle et la gestion des applications.
+
+[Les Labels](/docs/concepts/overview/working-with-objects/labels/) permettent aux utilisateurs d'organiser leurs ressources comme ils/elles le souhaitent.
+[Les Annotations](/docs/concepts/overview/working-with-objects/annotations/) autorisent les utilisateurs à définir des informations personnalisées sur les ressources pour faciliter leurs workflows et fournissent un moyen simple aux outils de gérer la vérification d'un état (checkpoint state).
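+
+À titre d'illustration (noms purement hypothétiques), un label et une annotation peuvent être ajoutés à une ressource existante avec `kubectl` :
+
+```shell
+kubectl label pods my-pod environment=production            # Ajoute le label environment=production au pod
+kubectl annotate pods my-pod description="service de test"  # Ajoute une annotation descriptive au pod
+```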
+
+De plus, le [plan de contrôle Kubernetes (control
+plane)](/docs/concepts/overview/components/) est construit sur les mêmes [APIs](/docs/reference/using-api/api-overview/) que celles accessibles aux développeurs et utilisateurs.
+Les utilisateurs peuvent écrire leurs propres contrôleurs (controllers), tels que les [ordonnanceurs (schedulers)](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md),
+avec [leurs propres APIs](/docs/concepts/api-extension/custom-resources/) qui peuvent être utilisées par un [outil en ligne de commande](/docs/user-guide/kubectl-overview/).
+
+Ce choix de [conception](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) a permis de construire un ensemble d'autres systèmes par dessus Kubernetes.
+
+## Ce que Kubernetes n'est pas
+
+Kubernetes n’est pas une solution PaaS (Platform as a Service).
+Kubernetes opérant au niveau des conteneurs plutôt qu'au niveau du matériel, il fournit une partie des fonctionnalités des offres PaaS, telles que le déploiement, la mise à l'échelle, l'équilibrage de charge (load balancing), la journalisation (logging) et la surveillance (monitoring).
+Cependant, Kubernetes n'est pas monolithique.
+Ces implémentations par défaut sont optionnelles et interchangeables. Kubernetes fournit les bases permettant de construire des plates-formes orientées développeurs, en laissant la possibilité à l'utilisateur de faire ses propres choix.
+
+Kubernetes:
+
+- Ne limite pas les types d'applications supportées. Kubernetes prend en charge des workloads extrêmement divers, dont des applications stateless, stateful ou orientées traitement de données (data-processing).
+Si l'application peut fonctionner dans un conteneur, elle devrait bien fonctionner sur Kubernetes.
+- Ne déploie pas de code source et ne build pas d'application non plus. Les workflows d'Intégration Continue, de Livraison Continue et de Déploiement Continu (CI/CD) sont réalisés en fonction de la culture d'entreprise, des préférences ou des pré-requis techniques.
+- Ne fournit pas nativement de services au niveau applicatif tels que des middlewares (e.g., message buses), des frameworks de traitement de données (par exemple, Spark), des bases de données (e.g., mysql), caches, ou systèmes de stockage clusterisés (e.g., Ceph).
+Ces composants peuvent être lancés dans Kubernetes et/ou être accessibles à des applications tournant dans Kubernetes via des mécaniques d'intermédiation tel que Open Service Broker.
+- N'impose pas de solutions de logging, monitoring, ou alerting.
+Kubernetes fournit quelques intégrations primaires et des mécanismes de collecte et export de métriques.
+- Ne fournit ni n'impose de langage ou de système de configuration (par exemple [jsonnet](https://github.com/google/jsonnet)).
+Il fournit une API déclarative qui peut être ciblée par n'importe quelle forme de spécifications déclaratives.
+- Ne fournit ni n'adopte de mécanisme de configuration des machines, de maintenance, de gestion ou de contrôle de la santé des systèmes.
+
+De plus, Kubernetes n'est pas vraiment un _système d'orchestration_. En réalité, il élimine le besoin d'orchestration.
+Techniquement, l'_orchestration_ se définit par l'exécution d'un workflow défini : premièrement faire A, puis B, puis C.
+Kubernetes quant à lui est composé d'un ensemble de processus de contrôle qui pilote l'état courant vers l'état désiré.
+Peu importe comment on arrive du point A au point C.
+Un contrôle centralisé n'est pas non plus requis.
+Cela aboutit à un système plus simple à utiliser, plus puissant, plus robuste, plus résilient et plus extensible.
+
+## Pourquoi les conteneurs ?
+
+Vous cherchez des raisons d'utiliser des conteneurs ?
+
+![Pourquoi les conteneurs ?](/images/docs/why_containers.svg)
+
+L'_ancienne façon (old way)_ de déployer des applications consistait à installer les applications sur un hôte en utilisant les systèmes de gestions de paquets natifs.
+Cela avait pour principal inconvénient de lier fortement les exécutables, la configuration, les librairies et le cycle de vie de chacun avec l'OS.
+Il est bien entendu possible de construire une image de machine virtuelle (VM) immuable pour arriver à produire des publications (rollouts) ou des retours en arrière (rollbacks), mais les VMs sont lourdes et non portables.
+
+La _nouvelle façon (new way)_ consiste à déployer des conteneurs basés sur une virtualisation au niveau du système d'exploitation (operating-system-level) plutôt que sur une virtualisation matérielle (hardware).
+Ces conteneurs sont isolés les uns des autres et de l'hôte :
+ils ont leurs propres systèmes de fichiers, ne peuvent voir que leurs propres processus et leur usage des ressources peut être contraint.
+Ils sont aussi plus faciles à construire que des VMs, et vu qu'ils sont décorrélés de l'infrastructure sous-jacente et du système de fichiers de l'hôte, ils sont aussi portables entre les différents fournisseurs de Cloud et les OS.
+
+Étant donné que les conteneurs sont petits et rapides, une application peut être packagée dans chaque image de conteneur.
+Cette relation un-à-un entre application et image permet de tirer pleinement parti des bénéfices des conteneurs. Avec les conteneurs, des images immuables de conteneur peuvent être créées au moment du build/release plutôt qu'au déploiement, vu que chaque application ne dépend pas du reste de la stack applicative et n'est pas liée à l'environnement de production.
+La génération d'images de conteneurs au moment du build permet d'obtenir un environnement constant qui peut être déployé tant en développement qu'en production. De la même manière, les conteneurs sont bien plus transparents que les VMs, ce qui facilite le monitoring et le management.
+Cela est particulièrement vrai lorsque le cycle de vie des conteneurs est géré par l'infrastructure plutôt que caché par un gestionnaire de processus à l'intérieur du conteneur. Avec une application par conteneur, gérer ces conteneurs équivaut à gérer le déploiement de son application.
+
+Résumé des bénéfices des conteneurs :
+
+- **Création et déploiement agile d'application** :
+ Augmente la simplicité et l'efficacité de la création d'images par rapport à l'utilisation d'image de VM.
+- **Développement, intégration et déploiement Continus**:
+  Fournit un processus pour construire et déployer fréquemment et de façon fiable, avec la capacité de faire des rollbacks rapides et simples (grâce à l'immuabilité de l'image).
+- **Séparation des besoins entre Dev et Ops**:
+ Création d'images applicatives au moment du build plutôt qu'au déploiement, tout en séparant l'application de l'infrastructure.
+- **Observabilité**:
+  Pas seulement des informations et des métriques venant du système d'exploitation sous-jacent, mais aussi des signaux propres à l'application et sur sa santé.
+- **Consistance entre les environnements de développement, tests et production**:
+  Fonctionne de la même manière aussi bien sur un poste local que chez un fournisseur d'hébergement / dans le Cloud.
+- **Portabilité entre clouds et distributions d'OS**:
+  Fonctionne sur Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine et partout ailleurs.
+- **Gestion centrée Application**:
+ Bascule le niveau d'abstraction d'une virtualisation hardware liée à l'OS à une logique de ressources orientée application.
+- **[Micro-services](https://martinfowler.com/articles/microservices.html) faiblement couplés, distribués, élastiques**:
+  Les applications sont séparées en petits morceaux indépendants qui peuvent être déployés et gérés dynamiquement -- et non une stack monolithique fonctionnant sur une seule grosse machine à tout faire.
+- **Isolation des ressources**:
+  Performances applicatives prédictibles.
+- **Utilisation des ressources**:
+ Haute efficacité et densité.
+
+## Que signifie Kubernetes ? Et K8s ?
+
+Le nom **Kubernetes** tire son origine du grec ancien, signifiant _capitaine_ ou _pilote_, et est la racine de _gouverneur_ et de [cybernétique](http://www.etymonline.com/index.php?term=cybernetics). _K8s_ est l'abréviation obtenue en remplaçant les 8 lettres "ubernete" par "8".
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Prêt à [commencer](/docs/setup/) ?
+* Pour plus de détails, voir la [documentation Kubernetes](/docs/home/).
+{{% /capture %}}
diff --git a/content/fr/docs/home/_index.md b/content/fr/docs/home/_index.md
new file mode 100644
index 0000000000000..46b0678291a4b
--- /dev/null
+++ b/content/fr/docs/home/_index.md
@@ -0,0 +1,19 @@
+---
+approvers:
+- chenopis
+title: Documentation de Kubernetes
+noedit: true
+cid: docsHome
+layout: docsportal_home
+class: gridPage
+linkTitle: "Home"
+main_menu: true
+weight: 10
+hide_feedback: true
+menu:
+ main:
+ title: "Documentation"
+ weight: 20
+ post: >
+      Apprenez à utiliser Kubernetes à l'aide d'une documentation conceptuelle, didactique et de référence. Vous pouvez même aider en contribuant à la documentation!
+---
diff --git a/content/fr/docs/home/supported-doc-versions.md b/content/fr/docs/home/supported-doc-versions.md
new file mode 100644
index 0000000000000..3be5b0d2d83be
--- /dev/null
+++ b/content/fr/docs/home/supported-doc-versions.md
@@ -0,0 +1,22 @@
+---
+title: Versions supportées de la documentation Kubernetes
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Ce site contient la documentation de la version actuelle de Kubernetes ainsi que des quatre versions précédentes.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Version courante
+
+La version actuelle est [{{< param "version" >}}](/).
+
+## Versions précédentes
+
+{{< versions-other >}}
+
+{{% /capture %}}
diff --git a/content/fr/docs/reference/kubectl/cheatsheet.md b/content/fr/docs/reference/kubectl/cheatsheet.md
new file mode 100644
index 0000000000000..d8252970e7e05
--- /dev/null
+++ b/content/fr/docs/reference/kubectl/cheatsheet.md
@@ -0,0 +1,342 @@
+---
+title: Aide-mémoire kubectl
+content_template: templates/concept
+card:
+ name: reference
+ weight: 30
+---
+
+{{% capture overview %}}
+
+Voir aussi : [Aperçu Kubectl](/docs/reference/kubectl/overview/) et [Guide JsonPath](/docs/reference/kubectl/jsonpath).
+
+Cette page donne un aperçu de la commande `kubectl`.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+# Aide-mémoire kubectl
+
+## Auto-complétion avec Kubectl
+
+### BASH
+
+```bash
+source <(kubectl completion bash) # active l'auto-complétion pour bash dans le shell courant, le paquet bash-completion devant être installé au préalable
+echo "source <(kubectl completion bash)" >> ~/.bashrc # ajoute l'auto-complétion de manière permanente à votre shell bash
+```
+
+Vous pouvez de plus déclarer un alias pour `kubectl` qui fonctionne aussi avec l'auto-complétion :
+
+```bash
+alias k=kubectl
+complete -F __start_kubectl k
+```
+
+### ZSH
+
+```bash
+source <(kubectl completion zsh) # active l'auto-complétion pour zsh dans le shell courant
+echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # ajoute l'auto-complétion de manière permanente à votre shell zsh
+```
+
+## Contexte et configuration de Kubectl
+
+Indique avec quel cluster Kubernetes `kubectl` communique et modifie les informations de configuration. Voir la documentation [Authentification multi-clusters avec kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) pour des informations détaillées sur le fichier de configuration.
+
+```bash
+kubectl config view # Affiche les paramètres fusionnés de kubeconfig
+
+# Utilise plusieurs fichiers kubeconfig en même temps et affiche la configuration fusionnée
+KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view
+
+# Affiche le mot de passe pour l'utilisateur e2e
+kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}'
+
+kubectl config current-context # Affiche le contexte courant (current-context)
+kubectl config use-context my-cluster-name # Définit my-cluster-name comme contexte courant
+
+# Ajoute un nouveau cluster à votre kubeconf, prenant en charge l'authentification de base (basic auth)
+kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword
+
+# Définit et utilise un contexte qui utilise un nom d'utilisateur et un namespace spécifiques
+kubectl config set-context gce --user=cluster-admin --namespace=foo \
+ && kubectl config use-context gce
+```
+
+## Création d'objets
+
+Les manifests Kubernetes peuvent être définis en JSON ou en YAML. Les extensions de fichier `.yaml`,
+`.yml`, et `.json` peuvent être utilisées.
+
+```bash
+kubectl create -f ./my-manifest.yaml # crée une ou plusieurs ressources
+kubectl create -f ./my1.yaml -f ./my2.yaml # crée depuis plusieurs fichiers
+kubectl create -f ./dir # crée une ou plusieurs ressources depuis tous les manifests dans dir
+kubectl create -f https://git.io/vPieo # crée une ou plusieurs ressources depuis une url
+kubectl create deployment nginx --image=nginx # démarre une instance unique de nginx
+kubectl explain pods,svc # affiche la documentation pour les manifests pod et svc
+
+# Crée plusieurs objets YAML depuis l'entrée standard (stdin)
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-sleep
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args:
+    - sleep
+    - "1000000"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-sleep-less
+spec:
+  containers:
+  - name: busybox
+    image: busybox
+    args:
+    - sleep
+    - "1000"
+EOF
+```
+
+## Visualisation et recherche de ressources
+
+```bash
+# Vérifie quels noeuds sont prêts
+JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
+ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
+
+# Liste tous les Secrets actuellement utilisés par un pod
+kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
+
+# Liste les événements (Events) classés par timestamp
+kubectl get events --sort-by=.metadata.creationTimestamp
+```
+
+## Mise à jour de ressources
+
+Depuis la version 1.11, `rolling-update` a été déprécié (voir [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md)), utilisez plutôt `rollout`.
+
+```bash
+kubectl set image deployment/frontend www=image:v2 # Rolling update du conteneur "www" du déploiement "frontend", par mise à jour de son image
+kubectl rollout undo deployment/frontend # Rollback du déploiement précédent
+kubectl rollout status -w deployment/frontend # Écoute (Watch) le status du rolling update du déploiement "frontend" jusqu'à ce qu'il se termine
+
+# déprécié depuis la version 1.11
+kubectl rolling-update frontend-v1 -f frontend-v2.json # (déprécié) Rolling update des pods de frontend-v1
+kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (déprécié) Modifie le nom de la ressource et met à jour l'image
+kubectl rolling-update frontend --image=image:v2 # (déprécié) Met à jour l'image du pod du déploiement frontend
+kubectl rolling-update frontend-v1 frontend-v2 --rollback # (déprécié) Annule (rollback) le rollout en cours
+
+cat pod.json | kubectl replace -f - # Remplace un pod, en utilisant un JSON passé en entrée standard
+
+# Remplace de manière forcée (Force replace), supprime puis re-crée la ressource. Provoque une interruption de service.
+kubectl replace --force -f ./pod.json
+
+# Crée un service pour un nginx repliqué, qui rend le service sur le port 80 et se connecte aux conteneurs sur le port 8000
+kubectl expose rc nginx --port=80 --target-port=8000
+
+# Modifie la version (tag) de l'image du conteneur unique du pod à v4
+kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -
+
+kubectl label pods my-pod new-label=awesome # Ajoute un Label
+kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Ajoute une annotation
+kubectl autoscale deployment foo --min=2 --max=10 # Mise à l'échelle automatique (Auto scale) d'un déploiement "foo"
+```
+
+## Mise à jour partielle de ressources
+
+```bash
+kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Met à jour partiellement un noeud
+
+# Met à jour l'image d'un conteneur ; spec.containers[*].name est requis car c'est une clé du merge
+kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
+
+# Met à jour l'image d'un conteneur en utilisant un patch json avec tableaux indexés
+kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'
+
+# Désactive la livenessProbe d'un déploiement en utilisant un patch json avec tableaux indexés
+kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
+
+# Ajoute un nouvel élément à un tableau indexé
+kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'
+```
+
+## Édition de ressources
+Ceci édite n'importe quelle ressource de l'API dans un éditeur.
+
+```bash
+kubectl edit svc/docker-registry # Édite le service nommé docker-registry
+KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Utilise un autre éditeur
+```
+
+## Mise à l'échelle de ressources
+
+```bash
+kubectl scale --replicas=3 rs/foo # Scale un replicaset nommé 'foo' à 3
+kubectl scale --replicas=3 -f foo.yaml                            # Scale une ressource spécifiée dans "foo.yaml" à 3
+kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # Si la taille du déploiement nommé mysql est actuellement 2, scale mysql à 3
+kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale plusieurs contrôleurs de réplication
+```
+
+## Suppression de ressources
+
+```bash
+kubectl delete -f ./pod.json # Supprime un pod en utilisant le type et le nom spécifiés dans pod.json
+kubectl delete pod,service baz foo # Supprime les pods et services ayant les mêmes noms "baz" et "foo"
+kubectl delete pods,services -l name=myLabel # Supprime les pods et services ayant le label name=myLabel
+kubectl delete pods,services -l name=myLabel --include-uninitialized # Supprime les pods et services, dont ceux non initialisés, ayant le label name=myLabel
+kubectl -n my-ns delete po,svc --all # Supprime tous les pods et services, dont ceux non initialisés, dans le namespace my-ns
+```
+
+## Interaction avec des Pods en cours d'exécution
+
+```bash
+kubectl logs my-pod # Affiche les logs du pod (stdout)
+kubectl logs my-pod --previous # Affiche les logs du pod (stdout) pour une instance précédente du conteneur
+kubectl logs my-pod -c my-container # Affiche les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs)
+kubectl logs my-pod -c my-container --previous # Affiche les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs) pour une instance précédente du conteneur
+kubectl logs -f my-pod # Fait défiler (stream) les logs du pod (stdout)
+kubectl logs -f my-pod -c my-container # Fait défiler (stream) les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs)
+kubectl run -i --tty busybox --image=busybox -- sh # Exécute un pod comme un shell interactif
+kubectl attach my-pod -i # Attache à un conteneur en cours d'exécution
+kubectl port-forward my-pod 5000:6000 # Écoute le port 5000 de la machine locale et forwarde vers le port 6000 de my-pod
+kubectl exec my-pod -- ls / # Exécute une commande dans un pod existant (cas d'un seul conteneur)
+kubectl exec my-pod -c my-container -- ls / # Exécute une commande dans un pod existant (cas multi-conteneurs)
+kubectl top pod POD_NAME --containers # Affiche les métriques pour un pod donné et ses conteneurs
+```
+
+## Interaction avec des Noeuds et Clusters
+
+```bash
+kubectl cordon mon-noeud # Marque mon-noeud comme non assignable (unschedulable)
+kubectl drain mon-noeud # Draine mon-noeud en préparation d'une mise en maintenance
+kubectl uncordon mon-noeud # Marque mon-noeud comme assignable
+kubectl top node mon-noeud # Affiche les métriques pour un noeud donné
+kubectl cluster-info # Affiche les adresses du master et des services
+kubectl cluster-info dump # Affiche l'état courant du cluster sur stdout
+kubectl cluster-info dump --output-directory=/path/to/cluster-state # Affiche l'état courant du cluster sur /path/to/cluster-state
+
+# Si une teinte avec cette clé et cet effet existe déjà, sa valeur est remplacée comme spécifié.
+kubectl taint nodes foo dedicated=special-user:NoSchedule
+```
+
+### Types de ressources
+
+Liste tous les types de ressources pris en charge avec leurs noms courts (shortnames), [groupe d'API (API group)](/docs/concepts/overview/kubernetes-api/#api-groups), si elles sont [cantonnées à un namespace (namespaced)](/docs/concepts/overview/working-with-objects/namespaces), et leur [Genre (Kind)](/docs/concepts/overview/working-with-objects/kubernetes-objects):
+
+```bash
+kubectl api-resources
+```
+
+Autres opérations pour explorer les ressources de l'API :
+
+```bash
+kubectl api-resources --namespaced=true # Toutes les ressources cantonnées à un namespace
+kubectl api-resources --namespaced=false # Toutes les ressources non cantonnées à un namespace
+kubectl api-resources -o name # Toutes les ressources avec un affichage simple (uniquement le nom de la ressource)
+kubectl api-resources -o wide # Toutes les ressources avec un affichage étendu (alias "wide")
+kubectl api-resources --verbs=list,get # Toutes les ressources prenant en charge les verbes de requête "list" et "get"
+kubectl api-resources --api-group=extensions # Toutes les ressources dans le groupe d'API "extensions"
+```
+
+### Formatage de l'affichage
+
+Pour afficher les détails sur votre terminal dans un format spécifique, vous pouvez utiliser une des options `-o` ou `--output` avec les commandes `kubectl` qui les prennent en charge.
+
+| Format d'affichage | Description |
+|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
+| `-o=custom-columns=` | Affiche un tableau en spécifiant une liste de colonnes séparées par des virgules |
+| `-o=custom-columns-file=` | Affiche un tableau en utilisant les colonnes spécifiées dans le fichier `` |
+| `-o=json` | Affiche un objet de l'API formaté en JSON |
+| `-o=jsonpath=` | Affiche les champs définis par une expression [jsonpath](/docs/reference/kubectl/jsonpath) |
+| `-o=jsonpath-file=` | Affiche les champs définis par l'expression [jsonpath](/docs/reference/kubectl/jsonpath) dans le fichier `` |
+| `-o=name` | Affiche seulement le nom de la ressource et rien de plus |
+| `-o=wide` | Affiche dans le format texte avec toute information supplémentaire, et pour des pods, le nom du noeud est inclus |
+| `-o=yaml` | Affiche un objet de l'API formaté en YAML |
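+
+Quelques exemples d'utilisation (le nom de pod `my-pod` est donné à titre d'illustration) :
+
+```bash
+kubectl get pods -o wide                                                      # Liste les pods avec des informations supplémentaires
+kubectl get pod my-pod -o yaml                                                # Affiche le YAML complet du pod
+kubectl get pod my-pod -o jsonpath='{.status.podIP}'                          # Affiche uniquement l'adresse IP du pod
+kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase   # Tableau avec des colonnes personnalisées
+```
+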
+### Verbosité de l'affichage de Kubectl et débogage
+
+La verbosité de Kubectl est contrôlée par une des options `-v` ou `--v` suivie d'un entier représentant le niveau de log. Les conventions générales de logging de Kubernetes et les niveaux de log associés sont décrits [ici](https://github.com/kubernetes/community/blob/master/contributors/devel/logging.md).
+
+| Verbosité | Description |
+|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `--v=0` | Le minimum qui doit TOUJOURS être affiché à un opérateur. |
+| `--v=1` | Un niveau de log par défaut raisonnable si vous n'avez pas besoin de verbosité. |
+| `--v=2` | Informations utiles sur l'état stable du service et messages de logs importants qui peuvent être corrélés à des changements significatifs dans le système. C'est le niveau de log par défaut recommandé pour la plupart des systèmes. |
+| `--v=3` | Informations étendues sur les changements. |
+| `--v=4` | Verbosité de Debug. |
+| `--v=6` | Affiche les ressources requêtées. |
+| `--v=7` | Affiche les entêtes des requêtes HTTP. |
+| `--v=8` | Affiche les contenus des requêtes HTTP. |
+| `--v=9` | Affiche les contenus des requêtes HTTP sans les tronquer. |
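+
+Par exemple, pour observer le contenu des requêtes HTTP envoyées à l'API server par une commande :
+
+```bash
+kubectl get pods --v=8   # Exécute la commande avec un niveau de verbosité élevé
+```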
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* En savoir plus sur l'[Aperçu de kubectl](/docs/reference/kubectl/overview/).
+
+* Voir les options [kubectl](/docs/reference/kubectl/kubectl/).
+
+* Voir aussi les [Conventions d'usage de kubectl](/docs/reference/kubectl/conventions/) pour comprendre comment l'utiliser dans des scripts réutilisables.
+
+* Voir plus d'[aides-mémoire kubectl](https://github.com/dennyzhang/cheatsheet-kubernetes-A4).
+
+{{% /capture %}}
diff --git a/content/fr/docs/setup/_index.md b/content/fr/docs/setup/_index.md
new file mode 100644
index 0000000000000..29851b225b97a
--- /dev/null
+++ b/content/fr/docs/setup/_index.md
@@ -0,0 +1,94 @@
+---
+reviewers:
+- sieben
+- perriea
+- lledru
+- awkif
+- yastij
+no_issue: true
+title: Setup
+main_menu: true
+weight: 30
+content_template: templates/concept
+---
+{{% capture overview %}}
+
+Utilisez cette page pour trouver le type de solution qui correspond le mieux à vos besoins.
+
+Le choix de l'emplacement de Kubernetes dépend des ressources dont vous disposez
+et de la flexibilité dont vous avez besoin. Vous pouvez exécuter Kubernetes presque partout,
+de votre ordinateur portable aux machines virtuelles d'un fournisseur de cloud jusqu'à un rack de serveurs en bare metal.
+Vous pouvez également mettre en place un cluster entièrement géré en exécutant une seule commande ou bien créer
+votre propre cluster personnalisé sur vos serveurs bare-metal.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Solutions locales
+
+La solution locale, installée sur votre machine, est un moyen facile de démarrer avec Kubernetes. Vous
+pouvez créer et tester des clusters Kubernetes sans vous soucier de la consommation
+des ressources et des quotas d'un cloud.
+
+Vous devriez choisir une solution locale si vous souhaitez :
+
+* Essayer ou commencer à apprendre Kubernetes
+* Développer et réaliser des tests sur des clusters locaux
+
+Choisissez une [solution locale](/docs/setup/pick-right-solution/#local-machine-solutions).
+
+## Solutions hébergées
+
+Les solutions hébergées sont un moyen pratique de créer et de maintenir des clusters Kubernetes. Elles
+permettent de gérer et d'exploiter vos clusters pour que vous n'ayez pas à le faire.
+
+Vous devriez choisir une solution hébergée si vous :
+
+* Voulez une solution entièrement gérée
+* Voulez vous concentrer sur le développement de vos applications ou services
+* N'avez pas d'équipe de Site Reliability Engineering (SRE) dédiée, mais que vous souhaitez une haute disponibilité.
+* Vous n'avez pas les ressources pour héberger et surveiller vos clusters
+
+Choisissez une [solution hébergée](/docs/setup/pick-right-solution/#hosted-solutions).
+
+## Solutions cloud clés en main
+
+Ces solutions vous permettent de créer des clusters Kubernetes avec seulement quelques commandes et
+sont activement développées et bénéficient du soutien actif de la communauté. Elles peuvent également être hébergées sur
+un ensemble de fournisseurs de Cloud de type IaaS, mais elles offrent plus de liberté et de flexibilité en contrepartie
+d'un effort plus important à fournir.
+
+Vous devriez choisir une solution cloud clés en main si vous :
+
+* Voulez plus de contrôle sur vos clusters que ne le permettent les solutions hébergées
+* Voulez réaliser vous-même un plus grand nombre d'opérations
+
+Choisissez une [solution clé en main](/docs/setup/pick-right-solution/#turnkey-cloud-solutions)
+
+## Solutions clés en main sur site
+
+Ces solutions vous permettent de créer des clusters Kubernetes sur votre cloud privé, interne et sécurisé,
+avec seulement quelques commandes.
+
+Vous devriez choisir une solution de cloud clé en main sur site si vous :
+
+* Souhaitez déployer des clusters sur votre cloud privé
+* Disposez d'une équipe SRE dédiée
+* Avez les ressources pour héberger et surveiller vos clusters
+
+Choisissez une [solution clé en main sur site](/docs/setup/pick-right-solution/#on-premises-turnkey-cloud-solutions).
+
+## Solutions personnalisées
+
+Les solutions personnalisées vous offrent le maximum de liberté sur vos clusters, mais elles nécessitent le plus
+d'expertise. Ces solutions vont du bare-metal aux fournisseurs de cloud sur
+différents systèmes d'exploitation.
+
+Choisissez une [solution personnalisée](/docs/setup/pick-right-solution/#custom-solutions).
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+Allez à [Choisir la bonne solution](/docs/setup/pick-right-solution/) pour une liste complète de solutions.
+{{% /capture %}}
diff --git a/content/fr/docs/setup/custom-cloud/_index.md b/content/fr/docs/setup/custom-cloud/_index.md
new file mode 100644
index 0000000000000..7f8dd95d2cc9a
--- /dev/null
+++ b/content/fr/docs/setup/custom-cloud/_index.md
@@ -0,0 +1,4 @@
+---
+title: Solutions Cloud personnalisées
+weight: 50
+---
diff --git a/content/fr/docs/setup/custom-cloud/coreos.md b/content/fr/docs/setup/custom-cloud/coreos.md
new file mode 100644
index 0000000000000..e172e8888bac6
--- /dev/null
+++ b/content/fr/docs/setup/custom-cloud/coreos.md
@@ -0,0 +1,96 @@
+---
+title: CoreOS sur AWS ou GCE
+reviewers:
+- sieben
+- perriea
+- lledru
+- awkif
+- yastij
+- rbenzair
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Il existe plusieurs guides permettant d'utiliser Kubernetes avec [CoreOS](https://coreos.com/kubernetes/docs/latest/).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Guides officiels CoreOS
+
+Ces guides sont maintenus par CoreOS et déploient Kubernetes à la "façon CoreOS" avec du TLS, le composant complémentaire pour le DNS interne, etc. Ces guides passent les tests de conformité Kubernetes et nous vous recommandons de [les tester vous-même](https://coreos.com/kubernetes/docs/latest/conformance-tests.html).
+
+
+* [**Multi-noeuds sur AWS**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html)
+
+ Guide et outil en ligne de commande pour créer un cluster multi-noeuds sur AWS.
+  CloudFormation est utilisé pour créer un noeud maître ("master") et plusieurs noeuds de type "worker".
+
+* [**Multi-noeuds sur serveurs physiques (Bare Metal)**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-baremetal.html#automated-provisioning)
+
+ Guide et service HTTP / API pour l'initialisation et l'installation d’un cluster à plusieurs nœuds bare metal à partir d'un PXE.
+ [Ignition](https://coreos.com/ignition/docs/latest/) est utilisé pour provisionner un cluster composé d'un master et de plusieurs workers lors du démarrage initial des serveurs.
+
+* [**Multi-noeuds sur Vagrant**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html)
+
+ Guide pour l'installation d'un cluster multi-noeuds sur Vagrant.
+  L'outil de déploiement permet de configurer indépendamment le nombre de noeuds etcd, masters et workers afin d'obtenir un control plane en haute disponibilité.
+
+* [**Noeud unique sur Vagrant**](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
+
+ C'est la façon la plus rapide d'installer un environnement de développement local Kubernetes.
+ Il suffit simplement de `git clone`, `vagrant up` puis configurer `kubectl`.
+
+
+* [**Guide complet pas à pas**](https://coreos.com/kubernetes/docs/latest/getting-started.html)
+
+  Un guide générique qui permet de déployer un cluster en haute disponibilité (avec du TLS) sur n'importe quel cloud ou sur du bare metal.
+  Répétez les étapes pour obtenir plus de noeuds master ou worker.
+
+## Guides de la communauté
+
+Ces guides sont maintenus par des membres de la communauté et couvrent des besoins et cas d'usages spécifiques. Ils proposent différentes manières de configurer Kubernetes sur CoreOS.
+
+* [**Cluster multi-noeuds facile sur Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
+
+ Installation scriptée d'un master unique et de plusieurs workers sur GCE.
+ Les composants Kubernetes sont gérés par [fleet](https://github.com/coreos/fleet)
+
+* [**Cluster multi-noeuds en utilisant cloud-config et Weave sur Vagrant**](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
+
+ Configure un cluster de 3 machines sur Vagrant, le réseau du cluster étant fourni par Weave.
+
+* [**Cluster multi-noeuds en utilisant cloud-config et Vagrant**](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
+
+ Configure un cluster local composé d'un master et de plusieurs workers sur l'hyperviseur de votre choix: VirtualBox, Parallels, ou VMware
+
+* [**Cluster d'un seul noeud en utilisant une application macOS**](https://github.com/rimusz/kube-solo-osx/blob/master/README.md)
+
+ Guide permettant d'obtenir un cluster d'un seul noeud faisant office de master et worker et contrôlé par une application macOS menubar.
+ (basé sur xhyve et CoreOS)
+
+* [**Cluster multi-noeuds avec Vagrant et fleet en utilisant une petite application macOS**](https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster/blob/master/README.md)
+
+ Guide permettant d'obtenir un cluster composé d'un master, de plusieurs workers contrôlés par une application macOS menubar.
+
+* [**Multi-node cluster using cloud-config, CoreOS and VMware ESXi**](https://github.com/xavierbaude/VMware-coreos-multi-nodes-Kubernetes)
+
+ Configure un cluster composé d'un master et de plusieurs workers sur VMWare ESXi
+
+* [**Cluster Unique/Multi noeuds en utilisant cloud-config, CoreOS et Foreman**](https://github.com/johscheuer/theforeman-coreos-kubernetes)
+
+ Configure un cluster composé d'un ou de plusieurs noeuds avec [Foreman](https://theforeman.org).
+
+## Niveau de support
+
+
+| IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level |
+| ------------- | ------------ | ------ | ---------- | ------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------ |
+| GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires)) |
+| Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) |
+
+Pour le niveau de support de toutes les solutions se référer au [Tableau des solutions](/docs/getting-started-guides/#table-of-solutions).
+
+{{% /capture %}}
diff --git a/content/fr/docs/setup/custom-cloud/kubespray.md b/content/fr/docs/setup/custom-cloud/kubespray.md
new file mode 100644
index 0000000000000..2cbc8cbf79640
--- /dev/null
+++ b/content/fr/docs/setup/custom-cloud/kubespray.md
@@ -0,0 +1,124 @@
+---
+title: Installer Kubernetes avec Kubespray (on-premises et fournisseurs de cloud)
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Cette documentation permet d'installer rapidement un cluster Kubernetes hébergé sur GCE, Azure, Openstack, AWS, vSphere, Oracle Cloud Infrastructure (expérimental) ou sur des serveurs physiques (bare metal) grâce à [Kubespray](https://github.com/kubernetes-incubator/kubespray).
+
+Kubespray se base sur des outils de provisioning, des [paramètres](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/ansible.md) et playbooks [Ansible](http://docs.ansible.com/) ainsi que sur des connaissances spécifiques à Kubernetes et l'installation de systèmes d'exploitation afin de fournir:
+
+* Un cluster en haute disponibilité
+* des composants modulables
+* Le support des principales distributions Linux:
+ * Container Linux de CoreOS
+ * Debian Jessie, Stretch, Wheezy
+ * Ubuntu 16.04, 18.04
+ * CentOS/RHEL 7
+ * Fedora/CentOS Atomic
+ * openSUSE Leap 42.3/Tumbleweed
+* des tests d'intégration continue
+
+Afin de choisir l'outil le mieux adapté à votre besoin, veuillez lire [cette comparaison](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/comparisons.md) avec [kubeadm](/docs/admin/kubeadm/) et [kops](../kops).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Créer un cluster
+
+### (1/5) Prérequis
+
+Les serveurs doivent être installés en s'assurant des éléments suivants:
+
+* **Ansible v2.6 (ou version plus récente) et python-netaddr installés sur la machine qui exécutera les commandes Ansible**
+* **Jinja 2.9 (ou version plus récente) est nécessaire pour exécuter les playbooks Ansible**
+* Les serveurs cibles doivent avoir **accès à Internet** afin de télécharger les images Docker. Autrement, une configuration supplémentaire est nécessaire, (se référer à [Offline Environment](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md#offline-environment))
+* Les serveurs cibles doivent être configurés afin d'autoriser le transfert IPv4 (**IPv4 forwarding**)
+* **Votre clé ssh doit être copiée** sur tous les serveurs faisant partie de votre inventaire Ansible.
+* La configuration du **pare-feu n'est pas gérée**. Vous devrez vous en charger en utilisant votre méthode habituelle. Afin d'éviter tout problème pendant l'installation, nous vous conseillons de le désactiver.
+* Si Kubespray est exécuté avec un utilisateur autre que "root", une méthode d'autorisation appropriée devra être configurée sur les serveurs cibles (exemple : sudo). Il faudra aussi utiliser le paramètre `ansible_become` ou ajouter `--become` ou `-b` à la ligne de commande.
+
+Afin de vous aider à préparer votre environnement, Kubespray fournit les outils suivants:
+
+* Scripts [Terraform](https://www.terraform.io/) pour les fournisseurs de cloud suivants:
+ * [AWS](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws)
+ * [OpenStack](https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/openstack)
+
+### (2/5) Construire un fichier d'inventaire Ansible
+
+Lorsque vos serveurs sont disponibles, créez un fichier d'inventaire Ansible ([inventory](http://docs.ansible.com/ansible/intro_inventory.html)).
+Vous pouvez le créer manuellement ou en utilisant un script d'inventaire dynamique. Pour plus d'informations se référer à [Building your own inventory](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#building-your-own-inventory).
+
+### (3/5) Préparation au déploiement de votre cluster
+
+Kubespray permet de personnaliser de nombreux éléments:
+
+* Choix du mode: kubeadm ou non-kubeadm
+* Plugins CNI (réseau)
+* Configuration du DNS
+* Choix du control plane: natif/binaire ou dans un conteneur docker/rkt
+* Version de chaque composant
+* "route reflectors" Calico
+* Moteur de conteneur
+ * docker
+ * rkt
+ * cri-o
+* Méthode de génération des certificats (**Vault n'étant plus maintenu**)
+
+Ces paramètres Kubespray peuvent être définis dans un fichier de [variables](http://docs.ansible.com/ansible/playbooks_variables.html).
+Si vous venez juste de commencer à utiliser Kubespray, nous vous recommandons d'utiliser les paramètres par défaut pour déployer votre cluster et découvrir Kubernetes.
+
+### (4/5) Déployer un Cluster
+
+Vous pouvez ensuite lancer le déploiement de votre cluster:
+
+Déploiement du cluster en utilisant l'outil en ligne de commande [ansible-playbook](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#starting-custom-deployment).
+
+```shell
+ansible-playbook -i your/inventory/hosts.ini cluster.yml -b -v \
+ --private-key=~/.ssh/private_key
+```
+
+Pour des déploiements plus importants (>100 noeuds) quelques [ajustements](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/large-deployments.md) peuvent être nécessaires afin d'obtenir de meilleurs résultats.
+
+### (5/5) Vérifier le déploiement
+
+Kubespray fournit le moyen de vérifier la connectivité inter-pods ainsi que la résolution DNS grâce à [Netchecker](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md).
+Les pods netchecker-agents s'assurent que la résolution DNS (services Kubernetes) ainsi que le ping entre les pods fonctionnent correctement.
+Ces pods reproduisent un comportement similaire à celui des autres applications et offrent un indicateur de santé du cluster.
+
+## Opérations sur le cluster
+
+Kubespray fournit des playbooks supplémentaires qui permettent de gérer votre cluster: _scale_ et _upgrade_.
+
+### Mise à l'échelle du cluster
+
+Vous pouvez ajouter des noeuds à votre cluster en exécutant le playbook `scale`. Pour plus d'informations se référer à [Adding nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#adding-nodes).
+Vous pouvez retirer des noeuds de votre cluster en exécutant le playbook `remove-node`. Se référer à [Remove nodes](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/getting-started.md#remove-nodes).
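+
+À titre d'exemple, sur le même modèle que le déploiement ci-dessus (inventaire et clé privée à adapter à votre environnement) :
+
+```shell
+ansible-playbook -i your/inventory/hosts.ini scale.yml -b -v \
+  --private-key=~/.ssh/private_key
+```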
+
+### Mise à jour du cluster
+
+Vous pouvez mettre à jour votre cluster en exécutant le playbook `upgrade-cluster`. Pour plus d'informations se référer à [Upgrades](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/upgrades.md).
+
+## Nettoyage
+
+Vous pouvez réinitialiser vos noeuds et supprimer tous les composants installés par Kubespray en utilisant le playbook [reset](https://github.com/kubernetes-incubator/kubespray/blob/master/reset.yml).
+
+{{< caution >}}
+Quand vous utilisez le playbook `reset`, assurez-vous de ne pas cibler accidentellement un cluster de production !
+{{< /caution >}}
+
+## Retours
+
+* Channel Slack: [#kubespray](https://kubernetes.slack.com/messages/kubespray/)
+* [Issues GitHub](https://github.com/kubernetes-incubator/kubespray/issues)
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+Jetez un oeil aux travaux prévus sur Kubespray: [roadmap](https://github.com/kubernetes-incubator/kubespray/blob/master/docs/roadmap.md).
+
+{{% /capture %}}
diff --git a/content/fr/docs/setup/pick-right-solution.md b/content/fr/docs/setup/pick-right-solution.md
new file mode 100644
index 0000000000000..bea5f70562935
--- /dev/null
+++ b/content/fr/docs/setup/pick-right-solution.md
@@ -0,0 +1,302 @@
+---
+reviewers:
+- yastij
+title: Choisir la bonne solution
+weight: 10
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+Kubernetes peut fonctionner sur des plateformes variées: sur votre PC portable, sur des VMs d'un fournisseur de cloud, ou un rack
+de serveurs bare-metal. L'effort demandé pour configurer un cluster varie de l'exécution d'une simple commande à la création
+de votre propre cluster personnalisé. Utilisez ce guide pour choisir la solution qui correspond le mieux à vos besoins.
+
+Si vous voulez simplement jeter un coup d'oeil rapide, utilisez alors de préférence les [solutions locales basées sur Docker](#local-machine-solutions).
+
+Lorsque vous êtes prêts à augmenter le nombre de machines et souhaitez bénéficier de la haute disponibilité, une
+[solution hébergée](#hosted-solutions) est la plus simple à déployer et à maintenir.
+
+[Les solutions cloud clés en main](#turnkey-cloud-solutions) ne demandent que peu de commandes pour déployer et couvrent un large panel de
+fournisseurs de cloud. [Les solutions clés en main pour cloud privé](#on-premises-turnkey-cloud-solutions) possèdent la simplicité des solutions cloud clés en main combinée avec la sécurité de votre propre réseau privé.
+
+Si vous avez déjà un moyen de configurer vos ressources, utilisez [kubeadm](/docs/setup/independent/create-cluster-kubeadm/) pour facilement
+déployer un cluster grâce à une seule ligne de commande par machine.
+
+[Les solutions personnalisées](#custom-solutions) vont d'instructions pas à pas à des conseils relativement généraux pour déployer un
+cluster Kubernetes en partant du début.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Solutions locales
+
+* [Minikube](/docs/setup/minikube/) est une méthode pour créer un cluster Kubernetes local à noeud unique pour le développement et le test. L'installation est entièrement automatisée et ne nécessite pas de compte de fournisseur de cloud.
+
+* [Docker Desktop](https://www.docker.com/products/docker-desktop) est une
+application facile à installer pour votre environnement Mac ou Windows qui vous permet de
+commencer à coder et déployer votre code dans des conteneurs en quelques minutes sur un nœud unique Kubernetes.
+
+* [Minishift](https://docs.okd.io/latest/minishift/) installe la version communautaire de la plate-forme d'entreprise OpenShift
+de Kubernetes pour le développement local et les tests. Il offre une VM tout-en-un (`minishift start`) pour Windows, macOS et Linux,
+ le `oc cluster up` containerisé (Linux uniquement) et [est livré avec quelques Add Ons faciles à installer](https://github.com/minishift/minishift-addons/tree/master/add-ons).
+
+* [MicroK8s](https://microk8s.io/) fournit une commande unique d'installation de la dernière version de Kubernetes sur une machine locale
+pour le développement et les tests. L'installation est rapide (~30 sec) et supporte de nombreux plugins dont Istio avec une seule commande.
+
+* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private) peut utiliser VirtualBox sur votre machine
+pour déployer Kubernetes sur une ou plusieurs machines virtuelles afin de développer et réaliser des scénarios de test. Cette solution
+peut créer un cluster multi-nœuds complet.
+
+* [IBM Cloud Private-CE (Community Edition) sur Linux Containers](https://github.com/HSBawa/icp-ce-on-linux-containers) est un script IaC (Infrastructure as Code) basé sur Terraform/Packer/BASH pour créer un cluster LXD à sept nœuds (1 Boot, 1 Master, 1 Management, 1 Proxy et 3 Workers) sur une machine Linux.
+
+* [Kubeadm-dind](https://github.com/kubernetes-sigs/kubeadm-dind-cluster) est un cluster Kubernetes multi-nœuds (tandis que minikube est
+un nœud unique) qui ne nécessite qu'un docker-engine. Il utilise la technique du docker-in-docker pour déployer le cluster Kubernetes.
+
+* [Ubuntu sur LXD](/docs/getting-started-guides/ubuntu/local/) supporte un déploiement de 9 instances sur votre machine locale.
+
+## Solutions hébergées
+
+* [AppsCode.com](https://appscode.com/products/cloud-deployment/) fournit des clusters Kubernetes managés pour divers clouds publics, dont AWS et Google Cloud Platform.
+
+* [APPUiO](https://appuio.ch) propose une plate-forme de cloud public OpenShift, supportant n'importe quel workload Kubernetes. De plus, APPUiO propose des Clusters OpenShift privés et managés, fonctionnant sur n'importe quel cloud public ou privé.
+
+* [Amazon Elastic Container Service for Kubernetes](https://aws.amazon.com/eks/) offre un service managé de Kubernetes.
+
+* [Azure Kubernetes Service](https://azure.microsoft.com/services/container-service/) offre des clusters Kubernetes managés.
+
+* [Containership Kubernetes Engine (CKE)](https://containership.io/containership-platform) Approvisionnement et gestion intuitive de clusters
+ Kubernetes sur GCP, Azure, AWS, Packet, et DigitalOcean. Mises à niveau transparentes, auto-scaling, métriques, création de
+workloads, et plus encore.
+
+* [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/) offre un service managé de Kubernetes.
+
+* [Giant Swarm](https://giantswarm.io/product/) offre des clusters Kubernetes managés dans leur propre centre de données, on-premises ou sur des clouds public.
+
+* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offre des clusters Kubernetes managés.
+
+* [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) offre des clusters Kubernetes managés
+ avec choix d'isolation, des outils opérationnels, une vision intégrée de la sécurité des images et des conteneurs et une intégration avec Watson, IoT et les données.
+
+* [Kubermatic](https://www.loodse.com) fournit des clusters Kubernetes managés pour divers clouds publics, y compris AWS et Digital Ocean, ainsi que sur site avec intégration OpenStack.
+
+* [Kublr](https://kublr.com) offre des clusters Kubernetes sécurisés, évolutifs et hautement fiables sur AWS, Azure, GCP et on-premises,
+ de qualité professionnelle. Il inclut la sauvegarde et la reprise après sinistre prêtes à l'emploi, la journalisation et la surveillance centralisées multi-clusters, ainsi qu'une fonction d'alerte intégrée.
+
+* [Madcore.Ai](https://madcore.ai) est un outil CLI orienté développement pour déployer l'infrastructure Kubernetes dans AWS. Il déploie les masters, un groupe d'autoscaling pour les workers sur des instances spot, ingress-ssl-lego, Heapster et Grafana.
+
+* [Nutanix Karbon](https://www.nutanix.com/products/karbon/) est une plateforme de gestion et d'exploitation Kubernetes multi-clusters hautement disponibles qui simplifie l'approvisionnement, les opérations et la gestion du cycle de vie de Kubernetes.
+
+* [OpenShift Dedicated](https://www.openshift.com/dedicated/) offre des clusters Kubernetes gérés et optimisés par OpenShift.
+
+* [OpenShift Online](https://www.openshift.com/features/) fournit un accès hébergé gratuit aux applications Kubernetes.
+
+* [Oracle Container Engine for Kubernetes](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengoverview.htm) est un service entièrement géré, évolutif et hautement disponible que vous pouvez utiliser pour déployer vos applications conteneurisées dans le cloud.
+
+* [Platform9](https://platform9.com/products/kubernetes/) offre des Kubernetes gérés on-premises ou sur n'importe quel cloud public, et fournit une surveillance et des alertes de santé 24h/24 et 7j/7. (Kube2go, un service de déploiement de clusters Kubernetes piloté par une interface web et publié par Platform9, a été intégré à Platform9 Sandbox.)
+
+* [Stackpoint.io](https://stackpoint.io) fournit l'automatisation et la gestion de l'infrastructure Kubernetes pour plusieurs clouds publics.
+
+* [SysEleven MetaKube](https://www.syseleven.io/products-services/managed-kubernetes/) offre un Kubernetes-as-a-Service sur un cloud public OpenStack. Il inclut la gestion du cycle de vie, les tableaux de bord d'administration, la surveillance, la mise à l'échelle automatique et bien plus encore.
+
+* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) est une offre d'entreprise Kubernetes-as-a-Service faisant partie du catalogue de services Cloud VMware qui fournit des clusters Kubernetes faciles à utiliser, sécurisés par défaut, rentables et basés sur du SaaS.
+
+## Solutions clés en main
+
+Ces solutions vous permettent de créer des clusters Kubernetes sur une gamme de fournisseurs de Cloud IaaS avec seulement
+quelques commandes. Ces solutions sont activement développées et bénéficient du soutien actif de la communauté.
+
+* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
+* [Alibaba Cloud](/docs/setup/turnkey/alibaba-cloud/)
+* [APPUiO](https://appuio.ch)
+* [AWS](/docs/setup/turnkey/aws/)
+* [Azure](/docs/setup/turnkey/azure/)
+* [CenturyLink Cloud](/docs/setup/turnkey/clc/)
+* [Conjure-up Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud](/docs/getting-started-guides/ubuntu/)
+* [Containership](https://containership.io/containership-platform)
+* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
+* [Gardener](https://gardener.cloud/)
+* [Giant Swarm](https://giantswarm.io)
+* [Google Compute Engine (GCE)](/docs/setup/turnkey/gce/)
+* [IBM Cloud](https://github.com/patrocinio/kubernetes-softlayer)
+* [Kontena Pharos](https://kontena.io/pharos/)
+* [Kubermatic](https://cloud.kubermatic.io)
+* [Kublr](https://kublr.com/)
+* [Madcore.Ai](https://madcore.ai/)
+* [Nirmata](https://nirmata.com/)
+* [Nutanix Karbon](https://www.nutanix.com/products/karbon/)
+* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm)
+* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
+* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
+* [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
+* [Tectonic by CoreOS](https://coreos.com/tectonic)
+* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks)
+
+## Solutions On-Premises clés en main
+
+Ces solutions vous permettent de créer des clusters Kubernetes sur votre cloud privé sécurisé avec seulement quelques commandes.
+
+* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
+* [APPUiO](https://appuio.ch)
+* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
+* [Giant Swarm](https://giantswarm.io)
+* [GKE On-Prem | Google Cloud](https://cloud.google.com/gke-on-prem/)
+* [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/)
+* [Kontena Pharos](https://kontena.io/pharos/)
+* [Kubermatic](https://www.loodse.com)
+* [Kublr](https://kublr.com/)
+* [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/)
+* [Nirmata](https://nirmata.com/)
+* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) by [Red Hat](https://www.redhat.com)
+* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
+* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
+* [SUSE CaaS Platform](https://www.suse.com/products/caas-platform)
+* [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/)
+
+## Solutions personnalisées
+
+Kubernetes peut fonctionner sur une large gamme de fournisseurs de Cloud et d'environnements bare-metal, ainsi qu'avec de nombreux
+systèmes d'exploitation.
+
+Si vous trouvez ci-dessous un guide qui correspond à vos besoins, utilisez-le. Il est peut-être un peu daté, mais
+ce sera plus facile que de partir de zéro. Si vous voulez repartir de zéro, soit parce que vous avez des exigences particulières,
+soit simplement parce que vous voulez comprendre ce qu'il y a à l'intérieur de Kubernetes,
+essayez le guide [Getting Started from Scratch](/docs/setup/scratch/).
+
+### Universel
+
+Si vous avez déjà un moyen de configurer les ressources d'hébergement, utilisez
+[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) pour déployer facilement un cluster
+avec une seule commande par machine.
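+
+Par exemple (esquisse donnée à titre purement indicatif ; l'adresse, le jeton et le hash ci-dessous sont des valeurs fictives, et les options exactes dépendent de votre version de kubeadm) :
+
+```shell
+# Sur le nœud maître : initialiser le plan de contrôle
+kubeadm init
+
+# Sur chaque nœud worker : rejoindre le cluster avec la commande affichée par `kubeadm init`
+kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
+```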
+
+### Cloud
+
+Ces solutions sont des combinaisons de fournisseurs de cloud computing et de systèmes d'exploitation qui ne sont pas couverts par les solutions ci-dessus.
+
+* [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/)
+* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/)
+* [Gardener](https://gardener.cloud/)
+* [Kublr](https://kublr.com/)
+* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
+* [Kubespray](/docs/setup/custom-cloud/kubespray/)
+* [Rancher Kubernetes Engine (RKE)](https://github.com/rancher/rke)
+
+### VMs On-Premises
+
+* [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/)
+* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (uses Ansible, CoreOS and flannel)
+* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (uses Fedora and flannel)
+* [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization/)
+* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com)
+* [oVirt](/docs/setup/on-premises-vm/ovirt/)
+* [Vagrant](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
+* [VMware](/docs/setup/custom-cloud/coreos/) (uses CoreOS and flannel)
+* [VMware vSphere](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/)
+* [VMware vSphere, OpenStack, or Bare Metal](/docs/getting-started-guides/ubuntu/) (uses Juju, Ubuntu and flannel)
+
+### Bare Metal
+
+* [CoreOS](/docs/setup/custom-cloud/coreos/)
+* [Digital Rebar](/docs/setup/on-premises-metal/krib/)
+* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
+* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
+* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
+* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
+* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com)
+
+### Integrations
+
+Ces solutions fournissent une intégration avec des orchestrateurs, des gestionnaires de ressources ou des plateformes tierces.
+
+* [DCOS](/docs/setup/on-premises-vm/dcos/)
+ * Community Edition DCOS utilise AWS
+ * Enterprise Edition DCOS supporte l'hébergement cloud, les VMs on-premises, et le bare-metal
+
+## Tableau des Solutions
+
+Ci-dessous vous trouverez un tableau récapitulatif de toutes les solutions listées précédemment.
+
+| Fournisseur de IaaS | Config. Mgmt. | OS | Réseau | Docs | Niveau de support |
+|------------------------------------------------|------------------------------------------------------------------------------|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| tous | tous | multi-support | tous les CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle)) |
+| Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial |
+| Docker Enterprise | personnalisé | [multi-support](https://success.docker.com/article/compatibility-matrix) | [multi-support](https://docs.docker.com/ee/ucp/kubernetes/install-cni-plugin/) | [docs](https://docs.docker.com/ee/) | Commercial |
+| IBM Cloud Private | Ansible | multi-support | multi-support | [docs](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html) | [Commercial](https://www.ibm.com/mysupport/s/topic/0TO500000001o0fGAA/ibm-cloud-private?language=en_US&productId=01t50000004X1PWAA0) and [Community](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/troubleshoot/support_types.html) |
+| Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | [multi-support](https://docs.openshift.com/container-platform/3.11/architecture/networking/network_plugins.html) | [docs](https://docs.openshift.com/container-platform/3.11/welcome/index.html) | Commercial |
+| Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial |
+| AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial |
+| Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madcore.ai) | Community ([@madcore-ai](https://github.com/madcore-ai)) |
+| Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial |
+| Kublr | personnalisé | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial |
+| Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial |
+| IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial |
+| Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial |
+| GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project |
+| Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial |
+| Azure (IaaS) | | Ubuntu | Azure | [docs](/docs/setup/turnkey/azure/) | [Community (Microsoft)](https://github.com/Azure/acs-engine) |
+| Bare-metal | personnalisé | Fedora | _none_ | [docs](/docs/getting-started-guides/fedora/fedora_manual_config/) | Project |
+| Bare-metal | personnalisé | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) |
+| libvirt | personnalisé | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) |
+| KVM | personnalisé | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal)) |
+| DCOS | Marathon | CoreOS/Alpine | personnalisé | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md)) |
+| AWS | CoreOS | CoreOS | flannel | [docs](/docs/setup/turnkey/aws/) | Community |
+| GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires)) |
+| Vagrant | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles)) |
+| CloudStack | Ansible | CoreOS | flannel | [docs](/docs/getting-started-guides/cloudstack/) | Community ([@sebgoa](https://github.com/sebgoa)) |
+| VMware vSphere | tous | multi-support | multi-support | [docs](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/) | [Community](https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/contactus.html) |
+| Bare-metal | personnalisé | CentOS | flannel | [docs](/docs/getting-started-guides/centos/centos_manual_config/) | Community ([@coolsvap](https://github.com/coolsvap)) |
+| lxd | Juju | Ubuntu | flannel/canal | [docs](/docs/getting-started-guides/ubuntu/local/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| AWS | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| Azure | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| GCE | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| Oracle Cloud | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| Rackspace | personnalisé | CoreOS | flannel/calico/canal | [docs](https://developer.rackspace.com/docs/rkaas/latest/) | [Commercial](https://www.rackspace.com/managed-kubernetes) |
+| VMware vSphere | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| Bare Metal | Juju | Ubuntu | flannel/calico/canal | [docs](/docs/getting-started-guides/ubuntu/) | [Commercial](https://www.ubuntu.com/kubernetes) and [Community](https://jujucharms.com/kubernetes) |
+| AWS | Saltstack | Debian | AWS | [docs](/docs/setup/turnkey/aws/) | Community ([@justinsb](https://github.com/justinsb)) |
+| AWS | kops | Debian | AWS | [docs](https://github.com/kubernetes/kops/) | Community ([@justinsb](https://github.com/justinsb)) |
+| Bare-metal | personnalisé | Ubuntu | flannel | [docs](/docs/getting-started-guides/ubuntu/) | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY)) |
+| oVirt | | | | [docs](/docs/setup/on-premises-vm/ovirt/) | Community ([@simon3z](https://github.com/simon3z)) |
+| tous | tous | tous | tous | [docs](/docs/setup/scratch/) | Community ([@erictune](https://github.com/erictune)) |
+| tous | tous | tous | tous | [docs](http://docs.projectcalico.org/v2.2/getting-started/kubernetes/installation/) | Commercial and Community |
+| tous | RKE | multi-support | flannel or canal | [docs](https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/) | [Commercial](https://rancher.com/what-is-rancher/overview/) and [Community](https://github.com/rancher/rancher) |
+| tous | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/) |
+| Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial |
+| Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial |
+| IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial |
+| Digital Rebar | kubeadm | tous | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar)) |
+| VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial |
+| Mirantis Cloud Platform | Salt | Ubuntu | multi-support | [docs](https://docs.mirantis.com/mcp/) | Commercial |
+
+{{< note >}}
+Le tableau ci-dessus est ordonné par versions testées et utilisées dans les nœuds, suivies de leur niveau de support.
+{{< /note >}}
+
+### Définition des colonnes
+
+* **Fournisseur de IaaS** est le produit ou l'organisation qui fournit les machines virtuelles ou physiques (nœuds) sur lesquelles Kubernetes fonctionne.
+* **OS** est le système d'exploitation de base des nœuds.
+* **Config. Mgmt.** est le système de gestion de configuration qui permet d'installer et de maintenir Kubernetes sur les
+  nœuds.
+* **Réseau** est ce qui implémente le [modèle de réseau](/docs/concepts/cluster-administration/networking/). Les solutions dont le type de réseau est
+  _aucun_ ne prennent en charge qu'un seul nœud, ou éventuellement plusieurs nœuds VM sur un seul nœud physique.
+* **Conformité** indique si un cluster créé avec cette configuration a passé les tests de conformité du projet
+  pour le support de l'API et des fonctionnalités de base de Kubernetes v1.0.0.
+* **Niveau de support**
+  * **Projet** : Les contributeurs de Kubernetes utilisent régulièrement cette configuration, donc elle fonctionne généralement avec la dernière version
+    de Kubernetes.
+  * **Commercial** : Une offre commerciale avec son propre dispositif d'accompagnement.
+  * **Communauté** : Soutenu activement par les contributions de la communauté. Peut ne pas fonctionner avec les versions récentes de Kubernetes.
+  * **Inactif** : Pas de maintenance active. Déconseillé aux nouveaux utilisateurs de Kubernetes et peut être retiré.
+* **Note** contient d'autres informations pertinentes, telles que la version de Kubernetes utilisée.
+
+
+
+[1]: https://gist.github.com/erictune/4cabc010906afbcc5061
+
+[2]: https://gist.github.com/derekwaynecarr/505e56036cdf010bf6b6
+
+[3]: https://gist.github.com/erictune/2f39b22f72565365e59b
+
+{{% /capture %}}
diff --git a/content/fr/docs/setup/release/_index.md b/content/fr/docs/setup/release/_index.md
new file mode 100755
index 0000000000000..4fd044a6ceb02
--- /dev/null
+++ b/content/fr/docs/setup/release/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Télécharger Kubernetes"
+weight: 20
+---
+
diff --git a/content/fr/docs/setup/release/building-from-source.md b/content/fr/docs/setup/release/building-from-source.md
new file mode 100644
index 0000000000000..88105ddc8c102
--- /dev/null
+++ b/content/fr/docs/setup/release/building-from-source.md
@@ -0,0 +1,29 @@
+---
+title: Construire une release
+content_template: templates/concept
+---
+{{% capture overview %}}
+Vous pouvez soit compiler une version à partir des sources, soit télécharger une version pré-compilée. Si vous ne
+prévoyez pas de développer Kubernetes, nous vous suggérons d'utiliser une version pré-compilée de la version actuelle,
+que l'on peut trouver dans les [Release Notes](/docs/setup/release/notes/).
+
+Le code source de Kubernetes peut être téléchargé sur le repo [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes).
+
+{{% /capture %}}
+
+{{% capture body %}}
+## Installer à partir des sources
+
+Si vous installez simplement une version à partir des sources, il n'est pas nécessaire de mettre en place un environnement golang complet car tous les builds se font dans un conteneur Docker.
+
+Construire une release est simple.
+
+```shell
+git clone https://github.com/kubernetes/kubernetes.git
+cd kubernetes
+make release
+```
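+
+Si vous voulez seulement un build rapide pour votre propre plate-forme (esquisse ; la cible `quick-release` existe dans le Makefile de kubernetes/kubernetes, mais son comportement exact peut varier selon la version) :
+
+```shell
+# Build plus rapide : uniquement pour l'architecture de l'hôte, sans lancer les tests
+make quick-release
+```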
+
+Pour plus de détails sur le processus de release, voir le répertoire [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) dans kubernetes/kubernetes.
+
+{{% /capture %}}
diff --git a/content/fr/docs/templates/feature-state-alpha.txt b/content/fr/docs/templates/feature-state-alpha.txt
new file mode 100644
index 0000000000000..3347e1656ee59
--- /dev/null
+++ b/content/fr/docs/templates/feature-state-alpha.txt
@@ -0,0 +1,7 @@
+Cette fonctionnalité est actuellement dans un état *alpha*, ce qui signifie :
+
+* Les noms de version contiennent: alpha (par ex. v1alpha1).
+* La fonctionnalité peut contenir des bugs. L'activation de cette fonctionnalité peut donc vous exposer aux effets de ces bugs. La fonctionnalité est désactivée par défaut.
+* La prise en charge de cette fonctionnalité peut être supprimée à tout moment sans préavis.
+* La rétro-compatibilité n'est pas assurée pour les prochaines versions.
+* Recommandé pour une utilisation uniquement dans les clusters de test de courte durée, en raison d'un risque accru de bugs et d'un manque de support à long terme.
diff --git a/content/fr/docs/templates/feature-state-beta.txt b/content/fr/docs/templates/feature-state-beta.txt
new file mode 100644
index 0000000000000..3f08b6e86965f
--- /dev/null
+++ b/content/fr/docs/templates/feature-state-beta.txt
@@ -0,0 +1,8 @@
+Cette fonctionnalité est actuellement dans un état *beta*, c'est-à-dire :
+
+* Les noms de version contiennent: beta (par ex. v2beta3).
+* Le code est bien testé. L'activation de cette fonctionnalité est considérée comme sûre. Elle est activée par défaut.
+* La prise en charge de l'ensemble de la fonctionnalité ne sera pas supprimée, bien que des détails puissent changer.
+* Le schéma et/ou la sémantique des objets peuvent changer de manière incompatible dans une version bêta ou stable ultérieure. Lorsque cela se produira, nous vous fournirons des instructions pour migrer vers la prochaine version. Cela peut nécessiter la suppression, l'édition et la recréation d'objets API. Le processus de changement de version peut nécessiter une certaine réflexion. Cela peut aussi nécessiter une interruption de service pour les applications qui s'appuient sur cette fonctionnalité.
+* Recommandé uniquement pour les utilisations non critiques en raison du risque de changements incompatibles dans les versions ultérieures. Si vous avez plusieurs clusters qui peuvent être mis à niveau indépendamment, vous pouvez assouplir cette restriction.
+* Donnez votre avis sur cette fonctionnalité bêta ! Une fois qu'elle aura quitté la version bêta, ce sera probablement moins évident pour nous d'apporter d'autres changements.
diff --git a/content/fr/docs/templates/feature-state-deprecated.txt b/content/fr/docs/templates/feature-state-deprecated.txt
new file mode 100644
index 0000000000000..bb48cc245035d
--- /dev/null
+++ b/content/fr/docs/templates/feature-state-deprecated.txt
@@ -0,0 +1 @@
+Cette fonctionnalité est *obsolète*. Pour plus d'informations sur cet état, voir la [Politique d'obsolescence de Kubernetes](/docs/reference/deprecation-policy/).
diff --git a/content/fr/docs/templates/feature-state-stable.txt b/content/fr/docs/templates/feature-state-stable.txt
new file mode 100644
index 0000000000000..e97e69712db57
--- /dev/null
+++ b/content/fr/docs/templates/feature-state-stable.txt
@@ -0,0 +1,4 @@
+Cette fonctionnalité est *stable*, ce qui signifie :
+
+* Le nom de version est vX où X est un entier.
+* Les versions stables des fonctionnalités seront maintenues pour de nombreuses versions ultérieures.
diff --git a/content/fr/docs/templates/index.md b/content/fr/docs/templates/index.md
new file mode 100644
index 0000000000000..9d7bccd143f5f
--- /dev/null
+++ b/content/fr/docs/templates/index.md
@@ -0,0 +1,13 @@
+---
+headless: true
+
+resources:
+- src: "*alpha*"
+ title: "alpha"
+- src: "*beta*"
+ title: "beta"
+- src: "*deprecated*"
+ title: "deprecated"
+- src: "*stable*"
+ title: "stable"
+---
diff --git a/content/fr/docs/tutorials/hello-minikube.md b/content/fr/docs/tutorials/hello-minikube.md
new file mode 100644
index 0000000000000..e8e7d46fbe24a
--- /dev/null
+++ b/content/fr/docs/tutorials/hello-minikube.md
@@ -0,0 +1,269 @@
+---
+title: Hello Minikube
+content_template: templates/tutorial
+weight: 5
+menu:
+  main:
+    title: "Démarrer"
+    weight: 10
+    post: >
+      Prêt à vous salir les mains ? Créez un cluster Kubernetes simple qui exécute "Hello World" pour Node.js.
+---
+
+{{% capture overview %}}
+
+Ce tutoriel vous montre comment exécuter une simple application Hello World Node.js sur Kubernetes en utilisant [Minikube](/docs/getting-started-guides/minikube) et Katacoda.
+Katacoda fournit un environnement Kubernetes gratuit dans le navigateur.
+
+{{< note >}}
+Vous pouvez également suivre ce tutoriel si vous avez installé [Minikube localement](/docs/tasks/tools/install-minikube/).
+{{< /note >}}
+
+{{% /capture %}}
+
+{{% capture objectives %}}
+
+* Déployez une application Hello World sur Minikube.
+* Lancez l'application.
+* Afficher les journaux des applications.
+
+{{% /capture %}}
+
+{{% capture prerequisites %}}
+
+Ce tutoriel fournit une image de conteneur construite à partir des fichiers suivants :
+
+{{< codenew language="js" file="minikube/server.js" >}}
+
+{{< codenew language="conf" file="minikube/Dockerfile" >}}
+
+Pour plus d'informations sur la commande `docker build`, lisez la documentation de [Docker](https://docs.docker.com/engine/reference/commandline/build/).
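+
+À titre d'illustration (esquisse ; le tag `hello-node:v1` est un exemple hypothétique), vous pourriez construire cette image localement ainsi :
+
+```shell
+# Depuis le répertoire contenant server.js et le Dockerfile
+docker build -t hello-node:v1 .
+```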
+
+{{% /capture %}}
+
+{{% capture lessoncontent %}}
+
+## Créer un cluster Minikube
+
+1. Cliquez sur **Lancer le terminal**.
+
+ {{< kat-button >}}
+
+ {{< note >}} Si vous avez installé Minikube localement, lancez `minikube start`. {{< /note >}}
+
+2. Ouvrez le tableau de bord Kubernetes dans un navigateur :
+
+ ```shell
+ minikube dashboard
+ ```
+
+3. Environnement Katacoda seulement : En haut du volet du terminal, cliquez sur le signe plus, puis cliquez sur **Sélectionner le port pour afficher sur l'hôte 1**.
+
+4. Environnement Katacoda seulement : Tapez `30000`, puis cliquez sur **Afficher le port**.
+
+## Créer un déploiement
+
+Un [*Pod*](/docs/concepts/workloads/pods/pod/) Kubernetes est un groupe d'un ou plusieurs conteneurs, liés entre eux à des fins d'administration et de mise en réseau.
+Dans ce tutoriel, le Pod n'a qu'un seul conteneur.
+Un [*Déploiement*](/docs/concepts/workloads/controllers/deployment/) Kubernetes vérifie l'état de santé de votre Pod et redémarre le conteneur du Pod s'il se termine.
+Les déploiements sont le moyen recommandé pour gérer la création et la mise à l'échelle des Pods.
+
+1. Utilisez la commande `kubectl create` pour créer un déploiement qui gère un Pod. Le
+Pod utilise un conteneur basé sur l'image Docker fournie.
+
+ ```shell
+ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
+ ```
+
+2. Affichez le déploiement :
+
+ ```shell
+   kubectl get deployments
+ ```
+
+ Sortie :
+
+ ```shell
+ NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
+ hello-node 1 1 1 1 1m
+ ```
+
+3. Voir le Pod :
+
+ ```shell
+ kubectl get pods
+ ```
+ Sortie :
+
+ ```shell
+ NAME READY STATUS RESTARTS AGE
+ hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
+ ```
+
+4. Afficher les événements du cluster :
+
+ ```shell
+ kubectl get events
+ ```
+
+5. Voir la configuration de `kubectl` :
+
+ ```shell
+ kubectl config view
+ ```
+
+   {{< note >}}Pour plus d'informations sur les commandes `kubectl`, voir la [vue d'ensemble de kubectl](/docs/user-guide/kubectl-overview/).{{< /note >}}
+
+## Créer un service
+
+Par défaut, le Pod n'est accessible que par son adresse IP interne au sein du
+cluster Kubernetes.
+Pour rendre le conteneur `hello-node` accessible de l'extérieur du réseau virtuel Kubernetes, vous devez exposer le Pod comme un [*Service*](/docs/concepts/services-networking/service/) Kubernetes.
+
+1. Exposez le Pod à l'Internet public en utilisant la commande `kubectl expose` :
+
+ ```shell
+ kubectl expose deployment hello-node --type=LoadBalancer --port=8080
+ ```
+
+ L'indicateur `--type=LoadBalancer` indique que vous voulez exposer votre Service
+ à l'extérieur du cluster.
+
+2. Affichez le Service que vous venez de créer :
+
+ ```shell
+ kubectl get services
+ ```
+
+ Sortie :
+
+ ```shell
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ hello-node LoadBalancer 10.108.144.78 8080:30369/TCP 21s
+ kubernetes ClusterIP 10.96.0.1 443/TCP 23m
+ ```
+
+ Sur les fournisseurs de cloud qui supportent les load balancers, une adresse IP externe serait fournie pour accéder au Service.
+ Sur Minikube, le type `LoadBalancer` rend le Service accessible via la commande `minikube service`.
+
+3. Exécutez la commande suivante :
+
+ ```shell
+   minikube service hello-node
+ ```
+
+4. Environnement Katacoda seulement : Cliquez sur le signe plus, puis cliquez sur **Sélectionner le port pour afficher sur l'hôte 1**.
+
+5. Environnement Katacoda seulement : Tapez `30369` (voir port en face de `8080` dans la sortie services), puis cliquez sur **Afficher le port**.
+
+ Cela ouvre une fenêtre de navigateur qui sert votre application et affiche le message `Hello World`.
+
+## Activer les addons
+
+Minikube dispose d'un ensemble d'addons intégrés qui peuvent être activés, désactivés et ouverts dans l'environnement Kubernetes local.
+
+1. Énumérer les addons actuellement pris en charge :
+
+ ```shell
+ minikube addons list
+ ```
+
+ Sortie:
+
+ ```
+ addon-manager: enabled
+ coredns: disabled
+ dashboard: enabled
+ default-storageclass: enabled
+ efk: disabled
+ freshpod: disabled
+ heapster: disabled
+ ingress: disabled
+ kube-dns: enabled
+ metrics-server: disabled
+ nvidia-driver-installer: disabled
+ nvidia-gpu-device-plugin: disabled
+ registry: disabled
+ registry-creds: disabled
+ storage-provisioner: enabled
+ ```
+
+2. Activez un addon, par exemple, `heapster` :
+
+ ```shell
+ minikube addons enable heapster
+ ```
+
+ Sortie :
+
+ ```shell
+ heapster was successfully enabled
+ ```
+
+3. Affichez le pod et le service que vous venez de créer :
+
+ ```shell
+ kubectl get pod,svc -n kube-system
+ ```
+
+ Sortie :
+
+ ```shell
+ NAME READY STATUS RESTARTS AGE
+ pod/heapster-9jttx 1/1 Running 0 26s
+ pod/influxdb-grafana-b29w8 2/2 Running 0 26s
+ pod/kube-addon-manager-minikube 1/1 Running 0 34m
+ pod/kube-dns-6dcb57bcc8-gv7mw 3/3 Running 0 34m
+ pod/kubernetes-dashboard-5498ccf677-cgspw 1/1 Running 0 34m
+ pod/storage-provisioner 1/1 Running 0 34m
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ service/heapster ClusterIP 10.96.241.45 80/TCP 26s
+ service/kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 34m
+ service/kubernetes-dashboard NodePort 10.109.29.1 80:30000/TCP 34m
+ service/monitoring-grafana NodePort 10.99.24.54 80:30002/TCP 26s
+ service/monitoring-influxdb ClusterIP 10.111.169.94 8083/TCP,8086/TCP 26s
+ ```
+
+4. Désactivez `heapster` :
+
+ ```shell
+ minikube addons disable heapster
+ ```
+
+ Sortie :
+
+ ```shell
+ heapster was successfully disabled
+ ```
+
+## Nettoyage
+
+Vous pouvez maintenant nettoyer les ressources que vous avez créées dans votre cluster :
+
+```shell
+kubectl delete service hello-node
+kubectl delete deployment hello-node
+```
+
+Si nécessaire, arrêtez la machine virtuelle Minikube (VM) :
+
+```shell
+minikube stop
+```
+
+Si nécessaire, supprimez la VM Minikube :
+
+```shell
+minikube delete
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* En savoir plus sur les [Déploiements](/docs/concepts/workloads/controllers/deployment/).
+* En savoir plus sur le [Déploiement d'applications](/docs/user-guide/deploying-applications/).
+* En savoir plus sur les [Services](/docs/concepts/services-networking/service/).
+
+{{% /capture %}}
diff --git a/content/fr/examples/application/deployment.yaml b/content/fr/examples/application/deployment.yaml
new file mode 100644
index 0000000000000..68ab8289b5a0f
--- /dev/null
+++ b/content/fr/examples/application/deployment.yaml
@@ -0,0 +1,19 @@
+apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+kind: Deployment
+metadata:
+ name: nginx-deployment
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+  replicas: 2 # tells deployment to run 2 pods matching the template
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:1.7.9
+ ports:
+ - containerPort: 80
diff --git a/content/fr/examples/application/guestbook/frontend-deployment.yaml b/content/fr/examples/application/guestbook/frontend-deployment.yaml
new file mode 100644
index 0000000000000..50d6e1f0d4819
--- /dev/null
+++ b/content/fr/examples/application/guestbook/frontend-deployment.yaml
@@ -0,0 +1,38 @@
+apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+kind: Deployment
+metadata:
+ name: frontend
+ labels:
+ app: guestbook
+spec:
+ selector:
+ matchLabels:
+ app: guestbook
+ tier: frontend
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: guestbook
+ tier: frontend
+ spec:
+ containers:
+ - name: php-redis
+ image: gcr.io/google-samples/gb-frontend:v4
+ resources:
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ env:
+ - name: GET_HOSTS_FROM
+ value: dns
+ # Using `GET_HOSTS_FROM=dns` requires your cluster to
+ # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
+ # service launched automatically. However, if the cluster you are using
+ # does not have a built-in DNS service, you can instead
+ # access an environment variable to find the master
+ # service's host. To do so, comment out the 'value: dns' line above, and
+ # uncomment the line below:
+ # value: env
+ ports:
+ - containerPort: 80
diff --git a/content/fr/examples/application/guestbook/frontend-service.yaml b/content/fr/examples/application/guestbook/frontend-service.yaml
new file mode 100644
index 0000000000000..dca33530c15ce
--- /dev/null
+++ b/content/fr/examples/application/guestbook/frontend-service.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: frontend
+ labels:
+ app: guestbook
+ tier: frontend
+spec:
+ # comment or delete the following line if you want to use a LoadBalancer
+ type: NodePort
+ # if your cluster supports it, uncomment the following to automatically create
+ # an external load-balanced IP for the frontend service.
+ # type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: guestbook
+ tier: frontend
diff --git a/content/fr/examples/application/guestbook/redis-master-deployment.yaml b/content/fr/examples/application/guestbook/redis-master-deployment.yaml
new file mode 100644
index 0000000000000..fc6f418c39ed1
--- /dev/null
+++ b/content/fr/examples/application/guestbook/redis-master-deployment.yaml
@@ -0,0 +1,29 @@
+apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+kind: Deployment
+metadata:
+ name: redis-master
+ labels:
+ app: redis
+spec:
+ selector:
+ matchLabels:
+ app: redis
+ role: master
+ tier: backend
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: redis
+ role: master
+ tier: backend
+ spec:
+ containers:
+ - name: master
+ image: k8s.gcr.io/redis:e2e # or just image: redis
+ resources:
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ ports:
+ - containerPort: 6379
diff --git a/content/fr/examples/application/guestbook/redis-master-service.yaml b/content/fr/examples/application/guestbook/redis-master-service.yaml
new file mode 100644
index 0000000000000..a484014f1fe3b
--- /dev/null
+++ b/content/fr/examples/application/guestbook/redis-master-service.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: redis-master
+ labels:
+ app: redis
+ role: master
+ tier: backend
+spec:
+ ports:
+ - port: 6379
+ targetPort: 6379
+ selector:
+ app: redis
+ role: master
+ tier: backend
diff --git a/content/fr/examples/application/guestbook/redis-slave-deployment.yaml b/content/fr/examples/application/guestbook/redis-slave-deployment.yaml
new file mode 100644
index 0000000000000..ec4e48bc211a0
--- /dev/null
+++ b/content/fr/examples/application/guestbook/redis-slave-deployment.yaml
@@ -0,0 +1,40 @@
+apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
+kind: Deployment
+metadata:
+ name: redis-slave
+ labels:
+ app: redis
+spec:
+ selector:
+ matchLabels:
+ app: redis
+ role: slave
+ tier: backend
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: redis
+ role: slave
+ tier: backend
+ spec:
+ containers:
+ - name: slave
+ image: gcr.io/google_samples/gb-redisslave:v1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ env:
+ - name: GET_HOSTS_FROM
+ value: dns
+ # Using `GET_HOSTS_FROM=dns` requires your cluster to
+ # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
+ # service launched automatically. However, if the cluster you are using
+ # does not have a built-in DNS service, you can instead
+ # access an environment variable to find the master
+ # service's host. To do so, comment out the 'value: dns' line above, and
+ # uncomment the line below:
+ # value: env
+ ports:
+ - containerPort: 6379
diff --git a/content/fr/examples/application/guestbook/redis-slave-service.yaml b/content/fr/examples/application/guestbook/redis-slave-service.yaml
new file mode 100644
index 0000000000000..238fd63fb6a29
--- /dev/null
+++ b/content/fr/examples/application/guestbook/redis-slave-service.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: redis-slave
+ labels:
+ app: redis
+ role: slave
+ tier: backend
+spec:
+ ports:
+ - port: 6379
+ selector:
+ app: redis
+ role: slave
+ tier: backend
diff --git a/content/fr/examples/application/hpa/php-apache.yaml b/content/fr/examples/application/hpa/php-apache.yaml
new file mode 100644
index 0000000000000..c73ae7d631b58
--- /dev/null
+++ b/content/fr/examples/application/hpa/php-apache.yaml
@@ -0,0 +1,13 @@
+apiVersion: autoscaling/v1
+kind: HorizontalPodAutoscaler
+metadata:
+ name: php-apache
+ namespace: default
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: php-apache
+ minReplicas: 1
+ maxReplicas: 10
+ targetCPUUtilizationPercentage: 50
diff --git a/content/fr/examples/minikube/Dockerfile b/content/fr/examples/minikube/Dockerfile
new file mode 100644
index 0000000000000..1fe745295a47f
--- /dev/null
+++ b/content/fr/examples/minikube/Dockerfile
@@ -0,0 +1,4 @@
+FROM node:6.14.2
+EXPOSE 8080
+COPY server.js .
+CMD node server.js
diff --git a/content/fr/examples/minikube/server.js b/content/fr/examples/minikube/server.js
new file mode 100644
index 0000000000000..76345a17d81db
--- /dev/null
+++ b/content/fr/examples/minikube/server.js
@@ -0,0 +1,9 @@
+var http = require('http');
+
+var handleRequest = function(request, response) {
+ console.log('Received request for URL: ' + request.url);
+ response.writeHead(200);
+ response.end('Hello World!');
+};
+var www = http.createServer(handleRequest);
+www.listen(8080);
diff --git a/content/fr/examples/pods/config/redis-config b/content/fr/examples/pods/config/redis-config
new file mode 100644
index 0000000000000..ead340713c830
--- /dev/null
+++ b/content/fr/examples/pods/config/redis-config
@@ -0,0 +1,2 @@
+maxmemory 2mb
+maxmemory-policy allkeys-lru
diff --git a/content/fr/examples/pods/config/redis-pod.yaml b/content/fr/examples/pods/config/redis-pod.yaml
new file mode 100644
index 0000000000000..259dbf853aa4c
--- /dev/null
+++ b/content/fr/examples/pods/config/redis-pod.yaml
@@ -0,0 +1,30 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: redis
+spec:
+ containers:
+ - name: redis
+ image: kubernetes/redis:v1
+ env:
+ - name: MASTER
+ value: "true"
+ ports:
+ - containerPort: 6379
+ resources:
+ limits:
+ cpu: "0.1"
+ volumeMounts:
+ - mountPath: /redis-master-data
+ name: data
+ - mountPath: /redis-master
+ name: config
+ volumes:
+ - name: data
+ emptyDir: {}
+ - name: config
+ configMap:
+ name: example-redis-config
+ items:
+ - key: redis-config
+ path: redis.conf
diff --git a/content/it/OWNERS b/content/it/OWNERS
new file mode 100644
index 0000000000000..1152b909db0b1
--- /dev/null
+++ b/content/it/OWNERS
@@ -0,0 +1,11 @@
+# This is the localization project for Italian.
+# Teams and members are visible at https://github.com/orgs/kubernetes/teams.
+
+reviewers:
+- sig-docs-it-reviews
+
+approvers:
+- sig-docs-it-owners
+
+labels:
+- language/it
\ No newline at end of file
diff --git a/content/it/_common-resources/images/blocks.png b/content/it/_common-resources/images/blocks.png
new file mode 100644
index 0000000000000..3bf60834212a4
Binary files /dev/null and b/content/it/_common-resources/images/blocks.png differ
diff --git a/content/it/_common-resources/images/flower.png b/content/it/_common-resources/images/flower.png
new file mode 100755
index 0000000000000..adc99f5df50b3
Binary files /dev/null and b/content/it/_common-resources/images/flower.png differ
diff --git a/content/it/_common-resources/images/kub_video_banner_homepage.jpg b/content/it/_common-resources/images/kub_video_banner_homepage.jpg
new file mode 100644
index 0000000000000..57582e493845f
Binary files /dev/null and b/content/it/_common-resources/images/kub_video_banner_homepage.jpg differ
diff --git a/content/it/_common-resources/images/scalable.png b/content/it/_common-resources/images/scalable.png
new file mode 100644
index 0000000000000..21bdb0393c8f1
Binary files /dev/null and b/content/it/_common-resources/images/scalable.png differ
diff --git a/content/it/_common-resources/images/suitcase.png b/content/it/_common-resources/images/suitcase.png
new file mode 100644
index 0000000000000..59e070f64aeb2
Binary files /dev/null and b/content/it/_common-resources/images/suitcase.png differ
diff --git a/content/it/_common-resources/index.md b/content/it/_common-resources/index.md
new file mode 100644
index 0000000000000..3d65eaa0ff97e
--- /dev/null
+++ b/content/it/_common-resources/index.md
@@ -0,0 +1,3 @@
+---
+headless: true
+---
\ No newline at end of file
diff --git a/content/it/_index.html b/content/it/_index.html
new file mode 100644
index 0000000000000..5d5e7fadf255c
--- /dev/null
+++ b/content/it/_index.html
@@ -0,0 +1,62 @@
+---
+title: "Container Orchestration a livello di produzione"
+abstract: "Implementazione, ridimensionamento e gestione automatizzata dei container "
+cid: home
+---
+
+{{< deprecationwarning >}}
+
+{{< blocks/section id="oceanNodes" >}}
+{{% blocks/feature image="flower" %}}
+### [Kubernetes (k8s)]({{< relref "/docs/concepts/overview/what-is-kubernetes" >}})
+
+è un sistema open source per automatizzare la distribuzione, il ridimensionamento e la gestione di applicazioni containerizzate.
+
+Raggruppa i contenitori che costituiscono un'applicazione in unità logiche per una più facile gestione. Kubernetes si basa [su 15 anni di esperienza di Google](http://queue.acm.org/detail.cfm?id=2898444), combinando le migliori idee e pratiche suggerite dalla comunità.
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="scalable" %}}
+#### Planet Scale
+
+Progettato secondo gli stessi principi che consentono a Google di gestire miliardi di container alla settimana, Kubernetes può scalare senza aumentare il tuo team operativo.
+
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="blocks" %}}
+#### Never Outgrow
+
+Che si tratti di eseguire test localmente o di gestire un'azienda globale, la flessibilità di Kubernetes cresce con te per fornire le tue applicazioni in modo coerente e semplice, indipendentemente dalla complessità delle tue esigenze.
+{{% /blocks/feature %}}
+
+{{% blocks/feature image="suitcase" %}}
+#### Run Anywhere
+
+Kubernetes è open source e ti offre la libertà di trarre vantaggio dall'infrastruttura cloud locale, ibrida o pubblica, consentendo di spostare facilmente i carichi di lavoro nel punto in cui è importante per te.
+{{% /blocks/feature %}}
+
+{{< /blocks/section >}}
+
+{{< blocks/section id="video" background-image="kub_video_banner_homepage" >}}
+
+
+Le sfide della migrazione di oltre 150 microservizi a Kubernetes
+
+Di Sarah Wells, Technical Director for Operations and Reliability, Financial Times
+
+Hai un interesse particolare nel modo in cui Kubernetes funziona con un'altra tecnologia? Consulta la nostra crescente
+lista dei SIG: da AWS e OpenStack a Big Data e Scalabilità, c'è un posto in cui puoi contribuire, con le istruzioni
+per creare un nuovo SIG di tuo interesse se non è (ancora) presente.
+
+Come membro della comunità di Kubernetes puoi partecipare a qualsiasi riunione SIG
+a cui sei interessato. Non è richiesta alcuna registrazione.
+
+Codice di condotta
+
+La comunità di Kubernetes apprezza il rispetto e l'inclusività, e
+applica un Codice di condotta in tutte
+le interazioni. Se noti una violazione del Codice di condotta durante
+un evento, in una riunione, su Slack o in qualsiasi altro canale di comunicazione,
+contatta il Comitato di Condotta di Kubernetes all'indirizzo
+conduct@kubernetes.io.
+Il tuo anonimato sarà protetto.
+
+Parla con noi!
+
+Ci piacerebbe sapere da te: come stai usando Kubernetes e cosa possiamo fare per renderlo migliore.
+{{< include "/static/cncf-code-of-conduct.md" >}}
+
+
diff --git a/content/it/community/static/README.md b/content/it/community/static/README.md
new file mode 100644
index 0000000000000..28cbb54eef430
--- /dev/null
+++ b/content/it/community/static/README.md
@@ -0,0 +1,2 @@
+I file in questa directory sono stati importati da altre fonti. Non
+modificarli direttamente, se non sostituendoli con nuove versioni.
\ No newline at end of file
diff --git a/content/it/community/static/cncf-code-of-conduct.md b/content/it/community/static/cncf-code-of-conduct.md
new file mode 100644
index 0000000000000..3ee025ba344fa
--- /dev/null
+++ b/content/it/community/static/cncf-code-of-conduct.md
@@ -0,0 +1,46 @@
+
+## CNCF Community Code of Conduct v1.0
+
+### Contributor Code of Conduct
+
+As contributors and maintainers of this project, and in the interest of fostering
+an open and welcoming community, we pledge to respect all people who contribute
+through reporting issues, posting feature requests, updating documentation,
+submitting pull requests or patches, and other activities.
+
+We are committed to making participation in this project a harassment-free experience for
+everyone, regardless of level of experience, gender, gender identity and expression,
+sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
+religion, or nationality.
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery
+* Personal attacks
+* Trolling or insulting/derogatory comments
+* Public or private harassment
+* Publishing other's private information, such as physical or electronic addresses,
+ without explicit permission
+* Other unethical or unprofessional conduct.
+
+Project maintainers have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are not
+aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
+commit themselves to fairly and consistently applying these principles to every aspect
+of managing this project. Project maintainers who do not follow or enforce the Code of
+Conduct may be permanently removed from the project team.
+
+This code of conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community.
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting
+the [Kubernetes Code of Conduct Committee](https://github.com/kubernetes/community/tree/master/committee-code-of-conduct) .
+
+This Code of Conduct is adapted from the Contributor Covenant
+(http://contributor-covenant.org), version 1.2.0, available at
+http://contributor-covenant.org/version/1/2/0/
+
+### CNCF Events Code of Conduct
+
+CNCF events are governed by the Linux Foundation [Code of Conduct](http://events.linuxfoundation.org/events/cloudnativecon/attend/code-of-conduct) available on the event page. This is designed to be compatible with the above policy and also includes more details on responding to incidents.
diff --git a/content/it/docs/concepts/overview/what-is-kubernetes.md b/content/it/docs/concepts/overview/what-is-kubernetes.md
new file mode 100644
index 0000000000000..a330045dc4433
--- /dev/null
+++ b/content/it/docs/concepts/overview/what-is-kubernetes.md
@@ -0,0 +1,157 @@
+---
+title: Che cos'è Kubernetes?
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+Questa pagina è una panoramica di Kubernetes.
+{{% /capture %}}
+
+{{% capture body %}}
+Kubernetes è una piattaforma open source portatile ed estensibile per la gestione
+di carichi di lavoro e servizi containerizzati, che facilita sia la configurazione
+dichiarativa che l'automazione. Ha un grande ecosistema in rapida crescita.
+I servizi, il supporto e gli strumenti di Kubernetes sono ampiamente disponibili.
+
+Google ha reso open source il progetto Kubernetes nel 2014. Kubernetes si basa su un
+[decennio e mezzo di esperienza che Google ha nell'esecuzione di carichi di lavoro di produzione su larga scala](https://research.google.com/pubs/pub43438.html),
+combinato con le migliori idee e pratiche della community.
+
+## Perché ho bisogno di Kubernetes e cosa può fare?
+
+
+Kubernetes ha differenti funzionalità. Può essere pensato come:
+
+- una piattaforma container
+- una piattaforma di microservizi
+- una piattaforma cloud portatile
+e molto altro.
+
+Kubernetes fornisce un ambiente di gestione **incentrato sui contenitori**.
+Organizza l'infrastruttura di elaborazione, di rete e di archiviazione per
+conto dei carichi di lavoro degli utenti.
+Ciò fornisce gran parte della semplicità di Platform as a Service (PaaS)
+con la flessibilità di Infrastructure as a Service (IaaS) e consente la portabilità
+tra i fornitori di infrastrutture.
+
+## In che modo Kubernetes è una piattaforma?
+
+Anche se Kubernetes offre molte funzionalità, ci sono sempre nuovi scenari che trarrebbero vantaggio dalle nuove funzionalità. I flussi di lavoro specifici delle applicazioni possono essere ottimizzati per accelerare la velocità degli sviluppatori. L'orchestrazione ad hoc che è accettabile inizialmente richiede spesso una robusta automazione su larga scala. Questo è il motivo per cui Kubernetes è stato anche progettato per fungere da piattaforma per la creazione di un ecosistema di componenti e strumenti per semplificare l'implementazione, la scalabilità e la gestione delle applicazioni.
+
+Le [etichette (labels)](/docs/concepts/overview/working-with-objects/labels/) consentono agli utenti di organizzare le proprie risorse a loro piacimento.
+Le [annotazioni (annotations)](/docs/concepts/overview/working-with-objects/annotations/)
+consentono agli utenti di decorare le risorse con informazioni personalizzate per
+facilitare i loro flussi di lavoro e fornire agli strumenti di gestione un modo semplice
+per salvare lo stato (checkpoint), come nello schizzo qui sotto.
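+
+Per esempio (schizzo puramente illustrativo; il pod `my-pod`, l'etichetta e l'annotazione usate sono ipotetiche):
+
+```shell
+# Aggiunge un'etichetta a un pod esistente
+kubectl label pods my-pod environment=production
+
+# Aggiunge un'annotazione con informazioni personalizzate
+kubectl annotate pods my-pod description='applicazione di esempio'
+```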
+
+
+Inoltre, il [piano di controllo di Kubernetes](/docs/concepts/overview/components/) è basato sulle stesse
+[API](/docs/reference/using-api/api-overview/) disponibili per sviluppatori e utenti.
+Gli utenti possono scrivere i propri controllori, come ad esempio
+[scheduler](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md), con [le proprie API](/docs/concepts/api-extension/custom-resources/)
+che possono essere gestite da uno [strumento da riga di comando](/docs/user-guide/kubectl-overview/) generico.
+
+Questo
+[design](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md)
+ha permesso a un certo numero di altri sistemi di costruire su Kubernetes.
+
+
+## Cosa non è Kubernetes
+
+Kubernetes non è un sistema PaaS (Platform as a Service) tradizionale e onnicomprensivo.
+Poiché Kubernetes opera a livello di contenitore anziché a livello di hardware,
+fornisce alcune caratteristiche generalmente applicabili comuni alle offerte di PaaS, quali distribuzione,
+ridimensionamento, bilanciamento del carico, registrazione e monitoraggio.
+Tuttavia, Kubernetes non è monolitico e queste soluzioni predefinite sono opzionali
+e collegabili. Kubernetes fornisce gli elementi costitutivi per le piattaforme di sviluppo degli sviluppatori,
+ma conserva la scelta dell'utente e la flessibilità laddove è importante.
+
+
+Kubernetes:
+
+* Non limita i tipi di applicazioni supportate. Kubernetes mira a supportare una varietà estremamente diversificata di carichi di lavoro,
+ inclusi carichi di lavoro stateless, stateful e di elaborazione dei dati. Se un'applicazione può essere eseguita in un contenitore,
+ dovrebbe funzionare alla grande su Kubernetes.
+
+* Non distribuisce il codice sorgente e non crea la tua applicazione. I flussi di lavoro di integrazione, consegna e distribuzione (CI / CD) continui sono determinati dalle culture organizzative e dalle preferenze, nonché dai requisiti tecnici.
+
+* Non fornisce servizi a livello di applicazione, come middleware (ad es. bus di messaggi), framework di elaborazione dati (ad esempio Spark), database (ad esempio MySQL), cache o sistemi di archiviazione cluster (ad esempio Ceph) come servizi integrati. Tali componenti possono essere eseguiti su Kubernetes e/o possono essere utilizzati dalle applicazioni in esecuzione su Kubernetes tramite meccanismi portatili, come Open Service Broker.
+
+* Non impone la registrazione, il monitoraggio o le soluzioni di avviso. Fornisce alcune integrazioni come prova del concetto e meccanismi per raccogliere ed esportare le metriche.
+
+* Non fornisce né richiede un linguaggio o sistema di configurazione (ad esempio
+  [jsonnet](https://github.com/google/jsonnet)). Fornisce un'API dichiarativa che può essere utilizzata da forme
+  arbitrarie di specifiche dichiarative.
+
+* Non fornisce né adotta sistemi completi di configurazione, manutenzione, gestione o auto-riparazione.
+
+Inoltre, Kubernetes non è un semplice *sistema di orchestrazione*.
+In realtà, elimina la necessità di orchestrazione.
+La definizione tecnica di *orchestrazione* è l'esecuzione di un flusso di lavoro definito: prima fare A, poi B, poi C.
+Al contrario, Kubernetes comprende un insieme di processi di controllo componibili indipendenti che guidano continuamente
+lo stato corrente verso lo stato desiderato fornito. Non dovrebbe importare come si ottiene da A a C.
+Il controllo centralizzato non è richiesto. Ciò si traduce in un sistema che è più facile da usare e più potente,
+robusto, resiliente ed estensibile.
+
+
+## Perché containers?
+
+Cerchi dei motivi per i quali dovresti usare i containers?
+
+![Perché containers?](/images/docs/why_containers.svg)
+
+Il *vecchio modo* di distribuire le applicazioni era installare le applicazioni su un host usando il gestore di pacchetti del sistema operativo. Ciò ha avuto lo svantaggio di impigliare gli eseguibili, la configurazione, le librerie e i cicli di vita delle applicazioni tra loro e con il sistema operativo host. Si potrebbero costruire immagini di macchine virtuali immutabili al fine di ottenere prevedibili rollout e rollback, ma le VM sono pesanti e non portatili.
+
+
+La *nuova strada* consiste nel distribuire contenitori basati sulla virtualizzazione a livello di sistema operativo piuttosto che sulla virtualizzazione dell'hardware. Questi contenitori sono isolati l'uno dall'altro e dall'host: hanno i loro filesystem, non possono vedere i processi degli altri e il loro utilizzo delle risorse di calcolo può essere limitato. Sono più facili da costruire rispetto alle macchine virtuali e, poiché sono disaccoppiati dall'infrastruttura sottostante e dal file system host, sono portatili attraverso cloud e distribuzioni del sistema operativo.
+
+
+Poiché i contenitori sono piccoli e veloci, è possibile imballare un'applicazione in ogni immagine del contenitore. Questa relazione one-to-one tra applicazione e immagine sblocca tutti i vantaggi dei contenitori. Con i container, è possibile creare immagini di container immutabili al momento della compilazione / del rilascio piuttosto che del tempo di implementazione, poiché ogni applicazione non deve necessariamente essere composta con il resto dello stack di applicazioni, né essere sposata con l'ambiente dell'infrastruttura di produzione. La generazione di immagini del contenitore durante il tempo di generazione / rilascio consente di trasferire un ambiente coerente dallo sviluppo alla produzione. Allo stesso modo, i contenitori sono molto più trasparenti delle macchine virtuali, il che facilita il monitoraggio e la gestione. Ciò è particolarmente vero quando i cicli di vita dei processi dei contenitori vengono gestiti dall'infrastruttura anziché nascosti da un supervisore del processo all'interno del contenitore. Infine, con una singola applicazione per contenitore, la gestione dei contenitori equivale alla gestione della distribuzione dell'applicazione.
+
+Riepilogo dei vantaggi del contenitore:
+
+
+* **Creazione e implementazione di applicazioni agile**:
+  maggiore facilità ed efficienza nella creazione dell'immagine del container rispetto all'uso di immagini VM.
+* **Sviluppo, integrazione e implementazione continui**:
+  fornisce la creazione e l'implementazione di immagini container affidabili e frequenti, con rollback semplici e veloci (grazie all'immutabilità dell'immagine).
+* **Separazione delle responsabilità tra Dev e Ops**:
+  le immagini dei container dell'applicazione vengono create al momento della compilazione / rilascio piuttosto che al momento del deployment, disaccoppiando quindi le applicazioni dall'infrastruttura.
+* **Osservabilità**:
+  non solo le informazioni e le metriche a livello di sistema operativo, ma anche lo stato dell'applicazione e altri segnali.
+* **Coerenza ambientale tra sviluppo, test e produzione**:
+  funziona allo stesso modo su un laptop come nel cloud.
+* **Portabilità tra cloud e distribuzioni di sistemi operativi**:
+  funziona su Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine e in qualsiasi altro luogo.
+* **Gestione incentrata sull'applicazione**:
+  aumenta il livello di astrazione dall'esecuzione di un sistema operativo su hardware virtuale all'esecuzione di un'applicazione su un sistema operativo utilizzando risorse logiche.
+* **[Micro-servizi](https://martinfowler.com/articles/microservices.html) liberi, distribuiti, elastici e debolmente accoppiati**:
+  le applicazioni vengono suddivise in parti più piccole e indipendenti e possono essere distribuite e gestite in modo dinamico, non uno stack monolitico in esecuzione su un'unica grande macchina monouso.
+* **Isolamento delle risorse**:
+  prestazioni applicative prevedibili.
+* **Utilizzo delle risorse**:
+  alta efficienza e densità.
+
+## Cosa significa Kubernetes? K8S?
+
+Il nome **Kubernetes** deriva dal greco e significa *timoniere* o *pilota*; è la radice delle parole *governatore*
+e [*cibernetica*](http://www.etymonline.com/index.php?term=cybernetics). *K8s*
+è un'abbreviazione derivata sostituendo le 8 lettere "ubernete" con "8".
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+* Pronto per [iniziare](/docs/setup/)?
+* Per ulteriori dettagli, consulta la [documentazione di Kubernetes](/docs/home/).
+{{% /capture %}}
+
+
diff --git a/content/it/includes/index.md b/content/it/includes/index.md
new file mode 100644
index 0000000000000..3d65eaa0ff97e
--- /dev/null
+++ b/content/it/includes/index.md
@@ -0,0 +1,3 @@
+---
+headless: true
+---
\ No newline at end of file
diff --git a/content/ko/case-studies/ibm/ibm_featured_logo.png b/content/ko/case-studies/ibm/ibm_featured_logo.png
index b819876bf790a..adb07a8cdf588 100644
Binary files a/content/ko/case-studies/ibm/ibm_featured_logo.png and b/content/ko/case-studies/ibm/ibm_featured_logo.png differ
diff --git a/content/ko/docs/concepts/_index.md b/content/ko/docs/concepts/_index.md
index 00fc1b62c3a51..89df24dfb6c22 100644
--- a/content/ko/docs/concepts/_index.md
+++ b/content/ko/docs/concepts/_index.md
@@ -15,7 +15,7 @@ weight: 40
## 개요
-쿠버네티스를 사용하려면, *쿠버네티스 API 오브젝트로* 클러스터에 대해 사용자가 *바라는 상태를* 기술해야 한다. 어떤 애플리케이션이나 워크로드를 구동시키려고 하는지, 어떤 컨테이너 이미지를 쓰는지, 복제의 수는 몇 개인지, 어떤 네트워크와 디스크 자원을 쓸 수 있도록 할 것인지 등을 의미한다. 바라는 상태를 설정하는 방법은 쿠버네티스 API를 사용해서 오브젝트를 만드는 것인데, 대개 `kubectl`이라는 명령줄 인터페이스를 사용한다. 클러스터와 상호 작용하고 바라는 상태를 설정하거나 수정하기 위해서 쿠버네티스 API를 직접 사용할 수도 있다.
+쿠버네티스를 사용하려면, *쿠버네티스 API 오브젝트로* 클러스터에 대해 사용자가 *바라는 상태를* 기술해야 한다. 어떤 애플리케이션이나 워크로드를 구동시키려고 하는지, 어떤 컨테이너 이미지를 쓰는지, 복제의 수는 몇 개인지, 어떤 네트워크와 디스크 자원을 쓸 수 있도록 할 것인지 등을 의미한다. 바라는 상태를 설정하는 방법은 쿠버네티스 API를 사용해서 오브젝트를 만드는 것인데, 대개 `kubectl`이라는 커맨드라인 인터페이스를 사용한다. 클러스터와 상호 작용하고 바라는 상태를 설정하거나 수정하기 위해서 쿠버네티스 API를 직접 사용할 수도 있다.
일단 바라는 상태를 설정하고 나면, *쿠버네티스 컨트롤 플레인이* 클러스터의 현재 상태를 바라는 상태와 일치시키기 위한 일을 하게 된다. 그렇게 함으로써, 쿠버네티스가 컨테이너를 시작 또는 재시작 시키거나, 주어진 애플리케이션의 복제 수를 스케일링하는 등의 다양한 작업을 자동으로 수행할 수 있게 된다. 쿠버네티스 컨트롤 플레인은 클러스터에서 돌아가는 프로세스의 집합으로 구성된다.
@@ -51,7 +51,7 @@ weight: 40
### 쿠버네티스 마스터
-클러스터에 대해 바라는 상태를 유지할 책임은 쿠버네티스 마스터에 있다. `kubectl` 명령줄 인터페이스와 같은 것을 사용해서 쿠버네티스로 상호 작용할 때에는 쿠버네티스 마스터와 통신하고 있는 셈이다.
+클러스터에 대해 바라는 상태를 유지할 책임은 쿠버네티스 마스터에 있다. `kubectl` 커맨드라인 인터페이스와 같은 것을 사용해서 쿠버네티스로 상호 작용할 때에는 쿠버네티스 마스터와 통신하고 있는 셈이다.
> "마스터"는 클러스터 상태를 관리하는 프로세스의 집합이다. 주로 이 프로세스는 클러스터 내 단일 노드에서 구동되며, 이 노드가 바로 마스터이다. 마스터는 가용성과 중복을 위해 복제될 수도 있다.
diff --git a/content/ko/docs/concepts/architecture/cloud-controller.md b/content/ko/docs/concepts/architecture/cloud-controller.md
new file mode 100644
index 0000000000000..9e79b765010d0
--- /dev/null
+++ b/content/ko/docs/concepts/architecture/cloud-controller.md
@@ -0,0 +1,262 @@
+---
+title: 클라우드 컨트롤러 매니저 기반에 관한 개념
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+클라우드 컨트롤러 매니저(CCM) 개념(바이너리와 혼동하지 말 것)은 본래 클라우드 벤더에 특화된 코드와 쿠버네티스 코어가 상호 독립적으로 진화할 수 있도록 해주기 위해 생성되었다. 클라우드 컨트롤러 매니저는 쿠버네티스 컨트롤러 매니저, API 서버, 그리고 스케줄러와 같은 다른 마스터 컴포넌트와 함께 동작된다. 또한 쿠버네티스 위에서 동작하는 경우에는, 쿠버네티스 애드온으로서 구동된다.
+
+클라우드 컨트롤러 매니저의 디자인은 새로운 클라우드 제공사업자가 플러그인을 이용하여 쉽게 쿠버네티스와 함께 통합하도록 허용해 주는 플러그인 메커니즘을 토대로 한다. 쿠버네티스에 새로운 클라우드 제공사업자를 적응시키기 위한 그리고 기존 모델에서 새로운 CCM 모델로 클라우드 제공사업자들이 전환을 이루기 위한 준비된 계획들이 있다.
+
+이 문서는 클라우드 컨트롤러 매니저 이면상의 개념들을 논의하고 그것과 연관된 기능들에 대한 세부적인 사항들을 제시한다.
+
+다음은 클라우드 컨트롤러 매니저가 존재하지 않는 형태의 쿠버네티스 클러스터 아키텍처이다.
+
+![Pre CCM Kube Arch](/images/docs/pre-ccm-arch.png)
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 디자인
+
+이전 다이어그램에서, 쿠버네티스와 클라우드 제공사업자는 여러 상이한 컴포넌트들을 통해 통합되었다.
+
+* Kubelet
+* 쿠버네티스 컨트롤러 매니저
+* 쿠버네티스 API 서버
+
+CCM은 앞의 세 컴포넌트가 가진 클라우드 의존적인 로직을 한 곳에 모아서 클라우드 통합을 위한 단일 포인트를 만들었다. CCM을 활용한 새로운 아키텍처는 다음과 같다.
+
+![CCM Kube Arch](/images/docs/post-ccm-arch.png)
+
+## CCM의 컴포넌트
+
+CCM은 쿠버네티스 컨트롤러 매니저(KCM)의 기능 일부를 독립시키고 분리된 프로세스로서 그것을 작동시킨다. 특히, 클라우드 종속적인 KCM 내 컨트롤러들을 독립시킨다. KCM은 다음과 같은 클라우드 종속적인 컨트롤러 루프를 가진다.
+
+ * 노드 컨트롤러
+ * 볼륨 컨트롤러
+ * 라우트 컨트롤러
+ * 서비스 컨트롤러
+
+버전 1.9 에서, CCM은 이전 리스트로부터 다음의 컨트롤러를 작동시킨다.
+
+* 노드 컨트롤러
+* 라우트 컨트롤러
+* 서비스 컨트롤러
+
+추가적으로, PersistentVolumeLabels 컨트롤러라 불리는 다른 컨트롤러를 작동시킨다. 이 컨트롤러는 GCP와 AWS 클라우드 내 생성된 PersistentVolumes 상에 영역과 지역 레이블을 설정하는 책임을 가진다.
+
+{{< note >}}
+볼륨 컨트롤러는 의도적으로 CCM의 일부가 되지 않도록 선택되었다. 연관된 복잡성 때문에 그리고 벤더 특유의 볼륨 로직 개념을 일반화 하기 위한 기존의 노력때문에, 볼륨 컨트롤러는 CCM으로 이전되지 않도록 결정되었다.
+{{< /note >}}
+
+CCM을 이용하는 볼륨을 지원하기 위한 원래 계획은 플러그형 볼륨을 지원하기 위한 Flex 볼륨을 사용하기 위한 것이었다. 그러나, CSI라 알려진 경쟁적인 노력이 Flex를 대체하도록 계획되고 있다.
+
+이러한 역동성을 고려하여, CSI가 준비될 때까지 중간 단계의 임시방편(stopgap)을 두기로 결정하였다.
+
+## CCM의 기능
+
+CCM은 클라우드 제공사업자에 종속적인 쿠버네티스 컴포넌트로부터 그 속성을 상속받는다. 이번 섹션은 그러한 컴포넌트를 근거로 구성되었다.
+
+### 1. 쿠버네티스 컨트롤러 매니저
+
+CCM의 주요 기능은 KCM으로부터 파생된다. 이전 섹션에서 언급한 바와 같이, CCM은 다음의 컨트롤러 루프를 작동시킨다.
+
+* 노드 컨트롤러
+* 라우트 컨트롤러
+* 서비스 컨트롤러
+* PersistentVolumeLabels 컨트롤러
+
+#### 노드 컨트롤러
+
+노드 컨트롤러는 클라우드 제공사업자의 클러스터에서 동작중인 노드에 대한 정보를 얻음으로써 노드를 초기화할 책임을 가진다. 노드 컨트롤러는 다음 기능을 수행한다.
+
+1. 클라우드 특유의 영역/지역 레이블을 이용한 노드를 초기화한다.
+2. 클라우드 특유의 인스턴스 세부사항, 예를 들어, 타입 그리고 크기 등을 이용한 노드를 초기화한다.
+3. 노드의 네트워크 주소와 호스트네임을 취득한다.
+4. 노드가 무응답일 경우, 클라우드로부터 해당 노드가 삭제된 것인지 확인한다. 클라우드로부터 삭제된 것이라면, 쿠버네티스 노드 오브젝트를 삭제한다.
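+
+다음은 노드 컨트롤러가 초기화를 마친 노드 오브젝트가 가질 수 있는 레이블과 주소의 간단한 예시이다. 레이블 키와 값은 설명을 위해 가정한 것이며, 클라우드 제공사업자에 따라 다를 수 있다.
+
+```yaml
+# 설명을 위한 예시이며, 레이블 키와 값은 가정한 것이다.
+apiVersion: v1
+kind: Node
+metadata:
+  name: node-1
+  labels:
+    beta.kubernetes.io/instance-type: m4.large
+    failure-domain.beta.kubernetes.io/region: us-east-1
+    failure-domain.beta.kubernetes.io/zone: us-east-1a
+status:
+  addresses:
+  - type: Hostname
+    address: node-1.example.com    # 예시 호스트네임
+  - type: ExternalIP
+    address: 203.0.113.10          # 예시 IP
+```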
+
+#### 라우트 컨트롤러
+
+라우트 컨트롤러는 클라우드에서 적합하게 경로를 구성하는 책임을 가지며 쿠버네티스 클러스터 내 상이한 노드 상의 컨테이너들이 상호 소통할 수 있도록 해준다. 라우트 컨트롤러는 오직 Google Compute Engine 클러스터에서만 적용가능 하다.
+
+#### 서비스 컨트롤러
+
+서비스 컨트롤러는 서비스 생성, 업데이트, 그리고 이벤트 삭제에 대한 책임을 가진다. 쿠버네티스 내 서비스의 현재 상태를 근거로, 쿠버네티스 내 서비스의 상태를 나타내기 위해 클라우드 로드 밸런서(ELB 또는 구글 LB와 같은)를 구성해준다. 추가적으로, 클라우드 로드 밸런서를 위한 서비스 백엔드가 최신화 되도록 보장해 준다.
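+
+다음은 서비스 컨트롤러가 클라우드 로드 밸런서를 구성하는 대상이 되는 `LoadBalancer` 타입 서비스의 간단한 예시이다. 이름, 셀렉터, 포트는 설명을 위해 가정한 값이다.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-service          # 예시 이름
+spec:
+  type: LoadBalancer        # 서비스 컨트롤러가 이 타입을 보고 클라우드 로드 밸런서를 구성한다
+  selector:
+    app: my-app             # 예시 셀렉터
+  ports:
+  - port: 80
+    targetPort: 8080
+```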
+
+#### PersistentVolumeLabels 컨트롤러
+
+PersistentVolumeLabels 컨트롤러는 AWS EBS/GCE PD 볼륨이 생성되는 시점에 레이블을 적용한다. 이로써 사용자가 수동으로 이들 볼륨에 레이블을 설정할 필요가 없어진다.
+
+이들 볼륨은 오직 그것들이 속한 지역/영역 내에서만 동작되도록 제한되므로 파드 스케줄에 필수적이다. 이들 볼륨을 이용하는 모든 파드는 동일한 지역/영역 내에서 스케줄 되어야 한다.
+
+PersistentVolumeLabels 컨트롤러는 특별하게 CCM을 위해 생성되었다. 즉, CCM이 생성되기 전에는 없었다. 쿠버네티스 API 서버의 PV에 레이블을 붙이는 로직(어드미션 컨트롤러였다)을 CCM에 옮겨서 그렇게 만들었다.
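+
+다음은 이 컨트롤러가 설정하는 지역/영역 레이블이 붙은 PersistentVolume의 대략적인 형태를 보여주는 간단한 예시이다. 레이블 키와 volumeID 값은 설명을 위해 가정한 것이며, 실제 값은 클라우드 제공사업자와 쿠버네티스 버전에 따라 다를 수 있다.
+
+```yaml
+# 설명을 위한 예시이며, 레이블 키/값과 volumeID는 가정한 것이다.
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-example
+  labels:
+    failure-domain.beta.kubernetes.io/region: us-east-1
+    failure-domain.beta.kubernetes.io/zone: us-east-1a
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+  - ReadWriteOnce
+  awsElasticBlockStore:
+    volumeID: vol-0123456789abcdef0   # 예시 볼륨 ID
+    fsType: ext4
+```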
+
+### 2. Kubelet
+
+노드 컨트롤러는 kubelet의 클라우드 종속적인 기능을 포함한다. CCM이 도입되기 이전에는, kubelet 이 IP 주소, 지역/영역 레이블 그리고 인스턴스 타입 정보와 같은 클라우드 특유의 세부사항으로 노드를 초기화하는 책임을 가졌다. CCM의 도입으로 kubelet에서 CCM으로 이 초기화 작업이 이전되었다.
+
+이 새로운 모델에서, kubelet은 클라우드 특유의 정보 없이 노드를 초기화 해준다. 그러나, kubelet은 새로 생성된 노드에 taint를 추가해서 CCM이 클라우드에 대한 정보를 가지고 노드를 초기화하기 전까지는 스케줄되지 않도록 한다. 그러고 나서 이 taint를 제거한다.
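+
+다음은 클라우드 정보로 초기화되기 전의 노드에 설정되는 taint의 대략적인 형태를 보여주는 예시이다. taint 키 이름은 쿠버네티스 버전에 따라 다를 수 있다고 가정한다.
+
+```yaml
+# 예시이며, taint 키는 버전에 따라 다를 수 있다.
+apiVersion: v1
+kind: Node
+metadata:
+  name: node-1
+spec:
+  taints:
+  - key: node.cloudprovider.kubernetes.io/uninitialized
+    value: "true"
+    effect: NoSchedule
+```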
+
+### 3. 쿠버네티스 API 서버
+
+PersistentVolumeLabels 컨트롤러는 이전 섹션에서 기술한 바와 같이, 쿠버네티스 API 서버의 클라우드 종속적인 기능을 CCM으로 이전한다.
+
+## 플러그인 메커니즘
+
+클라우드 컨트롤러 매니저는 어떠한 클라우드에서든지 플러그 인 되어 구현될 수 있도록 Go 인터페이스를 이용한다. 구체적으로, [여기](https://github.com/kubernetes/cloud-provider/blob/9b77dc1c384685cb732b3025ed5689dd597a5971/cloud.go#L42-L62)에 정의된 CloudProvider 인터페이스를 이용한다.
+
+위에서 강조되었던 4개의 공유 컨트롤러의 구현, 그리고 공유 cloudprovider 인터페이스와 더불어 일부 골격은 쿠버네티스 코어 내에 유지될 것이다. 클라우드 제공사업자 특유의 구현은 코어의 외부에 탑재되어 코어 내에 정의된 인터페이스를 구현할 것이다.
+
+개발 중인 플러그인에 대한 보다 자세한 정보는, [클라우드 컨트롤러 매니저 개발하기](/docs/tasks/administer-cluster/developing-cloud-controller-manager/)를 참고한다.
+
+## 인가
+
+이 섹션은 CCM에 의해 작업을 수행하기 위해 다양한 API 오브젝트에서 요구되는 접근에 대해 구분해 본다.
+
+### 노드 컨트롤러
+
+노드 컨트롤러는 오직 노드 오브젝트와 동작한다. 노드 오브젝트를 get, list, create, update, patch, watch, 그리고 delete 하기 위한 모든 접근을 요한다.
+
+v1/Node:
+
+- Get
+- List
+- Create
+- Update
+- Patch
+- Watch
+- Delete
+
+### 라우트 컨트롤러
+
+라우트 컨트롤러는 노드 오브젝트 생성에 대해 귀기울이고 적절하게 라우트를 구성한다. 노드 오브젝트에 대한 get 접근을 요한다.
+
+v1/Node:
+
+- Get
+
+### 서비스 컨트롤러
+
+서비스 컨트롤러는 서비스 오브젝트 create, update 그리고 delete에 대해 귀기울이고, 서비스를 위해 적절하게 엔드포인트를 구성한다.
+
+서비스에 접근하기 위해, list, 그리고 watch 접근을 요한다. 서비스 update를 위해 patch와 update 접근을 요한다.
+
+서비스에 대한 엔드포인트 설정을 위해, create, list, get, watch, 그리고 update를 하기위한 접근을 요한다.
+
+v1/Service:
+
+- List
+- Get
+- Watch
+- Patch
+- Update
+
+### PersistentVolumeLabels 컨트롤러
+
+PersistentVolumeLabels 컨트롤러는 PersistentVolume(PV) 생성 이벤트에 대해 귀기울이고 그것을 업데이트 한다. 이 컨트롤러는 PV를 get 하고 update 하기 위한 접근을 요한다.
+
+v1/PersistentVolume:
+
+- Get
+- List
+- Watch
+- Update
+
+### 그 외의 것들
+
+CCM의 코어에 대한 구현은 이벤트를 create 하고, 보안 작업을 보장하기 위한 접근을 요하며, ServiceAccount를 create 하기 위한 접근을 요한다.
+
+v1/Event:
+
+- Create
+- Patch
+- Update
+
+v1/ServiceAccount:
+
+- Create
+
+CCM에 대한 RBAC ClusterRole은 다음과 같다.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: cloud-controller-manager
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - events
+ verbs:
+ - create
+ - patch
+ - update
+- apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - '*'
+- apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+- apiGroups:
+ - ""
+ resources:
+ - services
+ verbs:
+ - list
+ - patch
+ - update
+ - watch
+- apiGroups:
+ - ""
+ resources:
+ - serviceaccounts
+ verbs:
+ - create
+- apiGroups:
+ - ""
+ resources:
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - update
+ - watch
+- apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - create
+ - get
+ - list
+ - watch
+ - update
+```
+
+## 벤더 구현사항
+
+다음은 클라우드 제공사업자들이 구현한 CCM들이다.
+
+* [Digital Ocean](https://github.com/digitalocean/digitalocean-cloud-controller-manager)
+* [Oracle](https://github.com/oracle/oci-cloud-controller-manager)
+* [Azure](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/azure)
+* [GCE](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/gce)
+* [AWS](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/aws)
+
+## 클러스터 관리
+
+CCM을 구성하고 작동하기 위한 전체 안내는 [여기](/docs/tasks/administer-cluster/running-cloud-controller/#cloud-controller-manager)에서 제공된다.
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/containers/container-lifecycle-hooks.md b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
new file mode 100644
index 0000000000000..37a0c02d5a404
--- /dev/null
+++ b/content/ko/docs/concepts/containers/container-lifecycle-hooks.md
@@ -0,0 +1,120 @@
+---
+title: 컨테이너 라이프사이클 훅(Hook)
+content_template: templates/concept
+weight: 30
+---
+
+{{% capture overview %}}
+
+이 페이지는 kubelet이 관리하는 컨테이너가 관리 라이프사이클 동안의 이벤트에 의해 발동되는 코드를 실행하기 위해서
+컨테이너 라이프사이클 훅 프레임워크를 사용하는 방법에 대해서 설명한다.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 개요
+
+Angular와 같이, 컴포넌트 라이프사이클 훅을 가진 많은 프로그래밍 언어 프레임워크와 유사하게,
+쿠버네티스도 컨테이너에 라이프사이클 훅을 제공한다.
+훅은 컨테이너가 관리 라이프사이클의 이벤트를 인지하고 상응하는
+라이프사이클 훅이 실행될 때 핸들러에 구현된 코드를 실행할 수 있게 한다.
+
+## 컨테이너 훅
+
+컨테이너에 노출되는 훅은 두 가지가 있다.
+
+`PostStart`
+
+이 훅은 컨테이너가 생성된 직후에 실행된다.
+그러나, 훅이 컨테이너 엔트리포인트에 앞서서 실행된다는 보장은 없다.
+파라미터는 핸들러에 전달되지 않는다.
+
+`PreStop`
+
+이 훅은 컨테이너가 종료되기 직전에 호출된다.
+이 훅은 동기적으로, 즉 차단(blocking) 방식으로 동작하므로,
+컨테이너를 삭제하기 위한 호출이 전송되기 전에 완료되어야 한다.
+파라미터는 핸들러에 전달되지 않는다.
+
+종료 동작에 대한 더 자세한 설명은
+[파드의 종료](/docs/concepts/workloads/pods/pod/#termination-of-pods)에서 찾을 수 있다.
+
+### 훅 핸들러 구현
+
+컨테이너는 훅의 핸들러를 구현하고 등록함으로써 해당 훅에 접근할 수 있다.
+구현될 수 있는 컨테이너의 훅 핸들러에는 두 가지 유형이 있다.
+
+* Exec - 컨테이너의 cgroups와 네임스페이스 안에서, `pre-stop.sh`와 같은, 특정 커맨드를 실행.
+커맨드에 의해 소비된 리소스는 해당 컨테이너에 대해 계산된다.
+* HTTP - 컨테이너의 특정 엔드포인트에 대해서 HTTP 요청을 실행.
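+
+다음은 위의 두 가지 핸들러 유형을 함께 사용하는 파드 스펙의 간단한 예시이다. 이미지, 커맨드, 경로, 포트는 설명을 위해 가정한 값이다.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: lifecycle-demo               # 예시 이름
+spec:
+  containers:
+  - name: lifecycle-demo-container
+    image: nginx                     # 예시 이미지
+    lifecycle:
+      postStart:
+        exec:                        # Exec 핸들러
+          command: ["/bin/sh", "-c", "echo Hello > /usr/share/message"]
+      preStop:
+        httpGet:                     # HTTP 핸들러 (애플리케이션이 이 엔드포인트를 제공한다고 가정)
+          path: /shutdown
+          port: 8080
+```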
+
+### 훅 핸들러 실행
+
+컨테이너 라이프사이클 관리 훅이 호출되면,
+쿠버네티스 관리 시스템은 해당 훅이 등록된 컨테이너에서 핸들러를 실행한다.
+
+훅 핸들러 호출은 해당 컨테이너를 포함하고 있는 파드의 맥락과 동기적으로 동작한다.
+이것은 `PostStart` 훅에 대해서,
+훅이 컨테이너 엔트리포인트와는 비동기적으로 동작함을 의미한다.
+그러나, 만약 해당 훅이 너무 오래 동작하거나 어딘가에 걸려 있다면,
+컨테이너는 `running` 상태에 이르지 못한다.
+
+이러한 동작은 `PreStop` 훅에 대해서도 비슷하게 일어난다.
+만약 훅이 실행되던 도중에 매달려 있다면,
+파드의 단계(phase)는 `Terminating` 상태에 머물고 해당 훅은 파드의 `terminationGracePeriodSeconds`가 끝난 다음에 종료된다.
+만약 `PostStart` 또는 `PreStop` 훅이 실패하면,
+그것은 컨테이너를 종료시킨다.
+
+사용자는 훅 핸들러를 가능한 한 가볍게 만들어야 한다.
+그러나, 컨테이너가 멈추기 전 상태를 저장하는 것과 같이,
+오래 동작하는 커맨드가 의미 있는 경우도 있다.
+
+### 훅 전달 보장
+
+훅 전달은 *한 번 이상* 으로 의도되어 있는데,
+이는 `PostStart` 또는 `PreStop`와 같은 특정 이벤트에 대해서,
+훅이 여러 번 호출될 수 있다는 것을 의미한다.
+이것을 올바르게 처리하는 것은 훅의 구현에 달려 있다.
+
+일반적으로, 전달은 단 한 번만 이루어진다.
+예를 들어, HTTP 훅 수신기가 다운되어 트래픽을 받을 수 없는 경우에도,
+재전송을 시도하지 않는다.
+그러나, 드문 경우로, 이중 전달이 발생할 수 있다.
+예를 들어, 훅을 전송하는 도중에 kubelet이 재시작된다면,
+Kubelet이 구동된 후에 해당 훅은 재전송될 것이다.
+
+### 디버깅 훅 핸들러
+
+훅 핸들러의 로그는 파드 이벤트로 노출되지 않는다.
+만약 핸들러가 어떠한 이유로 실패하면, 핸들러는 이벤트를 방송한다.
+`PostStart`의 경우, 이것은 `FailedPostStartHook` 이벤트이며,
+`PreStop`의 경우, 이것은 `FailedPreStopHook` 이벤트이다.
+이 이벤트는 `kubectl describe pod <파드_이름>`를 실행하면 볼 수 있다.
+다음은 이 커맨드 실행을 통한 이벤트 출력의 몇 가지 예다.
+
+```
+Events:
+ FirstSeen LastSeen Count From SubobjectPath Type Reason Message
+ --------- -------- ----- ---- ------------- -------- ------ -------
+ 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
+ 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0"
+ 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
+ 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0"
+ 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567
+ 38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
+ 37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
+ 38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
+ 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook
+```
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* [컨테이너 환경](/docs/concepts/containers/container-environment-variables/)에 대해 더 배우기.
+* [컨테이너 라이프사이클 이벤트에 핸들러 부착](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/)
+ 실습 경험하기.
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/containers/images.md b/content/ko/docs/concepts/containers/images.md
new file mode 100644
index 0000000000000..649b781ab0f90
--- /dev/null
+++ b/content/ko/docs/concepts/containers/images.md
@@ -0,0 +1,371 @@
+---
+title: Images
+content_template: templates/concept
+weight: 10
+---
+
+{{% capture overview %}}
+
+쿠버네티스 파드에서 이미지를 참조하기 전에, 사용자의 Docker 이미지를 생성하고 레지스트리에 푸시(push)해 둔다.
+
+컨테이너의 `image` 속성은 `docker` 커맨드에서 지원하는 문법과 같은 문법을 지원한다. 이는 프라이빗 레지스트리와 태그를 포함한다.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 이미지 업데이트
+
+기본 풀(pull) 정책은 `IfNotPresent`이며, 이것은 Kubelet이 이미
+존재하는 이미지에 대한 풀을 생략하게 한다. 만약 항상 풀을 강제하고 싶다면,
+다음 중 하나를 수행하면 된다.
+
+- 컨테이너의 `imagePullPolicy`를 `Always`로 설정.
+- `imagePullPolicy`를 생략하고 `:latest`를 사용할 이미지의 태그로 사용.
+- `imagePullPolicy`와 사용할 이미지의 태그를 생략.
+- [AlwaysPullImages](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages) 어드미션 컨트롤러를 활성화.
+
+`:latest` 태그 사용은 피해야 한다는 것을 참고하고, 자세한 정보는 [구성을 위한 모범 사례](/docs/concepts/configuration/overview/#container-images)를 참고한다.
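+
+다음은 `imagePullPolicy`를 `Always`로 설정한 컨테이너의 간단한 예시이다. 이미지 이름은 설명을 위해 가정한 값이다.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: always-pull-example                   # 예시 이름
+spec:
+  containers:
+  - name: app
+    image: myregistry.example.com/myapp:1.0   # 예시 이미지
+    imagePullPolicy: Always                   # 파드가 시작될 때마다 이미지를 풀 한다
+```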
+
+## 매니페스트로 멀티-아키텍처 이미지 빌드
+
+Docker CLI는 현재 `docker manifest` 커맨드와 `create`, `annotate`, `push`와 같은 서브 커맨드를 함께 지원한다. 이 커맨드는 매니페스트를 빌드하고 푸시하는데 사용할 수 있다. 매니페스트를 보기 위해서는 `docker manifest inspect`를 사용하면 된다.
+
+다음에서 docker 문서를 확인하기 바란다.
+https://docs.docker.com/edge/engine/reference/commandline/manifest/
+
+이것을 사용하는 방법에 대한 예제는 빌드 하니스(harness)에서 참조한다.
+https://cs.k8s.io/?q=docker%20manifest%20(create%7Cpush%7Cannotate)&i=nope&files=&repos=
+
+이 커맨드는 Docker CLI에 의존하며 전적으로 Docker CLI로 구현된다. `$HOME/.docker/config.json`을 편집해서 `experimental` 키를 `enabled`로 설정하거나, CLI 커맨드를 호출할 때 간단히 `DOCKER_CLI_EXPERIMENTAL` 환경 변수를 `enabled`로 설정해도 된다.
+
+{{< note >}}
+Docker *18.06 또는 그 이상* 을 사용하길 바란다. 더 낮은 버전은 버그가 있거나 실험적인 명령줄 옵션을 지원하지 않는다. 예를 들어 https://github.com/docker/cli/issues/1135 는 containerd에서 문제를 일으킨다.
+{{< /note >}}
+
+오래된 매니페스트 업로드를 실행하는 데 어려움을 겪는다면, `$HOME/.docker/manifests`에서 오래된 매니페스트를 정리하여 새롭게 시작하면 된다.
+
+쿠버네티스의 경우, 일반적으로 접미사 `-$(ARCH)`가 있는 이미지를 사용해 왔다. 하위 호환성을 위해, 접미사가 있는 구형 이미지를 생성하길 바란다. 접미사에 대한 아이디어는 모든 아키텍처를 위한 매니페스트를 가졌다는 의미가 내포된 `pause` 이미지를 생성하고, 접미사가 붙은 이미지가 하드 코드되어 있을 오래된 구성 또는 YAML 파일에 대해 하위 호환된다는 의미가 내포되어 있는 `pause-amd64`를 생성하기 위한 것이다.
+
+## 프라이빗 레지스트리 사용
+
+프라이빗 레지스트리는 해당 레지스트리에서 이미지를 읽을 수 있는 키를 요구할 것이다.
+자격 증명(credential)은 여러 가지 방법으로 제공될 수 있다.
+
+ - Google 컨테이너 레지스트리 사용
+ - 각 클러스터에 대하여
+ - Google 컴퓨트 엔진 또는 Google 쿠버네티스 엔진에서 자동적으로 구성됨
+ - 모든 파드는 해당 프로젝트의 프라이빗 레지스트리를 읽을 수 있음
+ - AWS EC2 컨테이너 레지스트리(ECR) 사용
+ - IAM 역할 및 정책을 사용하여 ECR 저장소에 접근을 제어함
+ - ECR 로그인 자격 증명은 자동으로 갱신됨
+ - Azure 컨테이너 레지스트리(ACR) 사용
+ - IBM 클라우드 컨테이너 레지스트리 사용
+ - 프라이빗 레지스트리에 대한 인증을 위한 노드 구성
+ - 모든 파드는 구성된 프라이빗 레지스트리를 읽을 수 있음
+ - 클러스터 관리자에 의한 노드 구성 필요
+ - 미리 풀링(pre-pulling)된 이미지
+ - 모든 파드는 노드에 캐시된 모든 이미지를 사용 가능
+ - 셋업을 위해서는 모든 노드에 대해서 root 접근이 필요
+ - 파드에 ImagePullSecrets을 명시
+ - 자신의 키를 제공하는 파드만 프라이빗 레지스트리에 접근 가능
+
+각 옵션은 아래에서 더 자세히 설명한다.
+
+
+### Google 컨테이너 레지스트리 사용
+
+쿠버네티스는 Google 컴퓨트 엔진(GCE)에서 동작할 때, [Google 컨테이너
+레지스트리(GCR)](https://cloud.google.com/tools/container-registry/)를 자연스럽게
+지원한다. 사용자의 클러스터가 GCE 또는 Google 쿠버네티스 엔진에서 동작 중이라면, 간단히
+이미지의 전체 이름(예: gcr.io/my_project/image:tag)을 사용하면 된다.
+
+클러스터 내에서 모든 파드는 해당 레지스트리에 있는 이미지에 읽기 접근 권한을 가질 것이다.
+
+Kubelet은 해당 인스턴스의 Google 서비스 계정을 이용하여 GCR을 인증할 것이다.
+인스턴스의 서비스 계정은 `https://www.googleapis.com/auth/devstorage.read_only`라서,
+프로젝트의 GCR로부터 풀은 할 수 있지만 푸시는 할 수 없다.
+
+### AWS EC2 컨테이너 레지스트리 사용
+
+쿠버네티스는 노드가 AWS EC2 인스턴스일 때, [AWS EC2 컨테이너
+레지스트리](https://aws.amazon.com/ecr/)를 자연스럽게 지원한다.
+
+간단히 이미지의 전체 이름(예: `ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag`)을
+파드 정의에 사용하면 된다.
+
+파드를 생성할 수 있는 클러스터의 모든 사용자는 ECR 레지스트리에 있는 어떠한
+이미지든지 파드를 실행하는데 사용할 수 있다.
+
+kubelet은 ECR 자격 증명을 가져오고 주기적으로 갱신할 것이다. 이것을 위해서는 다음에 대한 권한이 필요하다.
+
+- `ecr:GetAuthorizationToken`
+- `ecr:BatchCheckLayerAvailability`
+- `ecr:GetDownloadUrlForLayer`
+- `ecr:GetRepositoryPolicy`
+- `ecr:DescribeRepositories`
+- `ecr:ListImages`
+- `ecr:BatchGetImage`
+
+요구 사항:
+
+- Kubelet 버전 `v1.2.0` 이상을 사용해야 한다. (예: `/usr/bin/kubelet --version=true`를 실행).
+- 노드가 지역 A에 있고 레지스트리가 다른 지역 B에 있다면, 버전 `v1.3.0` 이상이 필요하다.
+- 사용자의 지역에서 ECR이 지원되어야 한다.
+
+문제 해결:
+
+- 위의 모든 요구 사항을 확인한다.
+- 워크스테이션에서 $REGION (예: `us-west-2`)의 자격 증명을 얻는다. 그 자격 증명을 사용하여 해당 호스트로 SSH를 하고 Docker를 수동으로 실행한다. 작동하는가?
+- kubelet이 `--cloud-provider=aws`로 실행 중인지 확인한다.
+- kubelet 로그에서 (예: `journalctl -u kubelet`) 다음과 같은 로그 라인을 확인한다.
+ - `plugins.go:56] Registering credential provider: aws-ecr-key`
+ - `provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider`
+
+### Azure 컨테이너 레지스트리(ACR) 사용
+[Azure 컨테이너 레지스트리](https://azure.microsoft.com/en-us/services/container-registry/)를 사용하는 경우
+관리자 역할의 사용자나 서비스 주체(principal) 중 하나를 사용하여 인증할 수 있다.
+어느 경우라도, 인증은 표준 Docker 인증을 통해서 수행된다. 이러한 지침은
+[azure-cli](https://github.com/azure/azure-cli) 명령줄 도구 사용을 가정한다.
+
+우선 레지스트리를 생성하고 자격 증명을 만들어야한다. 이에 대한 전체 문서는
+[Azure 컨테이너 레지스트리 문서](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli)에서 찾을 수 있다.
+
+컨테이너 레지스트리를 생성하고 나면, 다음의 자격 증명을 사용하여 로그인한다.
+
+ * `DOCKER_USER` : 서비스 주체 또는 관리자 역할의 사용자명
+ * `DOCKER_PASSWORD`: 서비스 주체 패스워드 또는 관리자 역할의 사용자 패스워드
+ * `DOCKER_REGISTRY_SERVER`: `${some-registry-name}.azurecr.io`
+ * `DOCKER_EMAIL`: `${some-email-address}`
+
+해당 변수에 대한 값을 채우고 나면
+[쿠버네티스 시크릿을 구성하고 그것을 파드 디플로이를 위해서 사용](/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod)할 수 있다.
+
+### IBM 클라우드 컨테이너 레지스트리 사용
+IBM 클라우드 컨테이너 레지스트리는 멀티-테넌트 프라이빗 이미지 레지스트리를 제공하여 사용자가 Docker 이미지를 안전하게 저장하고 공유할 수 있도록 한다. 기본적으로,
+프라이빗 레지스트리의 이미지는 통합된 취약점 조언기(Vulnerability Advisor)를 통해 조사되어 보안 이슈와 잠재적 취약성을 검출한다. IBM 클라우드 계정의 모든 사용자가 이미지에 접근할 수 있도록 하거나, 레지스트리 네임스페이스에 접근을 승인하는 토큰을 생성할 수 있다.
+
+IBM 클라우드 컨테이너 레지스트리 CLI 플러그인을 설치하고 사용자 이미지를 위한 네임스페이스를 생성하기 위해서는, [IBM 클라우드 컨테이너 레지스트리 시작하기](https://cloud.ibm.com/docs/services/Registry?topic=registry-index#index)를 참고한다.
+
+[IBM 클라우드 퍼블릭 이미지](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images#public_images) 및 사용자의 프라이빗 이미지로부터 컨테이너를 사용자의 IBM 클라우드 쿠버네티스 서비스 클러스터의 `default` 네임스페이스에 디플로이하기 위해서 IBM 클라우드 컨테이너 레지스트리를 사용하면 된다. 컨테이너를 다른 네임스페이스에 디플로이하거나, 다른 IBM 클라우드 컨테이너 레지스트리 지역 또는 IBM 클라우드 계정을 사용하기 위해서는, 쿠버네티스 `imagePullSecret`를 생성한다. 더 자세한 정보는, [이미지로부터 컨테이너 빌드하기](https://cloud.ibm.com/docs/containers?topic=containers-images#images)를 참고한다.
+
+### 프라이빗 레지스트리에 대한 인증을 위한 노드 구성
+
+{{< note >}}
+Google 쿠버네티스 엔진에서 동작 중이라면, 이미 각 노드에 Google 컨테이너 레지스트리에 대한 자격 증명과 함께 `.dockercfg`가 있을 것이다. 그렇다면 이 방법은 쓸 수 없다.
+{{< /note >}}
+
+{{< note >}}
+AWS EC2에서 동작 중이고 EC2 컨테이너 레지스트리(ECR)을 사용 중이라면, 각 노드의 kubelet은
+ECR 로그인 자격 증명을 관리하고 업데이트할 것이다. 그렇다면 이 방법은 쓸 수 없다.
+{{< /note >}}
+
+{{< note >}}
+이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은
+GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지
+않을 것이다.
+{{< /note >}}
+
+Docker는 프라이빗 레지스트리를 위한 키를 `$HOME/.dockercfg` 또는 `$HOME/.docker/config.json` 파일에 저장한다. 만약 동일한 파일을
+아래의 검색 경로 리스트에 넣으면, kubelet은 이미지를 풀 할 때 해당 파일을 자격 증명 공급자로 사용한다.
+
+* `{--root-dir:-/var/lib/kubelet}/config.json`
+* `{cwd of kubelet}/config.json`
+* `${HOME}/.docker/config.json`
+* `/.docker/config.json`
+* `{--root-dir:-/var/lib/kubelet}/.dockercfg`
+* `{cwd of kubelet}/.dockercfg`
+* `${HOME}/.dockercfg`
+* `/.dockercfg`
+
+{{< note >}}
+아마도 kubelet을 위한 사용자의 환경 파일에 `HOME=/root`을 명시적으로 설정해야 할 것이다.
+{{< /note >}}
+
+프라이빗 레지스트리를 사용하도록 사용자의 노드를 구성하기 위해서 권장되는 단계는 다음과 같다. 이
+예제의 경우, 사용자의 데스크탑/랩탑에서 아래 내용을 실행한다.
+
+ 1. 사용하고 싶은 각 자격 증명 세트에 대해서 `docker login [서버]`를 실행한다. 이것은 `$HOME/.docker/config.json`를 업데이트한다.
+ 1. 편집기에서 `$HOME/.docker/config.json`를 보고 사용하고 싶은 자격 증명만 포함하고 있는지 확인한다.
+ 1. 노드의 리스트를 구한다. 예를 들면 다음과 같다.
+ - 이름을 원하는 경우: `nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')`
+ - IP를 원하는 경우: `nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')`
+ 1. 로컬의 `.docker/config.json`를 위의 검색 경로 리스트 중 하나에 복사한다.
+ - 예: `for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done`
+
+프라이빗 이미지를 사용하는 파드를 생성하여 검증한다. 예를 들면 다음과 같다.
+
+```yaml
+kubectl create -f - <<EOF
+apiVersion: v1
+kind: Pod
+metadata:
+  name: private-image-test-1
+spec:
+  containers:
+    - name: uses-private-image
+      image: $PRIVATE_IMAGE_NAME
+      imagePullPolicy: Always
+      command: [ "echo", "SUCCESS" ]
+EOF
+```
+
+모든 것이 정상적으로 동작한다면, 잠시 후에 파드가 성공적으로 실행되는 것을 볼 수 있다.
+
+### 미리 풀 된 이미지 사용
+
+{{< note >}}
+Google 쿠버네티스 엔진에서 동작 중이라면, 이미 각 노드에 Google 컨테이너 레지스트리에 대한 자격 증명과 함께 `.dockercfg`가 있을 것이다. 그렇다면 이 방법은 쓸 수 없다.
+{{< /note >}}
+
+{{< note >}}
+이 방법은 노드의 구성을 제어할 수 있는 경우에만 적합하다. 이 방법은
+GCE 및 자동 노드 교체를 수행하는 다른 클라우드 제공자에 대해서는 신뢰성 있게 작동하지
+않을 것이다.
+{{< /note >}}
+
+기본적으로, kubelet은 지정된 레지스트리에서 각 이미지를 풀 하려고 할 것이다.
+그러나, 컨테이너의 `imagePullPolicy` 속성이 `IfNotPresent` 또는 `Never`으로 설정되어 있다면,
+로컬 이미지가 사용된다(우선적으로 또는 배타적으로).
+
+레지스트리 인증의 대안으로 미리 풀 된 이미지에 의존하고 싶다면,
+클러스터의 모든 노드가 동일한 미리 풀 된 이미지를 가지고 있는지 확인해야 한다.
+
+이것은 특정 이미지를 속도를 위해 미리 로드하거나 프라이빗 레지스트리에 대한 인증의 대안으로 사용될 수 있다.
+
+모든 파드는 미리 풀 된 이미지에 대해 읽기 접근 권한을 가질 것이다.
+
+### 파드에 ImagePullSecrets 명시
+
+{{< note >}}
+이 방법은 현재 Google 쿠버네티스 엔진, GCE 및 노드 생성이 자동화된 모든 클라우드 제공자에게
+권장된다.
+{{< /note >}}
+
+쿠버네티스는 파드에 레지스트리 키를 명시하는 것을 지원한다.
+
+#### Docker 구성으로 시크릿 생성
+
+대문자 값을 적절히 대체하여, 다음 커맨드를 실행한다.
+
+```shell
+kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
+secret/myregistrykey created.
+```
+
+만약 다중 레지스트리에 접근이 필요하다면, 각 레지스트리에 대한 하나의 시크릿을 생성할 수 있다.
+Kubelet은 파드를 위한 이미지를 풀링할 때 `imagePullSecrets`를 단일의 가상 `.docker/config.json`
+에 병합할 것이다.
+
+파드는 이미지 풀 시크릿을 자신의 네임스페이스에서만 참조할 수 있다.
+따라서 이 과정은 네임스페이스 당 한 번만 수행될 필요가 있다.
+
+##### kubectl create secrets 우회
+
+어떤 이유에서 단일 `.docker/config.json`에 여러 항목이 필요하거나
+위의 커맨드를 통해서는 주어지지 않는 제어가 필요한 경우, [json 또는 yaml로
+시크릿 생성](/docs/user-guide/secrets/#creating-a-secret-manually)을 수행할 수 있다.
+
+다음 사항을 준수해야 한다.
+
+- `.dockerconfigjson`에 해당 데이터 항목의 이름을 설정
+- Docker 구성 파일을 base64로 인코딩한 다음, 그 문자열을 깨지지 않도록 통째로
+ `data[".dockerconfigjson"]` 필드의 값으로 붙여넣기
+- `kubernetes.io/dockerconfigjson`에 `type`을 설정
+
+예:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: myregistrykey
+ namespace: awesomeapps
+data:
+ .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
+type: kubernetes.io/dockerconfigjson
+```
+
+
+`error: no objects passed to create`라는 에러 메시지가 나오면, 그것은 base64 인코딩된 문자열이 유효하지 않다는 것을 뜻한다.
+`Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`와 유사한 에러 메시지가 나오면, 그것은
+데이터가 성공적으로 un-base64 인코딩되었지만, `.docker/config.json` 파일로는 파싱될 수 없었음을 의미한다.
+
+#### 파드의 imagePullSecrets 참조
+
+이제, `imagePullSecrets` 섹션을 파드의 정의에 추가함으로써 해당 시크릿을
+참조하는 파드를 생성할 수 있다.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: foo
+ namespace: awesomeapps
+spec:
+ containers:
+ - name: foo
+ image: janedoe/awesomeapp:v1
+ imagePullSecrets:
+ - name: myregistrykey
+```
+
+이것은 프라이빗 레지스트리를 사용하는 각 파드에 대해서 수행될 필요가 있다.
+
+그러나, 이 필드의 셋팅은 [서비스 어카운트](/docs/user-guide/service-accounts) 리소스에
+imagePullSecrets을 셋팅하여 자동화할 수 있다.
+자세한 지침을 위해서는 [서비스 어카운트에 ImagePullSecrets 추가](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account)를 확인한다.
+
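+다음은 서비스 어카운트에 `imagePullSecrets`를 설정한 형태의 간단한 예시이며, 위에서 사용한 시크릿 이름과 네임스페이스를 가정한다.
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: default
+  namespace: awesomeapps
+imagePullSecrets:
+- name: myregistrykey
+```
+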
+이것은 노드 당 `.docker/config.json`와 함께 사용할 수 있다. 자격 증명은
+병합될 것이다. 이 방법은 Google 쿠버네티스 엔진에서 작동될 것이다.
+
+### 유스케이스
+
+프라이빗 레지스트리를 구성하기 위한 많은 솔루션이 있다. 다음은 여러 가지
+일반적인 유스케이스와 제안된 솔루션이다.
+
+1. 비소유 이미지(예를 들어, 오픈소스)만 실행하는 클러스터의 경우. 이미지를 숨길 필요가 없다.
+ - Docker hub의 퍼블릭 이미지를 사용한다.
+ - 설정이 필요 없다.
+ - GCE 및 Google 쿠버네티스 엔진에서는, 속도와 가용성 향상을 위해서 로컬 미러가 자동적으로 사용된다.
+1. 모든 클러스터 사용자에게는 보이지만, 회사 외부에는 숨겨야하는 일부 독점 이미지를
+ 실행하는 클러스터의 경우.
+ - 호스트 된 프라이빗 [Docker 레지스트리](https://docs.docker.com/registry/)를 사용한다.
+ - 그것은 [Docker Hub](https://hub.docker.com/signup)에 호스트 되어 있거나, 다른 곳에 되어 있을 것이다.
+ - 위에 설명된 바와 같이 수동으로 .docker/config.json을 구성한다.
+ - 또는, 방화벽 뒤에서 읽기 접근 권한을 가진 내부 프라이빗 레지스트리를 실행한다.
+ - 쿠버네티스 구성은 필요 없다.
+ - 또는, GCE 및 Google 쿠버네티스 엔진에서는, 프로젝트의 Google 컨테이너 레지스트리를 사용한다.
+ - 그것은 수동 노드 구성에 비해서 클러스터 오토스케일링과 더 잘 동작할 것이다.
+ - 또는, 노드의 구성 변경이 불편한 클러스터에서는, `imagePullSecrets`를 사용한다.
+1. 독점 이미지를 가진 클러스터로, 그 중 일부가 더 엄격한 접근 제어를 필요로 하는 경우.
+ - [AlwaysPullImages 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)가 활성화되어 있는지 확인한다. 그렇지 않으면, 모든 파드가 잠재적으로 모든 이미지에 접근 권한을 가진다.
+ - 민감한 데이터는 이미지 안에 포장하는 대신, "시크릿" 리소스로 이동한다.
+1. 멀티-테넌트 클러스터에서 각 테넌트가 자신의 프라이빗 레지스트리를 필요로 하는 경우.
+ - [AlwaysPullImages 어드미션 컨트롤러](/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages)가 활성화되어 있는지 확인한다. 그렇지 않으면, 모든 파드가 잠재적으로 모든 이미지에 접근 권한을 가진다.
+ - 인가가 요구되도록 프라이빗 레지스트리를 실행한다.
+ - 각 테넌트에 대한 레지스트리 자격 증명을 생성하고, 시크릿에 넣고, 각 테넌트 네임스페이스에 시크릿을 채운다.
+ - 테넌트는 해당 시크릿을 각 네임스페이스의 imagePullSecrets에 추가한다.
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/containers/runtime-class.md b/content/ko/docs/concepts/containers/runtime-class.md
new file mode 100644
index 0000000000000..33b2d3cce2d94
--- /dev/null
+++ b/content/ko/docs/concepts/containers/runtime-class.md
@@ -0,0 +1,116 @@
+---
+title: 런타임 클래스
+content_template: templates/concept
+weight: 20
+---
+
+{{% capture overview %}}
+
+{{< feature-state for_k8s_version="v1.12" state="alpha" >}}
+
+이 페이지는 런타임 클래스 리소스와 런타임 선택 메커니즘에 대해서 설명한다.
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 런타임 클래스
+
+런타임 클래스는 파드의 컨테이너를 실행하는데 사용할 컨테이너 런타임 설정을 선택하기 위한
+알파 특징이다.
+
+### 셋업
+
+초기 알파 특징이므로, 런타임 클래스 특징을 사용하기 위해서는 몇 가지 추가 셋업
+단계가 필요하다.
+
+1. 런타임 클래스 특징 게이트 활성화(apiservers 및 kubelets에 대해서, 버전 1.12+ 필요)
+2. 런타임 클래스 CRD 설치
+3. CRI 구현(implementation)을 노드에 설정(런타임에 따라서)
+4. 상응하는 런타임 클래스 리소스 생성
+
+#### 1. 런타임 클래스 특징 게이트 활성화
+
+특징 게이트 활성화에 대한 설명은 [특징 게이트](/docs/reference/command-line-tools-reference/feature-gates/)를
+참고한다. `RuntimeClass` 특징 게이트는 apiservers _및_ kubelets에서 활성화되어야
+한다.
+
+#### 2. 런타임 클래스 CRD 설치
+
+런타임 클래스 [CustomResourceDefinition][] (CRD)는 쿠버네티스 git 저장소의 애드온 디렉터리에서 찾을 수
+있다. [kubernetes/cluster/addons/runtimeclass/runtimeclass_crd.yaml][runtimeclass_crd]
+
+`kubectl apply -f runtimeclass_crd.yaml`을 통해서 해당 CRD를 설치한다.
+
+[CustomResourceDefinition]: /docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
+[runtimeclass_crd]: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/runtimeclass/runtimeclass_crd.yaml
+
+
+#### 3. CRI 구현을 노드에 설정
+
+런타임 클래스와 함께 선택할 설정은 CRI 구현에 의존적이다. 사용자의 CRI
+구현에 따른 설정 방법은 연관된 문서를 통해서 확인한다. 이것은 알파
+특징이므로, 아직 모든 CRI가 다중 런타임 클래스를 지원하지는 않는다.
+
+{{< note >}}
+런타임 클래스는 클러스터 전체에 걸쳐 동질의 노드 설정
+(모든 노드가 컨테이너 런타임에 준하는 동일한 방식으로 설정되었음을 의미)을 가정한다. 어떠한 이질성(다양한
+설정)이라도
+스케줄링 특징을 통해서 런타임 클래스와는 독립적으로 관리되어야 한다([파드를 노드에
+할당하기](/docs/concepts/configuration/assign-pod-node/) 참고).
+{{< /note >}}
+
+해당 설정은 상응하는 `RuntimeHandler` 이름을 가지며, 이는 런타임 클래스에 의해서 참조된다.
+런타임 핸들러는 유효한 DNS 1123 서브도메인(알파-숫자 + `-`와 `.`문자)을 가져야 한다.
+
+#### 4. 상응하는 런타임 클래스 리소스 생성
+
+3단계에서 셋업 한 설정은 연관된 `RuntimeHandler` 이름을 가져야 하며, 이를 통해서
+설정을 식별할 수 있다. 각 런타임 핸들러(그리고 선택적으로 비어있는 `""` 핸들러)에 대해서,
+상응하는 런타임 클래스 오브젝트를 생성한다.
+
+현재 런타임 클래스 리소스는 런타임 클래스 이름(`metadata.name`)과 런타임 핸들러
+(`spec.runtimeHandler`)로 단 2개의 중요 필드만 가지고 있다. 오브젝트 정의는 다음과 같은 형태이다.
+
+```yaml
+apiVersion: node.k8s.io/v1alpha1 # 런타임 클래스는 node.k8s.io API 그룹에 정의되어 있음
+kind: RuntimeClass
+metadata:
+ name: myclass # 런타임 클래스는 해당 이름을 통해서 참조됨
+ # 런타임 클래스는 네임스페이스가 없는 리소스임
+spec:
+ runtimeHandler: myconfiguration # 상응하는 CRI 설정의 이름임
+```
+
+
+{{< note >}}
+런타임 클래스 쓰기 작업(create/update/patch/delete)은
+클러스터 관리자로 제한할 것을 권장한다. 이것은 일반적으로 기본 설정이다. 더 자세한 정보는 [권한
+개요](https://kubernetes.io/docs/reference/access-authn-authz/authorization/)를 참고한다.
+{{< /note >}}
+
+### 사용
+
+클러스터를 위해서 런타임 클래스를 설정하고 나면, 그것을 사용하는 것은 매우 간단하다. 파드 스펙에
+`runtimeClassName`를 명시한다. 예를 들면 다음과 같다.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ runtimeClassName: myclass
+ # ...
+```
+
+이것은 Kubelet이 지명된 런타임 클래스를 사용하여 해당 파드를 실행하도록 지시할 것이다. 만약 지명된
+런타임 클래스가 없거나, CRI가 상응하는 핸들러를 실행할 수 없는 경우, 파드는
+`Failed` 터미널 [단계](/ko/docs/concepts/workloads/pods/pod-lifecycle/#파드의-단계-phase)로 들어간다. 에러
+메시지를 위해서는 상응하는 [이벤트](/docs/tasks/debug-application-cluster/debug-application-introspection/)를
+확인한다.
+
+만약 명시된 `runtimeClassName`가 없다면, 기본 런타임 핸들러가 사용될 것이다. 기본 런타임 핸들러는 런타임 클래스 특징이 비활성화되었을 때와 동일하게 동작한다.
+
+{{% /capture %}}
diff --git a/content/ko/docs/concepts/overview/kubernetes-api.md b/content/ko/docs/concepts/overview/kubernetes-api.md
index 33703bba05074..860cdc003b0b2 100644
--- a/content/ko/docs/concepts/overview/kubernetes-api.md
+++ b/content/ko/docs/concepts/overview/kubernetes-api.md
@@ -13,7 +13,7 @@ API 엔드포인트, 리소스 타입과 샘플은 [API Reference](/docs/referen
API에 원격 접속하는 방법은 [Controlling API Access doc](/docs/reference/access-authn-authz/controlling-access/)에서 논의되었다.
쿠버네티스 API는 시스템을 위한 선언적 설정 스키마를 위한 기초가 되기도 한다.
-[kubectl](/docs/reference/kubectl/overview/) 명령줄 도구를 사용해서 API 오브젝트를 생성, 업데이트, 삭제 및 조회할 수 있다.
+[kubectl](/docs/reference/kubectl/overview/) 커맨드라인 툴을 사용해서 API 오브젝트를 생성, 업데이트, 삭제 및 조회할 수 있다.
쿠버네티스는 또한 API 리소스에 대해 직렬화된 상태를 (현재는 [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)에) 저장한다.
@@ -33,10 +33,9 @@ API에 원격 접속하는 방법은 [Controlling API Access doc](/docs/referenc
## OpenAPI 및 Swagger 정의
-완전한 API 상세 내용은 [Swagger v1.2](http://swagger.io/) 및 [OpenAPI](https://www.openapis.org/)를 활용해서 문서화했다. ("마스터"로 알려진) 쿠버네티스 apiserver는 `/swaggerapi`에서 Swagger v1.2 쿠버네티스 API 규격을 조회할 수 있는 API를 노출한다.
-
-쿠버네티스 1.10부터, OpenAPI 규격은 `/openapi/v2` 엔드포인트에서만 제공된다. 형식이 구분된 엔드포인트(`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`)는 더 이상 사용하지 않고(deprecated) 쿠버네티스 1.14에서 제거될 예정이다.
+완전한 API 상세 내용은 [OpenAPI](https://www.openapis.org/)를 활용해서 문서화했다.
+쿠버네티스 1.10부터, OpenAPI 규격은 `/openapi/v2` 엔드포인트에서만 제공된다.
요청 형식은 HTTP 헤더에 명시해서 설정할 수 있다.
헤더 | 가능한 값
@@ -44,7 +43,9 @@ API에 원격 접속하는 방법은 [Controlling API Access doc](/docs/referenc
Accept | `application/json`, `application/com.github.proto-openapi.spec.v2@v1.0+protobuf` (기본 content-type은 `*/*`에 대해 `application/json`이거나 이 헤더를 전달하지 않음)
Accept-Encoding | `gzip` (이 헤더를 전달하지 않아도 됨)
-**OpenAPI 규격을 조회하는 예제**:
+1.14 이전 버전에서 형식이 구분된 엔드포인트(`/swagger.json`, `/swagger-2.0.0.json`, `/swagger-2.0.0.pb-v1`, `/swagger-2.0.0.pb-v1.gz`)는 OpenAPI 스펙을 다른 포맷으로 제공한다. 이러한 엔드포인트는 사용 중단되었으며, 쿠버네티스 1.14에서 제거될 예정이다.
+
+**OpenAPI 규격을 조회하는 예제**
1.10 이전 | 쿠버네티스 1.10 이상
----------- | -----------------------------
@@ -52,9 +53,12 @@ GET /swagger.json | GET /openapi/v2 **Accept**: application/json
GET /swagger-2.0.0.pb-v1 | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf
GET /swagger-2.0.0.pb-v1.gz | GET /openapi/v2 **Accept**: application/com.github.proto-openapi.spec.v2@v1.0+protobuf **Accept-Encoding**: gzip
-
쿠버네티스는 주로 클러스터 내부 통신용 API를 위해 대안적인 Protobuf에 기반한 직렬화 형식을 구현한다. 해당 API는 [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) 문서와 IDL 파일에 문서화되어 있고 각각의 스키마를 담고 있는 IDL 파일은 API 오브젝트를 정의하는 Go 패키지에 들어있다.
+1.14 이전 버전에서 쿠버네티스 apiserver는 `/swaggerapi`에서 [Swagger v1.2](http://swagger.io/)
+쿠버네티스 API 스펙을 검색하는데 사용할 수 있는 API도 제공한다.
+이러한 엔드포인트는 사용 중단되었으며, 쿠버네티스 1.14에서 제거될 예정이다.
+
## API 버전 규칙
필드를 없애거나 리소스 표현을 재구성하기 쉽도록, 쿠버네티스는 `/api/v1`이나
diff --git a/content/ko/docs/concepts/overview/what-is-kubernetes.md b/content/ko/docs/concepts/overview/what-is-kubernetes.md
index 39afd4c517be6..e507e8540e0f2 100644
--- a/content/ko/docs/concepts/overview/what-is-kubernetes.md
+++ b/content/ko/docs/concepts/overview/what-is-kubernetes.md
@@ -51,7 +51,7 @@ Infrastructure as a Service (IaaS)의 유연함을 더해 주며, 인프라스
추가로, [쿠버네티스 컨트롤 플레인](/docs/concepts/overview/components/)은
개발자와 사용자가 공통으로 사용할 수 있는 [API](/docs/reference/using-api/api-overview/)를
-기반으로 하고 있다. 사용자는 범용의 [명령줄 도구]((/docs/user-guide/kubectl-overview/))를
+기반으로 하고 있다. 사용자는 범용의 [커맨드라인 툴](/docs/user-guide/kubectl-overview/)을
대상으로 하는 [자체 API](/docs/concepts/api-extension/custom-resources/)를 가진
[스케줄러](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md)와
같은 사용자만의 컨트롤러를 작성할 수 있다.
diff --git a/content/ko/docs/concepts/workloads/pods/podpreset.md b/content/ko/docs/concepts/workloads/pods/podpreset.md
index 71436db4933e7..8f2134a686eda 100644
--- a/content/ko/docs/concepts/workloads/pods/podpreset.md
+++ b/content/ko/docs/concepts/workloads/pods/podpreset.md
@@ -64,11 +64,14 @@ weight: 50
클러스터에서 파드 프리셋을 사용하기 위해서는 다음 사항이 반드시 이행되어야 한다.
-1. API 타입 `settings.k8s.io/v1alpha1/podpreset`을 활성화하였다. 예를
- 들면, 이것은 API 서버의 `--runtime-config` 옵션에 `settings.k8s.io/v1alpha1=true`을
- 포함하여 완료할 수 있다.
+1. API 타입 `settings.k8s.io/v1alpha1/podpreset`을 활성화하였다.
+ 예를 들면, 이것은 API 서버의 `--runtime-config` 옵션에 `settings.k8s.io/v1alpha1=true`을 포함하여 완료할 수 있다.
+ minikube에서는 클러스터가 시작할 때 `--extra-config=apiserver.runtime-config=settings.k8s.io/v1alpha1=true`
+ 플래그를 추가한다.
1. 어드미션 컨트롤러 `PodPreset`을 활성화하였다. 이것을 이루는 방법 중 하나는
API 서버를 위해서 명시된 `--enable-admission-plugins` 옵션에 `PodPreset`을 포함하는 것이다.
+ minikube에서는 클러스터가 시작할 때 `--extra-config=apiserver.enable-admission-plugins=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodPreset`
+ 플래그를 추가한다.
1. 사용할 네임스페이스 안에서 `PodPreset` 오브젝트를 생성하여
파드 프리셋을 정의하였다.
diff --git a/content/ko/docs/contribute/_index.md b/content/ko/docs/contribute/_index.md
new file mode 100644
index 0000000000000..1059c7bed1244
--- /dev/null
+++ b/content/ko/docs/contribute/_index.md
@@ -0,0 +1,73 @@
+---
+content_template: templates/concept
+title: 쿠버네티스 문서에 기여하기
+linktitle: 기여
+main_menu: true
+weight: 80
+---
+
+{{% capture overview %}}
+
+If you would like to help contribute to the Kubernetes documentation or website,
+we're happy to have your help! Anyone can contribute, whether you're new to the
+project or you've been around a long time, and whether you self-identify as a
+developer, an end user, or someone who just can't stand seeing typos.
+
+For more ways to get involved in the Kubernetes community or to learn about us, visit the [Kubernetes community site](/community/).
+For information on the Kubernetes documentation style guide, see the [style guide](/docs/contribute/style/style-guide/).
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## Types of contributors
+
+- A _member_ of the Kubernetes organization who has [signed the CLA](/docs/contribute/start#sign-the-cla)
+ and contributed some time and effort to the project. See
+ [Community membership](https://github.com/kubernetes/community/blob/master/community-membership.md)
+ for specific criteria for membership.
+- A SIG Docs _reviewer_ is a member of the Kubernetes organization who has
+ expressed interest in reviewing documentation pull requests and who has been
+ added to the appropriate Github group and `OWNERS` files in the Github
+ repository, by a SIG Docs Approver.
+- A SIG Docs _approver_ is a member in good standing who has shown a continued
+ commitment to the project. An approver can merge pull requests
+ and publish content on behalf of the Kubernetes organization.
+ Approvers can also represent SIG Docs in the larger Kubernetes community.
+ Some of the duties of a SIG Docs approver, such as coordinating a release,
+ require a significant time commitment.
+
+## Ways to contribute
+
+This list is divided into things anyone can do, things Kubernetes organization
+members can do, and things that require a higher level of access and familiarity
+with SIG Docs processes. Contributing consistently over time can help you
+understand some of the tooling and organizational decisions that have already
+been made.
+
+This is not an exhaustive list of ways you can contribute to the Kubernetes
+documentation, but it should help you get started.
+
+- [Anyone](/docs/contribute/start/)
+ - File actionable bugs
+- [Member](/docs/contribute/start/)
+ - Improve existing docs
+ - Bring up ideas for improvement on [Slack](http://slack.k8s.io/) or the [SIG docs mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)
+ - Improve docs accessibility
+ - Provide non-binding feedback on PRs
+ - Write a blog post or case study
+- [Reviewer](/docs/contribute/intermediate/)
+ - Document new features
+ - Triage and categorize issues
+ - Review PRs
+ - Create diagrams, graphics assets, and embeddable screencasts / videos
+ - Localization
+ - Contribute to other repos as a docs representative
+ - Edit user-facing strings in code
+ - Improve code comments, Godoc
+- [Approver](/docs/contribute/advanced/)
+ - Publish contributor content by approving and merging PRs
+ - Participate in a Kubernetes release team as a docs representative
+ - Propose improvements to the style guide
+ - Propose improvements to docs tests
+ - Propose improvements to the Kubernetes website or other tooling
+
+{{% /capture %}}
diff --git a/content/ko/docs/contribute/localization_ko.md b/content/ko/docs/contribute/localization_ko.md
new file mode 100644
index 0000000000000..b4e64f0fce616
--- /dev/null
+++ b/content/ko/docs/contribute/localization_ko.md
@@ -0,0 +1,96 @@
+---
+title: 쿠버네티스 문서 한글화 가이드
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+쿠버네티스 문서 한글화를 위한 가이드
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## 문체
+
+### 높임말
+
+평어체 사용을 원칙으로 하나, 일부 페이지(예: [https://kubernetes.io/ko](https://kubernetes.io/ko))에 한해 예외적으로
+높임말을 사용한다.
+
+### 번역체 지양
+
+우리글로서 자연스럽고 무리가 없도록 번역한다.
+
+번역체 | 자연스러운 문장
+--- | ---
+-되어지다 (이중 피동 표현) | -되다
+짧은 다리를 가진 돼지 | 다리가 짧은 돼지
+그는 그의 손으로 숟가락을 들어 그의 밥을 먹었다 | 그는 손으로 숟가락을 들어 밥을 먹었다
+접시를 씻고 | 설거지를 하고
+가게에 배들, 사과들, 복숭아들이 있다 (과다한 복수형) | 가게에 배, 사과, 복숭아들이 있다
+
+## 문서 코딩 가이드
+
+### 가로폭은 원문을 따름
+
+유지보수의 편의를 위해서 원문의 가로폭을 따른다.
+
+즉, 원문이 한 문단을 줄바꿈하지 않고 한 행에 길게 기술했다면 한글화 시에도 한 행에 길게 기술하고, 원문이 한 문단을
+줄바꿈해서 여러 행으로 기술한 경우에는 한글화 시에도 가로폭을 원문과 비슷하게 유지한다.
+
+### 리뷰어 삭제
+
+일반적으로 원문 페이지의 리뷰어가 한글 페이지를 리뷰하기 어려우므로 다음과 같이 리소스 메타데이터에서 리뷰어를
+삭제한다.
+
+```diff
+---
+- reviewers:
+- - reviewer1
+- - reviewer2
+- title: Kubernetes Components
++ title: 쿠버네티스 컴포넌트
+content_template: templates/concept
+weight: 10
+---
+```
+
+## 용어
+
+용어 선택은 다음의 우선 순위를 따르나, 자연스럽지 않은 용어를 무리하게 선택하지는 않는다.
+
+
+* 한글 단어
+ * 순 우리말 단어
+ * 한자어 (예: 운영 체제), 외래어 (예: 쿠버네티스, 파드)
+* 한영 병기 (예: 훅(hook))
+* 영어 단어 (예: Kubelet)
+
+단, 자연스러움을 판단하는 기준은 주관적이므로 문서 한글화팀이 공동으로 관리하는
+[한글화팀 용어집](https://goo.gl/BDNeJ1)과 기존에 번역된 문서를 참고한다.
+
+{{% note %}}
+API 오브젝트는 원 단어를
+[국립국어원 외래어 표기법](http://www.korean.go.kr/front/page/pageView.do?page_id=P000104&mn_id=97)에
+따라 한글화 한다. 예를 들면 다음과 같다.
+
+원 단어 | 외래어
+--- | ---
+Deployment | 디플로이먼트
+Pod | 파드
+Service | 서비스
+{{% /note %}}
+
+{{% note %}}
+API 오브젝트의 필드 이름, 파일 이름, 경로 이름과 같은 내용은 주로 코드 스타일로 기술된다. 이는 독자가 구성 파일이나
+커맨드라인에서 그대로 사용할 가능성이 높으므로 한글로 옮기지 않고 원문을 유지한다. 단, 주석은 한글로 옮길 수 있다.
+{{% /note %}}
+
+{{% note %}}
+한영 병기는 페이지 내에서 해당 용어가 처음 사용되는 경우에만 적용하고 이후 부터는 한글만 표기한다.
+{{% /note %}}
+
+{{% /capture %}}
+
diff --git a/content/ko/docs/home/_index.md b/content/ko/docs/home/_index.md
index 2725cd32cb3fd..7954f0b05828d 100644
--- a/content/ko/docs/home/_index.md
+++ b/content/ko/docs/home/_index.md
@@ -1,11 +1,8 @@
---
title: 쿠버네티스 문서
+noedit: true
+cid: docsHome
layout: docsportal_home
-noedit: true
-cid: userJourneys
-css: /css/style_user_journeys.css
-js: /js/user-journeys/home.js, https://use.fontawesome.com/4bcc658a89.js
-display_browse_numbers: true
linkTitle: "홈"
main_menu: true
weight: 10
@@ -15,5 +12,5 @@ menu:
title: "문서"
weight: 20
post: >
-
Learn how to use Kubernetes with the use of walkthroughs, samples, and reference documentation. You can even help contribute to the docs!
+
Learn how to use Kubernetes with conceptual, tutorial, and reference documentation. You can even help contribute to the docs!
---
diff --git a/content/ko/docs/setup/certificates.md b/content/ko/docs/setup/certificates.md
new file mode 100644
index 0000000000000..8dd327a0b9eb1
--- /dev/null
+++ b/content/ko/docs/setup/certificates.md
@@ -0,0 +1,142 @@
+---
+title: PKI 인증서 및 요구 조건
+content_template: templates/concept
+---
+
+{{% capture overview %}}
+
+쿠버네티스는 TLS 위에 인증을 위해 PKI 인증서가 필요하다.
+만약 [kubeadm](/docs/reference/setup-tools/kubeadm/kubeadm/)으로 쿠버네티스를 설치했다면, 클러스터에 필요한 인증서는 자동으로 생성된다.
+또한 더 안전하게 자신이 소유한 인증서를 생성할 수 있다. 이를 테면, 개인키를 API 서버에 저장하지 않으므로 더 안전하게 보관할 수 있다.
+이 페이지는 클러스터에 필요한 인증서를 설명한다.
+
+{{% /capture %}}
+
+{{% capture body %}}
+
+## 클러스터에서 인증서는 어떻게 이용되나?
+
+쿠버네티스는 다음 작업에서 PKI가 필요하다.
+
+* kubelet이 API 서버에 인증하기 위해 사용하는 클라이언트 인증서
+* API 서버 엔드포인트를 위한 서버 인증서
+* API 서버에 클러스터 관리자 인증을 위한 클라이언트 인증서
+* API 서버에서 kubelet과 통신을 위한 클라이언트 인증서
+* API 서버에서 etcd 간의 통신을 위한 클라이언트 인증서
+* 컨트롤러 매니저와 API 서버 간의 통신을 위한 클라이언트 인증서/kubeconfig
+* 스케줄러와 API 서버간 통신을 위한 클라이언트 인증서/kubeconfig
+* [front-proxy][proxy]를 위한 클라이언트와 서버 인증서
+
+{{< note >}}
+`front-proxy` 인증서는 [API 서버 확장](/docs/tasks/access-kubernetes-api/setup-extension-api-server/)을 지원하기 위해 kube-proxy를 운영하는 경우에만 필요하다.
+{{< /note >}}
+
+etcd 역시 클라이언트와 피어 간에 상호 TLS 인증을 구현한다.
+
+## 인증서를 저장하는 위치
+
+만약 쿠버네티스를 kubeadm으로 설치했다면 인증서는 `/etc/kubernetes/pki`에 저장된다. 이 문서에 언급된 모든 파일 경로는 그 디렉터리에 상대적이다.
+
+## 인증서 수동 설정
+
+필요한 인증서를 kubeadm으로 생성하기 싫다면 다음 방법 중 하나로 생성할 수 있다.
+
+### 단일 루트 CA
+
+관리자에 의해 제어되는 단일 루트 CA를 만들 수 있다. 이 루트 CA는 여러 중간 CA를 생성할 수 있고, 모든 추가 생성에 관해서도 쿠버네티스 자체에 위임할 수 있다.
+
+필요 CA:
+
+| 경로 | 기본 CN | 설명 |
+|------------------------|---------------------------|----------------------------------|
+| ca.crt,key | kubernetes-ca | 쿠버네티스 일반 CA |
+| etcd/ca.crt,key | etcd-ca | 모든 etcd 관련 기능을 위해서 |
+| front-proxy-ca.crt,key | kubernetes-front-proxy-ca | [front-end proxy][proxy] 위해서 |
+
+### 모든 인증서
+
+이런 개인키를 API 서버에 복사하기 원치 않는다면, 모든 인증서를 스스로 생성할 수 있다.
+
+필요한 인증서:
+
+| 기본 CN | 부모 CA | O (주체에서) | 종류 | 호스트 (SAN) |
+|-------------------------------|---------------------------|----------------|----------------------------------------|---------------------------------------------|
+| kube-etcd | etcd-ca | | server, client [1][etcdbug] | `localhost`, `127.0.0.1` |
+| kube-etcd-peer | etcd-ca | | server, client | `<hostname>`, `<Host_IP>`, `localhost`, `127.0.0.1` |
+| kube-etcd-healthcheck-client | etcd-ca | | client | |
+| kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
+| kube-apiserver | kubernetes-ca | | server | `<hostname>`, `<Host_IP>`, `<advertise_IP>`, `[1]` |
+| kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
+| front-proxy-client | kubernetes-front-proxy-ca | | client | |
+
+[1]: `kubernetes`, `kubernetes.default`, `kubernetes.default.svc`, `kubernetes.default.svc.cluster`, `kubernetes.default.svc.cluster.local`
+
+`kind`는 하나 이상의 [x509 키 사용][usage] 종류를 가진다.
+
+| 종류 | 키 사용 |
+|--------|---------------------------------------------------------------------------------|
+| server | digital signature, key encipherment, server auth |
+| client | digital signature, key encipherment, client auth |
+
+### 인증서 파일 경로
+
+인증서는 권고하는 파일 경로에 존재해야 한다([kubeadm][kubeadm]에서 사용되는 것처럼). 경로는 위치에 관계없이 주어진 파라미터를 사용하여 지정되어야 한다.
+
+| 기본 CN | 권고하는 키 파일 경로 | 권고하는 인증서 파일 경로 | 명령어 | 키 파라미터 | 인증서 파라미터 |
+|------------------------------|------------------------------|-----------------------------|----------------|------------------------------|-------------------------------------------|
+| etcd-ca | | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
+| etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
+| kubernetes-ca | | ca.crt | kube-apiserver | | --client-ca-file |
+| kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
+| apiserver-kubelet-client | | apiserver-kubelet-client.crt| kube-apiserver | | --kubelet-client-certificate |
+| front-proxy-ca | | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
+| front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
+| | | | | | |
+| etcd-ca | | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
+| kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
+| kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
+| etcd-ca | | etcd/ca.crt | etcdctl[2] | | --cacert |
+| kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl[2] | --key | --cert |
+
+[2]: 셀프 호스팅시, 생존신호(liveness probe)를 위해
+
+## 각 사용자 계정을 위한 인증서 설정하기
+
+반드시 이런 관리자 계정과 서비스 계정을 설정해야 한다.
+
+| 파일명 | 자격증명 이름 | 기본 CN | O (주체에서) |
+|-------------------------|----------------------------|--------------------------------|----------------|
+| admin.conf | default-admin | kubernetes-admin | system:masters |
+| kubelet.conf | default-auth | system:node:`<nodeName>` (note를 보자) | system:nodes |
+| controller-manager.conf | default-controller-manager | system:kube-controller-manager | |
+| scheduler.conf | default-manager | system:kube-scheduler | |
+
+{{< note >}}
+`kubelet.conf`을 위한 `<nodeName>` 값은 API 서버에 등록된 것처럼 kubelet에 제공되는 노드 이름 값과 **반드시** 정확히 일치해야 한다. 더 자세한 내용은 [노드 인증](/docs/reference/access-authn-authz/node/)을 살펴보자.
+{{< /note >}}
+
+1. 각 환경 설정에 대해 주어진 CN과 O를 이용하여 x509 인증서와 키쌍을 생성한다.
+
+1. 각 환경 설정에 대해 다음과 같이 `kubectl`를 실행한다.
+
+```shell
+KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
+KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
+KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
+KUBECONFIG=<filename> kubectl config use-context default-system
+```
+
+이 파일들은 다음과 같이 사용된다.
+
+| 파일명 | 명령어 | 설명 |
+|-------------------------|-------------------------|-----------------------------------------------------------------------|
+| admin.conf | kubectl | 클러스터 관리자를 설정한다. |
+| kubelet.conf | kubelet | 클러스터 각 노드를 위해 필요하다. |
+| controller-manager.conf | kube-controller-manager | 반드시 매니페스트를 `manifests/kube-controller-manager.yaml`에 추가해야한다. |
+| scheduler.conf | kube-scheduler | 반드시 매니페스트를 `manifests/kube-scheduler.yaml`에 추가해야한다. |
+
+[usage]: https://godoc.org/k8s.io/api/certificates/v1beta1#KeyUsage
+[kubeadm]: /docs/reference/setup-tools/kubeadm/kubeadm/
+[proxy]: /docs/tasks/access-kubernetes-api/configure-aggregation-layer/
+
+{{% /capture %}}
diff --git a/content/ko/docs/setup/cri.md b/content/ko/docs/setup/cri.md
index 0fc2f41b04e15..e8f7fa7677af9 100644
--- a/content/ko/docs/setup/cri.md
+++ b/content/ko/docs/setup/cri.md
@@ -218,8 +218,8 @@ tar --no-overwrite-dir -C / -xzf cri-containerd-${CONTAINERD_VERSION}.linux-amd6
systemctl start containerd
```
-## 다른 CRI 런타임: rktlet과 frakti
+## 다른 CRI 런타임: frakti
-자세한 정보는 [Frakti 빠른 시작 가이드](https://github.com/kubernetes/frakti#quickstart) 및 [Rktlet 시작하기 가이드](https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md)를 참고한다.
+자세한 정보는 [Frakti 빠른 시작 가이드](https://github.com/kubernetes/frakti#quickstart)를 참고한다.
{{% /capture %}}
diff --git a/content/ko/docs/setup/minikube.md b/content/ko/docs/setup/minikube.md
index ca501a7f2cf00..d443d9959dd66 100644
--- a/content/ko/docs/setup/minikube.md
+++ b/content/ko/docs/setup/minikube.md
@@ -38,10 +38,9 @@ VM 드라이버를 바꾸기 원하면 적절한 `--vm-driver=xxx` 플래그를
* kvm ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#kvm-driver))
* hyperkit ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#hyperkit-driver))
* xhyve ([driver installation](https://git.k8s.io/minikube/docs/drivers.md#xhyve-driver)) (deprecated)
-
+* hyperv ([driver installation](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperv-driver))
아래 나오는 IP 주소는 동적이므로 바뀔 수 있다. `minikube ip`를 하면 알 수 있다.
-
```shell
$ minikube start
Starting local Kubernetes cluster...
diff --git a/content/ko/docs/setup/pick-right-solution.md b/content/ko/docs/setup/pick-right-solution.md
index 2484badf986d3..6a7aa75363c44 100644
--- a/content/ko/docs/setup/pick-right-solution.md
+++ b/content/ko/docs/setup/pick-right-solution.md
@@ -29,9 +29,14 @@ content_template: templates/concept
* [Minikube](/docs/setup/minikube/)는 개발과 테스트를 위한 단일 노드 쿠버네티스 클러스터를 로컬에 생성하기 위한 하나의 방법이다. 설치는 완전히 자동화 되어 있고, 클라우드 공급자 계정 정보가 필요하지 않다.
+* [Docker Desktop](https://www.docker.com/products/docker-desktop)는
+Mac 또는 Windows 환경에서 쉽게 설치 가능한 애플리케이션이다.
+수 분 내에 단일 노드 쿠버네티스 클러스터에서 컨테이너로 코딩과 배포를
+시작할 수 있게 해준다.
+
* [Minishift](https://docs.okd.io/latest/minishift/)는 커뮤니티 버전의 쿠버네티스 엔터프라이즈 플랫폼 OpenShift를 로컬 개발과 테스트 용으로 설치한다. Windows, macOS와 리눅스를 위한 All-In-One VM (`minishift start`)과 컨테이너 기반의 `oc cluster up` (리눅스 전용)을 지원하고 [쉬운 설치가 가능한 몇 가지 애드온도 포함](https://github.com/minishift/minishift-addons/tree/master/add-ons)한다.
-* [microk8s](https://microk8s.io/)는 개발과 테스트를 위한 쿠버네티스 최신 버전을 단일 명령어로 로컬 머신 상의 설치를 제공한다. 설치는 신속하고 빠르며(~30초) 단일 명령어로 Istio를 포함한 많은 플러그인을 지원한다.
+* [Microk8s](https://microk8s.io/)는 개발과 테스트를 위한 쿠버네티스 최신 버전을 단일 명령어로 로컬 머신 상의 설치를 제공한다. 설치는 신속하고 빠르며(~30초) 단일 명령어로 Istio를 포함한 많은 플러그인을 지원한다.
* [IBM Cloud Private-CE (Community Edition)](https://github.com/IBM/deploy-ibm-cloud-private)는 개발과 테스트 시나리오를 위해 1개 또는 더 많은 VM에 쿠버네티스를 배포하기 위해서 머신의 VirtualBox를 사용할 수 있다. 이는 전체 멀티 노드 클러스터로 확장할 수 있다.
@@ -59,7 +64,7 @@ content_template: templates/concept
* [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)은 관리형 쿠버네티스 클러스터를 제공한다.
-* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html)는 관리형 쿠버네티스 클러스터를 제공한다. 그와 함께 격리 종류, 운영 도구, 이미지와 컨테이너 통합된 보안 통찰력, Watson, IoT, 데이터와의 통합도 제공한다.
+* [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index)는 관리형 쿠버네티스 클러스터를 제공한다. 그와 함께 격리 종류, 운영 도구, 이미지와 컨테이너 통합된 보안 통찰력, Watson, IoT, 데이터와의 통합도 제공한다.
* [Kubermatic](https://www.loodse.com)는 AWS와 Digital Ocean을 포함한 다양한 퍼블릭 클라우드뿐만 아니라 온-프레미스 상의 OpenStack 통합을 위한 관리형 쿠버네티스 클러스터를 제공한다.
@@ -67,6 +72,8 @@ content_template: templates/concept
* [Madcore.Ai](https://madcore.ai)는 AWS에서 쿠버네티스 인프라를 배포하기 위한 devops 중심의 CLI 도구이다. 또한 마스터, 스팟 인스턴스 그룹 노드 오토-스케일링, ingress-ssl-lego, Heapster, Grafana를 지원한다.
+* [Nutanix Karbon](https://www.nutanix.com/products/karbon/)는 다중 클러스터의 고 가용성 쿠버네티스 관리 운영 플랫폼으로, 쿠버네티스의 프로비저닝, 운영 및 라이프 사이클 관리를 단순화해준다.
+
* [OpenShift Dedicated](https://www.openshift.com/dedicated/)는 OpenShift의 관리형 쿠버네티스 클러스터를 제공한다.
* [OpenShift Online](https://www.openshift.com/features/)은 쿠버네티스 애플리케이션을 위해 호스트 된 무료 접근을 제공한다.
@@ -93,7 +100,9 @@ content_template: templates/concept
* [CenturyLink Cloud](/docs/setup/turnkey/clc/)
* [Conjure-up Kubernetes with Ubuntu on AWS, Azure, Google Cloud, Oracle Cloud](/docs/getting-started-guides/ubuntu/)
* [Containership](https://containership.io/containership-platform)
+* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
* [Gardener](https://gardener.cloud/)
+* [Giant Swarm](https://giantswarm.io)
* [Google Compute Engine (GCE)](/docs/setup/turnkey/gce/)
* [IBM Cloud](https://github.com/patrocinio/kubernetes-softlayer)
* [Kontena Pharos](https://kontena.io/pharos/)
@@ -101,12 +110,13 @@ content_template: templates/concept
* [Kublr](https://kublr.com/)
* [Madcore.Ai](https://madcore.ai/)
* [Nirmata](https://nirmata.com/)
+* [Nutanix Karbon](https://www.nutanix.com/products/karbon/)
* [Oracle Container Engine for K8s](https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm)
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
-* [Giant Swarm](https://giantswarm.io)
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
* [Stackpoint.io](/docs/setup/turnkey/stackpoint/)
* [Tectonic by CoreOS](https://coreos.com/tectonic)
+* [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks)
## 온-프레미스 턴키 클라우드 솔루션
@@ -114,15 +124,17 @@ content_template: templates/concept
* [Agile Stacks](https://www.agilestacks.com/products/kubernetes)
* [APPUiO](https://appuio.ch)
+* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
+* [Giant Swarm](https://giantswarm.io)
* [GKE On-Prem | Google Cloud](https://cloud.google.com/gke-on-prem/)
* [IBM Cloud Private](https://www.ibm.com/cloud-computing/products/ibm-cloud-private/)
* [Kontena Pharos](https://kontena.io/pharos/)
* [Kubermatic](https://www.loodse.com)
* [Kublr](https://kublr.com/)
+* [Mirantis Cloud Platform](https://www.mirantis.com/software/kubernetes/)
* [Nirmata](https://nirmata.com/)
* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) by [Red Hat](https://www.redhat.com)
* [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service)
-* [Giant Swarm](https://giantswarm.io)
* [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/)
* [SUSE CaaS Platform](https://www.suse.com/products/caas-platform)
* [SUSE Cloud Application Platform](https://www.suse.com/products/cloud-application-platform/)
@@ -145,6 +157,7 @@ content_template: templates/concept
다음 솔루션은 위의 솔루션에서 다루지 않는 클라우드 공급자와 운영체제의 조합이다.
+* [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/)
* [CoreOS on AWS or GCE](/docs/setup/custom-cloud/coreos/)
* [Gardener](https://gardener.cloud/)
* [Kublr](https://kublr.com/)
@@ -154,8 +167,10 @@ content_template: templates/concept
### 온-프레미스 VM
+* [Cloud Foundry Container Runtime (CFCR)](https://docs-cfcr.cfapps.io/)
* [CloudStack](/docs/setup/on-premises-vm/cloudstack/) (Ansible, CoreOS와 flannel를 사용)
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) (Fedora와 flannel를 사용)
+* [Nutanix AHV](https://www.nutanix.com/products/acropolis/virtualization/)
* [OpenShift Container Platform](https://www.openshift.com/products/container-platform/) (OCP) Kubernetes platform by [Red Hat](https://www.redhat.com)
* [oVirt](/docs/setup/on-premises-vm/ovirt/)
* [Vagrant](/docs/setup/custom-cloud/coreos/) (CoreOS와 flannel를 사용)
@@ -167,6 +182,7 @@ content_template: templates/concept
* [CoreOS](/docs/setup/custom-cloud/coreos/)
* [Digital Rebar](/docs/setup/on-premises-metal/krib/)
+* [Docker Enterprise](https://www.docker.com/products/docker-enterprise)
* [Fedora (Single Node)](/docs/getting-started-guides/fedora/fedora_manual_config/)
* [Fedora (Multi Node)](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/)
* [Kubernetes on Ubuntu](/docs/getting-started-guides/ubuntu/)
@@ -188,6 +204,7 @@ IaaS 공급자 | 구성 관리 | OS | 네트워킹 | 문서
-------------------- | ------------ | ------ | ---------- | --------------------------------------------- | ----------------------------
any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle))
Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial
+Docker Enterprise | custom | [multi-support](https://success.docker.com/article/compatibility-matrix) | [multi-support](https://docs.docker.com/ee/ucp/kubernetes/install-cni-plugin/) | [docs](https://docs.docker.com/ee/) | Commercial
Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | [multi-support](https://docs.openshift.com/container-platform/3.11/architecture/networking/network_plugins.html) | [docs](https://docs.openshift.com/container-platform/3.11/welcome/index.html) | Commercial
Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial
AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial
@@ -195,7 +212,7 @@ Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madc
Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial
Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial
Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial
-IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://console.bluemix.net/docs/containers/) | Commercial
+IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial
Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial
GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project
Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial
@@ -229,9 +246,10 @@ any | RKE | multi-support | flannel or canal
any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/)
Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial
Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial
-IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial
+IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial
Digital Rebar | kubeadm | any | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar))
VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial
+Mirantis Cloud Platform | Salt | Ubuntu | multi-support | [docs](https://docs.mirantis.com/mcp/) | Commercial
{{< note >}}
위의 표는 버전 테스트/사용된 노드의 지원 레벨을 기준으로 정렬된다.
diff --git a/content/ko/docs/tasks/_index.md b/content/ko/docs/tasks/_index.md
index 1dc8e65b5bdef..4289d0544dbe4 100644
--- a/content/ko/docs/tasks/_index.md
+++ b/content/ko/docs/tasks/_index.md
@@ -21,9 +21,9 @@ content_template: templates/concept
쿠버네티스 클러스터에서 컨테이너화 된 애플리케이션을 관리 및 모니터하는 것을 돕기 위해서 대시보드 웹 유저 인터페이스를 디플로이하고 접속한다.
-## kubectl 명령줄 사용하기
+## kubectl 커맨드라인 사용하기
-쿠버네티스 클러스터를 직접 관리하기 위해서 사용되는 `kubectl` 명령줄 도구를 설치 및 설정한다.
+쿠버네티스 클러스터를 직접 관리하기 위해서 사용되는 `kubectl` 커맨드라인 툴을 설치 및 설정한다.
## 파드 및 컨테이너 구성하기
diff --git a/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md
new file mode 100644
index 0000000000000..a587db5ec199b
--- /dev/null
+++ b/content/ko/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -0,0 +1,175 @@
+---
+title: Horizontal Pod Autoscaler
+feature:
+ title: Horizontal 스케일링
+ description: >
+ 간단한 명령어, UI를 통해 또는 CPU 사용량에 따라 자동으로 어플리케이션을 스케일 업과 다운을 한다.
+
+content_template: templates/concept
+weight: 90
+---
+
+{{% capture overview %}}
+
+
+Horizontal Pod Autoscaler는 CPU 사용량(또는 [사용자 정의 메트릭](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md), 아니면 다른 애플리케이션 지원 메트릭)을 관찰하여 레플리케이션 컨트롤러, 디플로이먼트 또는 레플리카 셋의 파드 개수를 자동으로 스케일한다. Horizontal Pod Autoscaler는 크기를 조정할 수 없는 오브젝트(예: 데몬 셋)에는 적용되지 않는다.
+
+Horizontal Pod Autoscaler는 쿠버네티스 API 리소스 및 컨트롤러로 구현된다.
+리소스는 컨트롤러의 동작을 결정한다.
+컨트롤러는 관찰된 평균 CPU 사용률이 사용자가 지정한 대상과 일치하도록 레플리케이션 컨트롤러 또는 디플로이먼트에서 레플리카 개수를 주기적으로 조정한다.
+
+
+{{% /capture %}}
+
+
+{{% capture body %}}
+
+## Horizontal Pod Autoscaler는 어떻게 작동하는가?
+
+![Horizontal Pod Autoscaler 다이어그램](/images/docs/horizontal-pod-autoscaler.svg)
+
+
+Horizontal Pod Autoscaler는 컨트롤러 관리자의 `--horizontal-pod-autoscaler-sync-period` 플래그(기본값은 30 초)에 의해 제어되는 주기를 가진 컨트롤 루프로 구현된다.
+
+각 주기 동안 컨트롤러 관리자는 각 HorizontalPodAutoscaler 정의에 지정된 메트릭에 대해 리소스 사용률을 질의한다. 컨트롤러 관리자는 리소스 메트릭 API(파드 단위 리소스 메트릭 용) 또는 사용자 지정 메트릭 API(다른 모든 메트릭 용)에서 메트릭을 가져온다.
+
+
+* 파드 단위 리소스 메트릭(예 : CPU)의 경우 컨트롤러는 HorizontalPodAutoscaler가 대상으로하는 각 파드에 대한 리소스 메트릭 API에서 메트릭을 가져온다. 그런 다음, 목표 사용률 값이 설정되면, 컨트롤러는 각 파드의 컨테이너에 대한 동등한 자원 요청을 퍼센트 단위로 하여 사용률 값을 계산한다. 대상 원시 값이 설정된 경우 원시 메트릭 값이 직접 사용된다. 그리고, 컨트롤러는 모든 대상 파드에서 사용된 사용률의 평균 또는 원시 값(지정된 대상 유형에 따라 다름)을 가져와서 원하는 레플리카의 개수를 스케일하는데 사용되는 비율을 생성한다.
+
+파드의 컨테이너 중 일부에 적절한 리소스 요청이 설정되지 않은 경우, 파드의 CPU 사용률은 정의되지 않으며, 따라서 오토스케일러는 해당 메트릭에 대해 아무런 조치도 취하지 않는다. 오토스케일링 알고리즘의 작동 방식에 대한 자세한 내용은 아래 [알고리즘 세부 정보](#알고리즘-세부-정보) 섹션을 참조하기 바란다.
+
+* 파드 단위 사용자 정의 메트릭의 경우, 컨트롤러는 사용률 값이 아닌 원시 값을 사용한다는 점을 제외하고는 파드 단위 리소스 메트릭과 유사하게 작동한다.
+
+* 오브젝트 메트릭 및 외부 메트릭의 경우, 문제의 오브젝트를 표현하는 단일 메트릭을 가져온다. 이 메트릭은 목표 값과 비교되어 위와 같은 비율을 생성한다. `autoscaling/v2beta2` API 버전에서는, 비교가 이루어지기 전에 해당 값을 파드의 개수로 선택적으로 나눌 수 있다.
+
+HorizontalPodAutoscaler는 보통 일련의 API 집합(`metrics.k8s.io`, `custom.metrics.k8s.io`, `external.metrics.k8s.io`)에서 메트릭을 가져온다.
+`metrics.k8s.io` API는 대개 별도로 시작해야 하는 메트릭-서버에 의해 제공된다. 가이드는 [메트릭-서버](https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/#metrics-server)를 참조한다. HorizontalPodAutoscaler는 힙스터(Heapster)에서 직접 메트릭을 가져올 수도 있다.
+
+{{< note >}}
+{{< feature-state state="deprecated" for_k8s_version="1.11" >}}
+힙스터에서 메트릭 가져오기는 쿠버네티스 1.11에서 사용 중단(deprecated)됨.
+{{< /note >}}
+
+자세한 사항은 [메트릭 API를 위한 지원](#메트릭-API를-위한-지원)을 참조하라.
+
+오토스케일러는 스케일 하위 리소스를 사용하여 상응하는 확장 가능 컨트롤러(예: 레플리케이션 컨트롤러, 디플로이먼트, 레플리카 셋)에 접근한다.
+스케일은 레플리카의 개수를 동적으로 설정하고 각 현재 상태를 검사 할 수 있게 해주는 인터페이스이다.
+하위 리소스 스케일에 대한 자세한 내용은 [여기](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#scale-subresource)에서 확인할 수 있다.
+
+### 알고리즘 세부 정보
+
+가장 기본적인 관점에서, Horizontal Pod Autoscaler 컨트롤러는 원하는(desired) 메트릭 값과 현재(current) 메트릭 값 사이의 비율로 작동한다.
+
+```
+원하는 레플리카 수 = ceil[현재 레플리카 수 * ( 현재 메트릭 값 / 원하는 메트릭 값 )]
+```
+
+예를 들어 현재 메트릭 값이 `200m`이고 원하는 값이 `100m`인 경우 `200.0 / 100.0 == 2.0`이므로 레플리카 수가 두 배가 된다.
+만약 현재 값이 `50m`이면, `50.0 / 100.0 == 0.5`이므로 레플리카 수를 반으로 줄일 것이다. 비율이 1.0(0.1을 기본값으로 사용하는 `--horizontal-pod-autoscaler-tolerance` 플래그로 전역적으로 구성 가능한 허용 오차 범위 내)에 충분히 가깝다면 스케일링을 건너뛸 것이다.
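+
+아래는 위 공식의 계산 과정을 보여주기 위한 간단한 셸 스케치이다. 값은 모두 임의의 예시이며, 실제 컨트롤러 코드가 아니다.
+
+```shell
+# 임의의 예시 값
+current_replicas=3
+current_value=150   # 현재 메트릭 값(예: 밀리코어)
+desired_value=100   # 원하는 메트릭 값
+# ceil(current_replicas * current_value / desired_value)을 정수 연산으로 계산한다.
+echo $(( (current_replicas * current_value + desired_value - 1) / desired_value ))
+# 3 * 1.5 = 4.5 이므로 올림하여 5가 출력된다.
+```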
+
+`targetAverageValue` 또는 `targetAverageUtilization`가 지정되면, `currentMetricValue`는 HorizontalPodAutoscaler의 스케일 목표 안에 있는 모든 파드에서 주어진 메트릭의 평균을 취하여 계산된다. 허용치를 확인하고 최종 값을 결정하기 전에, 파드 준비 상태와 누락된 메트릭을 고려한다.
+
+삭제 타임 스탬프가 설정된 모든 파드(즉, 종료 중인 파드) 및 실패한 파드는 모두 폐기된다.
+
+특정 파드에 메트릭이 누락된 경우, 나중을 위해 처리를 미뤄두는데, 이와 같이 누락된 메트릭이 있는 모든 파드는 최종 스케일 량을 조정하는데 사용된다.
+
+CPU를 스케일할 때, 어떤 파드라도 아직 준비가 안 되었거나(즉, 여전히 초기화 중인 경우) *또는* 파드의 최신 메트릭 포인트가 파드가 준비되기 전의 것이라면, 마찬가지로 해당 파드는 나중에 처리된다.
+
+
+기술적 제약으로 인해, HorizontalPodAutoscaler 컨트롤러는 특정 CPU 메트릭을 나중에 사용할지 말지 결정할 때, 파드가 준비되는 시작 시간을 정확하게 알 수 없다.
+
+대신, 파드가 아직 준비되지 않았고 시작 이후 짧은 시간 내에 파드가 준비되지 않은 상태로 전환된다면, 해당 파드를 "아직 준비되지 않음(not yet ready)"으로 간주한다.
+
+이 값은 `--horizontal-pod-autoscaler-initial-readiness-delay` 플래그로 설정되며, 기본값은 30초이다. 일단 파드가 준비되고 시작된 후 구성 가능한 시간 이내이면, 준비를 위한 어떠한 전환이라도 이를 시작 시간으로 간주한다. 이 값은 `--horizontal-pod-autoscaler-cpu-initialization-period` 플래그로 설정되며 기본값은 5분이다.
+
+`현재 메트릭 값 / 원하는 메트릭 값` 기본 스케일 비율은 나중에 사용하기로 되어 있거나 위에서 폐기되지 않은 남아있는 파드를 사용하여 계산된다.
+
+누락된 메트릭이 있는 경우, 파드가 스케일 다운의 경우 원하는 값의 100%를 소비하고 스케일 업의 경우 0%를 소비한다고 가정하여 평균을 보다 보수적으로 재계산한다. 이것은 잠재적인 스케일의 크기를 약화시킨다.
+
+또한 아직-준비되지-않은 파드가 있는 경우 누락된 메트릭이나 아직-준비되지-않은 파드를 고려하지 않고 스케일 업했을 경우, 아직-준비되지-않은 파드가 원하는 메트릭의 0%를 소비한다고 보수적으로 가정하고 스케일 확장의 크기를 약화시킨다.
+
+아직-준비되지-않은 파드나 누락된 메트릭을 고려한 후에 사용 비율을 다시 계산한다. 새 비율이 스케일 방향을 바꾸거나, 허용 오차 내에 있으면 스케일링을 건너뛴다. 그렇지 않으면, 새 비율을 사용하여 스케일한다.
+
+평균 사용량에 대한 *원래* 값은 새로운 사용 비율이 사용되는 경우에도 아직-준비되지-않은 파드 또는 누락된 메트릭에 대한 고려없이 HorizontalPodAutoscaler 상태를 통해 다시 보고된다. HorizontalPodAutoscaler에 여러 메트릭이 지정된 경우, 이 계산은 각 메트릭에 대해 수행된 다음 원하는 레플리카 수 중 가장 큰 값이 선택된다.
+
+이러한 메트릭 중 어떠한 것도 원하는 레플리카 수로 변환할 수 없는 경우(예 : 메트릭 API에서 메트릭을 가져오는 중 오류 발생) 스케일을 건너뛴다.
+
+마지막으로, HPA가 목표를 스케일하기 직전에 스케일 권장 사항이 기록된다. 컨트롤러는 구성 가능한 창(window) 내에서 가장 높은 권장 사항을 선택하도록 해당 창 내의 모든 권장 사항을 고려한다. 이 값은 `--horizontal-pod-autoscaler-downscale-stabilization-window` 플래그를 사용하여 설정할 수 있고, 기본 값은 5분이다. 즉, 스케일 다운이 점진적으로 발생하여 급격히 변동하는 메트릭 값의 영향을 완만하게 한다.
+
+
+## API 오브젝트
+
+Horizontal Pod Autoscaler는 쿠버네티스 `autoscaling` API 그룹의 API 리소스이다. CPU에 대한 오토스케일링 지원만 포함하는 안정된 버전은 `autoscaling/v1` API 버전에서 찾을 수 있다.
+
+메모리 및 사용자 정의 메트릭에 대한 스케일링 지원을 포함하는 베타 버전은 `autoscaling/v2beta2`에서 확인할 수 있다. `autoscaling/v2beta2`에서 소개된 새로운 필드는 `autoscaling/v1`로 작업할 때 어노테이션으로 보존된다.
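+
+참고로, 최소한의 `autoscaling/v2beta2` HorizontalPodAutoscaler는 아래 스케치와 같은 형태가 될 수 있다(디플로이먼트 이름과 숫자는 임의의 예시이다).
+
+```shell
+# 임의의 예시 매니페스트를 적용한다.
+cat <<EOF | kubectl apply -f -
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: example-hpa
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: example-deployment
+  minReplicas: 2
+  maxReplicas: 5
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 80
+EOF
+```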
+
+API 오브젝트에 대한 자세한 내용은 [HorizontalPodAutoscaler 오브젝트](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md#horizontalpodautoscaler-object)에서 찾을 수 있다.
+
+
+## kubectl에서 Horizontal Pod Autoscaler 지원
+
+Horizontal Pod Autoscaler는 모든 API 리소스와 마찬가지로 `kubectl`에 의해 표준 방식으로 지원된다. `kubectl create` 커맨드를 사용하여 새로운 오토스케일러를 만들 수 있다. `kubectl get hpa`로 오토스케일러 목록을 조회할 수 있고, `kubectl describe hpa`로 세부 사항을 확인할 수 있다. 마지막으로 `kubectl delete hpa`를 사용하여 오토스케일러를 삭제할 수 있다.
+
+또한 Horizontal Pod Autoscaler를 쉽게 생성할 수 있는 `kubectl autoscale`이라는 특별한 명령이 있다. 예를 들어 `kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80`을 실행하면 레플리카 셋 *foo* 에 대한 오토스케일러가 생성되고, 목표 CPU 사용률은 `80%`, 그리고 2와 5 사이의 레플리카 개수로 설정된다. `kubectl autoscale`에 대한 자세한 문서는 [여기](/docs/reference/generated/kubectl/kubectl-commands/#autoscale)에서 찾을 수 있다.
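+
+위에서 언급한 커맨드들을 예시로 정리하면 다음과 같다(*foo* 는 임의의 예시 이름이다).
+
+```shell
+kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80   # 레플리카 셋 foo에 대한 오토스케일러 생성
+kubectl get hpa                                             # 오토스케일러 목록 조회
+kubectl describe hpa foo                                    # 세부 사항 확인
+kubectl delete hpa foo                                      # 오토스케일러 삭제
+```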
+
+## 롤링 업데이트 중 오토스케일링
+
+현재 쿠버네티스에서는 레플리케이션 컨트롤러를 직접 관리하거나, 기본 레플리카 셋을 관리하는 디플로이먼트 오브젝트를 사용하여 [롤링 업데이트](/docs/tasks/run-application/rolling-update-replication-controller/)를 수행할 수 있다. Horizontal Pod Autoscaler는 후자의 방법을 지원한다. Horizontal Pod Autoscaler는 디플로이먼트 오브젝트에 바인딩되고, 디플로이먼트 오브젝트를 위한 크기를 설정하며, 디플로이먼트는 기본 레플리카 셋의 크기를 결정한다.
+
+Horizontal Pod Autoscaler는 레플리케이션 컨트롤러를 직접 조작하는 롤링 업데이트에서 작동하지 않는다. 즉, Horizontal Pod Autoscaler를 레플리케이션 컨트롤러에 바인딩한 채로 롤링 업데이트(예: `kubectl rolling-update`)를 수행할 수 없다.
+작동하지 않는 이유는 롤링 업데이트에서 새 레플리케이션 컨트롤러를 만들 때, Horizontal Pod Autoscaler가 새 레플리케이션 컨트롤러에 바인딩되지 않기 때문이다.
+
+## 쿨-다운 / 지연에 대한 지원
+
+Horizontal Pod Autoscaler를 사용하여 레플리카 그룹의 스케일을 관리할 때, 평가된 메트릭의 동적인 특징 때문에 레플리카 수가 자주 변동할 수 있다. 이것은 때로는 *스래싱 (thrashing)* 이라고도 한다.
+
+v1.6부터 클러스터 운영자는 `kube-controller-manager` 구성 요소의 플래그로 노출된 글로벌 HPA 설정을 조정하여 이 문제를 완화할 수 있다.
+
+v1.12부터는 새로운 알고리즘 업데이트가 업스케일 지연에 대한 필요성을 제거하였다.
+
+- `--horizontal-pod-autoscaler-downscale-delay` : 이 옵션 값은 오토스케일러가 현재의 작업이 완료된 후에 다른 다운스케일 작업을 수행하기까지 기다려야 하는 시간을 지정하는 지속 시간이다. 기본값은 5분(`5m0s`)이다.
+
+{{< note >}}
+이러한 파라미터 값을 조정할 때 클러스터 운영자는 가능한 결과를 알아야 한다. 지연(쿨-다운) 값이 너무 길면, Horizontal Pod Autoscaler가 워크로드 변경에 반응하지 않는다는 불만이 있을 수 있다. 그러나 지연 값을 너무 짧게 설정하면, 레플리카 셋의 크기가 평소와 같이 계속 스래싱될 수 있다.
+{{< /note >}}
+
+## 멀티 메트릭을 위한 지원
+
+쿠버네티스 1.6은 멀티 메트릭을 기반으로 스케일링을 지원한다.
+
+`autoscaling/v2beta2` API 버전을 사용하여 Horizontal Pod Autoscaler가 스케일을 조정할 멀티 메트릭을 지정할 수 있다. 그런 다음 Horizontal Pod Autoscaler 컨트롤러가 각 메트릭을 평가하고, 해당 메트릭을 기반으로 새 스케일을 제안한다. 제안된 스케일 중 가장 큰 것이 새로운 스케일로 사용된다.
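+
+예를 들어, 아래 스케치처럼 `metrics` 목록에 여러 항목을 나열할 수 있다(필드 값은 임의의 예시이며, 이 조각은 완전한 `autoscaling/v2beta2` 오브젝트의 `metrics` 부분에 해당한다).
+
+```shell
+# 조각을 출력만 한다. 적용하려면 완전한 HorizontalPodAutoscaler 매니페스트에 병합한다.
+cat <<'EOF'
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 80
+  - type: Resource
+    resource:
+      name: memory
+      target:
+        type: AverageValue
+        averageValue: 500Mi
+EOF
+```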
+
+## 사용자 정의 메트릭을 위한 지원
+
+{{< note >}}
+쿠버네티스 1.2는 특수 어노테이션을 사용하여 애플리케이션 관련 메트릭을 기반으로 하는 스케일의 알파 지원을 추가했다. 쿠버네티스 1.6에서는 이러한 어노테이션 지원이 제거되고 새로운 오토스케일링 API가 추가되었다. 이전 사용자 정의 메트릭 수집 방법을 계속 사용할 수는 있지만, Horizontal Pod Autoscaler에서는 이 메트릭을 사용할 수 없다. 그리고 Horizontal Pod Autoscaler 컨트롤러에서는 더 이상 스케일 할 사용자 정의 메트릭을 지정하는 이전 어노테이션을 사용할 수 없다.
+{{< /note >}}
+
+쿠버네티스 1.6에서는 Horizontal Pod Autoscaler에서 사용자 정의 메트릭을 사용할 수 있도록 지원한다.
+`autoscaling/v2beta2` API에서 사용할 Horizontal Pod Autoscaler에 대한 사용자 정의 메트릭을 추가 할 수 있다. 그리고 쿠버네티스는 새 사용자 정의 메트릭 API에 질의하여 적절한 사용자 정의 메트릭의 값을 가져온다.
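+
+예를 들어, 파드 단위 사용자 정의 메트릭은 아래 스케치처럼 `Pods` 유형으로 지정할 수 있다(메트릭 이름과 값은 임의의 예시이다).
+
+```shell
+# 조각을 출력만 한다. 완전한 autoscaling/v2beta2 매니페스트의 metrics 목록에 추가한다.
+cat <<'EOF'
+  - type: Pods
+    pods:
+      metric:
+        name: packets-per-second
+      target:
+        type: AverageValue
+        averageValue: 1k
+EOF
+```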
+
+요구 사항은 [메트릭 API를 위한 지원](#메트릭-API를-위한-지원)을 참조한다.
+
+## 메트릭 API를 위한 지원
+
+기본적으로 HorizontalPodAutoscaler 컨트롤러는 일련의 API에서 메트릭을 검색한다. 이러한 API에 접속하려면 클러스터 관리자는 다음을 확인해야 한다.
+
+* [API 집합 레이어](/docs/tasks/access-kubernetes-api/configure-aggregation-layer/) 활성화
+* 해당 API 등록:
+ * 리소스 메트릭의 경우, 일반적으로 이것은 [메트릭-서버](https://github.com/kubernetes-incubator/metrics-server)가 제공하는 `metrics.k8s.io` API이다. 클러스터 애드온으로 시작할 수 있다.
+
+ * 사용자 정의 메트릭의 경우, 이것은 `custom.metrics.k8s.io` API이다. 메트릭 솔루션 공급 업체에서 제공하는 "어댑터" API 서버에서 제공한다. 메트릭 파이프라인 또는 [알려진 솔루션 목록](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api)으로 확인한다. 직접 작성하고 싶다면 [샘플](https://github.com/kubernetes-incubator/custom-metrics-apiserver)을 확인하라.
+
+ * 외부 메트릭의 경우, 이것은 `external.metrics.k8s.io` API이다. 위에 제공된 사용자 정의 메트릭 어댑터에서 제공될 수 있다.
+
+* `--horizontal-pod-autoscaler-use-rest-clients`는 `true`이거나 설정되지 않음. 이것을 false로 설정하면 더 이상 사용되지 않는 힙스터 기반 오토스케일링으로 전환된다.
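+
+이러한 API가 실제로 클러스터에 등록되어 있는지는 아래와 같이 확인해 볼 수 있다(사용자 정의/외부 메트릭 API는 해당 어댑터가 설치된 경우에만 응답한다).
+
+```shell
+kubectl get apiservices | grep metrics          # 등록된 메트릭 APIService 목록
+kubectl get --raw /apis/metrics.k8s.io/v1beta1  # 리소스 메트릭(메트릭-서버)
+kubectl get --raw /apis/custom.metrics.k8s.io   # 사용자 정의 메트릭 어댑터(있는 경우)
+kubectl get --raw /apis/external.metrics.k8s.io # 외부 메트릭 어댑터(있는 경우)
+```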
+
+{{% /capture %}}
+
+{{% capture whatsnext %}}
+
+* 디자인 문서: [Horizontal Pod Autoscaling](https://git.k8s.io/community/contributors/design-proposals/autoscaling/horizontal-pod-autoscaler.md).
+* kubectl 오토스케일 커맨드: [kubectl autoscale](/docs/reference/generated/kubectl/kubectl-commands/#autoscale).
+* [Horizontal Pod Autoscaler](/ko/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)의 사용 예제.
+
+{{% /capture %}}
diff --git a/content/ko/docs/tasks/tools/install-minikube.md b/content/ko/docs/tasks/tools/install-minikube.md
index bc8a8d0a56c17..305a2ab350fc5 100644
--- a/content/ko/docs/tasks/tools/install-minikube.md
+++ b/content/ko/docs/tasks/tools/install-minikube.md
@@ -12,7 +12,11 @@ weight: 20
{{% capture prerequisites %}}
-컴퓨터의 바이오스에서 VT-x 또는 AMD-v 가상화가 필수적으로 활성화되어 있어야 한다.
+컴퓨터의 바이오스에서 VT-x 또는 AMD-v 가상화가 필수적으로 활성화되어 있어야 한다. 이를 확인하려면 리눅스 상에서 아래의 명령을 실행하고,
+출력이 비어있지 않은지 확인한다.
+```shell
+egrep --color 'vmx|svm' /proc/cpuinfo
+```
{{% /capture %}}
diff --git a/content/ko/docs/tutorials/hello-minikube.md b/content/ko/docs/tutorials/hello-minikube.md
index 0909d4cb9aa80..3f02fa6df9bc9 100644
--- a/content/ko/docs/tutorials/hello-minikube.md
+++ b/content/ko/docs/tutorials/hello-minikube.md
@@ -153,9 +153,9 @@ Katacode는 무료로 브라우저에서 쿠버네티스 환경을 제공한다.
4. Katacoda 환경에서만: 플러스를 클릭한 후에 **Select port to view on Host 1** 를 클릭.
-5. Katacoda 환경에서만: 포트 번호를 `8080`로 입력하고, **Display Port** 클릭.
+5. Katacoda 환경에서만: 포트 번호를 `30369`로 입력하고(서비스 출력 `8080`과 반대편의 포트를 참조), **Display Port** 클릭.
- 이렇게 하면 당신의 앱을 서비스하는 브라우저 윈도우를 띄우고 Hellow World" 메시지를 보여준다.
+ 이렇게 하면 당신의 앱을 서비스하는 브라우저 윈도우를 띄우고 "Hello World" 메시지를 보여준다.
## 애드온 사용하기
diff --git a/content/zh/docs/_index.md b/content/zh/docs/_index.md
index 05b658bceaade..e31ea1aea7473 100644
--- a/content/zh/docs/_index.md
+++ b/content/zh/docs/_index.md
@@ -1,4 +1,3 @@
---
-title: Home
-weight: 5
+title: 文档
---
diff --git a/content/zh/docs/concepts/workloads/controllers/deployment.md b/content/zh/docs/concepts/workloads/controllers/deployment.md
index e4abe40b110dd..fd0e9800365dc 100644
--- a/content/zh/docs/concepts/workloads/controllers/deployment.md
+++ b/content/zh/docs/concepts/workloads/controllers/deployment.md
@@ -398,7 +398,7 @@ $ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
```
-Alternatively, you can rollback to a specific revision by specify that in `--to-revision`:
+Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`:
```shell
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
diff --git a/content/zh/docs/user-guide/docker-cli-to-kubectl.md b/content/zh/docs/user-guide/docker-cli-to-kubectl.md
index 4775e97a8c054..5cf5a3ecda692 100644
--- a/content/zh/docs/user-guide/docker-cli-to-kubectl.md
+++ b/content/zh/docs/user-guide/docker-cli-to-kubectl.md
@@ -12,7 +12,7 @@ title: Docker 用户使用 kubectl 命令指南
#### docker run
-如何运行一个 nginx Deployment 并将其暴露出来? 查看 [kubectl run](/docs/user-guide/kubectl/{{< param "version" >}}/#run) 。
+如何运行一个 nginx Deployment 并将其暴露出来? 查看 [kubectl run](/docs/reference/generated/kubectl/kubectl-commands/#run) 。
使用 docker 命令:
@@ -33,7 +33,7 @@ deployment "nginx-app" created
```
在 1.2 及以上版本的 Kubernetes 集群中,使用`kubectl run` 命令将创建一个名为 "nginx-app" 的 Deployment。如果您运行的是老版本,将会创建一个 replication controller。
-如果您想沿用旧的行为,使用 `--generation=run/v1` 参数,这样就会创建 replication controller。查看 [`kubectl run`](/docs/user-guide/kubectl/{{< param "version" >}}/#run) 获取更多详细信息。
+如果您想沿用旧的行为,使用 `--generator=run/v1` 参数,这样就会创建 replication controller。查看 [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) 获取更多详细信息。
```shell
# expose a port through a service
@@ -56,7 +56,7 @@ kubectl run [-i] [--tty] --attach --image=
#### docker ps
-如何列出哪些正在运行?查看 [kubectl get](/docs/user-guide/kubectl/{{< param "version" >}}/#get)。
+如何列出哪些正在运行?查看 [kubectl get](/docs/reference/generated/kubectl/kubectl-commands/#get)。
使用 docker 命令:
@@ -76,7 +76,7 @@ nginx-app-5jyvm 1/1 Running 0 1h
#### docker attach
-如何连接到已经运行在容器中的进程?查看 [kubectl attach](/docs/user-guide/kubectl/{{< param "version" >}}/#attach)。
+如何连接到已经运行在容器中的进程?查看 [kubectl attach](/docs/reference/generated/kubectl/kubectl-commands/#attach)。
使用 docker 命令:
@@ -100,7 +100,7 @@ $ kubectl attach -it nginx-app-5jyvm
#### docker exec
-如何在容器中执行命令?查看 [kubectl exec](/docs/user-guide/kubectl/{{< param "version" >}}/#exec)。
+如何在容器中执行命令?查看 [kubectl exec](/docs/reference/generated/kubectl/kubectl-commands/#exec)。
使用 docker 命令:
@@ -142,7 +142,7 @@ $ kubectl exec -ti nginx-app-5jyvm -- /bin/sh
#### docker logs
-如何查看运行中进程的 stdout/stderr?查看 [kubectl logs](/docs/user-guide/kubectl/{{< param "version" >}}/#logs)。
+如何查看运行中进程的 stdout/stderr?查看 [kubectl logs](/docs/reference/generated/kubectl/kubectl-commands/#logs)。
使用 docker 命令:
@@ -172,7 +172,7 @@ $ kubectl logs --previous nginx-app-zibvs
#### docker stop 和 docker rm
-如何停止和删除运行中的进程?查看 [kubectl delete](/docs/user-guide/kubectl/{{< param "version" >}}/#delete)。
+如何停止和删除运行中的进程?查看 [kubectl delete](/docs/reference/generated/kubectl/kubectl-commands/#delete)。
使用 docker 命令:
@@ -209,7 +209,7 @@ $ kubectl get po -l run=nginx-app
#### docker version
-如何查看客户端和服务端的版本?查看 [kubectl version](/docs/user-guide/kubectl/{{< param "version" >}}/#version)。
+如何查看客户端和服务端的版本?查看 [kubectl version](/docs/reference/generated/kubectl/kubectl-commands/#version)。
使用 docker 命令:
@@ -237,7 +237,7 @@ Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.9+a3d1dfa6f4
#### docker info
-如何获取有关环境和配置的各种信息?查看 [kubectl cluster-info](/docs/user-guide/kubectl/{{< param "version" >}}/#cluster-info)。
+如何获取有关环境和配置的各种信息?查看 [kubectl cluster-info](/docs/reference/generated/kubectl/kubectl-commands/#cluster-info)。
使用 docker 命令:
diff --git a/data/setup.yml b/data/setup.yml
index 8510c915b07e0..4b52876d37a23 100644
--- a/data/setup.yml
+++ b/data/setup.yml
@@ -39,7 +39,7 @@ toc:
- title: Running Kubernetes on Azure Container Service
path: https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough
- title: Running Kubernetes on IBM Cloud Kubernetes Service
- path: https://console.bluemix.net/docs/containers/container_index.html
+ path: https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index
- title: Turn-key Cloud Solutions
landing_page: /docs/getting-started-guides/alibaba-cloud/
diff --git a/i18n/en.toml b/i18n/en.toml
index af47e8cb97126..283eeb7262997 100644
--- a/i18n/en.toml
+++ b/i18n/en.toml
@@ -30,6 +30,24 @@ other = "No"
[latest_version]
other = "latest version."
+[version_check_mustbe]
+other = "Your Kubernetes server must be version "
+
+[version_check_mustbeorlater]
+other = "Your Kubernetes server must be at or later than version "
+
+[version_check_tocheck]
+other = "To check the version, enter "
+
+[caution]
+other = "Caution:"
+
+[note]
+other = "Note:"
+
+[warning]
+other = "Warning:"
+
[main_read_about]
other = "Read about"
@@ -54,6 +72,12 @@ other = "Kubernetes Features"
[main_cncf_project]
other = """We are a CNCF graduated project
"""
+[main_kubeweekly_baseline]
+other = "Interested in receiving the latest Kubernetes news? Sign up for KubeWeekly."
+
+[main_kubernetes_past_link]
+other = "View past newsletters"
+
[main_kubeweekly_signup]
other = "Subscribe"
diff --git a/i18n/fr.toml b/i18n/fr.toml
new file mode 100644
index 0000000000000..6d33bf619d90e
--- /dev/null
+++ b/i18n/fr.toml
@@ -0,0 +1,112 @@
+# i18n strings for the French (main) site.
+
+[deprecation_warning]
+other = " documentation non maintenue. Vous consultez une version statique. Pour une documentation à jour, veuillez consulter: "
+
+[objectives_heading]
+other = "Objectifs"
+
+[cleanup_heading]
+other = "Cleanup"
+
+[prerequisites_heading]
+other = "Pré-requis"
+
+[whatsnext_heading]
+other = "A suivre"
+
+[feedback_heading]
+other = "Feedback"
+
+[feedback_question]
+other = "Cette page est elle utile ?"
+
+[feedback_yes]
+other = "Oui"
+
+[feedback_no]
+other = "Non"
+
+[latest_version]
+other = "dernière version."
+
+[main_read_about]
+other = "A propos"
+
+[main_read_more]
+other = "Autres ressources"
+
+[main_github_invite]
+other = "Souhaitez vous contribuer au code de Kubernetes ?"
+
+[main_github_view_on]
+other = "Voir sur Github"
+
+[main_github_create_an_issue]
+other = "Ouvrez un ticket"
+
+[main_community_explore]
+other = "Explorez la communauté"
+
+[main_kubernetes_features]
+other = "Kubernetes fonctionnalités"
+
+[main_cncf_project]
+other = """Nous sommes un projet CNCF diplômé"""
+
+[main_kubeweekly_signup]
+other = "S'abonner"
+
+[main_contribute]
+other = "Contribuer"
+
+[main_edit_this_page]
+other = "Editez cette page"
+
+[main_page_history]
+other ="Historique"
+
+[main_page_last_modified_on]
+other = "Dernière modification le"
+
+[main_by]
+other = "de"
+
+[main_documentation_license]
+other = """The Kubernetes Authors | Documentation Distributed under CC BY 4.0"""
+
+[main_copyright_notice]
+other = """The Linux Foundation ®. All rights reserved. The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our Trademark Usage page"""
+
+# Labels for the docs portal home page.
+[docs_label_browse]
+other = "Parcourir la documentation"
+
+[docs_label_contributors]
+other = "Contributeurs"
+
+[docs_label_users]
+other = "Utilisateurs"
+
+[docs_label_i_am]
+other = "JE SUIS..."
+
+
+
+# Community links
+[community_twitter_name]
+other = "Twitter"
+[community_github_name]
+other = "GitHub"
+[community_slack_name]
+other = "Slack"
+[community_stack_overflow_name]
+other = "Stack Overflow"
+[community_forum_name]
+other = "Forum"
+[community_events_calendar]
+other = "Calendrier"
+
+# UI elements
+[ui_search_placeholder]
+other = "Recherche"
diff --git a/i18n/it.toml b/i18n/it.toml
new file mode 100644
index 0000000000000..a6f5ad63e902a
--- /dev/null
+++ b/i18n/it.toml
@@ -0,0 +1,82 @@
+# i18n strings for the Italian site.
+
+[main_read_about]
+other = "Leggi"
+
+[main_read_more]
+other = "Leggi di più"
+
+[main_github_invite]
+other = "Ti interessa l'hacking sulla base del codice di Kubernetes?"
+
+[main_github_view_on]
+other = "Visualizza su Github"
+
+[main_github_create_an_issue]
+other = "Crea un issue"
+
+[main_community_explore]
+other = "Explora la community"
+
+[main_kubernetes_features]
+other = "Caratteristiche di Kubernetes"
+
+[main_cncf_project]
+other = """Noi siamo un CNCF progetto """
+
+[main_kubeweekly_signup]
+other = "Sottoscrivi"
+
+[main_contribute]
+other = "Contribuire"
+
+[main_edit_this_page]
+other = "Modifica questa pagina"
+
+[main_page_history]
+other = "Lo Storico della Pagina"
+
+[main_page_last_modified_on]
+other = "Ultima modifica alla pagina"
+
+[main_by]
+other = "di"
+
+[main_documentation_license]
+other = """Gli autori di Kubernetes| Documentazione distribuita sotto CC BY 4.0"""
+
+[main_copyright_notice]
+other = """The Linux Foundation ®. Tutti i diritti riservati. The Linux Foundation ha marchi registrati e utilizza marchi commerciali. Per un elenco dei marchi di Linux Foundation, consultare il nostro sitoTrademark Usage page"""
+
+# Labels for the docs portal home page.
+[docs_label_browse]
+other = "Sfoglia documenti"
+
+[docs_label_contributors]
+other = "Contributori"
+
+[docs_label_users]
+other = "Utenti"
+
+[docs_label_i_am]
+other = "Io Sono..."
+
+
+
+# Community links
+[community_twitter_name]
+other = "Twitter"
+[community_github_name]
+other = "GitHub"
+[community_slack_name]
+other = "Slack"
+[community_stack_overflow_name]
+other = "Stack Overflow"
+[community_forum_name]
+other = "Forum"
+[community_events_calendar]
+other = "Events Calendar"
+
+# UI elements
+[ui_search_placeholder]
+other = "Search"
diff --git a/i18n/ko.toml b/i18n/ko.toml
index 8a4c73197a0e3..556256fab56e9 100644
--- a/i18n/ko.toml
+++ b/i18n/ko.toml
@@ -1,5 +1,53 @@
# i18n strings for the Korean translation.
+[deprecation_warning]
+other = " 문서는 더 이상 적극적으로 관리되지 않음. 현재 보고있는 문서는 정적 스냅샷임. 최신 문서를 위해서는, 다음을 참고. "
+
+[objectives_heading]
+other = "목적"
+
+[cleanup_heading]
+other = "정리하기"
+
+[prerequisites_heading]
+other = "시작하기 전에"
+
+[whatsnext_heading]
+other = "다음 내용"
+
+[feedback_heading]
+other = "피드백"
+
+[feedback_question]
+other = "이 페이지가 도움이 되었나요?"
+
+[feedback_yes]
+other = "네"
+
+[feedback_no]
+other = "아니요"
+
+[latest_version]
+other = "최신 버전."
+
+[version_check_mustbe]
+other = "쿠버네티스 서버의 버전은 다음과 같아야 함. 버전: "
+
+[version_check_mustbeorlater]
+other = "쿠버네티스 서버의 버전은 다음과 같거나 더 높아야 함. 버전: "
+
+[version_check_tocheck]
+other = "버전 확인을 위해서, 다음 커맨드를 실행 "
+
+[caution]
+other = "주의:"
+
+[note]
+other = "참고:"
+
+[warning]
+other = "경고:"
+
[main_read_about]
other = "Read about"
diff --git a/layouts/blog/baseof.html b/layouts/blog/baseof.html
index ebb537d6293f8..fe8919e95c6f4 100644
--- a/layouts/blog/baseof.html
+++ b/layouts/blog/baseof.html
@@ -20,14 +20,14 @@
{{ partialCached "blog/archive.html" . }}
diff --git a/layouts/docs/docsportal_home.html b/layouts/docs/docsportal_home.html
index cfa922972362d..4ac3f50597a0e 100644
--- a/layouts/docs/docsportal_home.html
+++ b/layouts/docs/docsportal_home.html
@@ -2,144 +2,48 @@
{{ if not .Params.notitle }}
{{ .Title }}
{{ end }}
- {{ if eq .Lang "en" }}
- {{ template "docs-portal-cards" . }}
- {{ end }}
+ {{ template "docs-portal-content" . }}
{{ end }}
{{ define "content-id" }}content{{ end }}
-{{ define "docs-portal-cards" }}
-
+{{ define "docs-portal-content" }}
+
-
-
Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF).
-
+
{{ .Params.overview | safeHTML }}
-
-
-
Understand the basics
-
-
Learn about Kubernetes and its fundamental concepts.
+
diff --git a/layouts/shortcodes/code.html b/layouts/shortcodes/code.html
index d4137392f6d6b..e99b852ad1197 100644
--- a/layouts/shortcodes/code.html
+++ b/layouts/shortcodes/code.html
@@ -4,7 +4,7 @@
{{ $fileDir := path.Split $file }}
{{ $bundlePath := path.Join .Page.Dir $fileDir.Dir }}
{{ $filename := path.Join $p.Dir $file }}
-{{ $ghlink := printf "https://%s/blob/master/content/%s/%s" site.Params.githubWebsiteRepo .Page.Lang $filename | safeURL }}
+{{ $ghlink := printf "https://%s/blob/master/content/%s/%s" site.Params.githubwebsiterepo .Page.Lang $filename | safeURL }}
{{/* First assume this is a bundle and the file is inside it. */}}
{{ $resource := $p.Resources.GetMatch (printf "%s*" $file ) }}
{{ with $resource }}
diff --git a/layouts/shortcodes/codenew.html b/layouts/shortcodes/codenew.html
index 95d24d69d9106..76084c71c801b 100644
--- a/layouts/shortcodes/codenew.html
+++ b/layouts/shortcodes/codenew.html
@@ -4,7 +4,7 @@
{{ $fileDir := path.Split $file }}
{{ $bundlePath := path.Join .Page.Dir $fileDir.Dir }}
{{ $filename := printf "/content/%s/examples/%s" .Page.Lang $file | safeURL }}
-{{ $ghlink := printf "https://%s/master%s" site.Params.githubWebsiteRaw $filename | safeURL }}
+{{ $ghlink := printf "https://%s/master%s" site.Params.githubwebsiteraw $filename | safeURL }}
{{/* First assume this is a bundle and the file is inside it. */}}
{{ $resource := $p.Resources.GetMatch (printf "%s*" $file ) }}
{{ with $resource }}
diff --git a/layouts/shortcodes/deprecationwarning.html b/layouts/shortcodes/deprecationwarning.html
index 2a9d2e083ebf1..e7812ec916feb 100644
--- a/layouts/shortcodes/deprecationwarning.html
+++ b/layouts/shortcodes/deprecationwarning.html
@@ -3,8 +3,9 @@
- Documentation for Kubernetes {{ .Page.Param "version" }} is no longer actively maintained. The version you are currently viewing is a static snapshot.
- For up-to-date documentation, see the latest version.
+ Kubernetes {{ .Param "version" }}
+ {{ T "deprecation_warning" }}
+ {{ T "latest_version" }}
diff --git a/layouts/shortcodes/version-check.html b/layouts/shortcodes/version-check.html
index 979268bf57127..55f6754c11e3e 100644
--- a/layouts/shortcodes/version-check.html
+++ b/layouts/shortcodes/version-check.html
@@ -1,6 +1,6 @@
{{ $minVersion := .Page.Param "min-kubernetes-server-version" }}
{{ if eq $minVersion (.Page.Param "version") }}
-Your Kubernetes server must be version {{ $minVersion }}.
+{{ T "version_check_mustbe" }}{{ $minVersion }}.
{{ else if $minVersion}}
-Your Kubernetes server must be version {{ $minVersion }} or later.
-{{ end }} To check the version, enter kubectl version.
+{{ T "version_check_mustbeorlater" }}{{ $minVersion }}.
+{{ end }} {{ T "version_check_tocheck" }}kubectl version.
diff --git a/layouts/shortcodes/warning.html b/layouts/shortcodes/warning.html
index b90f76d49c2a2..d91f4296462ff 100644
--- a/layouts/shortcodes/warning.html
+++ b/layouts/shortcodes/warning.html
@@ -1,3 +1,3 @@
-
Warning: {{ .Inner | markdownify }}
+
{{ T "warning" }} {{ .Inner | markdownify }}
diff --git a/static/images/docs/aggregation-api-auth-flow.png b/static/images/docs/aggregation-api-auth-flow.png
new file mode 100644
index 0000000000000..759d8df0e0d86
Binary files /dev/null and b/static/images/docs/aggregation-api-auth-flow.png differ