diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index ecf6d21378fad..2ec17186422a3 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -137,6 +137,38 @@ aliases: - stewart-yu - xiangpengzhao - zhangxiaoyu-zidif + sig-docs-fr-owners: #Team: Documentation; GH: sig-docs-fr-owners + - sieben + - perriea + - rekcah78 + - lledru + - yastij + - smana + - rbenzair + - abuisine + - erickhun + - jygastaud + - awkif + sig-docs-fr-reviews: #Team: Documentation; GH: sig-docs-fr-reviews + - sieben + - perriea + - rekcah78 + - lledru + - yastij + - smana + - rbenzair + - abuisine + - erickhun + - jygastaud + - awkif + sig-docs-it-owners: #Team: Italian docs localization; GH: sig-docs-it-owners + - rlenferink + - lledru + - micheleberardi + sig-docs-it-reviews: #Team: Italian docs PR reviews; GH:sig-docs-it-reviews + - rlenferink + - lledru + - micheleberardi sig-docs-ja-owners: #Team: Japanese docs localization; GH: sig-docs-ja-owners - cstoku - nasa9084 diff --git a/README-fr.md b/README-fr.md new file mode 100644 index 0000000000000..cca46595ad023 --- /dev/null +++ b/README-fr.md @@ -0,0 +1,83 @@ +# Documentation de Kubernetes + +[![Build Status](https://api.travis-ci.org/kubernetes/website.svg?branch=master)](https://travis-ci.org/kubernetes/website) +[![GitHub release](https://img.shields.io/github/release/kubernetes/website.svg)](https://github.com/kubernetes/website/releases/latest) + +Bienvenue ! +Ce référentiel contient toutes les informations nécessaires à la construction du site web et de la documentation de Kubernetes. +Nous sommes très heureux que vous vouliez contribuer ! + +## Contribuer à la rédaction des docs + +Vous pouvez cliquer sur le bouton **Fork** en haut à droite de l'écran pour créer une copie de ce dépôt dans votre compte GitHub. +Cette copie s'appelle un *fork*. +Faites tous les changements que vous voulez dans votre fork, et quand vous êtes prêt à nous envoyer ces changements, allez dans votre fork et créez une nouvelle pull request pour nous le faire savoir. + +Une fois votre pull request créée, un examinateur de Kubernetes se chargera de vous fournir une revue claire et exploitable. +En tant que propriétaire de la pull request, **il est de votre responsabilité de modifier votre pull request pour tenir compte des commentaires qui vous ont été fournis par l'examinateur de Kubernetes.** +Notez également que vous pourriez vous retrouver avec plus d'un examinateur de Kubernetes pour vous fournir des commentaires ou vous pourriez finir par recevoir des commentaires d'un autre examinateur que celui qui vous a été initialement affecté pour vous fournir ces commentaires. +De plus, dans certains cas, l'un de vos examinateur peut demander un examen technique à un [examinateur technique de Kubernetes](https://github.com/kubernetes/website/wiki/Tech-reviewers) au besoin. +Les examinateurs feront de leur mieux pour fournir une revue rapidement, mais le temps de réponse peut varier selon les circonstances. 
+ +Pour plus d'informations sur la contribution à la documentation Kubernetes, voir : + +* [Commencez à contribuer](https://kubernetes.io/docs/contribute/start/) +* [Apperçu des modifications apportées à votre documentation](http://kubernetes.io/docs/contribute/intermediate#view-your-changes-locally) +* [Utilisation des modèles de page](http://kubernetes.io/docs/contribute/style/page-templates/) +* [Documentation Style Guide](http://kubernetes.io/docs/contribute/style/style-guide/) +* [Traduction de la documentation Kubernetes](https://kubernetes.io/docs/contribute/localization/) + +## Exécuter le site localement en utilisant Docker + +La façon recommandée d'exécuter le site web Kubernetes localement est d'utiliser une image spécialisée [Docker](https://docker.com) qui inclut le générateur de site statique [Hugo](https://gohugo.io). + +> Si vous êtes sous Windows, vous aurez besoin de quelques outils supplémentaires que vous pouvez installer avec [Chocolatey](https://chocolatey.org). `choco install install make` + +> Si vous préférez exécuter le site Web localement sans Docker, voir [Exécuter le site localement avec Hugo](#running-the-site-locally-using-hugo) ci-dessous. + +Si vous avez Docker [up and running](https://www.docker.com/get-started), construisez l'image Docker `kubernetes-hugo' localement: + +```bash +make docker-image +``` + +Une fois l'image construite, vous pouvez exécuter le site localement : + +```bash +make docker-serve +``` + +Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site. +Lorsque vous apportez des modifications aux fichiers sources, Hugo met à jour le site et force le navigateur à rafraîchir la page. + +## Exécuter le site localement en utilisant Hugo + +Voir la [documentation officielle Hugo](https://gohugo.io/getting-started/installing/) pour les instructions d'installation Hugo. +Assurez-vous d'installer la version Hugo spécifiée par la variable d'environnement `HUGO_VERSION` dans le fichier [`netlify.toml`](netlify.toml#L9). + +Pour exécuter le site localement lorsque vous avez Hugo installé : + +```bash +make serve +``` + +Le serveur Hugo local démarrera sur le port 1313. +Ouvrez votre navigateur à l'adresse: http://localhost:1313 pour voir le site. +Lorsque vous apportez des modifications aux fichiers sources, Hugo met à jour le site et force le navigateur à rafraîchir la page. + +## Communauté, discussion, contribution et assistance + +Apprenez comment vous engager avec la communauté Kubernetes sur la [page communauté](http://kubernetes.io/community/). + +Vous pouvez joindre les responsables de ce projet à l'adresse : + +- [Slack](https://kubernetes.slack.com/messages/sig-docs) +- [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs) + +### Code de conduite + +La participation à la communauté Kubernetes est régie par le [Code de conduite de Kubernetes](code-of-conduct.md). + +## Merci ! + +Kubernetes prospère grâce à la participation de la communauté, et nous apprécions vraiment vos contributions à notre site et à notre documentation ! diff --git a/config.toml b/config.toml index 98bf0d4a33d42..c8b6633e9df33 100644 --- a/config.toml +++ b/config.toml @@ -166,3 +166,26 @@ time_format_blog = "02.01.2006" # A list of language codes to look for untranslated content, ordered from left to right. 
language_alternatives = ["en"] +[languages.fr] +title = "Kubernetes" +description = "Production-Grade Container Orchestration" +languageName ="Français" +weight = 5 +contentDir = "content/fr" + +[languages.fr.params] +time_format_blog = "02.01.2006" +# A list of language codes to look for untranslated content, ordered from left to right. +language_alternatives = ["en"] + +[languages.it] +title = "Kubernetes" +description = "Production-Grade Container Orchestration" +languageName ="Italian" +weight = 6 +contentDir = "content/it" + +[languages.it.params] +time_format_blog = "02.01.2006" +# A list of language codes to look for untranslated content, ordered from left to right. +language_alternatives = ["en"] diff --git a/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md b/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md index 74bb85843ad28..28b9a2ccb77fe 100644 --- a/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md +++ b/content/en/blog/_posts/2016-08-00-Security-Best-Practices-Kubernetes-Deployment.md @@ -4,6 +4,8 @@ date: 2016-08-31 slug: security-best-practices-kubernetes-deployment url: /blog/2016/08/Security-Best-Practices-Kubernetes-Deployment --- +_Note: some of the recommendations in this post are no longer current. Current cluster hardening options are described in this [documentation](https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/)._ + _Editor’s note: today’s post is by Amir Jerbi and Michael Cherny of Aqua Security, describing security best practices for Kubernetes deployments, based on data they’ve collected from various use-cases seen in both on-premises and cloud deployments._ Kubernetes provides many controls that can greatly improve your application security. Configuring them requires intimate knowledge with Kubernetes and the deployment’s security requirements. The best practices we highlight here are aligned to the container lifecycle: build, ship and run, and are specifically tailored to Kubernetes deployments. We adopted these best practices in [our own SaaS deployment](http://blog.aquasec.com/running-a-security-service-in-google-cloud-real-world-example) that runs Kubernetes on Google Cloud Platform. diff --git a/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md b/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md index 2917f55e79394..0d6e6481cd2cc 100644 --- a/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md +++ b/content/en/blog/_posts/2017-07-00-How-Watson-Health-Cloud-Deploys.md @@ -19,7 +19,7 @@ I was able to run more processes on a single physical server than I could using -To orchestrate container deployment, we are using[Armada infrastructure](https://console.bluemix.net/containers-kubernetes/launch), a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. +To orchestrate container deployment, we are using [IBM Cloud Kubernetes Service infrastructure](https://cloud.ibm.com/containers-kubernetes/landing), a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. @@ -39,7 +39,7 @@ Here is a snapshot of Watson Care Manager, running inside a Kubernetes cluster: -Before deploying an app, a user must create a worker node cluster. 
I can create a cluster using the kubectl cli commands or create it from[a Bluemix](http://bluemix.net/) dashboard. +Before deploying an app, a user must create a worker node cluster. I can create a cluster using the kubectl cli commands or create it from the [IBM Cloud](https://cloud.ibm.com/) dashboard. @@ -107,16 +107,16 @@ If needed, run a rolling update to update the existing pod. -Deploying the application in Armada: +Deploying the application in IBM Cloud Kubernetes Service: -Provision a cluster in Armada with \ worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes, the Armada infrastructure pulls the Docker images from IBM Bluemix Docker registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of deployment of application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers. +Provision a cluster in IBM Cloud Kubernetes Service with \ worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes, the IBM Cloud Kubernetes Service infrastructure pulls the Docker images from IBM Cloud Container Registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM Cloud logging service. As part of the process, YAML files are used to create a controller resource for the UrbanCode Deploy (UCD). UCD agent is deployed as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) controller, which is used to connect to the UCD server. The whole process of deployment of application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers. | ![](https://lh6.googleusercontent.com/iFKlbBX8rjWTuygIfjImdxP8R7xXuvaaoDwldEIC3VRL03XIehxagz8uePpXllYMSxoyai5a6N-0NB4aTGK9fwwd8leFyfypxtbmaWBK-b2Kh9awcA76-_82F7ZZl7lgbf0gyFN7) | -| UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments. Armada: Kubernetes implementation of IBM. WH Docker Registry: Docker Private image registry. Common agent containers: We expect to configure our services to use the WHC mandatory agents. We deployed all ion containers. | +| UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments. IBM Cloud Kubernetes Service: Kubernetes implementation of IBM. WH Docker Registry: Docker Private image registry. Common agent containers: We expect to configure our services to use the WHC mandatory agents. We deployed all ion containers. | @@ -142,7 +142,7 @@ Exposing services with Ingress: -To expose our services to outside the cluster, we used Ingress. 
In Armada, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path. +To expose our services to outside the cluster, we used Ingress. In IBM Cloud Kubernetes Service, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path. diff --git a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md index 2b23ac523b961..9bc8ceb2c9cf7 100644 --- a/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md +++ b/content/en/blog/_posts/2018-05-04-Announcing-Kubeflow-0-1.md @@ -94,7 +94,7 @@ If you’d like to try out Kubeflow, we have a number of options for you: 1. You can use sample walkthroughs hosted on [Katacoda](https://www.katacoda.com/kubeflow) 2. You can follow a guided tutorial with existing models from the [examples repository](https://github.com/kubeflow/examples). These include the [Github Issue Summarization](https://github.com/kubeflow/examples/tree/master/github_issue_summarization), [MNIST](https://github.com/kubeflow/examples/tree/master/mnist) and [Reinforcement Learning with Agents](https://github.com/kubeflow/examples/tree/master/agents). -3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://console.bluemix.net/docs/containers/cs_tutorials.html#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/). +3. You can start a cluster on your own and try your own model. Any Kubernetes conformant cluster will support Kubeflow including those from contributors [Caicloud](https://www.prnewswire.com/news-releases/caicloud-releases-its-kubernetes-based-cluster-as-a-service-product-claas-20-and-the-first-tensorflow-as-a-service-taas-11-while-closing-6m-series-a-funding-300418071.html), [Canonical](https://jujucharms.com/canonical-kubernetes/), [Google](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster), [Heptio](https://heptio.com/products/kubernetes-subscription/), [Mesosphere](https://github.com/mesosphere/dcos-kubernetes-quickstart), [Microsoft](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [IBM](https://cloud.ibm.com/docs/containers?topic=containers-cs_cluster_tutorial#cs_cluster_tutorial), [Red Hat/Openshift ](https://docs.openshift.com/container-platform/3.3/install_config/install/quick_install.html#install-config-install-quick-install)and [Weaveworks](https://www.weave.works/product/cloud/). 
There were also a number of sessions at KubeCon + CloudNativeCon EU 2018 covering Kubeflow. The links to the talks are here; the associated videos will be posted in the coming days. diff --git a/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md b/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md new file mode 100644 index 0000000000000..b5414e8c5be3f --- /dev/null +++ b/content/en/blog/_posts/2019-02-11-runc-CVE-2019-5736.md @@ -0,0 +1,92 @@ +--- +title: Runc and CVE-2019-5736 +date: 2019-02-11 +--- + +This morning [a container escape vulnerability in runc was announced](https://www.openwall.com/lists/oss-security/2019/02/11/2). We wanted to provide some guidance to Kubernetes users to ensure everyone is safe and secure. + +## What Is Runc? + +Very briefly, runc is the low-level tool which does the heavy lifting of spawning a Linux container. Other tools like Docker, Containerd, and CRI-O sit on top of runc to deal with things like data formatting and serialization, but runc is at the heart of all of these systems. + +Kubernetes in turn sits on top of those tools, and so while no part of Kubernetes itself is vulnerable, most Kubernetes installations are using runc under the hood. + +### What Is The Vulnerability? + +While full details are still embargoed to give people time to patch, the rough version is that when running a process as root (UID 0) inside a container, that process can exploit a bug in runc to gain root privileges on the host running the container. This then allows them unlimited access to the server as well as any other containers on that server. + +If the process inside the container is either trusted (something you know is not hostile) or is not running as UID 0, then the vulnerability does not apply. It can also be prevented by SELinux, if an appropriate policy has been applied. RedHat Enterprise Linux and CentOS both include appropriate SELinux permissions with their packages and so are believed to be unaffected if SELinux is enabled. + +The most common source of risk is attacker-controller container images, such as unvetted images from public repositories. + +### What Should I Do? + +As with all security issues, the two main options are to mitigate the vulnerability or upgrade your version of runc to one that includes the fix. + +As the exploit requires UID 0 within the container, a direct mitigation is to ensure all your containers are running as a non-0 user. This can be set within the container image, or via your pod specification: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: run-as-uid-1000 +spec: + securityContext: + runAsUser: 1000 + # ... +``` + +This can also be enforced globally using a PodSecurityPolicy: + +```yaml +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: non-root +spec: + privileged: false + allowPrivilegeEscalation: false + runAsUser: + # Require the container to run without root privileges. + rule: 'MustRunAsNonRoot' +``` + +Setting a policy like this is highly encouraged given the overall risks of running as UID 0 inside a container. + +Another potential mitigation is to ensure all your container images are vetted and trusted. This can be accomplished by building all your images yourself, or by vetting the contents of an image and then pinning to the image version hash (`image: external/someimage@sha256:7832659873hacdef`). + +Upgrading runc can generally be accomplished by upgrading the package `runc` for your distribution or by upgrading your OS image if using immutable images. 
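+
+As a rough sketch of what a package upgrade might look like on a Debian or Ubuntu based node (assuming you manage the nodes yourself and can drain them one at a time; the exact package name and fixed version depend on your distribution):
+
+```bash
+# Drain the node so its workloads are rescheduled elsewhere before upgrading.
+kubectl drain <node-name> --ignore-daemonsets
+
+# On the node itself: refresh package lists and upgrade only the runc package.
+# (On some distributions the fix ships in the docker package instead.)
+sudo apt-get update
+sudo apt-get install --only-upgrade runc
+
+# Confirm the installed version against your distribution's advisory.
+runc --version
+
+# Allow workloads back onto the node.
+kubectl uncordon <node-name>
+```
+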
This is a list of known safe versions for various distributions and platforms: + +* Ubuntu - [`runc 1.0.0~rc4+dfsg1-6ubuntu0.18.10.1`](https://people.canonical.com/~ubuntu-security/cve/2019/CVE-2019-5736.html) +* Debian - [`runc 1.0.0~rc6+dfsg1-2`](https://security-tracker.debian.org/tracker/CVE-2019-5736) +* RedHat Enterprise Linux - [`docker 1.13.1-91.git07f3374.el7`](https://access.redhat.com/security/vulnerabilities/runcescape) (if SELinux is disabled) +* Amazon Linux - [`docker 18.06.1ce-7.25.amzn1.x86_64`](https://alas.aws.amazon.com/ALAS-2019-1156.html) +* CoreOS - Stable: [`1967.5.0`](https://coreos.com/releases/#1967.5.0) / Beta: [`2023.2.0`](https://coreos.com/releases/#2023.2.0) / Alpha: [`2051.0.0`](https://coreos.com/releases/#2051.0.0) +* Kops Debian - [in progress](https://github.com/kubernetes/kops/pull/6460) +* Docker - [`18.09.2`](https://github.com/docker/docker-ce/releases/tag/v18.09.2) + +Some platforms have also posted more specific instructions: + +#### Google Container Engine (GKE) + +Google has issued a [security bulletin](https://cloud.google.com/kubernetes-engine/docs/security-bulletins#february-11-2019-runc) with more detailed information but in short, if you are using the default GKE node image then you are safe. If you are using an Ubuntu node image then you will need to mitigate or upgrade to an image with a fixed version of runc. + +#### Amazon Elastic Container Service for Kubernetes (EKS) + +Amazon has also issued a [security bulletin](https://aws.amazon.com/security/security-bulletins/AWS-2019-002/) with more detailed information. All EKS users should mitigate the issue or upgrade to a new node image. + +#### Azure Kubernetes Service (AKS) + +Microsoft has issued a [security bulletin](https://azure.microsoft.com/en-us/updates/cve-2019-5736-and-runc-vulnerability/) with detailed information on mitigating the issue. Microsoft recommends all AKS users to upgrade their cluster to mitigate the issue. + +### Docker + +We don't have specific confirmation that Docker for Mac and Docker for Windows are vulnerable, however it seems likely. Docker has released a fix in [version 18.09.2](https://github.com/docker/docker-ce/releases/tag/v18.09.2) and it is recommended you upgrade to it. This also applies to other deploy systems using Docker under the hood. + +If you are unable to upgrade Docker, the Rancher team has provided backports of the fix for many older versions at [github.com/rancher/runc-cve](https://github.com/rancher/runc-cve). + +## Getting More Information + +If you have any further questions about how this vulnerability impacts Kubernetes, please join us at [discuss.kubernetes.io](https://discuss.kubernetes.io/). + +If you would like to get in contact with the [runc team](https://github.com/opencontainers/org/blob/master/README.md#communications), you can reach them on [Google Groups](https://groups.google.com/a/opencontainers.org/forum/#!forum/dev) or `#opencontainers` on Freenode IRC. 
diff --git a/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md b/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md new file mode 100644 index 0000000000000..7751712f0e703 --- /dev/null +++ b/content/en/blog/_posts/2019-02-12-building-a-kubernetes-edge-control-plane-for-envoy-v2.md @@ -0,0 +1,107 @@ +--- +title: Building a Kubernetes Edge (Ingress) Control Plane for Envoy v2 +date: 2019-02-12 +slug: building-a-kubernetes-edge-control-plane-for-envoy-v2 +--- + + +**Author:** +Daniel Bryant, Product Architect, Datawire; +Flynn, Ambassador Lead Developer, Datawire; +Richard Li, CEO and Co-founder, Datawire + + +Kubernetes has become the de facto runtime for container-based microservice applications, but this orchestration framework alone does not provide all of the infrastructure necessary for running a distributed system. Microservices typically communicate through Layer 7 protocols such as HTTP, gRPC, or WebSockets, and therefore having the ability to make routing decisions, manipulate protocol metadata, and observe at this layer is vital. However, traditional load balancers and edge proxies have predominantly focused on L3/4 traffic. This is where the [Envoy Proxy](https://www.envoyproxy.io/) comes into play. + +Envoy proxy was designed as a [universal data plane](https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a) from the ground-up by the Lyft Engineering team for today's distributed, L7-centric world, with broad support for L7 protocols, a real-time API for managing its configuration, first-class observability, and high performance within a small memory footprint. However, Envoy's vast feature set and flexibility of operation also makes its configuration highly complicated -- this is evident from looking at its rich but verbose [control plane](https://blog.envoyproxy.io/service-mesh-data-plane-vs-control-plane-2774e720f7fc) syntax. + +With the open source [Ambassador API Gateway](https://www.getambassador.io), we wanted to tackle the challenge of creating a new control plane that focuses on the use case of deploying Envoy as an forward-facing edge proxy within a Kubernetes cluster, in a way that is idiomatic to Kubernetes operators. In this article, we'll walk through two major iterations of the Ambassador design, and how we integrated Ambassador with Kubernetes. + + +## Ambassador pre-2019: Envoy v1 APIs, Jinja Template Files, and Hot Restarts + +Ambassador itself is deployed within a container as a Kubernetes service, and uses annotations added to Kubernetes Services as its [core configuration model](https://www.getambassador.io/reference/configuration). This approach [enables application developers to manage routing](https://www.getambassador.io/concepts/developers) as part of the Kubernetes service definition. We explicitly decided to go down this route because of [limitations](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d) in the current [Ingress API spec](https://kubernetes.io/docs/concepts/services-networking/ingress/), and we liked the simplicity of extending Kubernetes services, rather than introducing another custom resource type. 
An example of an Ambassador annotation can be seen here: + + +``` +kind: Service +apiVersion: v1 +metadata: + name: my-service + annotations: + getambassador.io/config: | + --- + apiVersion: ambassador/v0 + kind: Mapping + name: my_service_mapping + prefix: /my-service/ + service: my-service +spec: + selector: + app: MyApp + ports: + - protocol: TCP + port: 80 + targetPort: 9376 +``` + + +Translating this simple Ambassador annotation config into valid [Envoy v1](https://www.envoyproxy.io/docs/envoy/v1.6.0/configuration/overview/v1_overview) config was not a trivial task. By design, Ambassador's configuration isn't based on the same conceptual model as Envoy's configuration -- we deliberately wanted to aggregate and simplify operations and config. Therefore, translating between one set of concepts to the other involves a fair amount of logic within Ambassador. + +In this first iteration of Ambassador we created a Python-based service that watched the Kubernetes API for changes to Service objects. When new or updated Ambassador annotations were detected, these were translated from the Ambassador syntax into an intermediate representation (IR) which embodied our core configuration model and concepts. Next, Ambassador translated this IR into a representative Envoy configuration which was saved as a file within pods associated with the running Ambassador k8s Service. Ambassador then "hot-restarted" the Envoy process running within the Ambassador pods, which triggered the loading of the new configuration. + +There were many benefits with this initial implementation. The mechanics involved were fundamentally simple, the transformation of Ambassador config into Envoy config was reliable, and the file-based hot restart integration with Envoy was dependable. + +However, there were also notable challenges with this version of Ambassador. First, although the hot restart was effective for the majority of our customers' use cases, it was not very fast, and some customers (particularly those with huge application deployments) found it was limiting the frequency with which they could change their configuration. Hot restart can also drop connections, especially long-lived connections like WebSockets or gRPC streams. + +More crucially, though, the first implementation of the IR allowed rapid prototyping but was primitive enough that it proved very difficult to make substantial changes. While this was a pain point from the beginning, it became a critical issue as Envoy shifted to the [Envoy v2 API](https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/v2_overview). It was clear that the v2 API would offer Ambassador many benefits -- as Matt Klein outlined in his blog post, "[The universal data plane API](https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a)" -- including access to new features and a solution to the connection-drop problem noted above, but it was also clear that the existing IR implementation was not capable of making the leap. + + +## Ambassador >= v0.50: Envoy v2 APIs (ADS), Testing with KAT, and Golang + +In consultation with the [Ambassador community](http://d6e.co/slack), the [Datawire](www.datawire.io) team undertook a redesign of the internals of Ambassador in 2018. This was driven by two key goals. 
First, we wanted to integrate Envoy's v2 configuration format, which would enable the support of features such as [SNI](https://www.getambassador.io/user-guide/sni/), [rate limiting](https://www.getambassador.io/user-guide/rate-limiting) and [gRPC authentication APIs](https://www.getambassador.io/user-guide/auth-tutorial). Second, we also wanted to do much more robust semantic validation of Envoy configuration due to its increasing complexity (particularly when operating with large-scale application deployments). + + +### Initial stages + +We started by restructuring the Ambassador internals more along the lines of a multipass compiler. The class hierarchy was made to more closely mirror the separation of concerns between the Ambassador configuration resources, the IR, and the Envoy configuration resources. Core parts of Ambassador were also redesigned to facilitate contributions from the community outside Datawire. We decided to take this approach for several reasons. First, Envoy Proxy is a very fast moving project, and we realized that we needed an approach where a seemingly minor Envoy configuration change didn't result in days of reengineering within Ambassador. In addition, we wanted to be able to provide semantic verification of configuration. + +As we started working more closely with Envoy v2, a testing challenge was quickly identified. As more and more features were being supported in Ambassador, more and more bugs appeared in Ambassador's handling of less common but completely valid combinations of features. This drove to creation of a new testing requirement that meant Ambassador's test suite needed to be reworked to automatically manage many combinations of features, rather than relying on humans to write each test individually. Moreover, we wanted the test suite to be fast in order to maximize engineering productivity. + +Thus, as part of the Ambassador rearchitecture, we introduced the [Kubernetes Acceptance Test (KAT)](https://github.com/datawire/ambassador/tree/master/kat) framework. KAT is an extensible test framework that: + + + +1. Deploys a bunch of services (along with Ambassador) to a Kubernetes cluster +1. Run a series of verification queries against the spun up APIs +1. Perform a bunch of assertions on those query results + +KAT is designed for performance -- it batches test setup upfront, and then runs all the queries in step 3 asynchronously with a high performance client. The traffic driver in KAT runs locally using [Telepresence](https://www.telepresence.io), which makes it easier to debug issues. + +### Introducing Golang to the Ambassador Stack + +With the KAT test framework in place, we quickly ran into some issues with Envoy v2 configuration and hot restart, which presented the opportunity to switch to use Envoy’s Aggregated Discovery Service (ADS) APIs instead of hot restart. This completely eliminated the requirement for restart on configuration changes, which we found could lead to dropped connection under high loads or long-lived connections. + +However, we faced an interesting question as we considered the move to the ADS. The ADS is not as simple as one might expect: there are explicit ordering dependencies when sending updates to Envoy. The Envoy project has reference implementations of the ordering logic, but only in Go and Java, where Ambassador was primarily in Python. We agonized a bit, and decided that the simplest way forward was to accept the polyglot nature of our world, and do our ADS implementation in Go. 
+ +We also found, with KAT, that our testing had reached the point where Python’s performance with many network connections was a limitation, so we took advantage of Go here, as well, writing KAT’s querying and backend services primarily in Go. After all, what’s another Golang dependency when you’ve already taken the plunge? + +With a new test framework, new IR generating valid Envoy v2 configuration, and the ADS, we thought we were done with the major architectural changes in Ambassador 0.50. Alas, we hit one more issue. On the Azure Kubernetes Service, Ambassador annotation changes were no longer being detected. + +Working with the highly-responsive AKS engineering team, we were able to identify the issue -- namely, the Kubernetes API server in AKS is exposed through a chain of proxies, requiring clients to be updating to understand how to connect using the FQDN of the API server, which is provided through a mutating webhook in AKS. Unfortunately, support for this feature was not available in the official Kubernetes Python client, so this was the third spot where we chose to switch to Go instead of Python. + +This raises the interesting question of, “why not ditch all the Python code, and just rewrite Ambassador entirely in Go?” It’s a valid question. The main concern with a rewrite is that Ambassador and Envoy operate at different conceptual levels rather than simply expressing the same concepts with different syntax. Being certain that we’ve expressed the conceptual bridges in a new language is not a trivial challenge, and not something to undertake without already having really excellent test coverage in place + +At this point, we use Go to coverage very specific, well-contained functions that can be verified for correctness much more easily that we could verify a complete Golang rewrite. In the future, who knows? But for 0.50.0, this functional split let us both take advantage of Golang’s strengths, while letting us retain more confidence about all the changes already in 0.50. + +## Lessons Learned + +We've learned a lot in the process of building [Ambassador 0.50](https://blog.getambassador.io/ambassador-0-50-ga-release-notes-sni-new-authservice-and-envoy-v2-support-3b30a4d04c81). Some of our key takeaways: + +* Kubernetes and Envoy are very powerful frameworks, but they are also extremely fast moving targets -- there is sometimes no substitute for reading the source code and talking to the maintainers (who are fortunately all quite accessible!) +* The best supported libraries in the Kubernetes / Envoy ecosystem are written in Go. While we love Python, we have had to adopt Go so that we're not forced to maintain too many components ourselves. +* Redesigning a test harness is sometimes necessary to move your software forward. +* The real cost in redesigning a test harness is often in porting your old tests to the new harness implementation. +* Designing (and implementing) an effective control plane for the edge proxy use case has been challenging, and the feedback from the open source community around Kubernetes, Envoy and Ambassador has been extremely useful. + +Migrating Ambassador to the Envoy v2 configuration and ADS APIs was a long and difficult journey that required lots of architecture and design discussions and plenty of coding, but early feedback from results have been positive. 
[Ambassador 0.50 is available now](https://blog.getambassador.io/announcing-ambassador-0-50-8dffab5b05e0), so you can take it for a test run and share your feedback with the community on our [Slack channel](http://d6e.co/slack) or on [Twitter](https://www.twitter.com/getambassadorio). diff --git a/content/en/docs/concepts/cluster-administration/cloud-providers.md b/content/en/docs/concepts/cluster-administration/cloud-providers.md index 68099874c8e4c..ff3df214b43ee 100644 --- a/content/en/docs/concepts/cluster-administration/cloud-providers.md +++ b/content/en/docs/concepts/cluster-administration/cloud-providers.md @@ -367,14 +367,21 @@ The `--hostname-override` parameter is ignored by the VSphere cloud provider. ## IBM Cloud Kubernetes Service ### Compute nodes -By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://console.bluemix.net/docs/containers/cs_clusters_planning.html#plan_clusters). +By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://cloud.ibm.com/docs/containers?topic=containers-plan_clusters#plan_clusters). The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance. ### Networking -The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://console.bluemix.net/docs/containers/cs_network_cluster.html#planning). +The IBM Cloud Kubernetes Service provider provides VLANs for quality network performance and network isolation for nodes. You can set up custom firewalls and Calico network policies to add an extra layer of security for your cluster, or connect your cluster to your on-prem data center via VPN. For more information, see [Planning in-cluster and private networking](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_cluster#cs_network_cluster). -To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Planning to expose your apps with external networking](https://console.bluemix.net/docs/containers/cs_network_planning.html#planning). +To expose apps to the public or within the cluster, you can leverage NodePort, LoadBalancer, or Ingress services. You can also customize the Ingress application load balancer with annotations. For more information, see [Planning to expose your apps with external networking](https://cloud.ibm.com/docs/containers?topic=containers-cs_network_planning#cs_network_planning). ### Storage -The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. 
For more information, see [Planning highly available persistent storage](https://console.bluemix.net/docs/containers/cs_storage_planning.html#storage_planning). +The IBM Cloud Kubernetes Service provider leverages Kubernetes-native persistent volumes to enable users to mount file, block, and cloud object storage to their apps. You can also use database-as-a-service and third-party add-ons for persistent storage of your data. For more information, see [Planning highly available persistent storage](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#storage_planning). + +## Baidu Cloud Container Engine + +### Node Name + +The Baidu cloud provider uses the private IP address of the node (as determined by the kubelet or overridden with `--hostname-override`) as the name of the Kubernetes Node object. +Note that the Kubernetes Node name must match the Baidu VM private IP. diff --git a/content/en/docs/concepts/cluster-administration/networking.md b/content/en/docs/concepts/cluster-administration/networking.md index d4fda1a89293d..bbd223e9def89 100644 --- a/content/en/docs/concepts/cluster-administration/networking.md +++ b/content/en/docs/concepts/cluster-administration/networking.md @@ -7,8 +7,9 @@ weight: 50 --- {{% capture overview %}} -Kubernetes approaches networking somewhat differently than Docker does by -default. There are 4 distinct networking problems to solve: +Networking is a central part of Kubernetes, but it can be challenging to +understand exactly how it is expected to work. There are 4 distinct networking +problems to address: 1. Highly-coupled container-to-container communications: this is solved by [pods](/docs/concepts/workloads/pods/pod/) and `localhost` communications. @@ -21,80 +22,56 @@ default. There are 4 distinct networking problems to solve: {{% capture body %}} -Kubernetes assumes that pods can communicate with other pods, regardless of -which host they land on. Every pod gets its own IP address so you do not -need to explicitly create links between pods and you almost never need to deal -with mapping container ports to host ports. This creates a clean, -backwards-compatible model where pods can be treated much like VMs or physical -hosts from the perspectives of port allocation, naming, service discovery, load -balancing, application configuration, and migration. - -There are requirements imposed on how you set up your cluster networking to -achieve this. - -## Docker model - -Before discussing the Kubernetes approach to networking, it is worthwhile to -review the "normal" way that networking works with Docker. By default, Docker -uses host-private networking. It creates a virtual bridge, called `docker0` by -default, and allocates a subnet from one of the private address blocks defined -in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each -container that Docker creates, it allocates a virtual Ethernet device (called -`veth`) which is attached to the bridge. The veth is mapped to appear as `eth0` -in the container, using Linux namespaces. The in-container `eth0` interface is -given an IP address from the bridge's address range. - -The result is that Docker containers can talk to other containers only if they -are on the same machine (and thus the same virtual bridge). Containers on -different machines can not reach each other - in fact they may end up with the -exact same network ranges and IP addresses. 
- -In order for Docker containers to communicate across nodes, there must -be allocated ports on the machine’s own IP address, which are then -forwarded or proxied to the containers. This obviously means that -containers must either coordinate which ports they use very carefully -or ports must be allocated dynamically. - -## Kubernetes model - -Coordinating ports across multiple developers is very difficult to do at -scale and exposes users to cluster-level issues outside of their control. +Kubernetes is all about sharing machines between applications. Typically, +sharing machines requires ensuring that two applications do not try to use the +same ports. Coordinating ports across multiple developers is very difficult to +do at scale and exposes users to cluster-level issues outside of their control. + Dynamic port allocation brings a lot of complications to the system - every application has to take ports as flags, the API servers have to know how to insert dynamic port numbers into configuration blocks, services have to know how to find each other, etc. Rather than deal with this, Kubernetes takes a different approach. +## The Kubernetes network model + +Every `Pod` gets its own IP address. This means you do not need to explicitly +create links between `Pods` and you almost never need to deal with mapping +container ports to host ports. This creates a clean, backwards-compatible +model where `Pods` can be treated much like VMs or physical hosts from the +perspectives of port allocation, naming, service discovery, load balancing, +application configuration, and migration. + Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies): - * all containers can communicate with all other containers without NAT - * all nodes can communicate with all containers (and vice-versa) without NAT - * the IP that a container sees itself as is the same IP that others see it as + * pods on a node can communicate with all pods on all nodes without NAT + * agents on a node (e.g. system daemons, kubelet) can communicate with all + pods on that node -What this means in practice is that you can not just take two computers -running Docker and expect Kubernetes to work. You must ensure that the -fundamental requirements are met. +Note: For those platforms that support `Pods` running in the host network (e.g. +Linux): + + * pods in the host network of a node can communicate with all pods on all + nodes without NAT This model is not only less complex overall, but it is principally compatible with the desire for Kubernetes to enable low-friction porting of apps from VMs to containers. If your job previously ran in a VM, your VM had an IP and could talk to other VMs in your project. This is the same basic model. -Until now this document has talked about containers. In reality, Kubernetes -applies IP addresses at the `Pod` scope - containers within a `Pod` share their -network namespaces - including their IP address. This means that containers -within a `Pod` can all reach each other's ports on `localhost`. This does imply -that containers within a `Pod` must coordinate port usage, but this is no -different than processes in a VM. This is called the "IP-per-pod" model. This -is implemented, using Docker, as a "pod container" which holds the network namespace -open while "app containers" (the things the user specified) join that namespace -with Docker's `--net=container:` function. 
- -As with Docker, it is possible to request host ports, but this is reduced to a -very niche operation. In this case a port will be allocated on the host `Node` -and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the -existence or non-existence of host ports. +Kubernetes IP addresses exist at the `Pod` scope - containers within a `Pod` +share their network namespaces - including their IP address. This means that +containers within a `Pod` can all reach each other's ports on `localhost`. This +also means that containers within a `Pod` must coordinate port usage, but this +is no different than processes in a VM. This is called the "IP-per-pod" model. + +How this is implemented is a detail of the particular container runtime in use. + +It is possible to request ports on the `Node` itself which forward to your `Pod` +(called host ports), but this is a very niche operation. How that forwarding is +implemented is also a detail of the container runtime. The `Pod` itself is +blind to the existence or non-existence of host ports. ## How to implement the Kubernetes networking model diff --git a/content/en/docs/concepts/configuration/secret.md b/content/en/docs/concepts/configuration/secret.md index cac58727d7b39..fa654f776e66c 100644 --- a/content/en/docs/concepts/configuration/secret.md +++ b/content/en/docs/concepts/configuration/secret.md @@ -13,10 +13,10 @@ weight: 50 {{% capture overview %}} -Objects of type `secret` are intended to hold sensitive information, such as -passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` -is safer and more flexible than putting it verbatim in a `pod` definition or in -a docker image. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. +Kubernetes `secret` objects let you store and manage sensitive information, such +as passwords, OAuth tokens, and ssh keys. Putting this information in a `secret` +is safer and more flexible than putting it verbatim in a +{{< glossary_tooltip term_id="pod" >}} definition or in a {{< glossary_tooltip text="container image" term_id="image" >}}. See [Secrets design document](https://git.k8s.io/community/contributors/design-proposals/auth/secrets.md) for more information. {{% /capture %}} @@ -32,7 +32,8 @@ more control over how it is used, and reduces the risk of accidental exposure. Users can create secrets, and the system also creates some secrets. To use a secret, a pod needs to reference the secret. -A secret can be used with a pod in two ways: as files in a [volume](/docs/concepts/storage/volumes/) mounted on one or more of +A secret can be used with a pod in two ways: as files in a +{{< glossary_tooltip text="volume" term_id="volume" >}} mounted on one or more of its containers, or used by kubelet when pulling images for the pod. ### Built-in Secrets @@ -94,11 +95,14 @@ password.txt: 12 bytes username.txt: 5 bytes ``` -Note that neither `get` nor `describe` shows the contents of the file by default. -This is to protect the secret from being exposed accidentally to someone looking +{{< note >}} +`kubectl get` and `kubectl describe` avoid showing the contents of a secret by +default. +This is to protect the secret from being exposed accidentally to an onlooker, or from being stored in a terminal log. +{{< /note >}} -See [decoding a secret](#decoding-a-secret) for how to see the contents. +See [decoding a secret](#decoding-a-secret) for how to see the contents of a secret. 
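+
+If you just need a single value at the command line, here is a minimal sketch, assuming the secret created from the files above is named `db-user-pass` (keys that contain dots must be escaped in the JSONPath expression):
+
+```bash
+# Print one key of the secret and decode it locally.
+kubectl get secret db-user-pass -o jsonpath='{.data.password\.txt}' | base64 --decode
+```
+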
#### Creating a Secret Manually @@ -271,8 +275,9 @@ $ echo 'MWYyZDFlMmU2N2Rm' | base64 --decode ### Using Secrets -Secrets can be mounted as data volumes or be exposed as environment variables to -be used by a container in a pod. They can also be used by other parts of the +Secrets can be mounted as data volumes or be exposed as +{{< glossary_tooltip text="environment variables" term_id="container-env-variables" >}} +to be used by a container in a pod. They can also be used by other parts of the system, without being directly exposed to the pod. For example, they can hold credentials that other parts of the system should use to interact with external systems on your behalf. @@ -458,7 +463,8 @@ Secret updates. #### Using Secrets as Environment Variables -To use a secret in an environment variable in a pod: +To use a secret in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}} +in a pod: 1. Create a secret or use an existing one. Multiple pods can reference the same secret. 1. Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in `env[].valueFrom.secretKeyRef`. @@ -534,10 +540,10 @@ Secret volume sources are validated to ensure that the specified object reference actually points to an object of type `Secret`. Therefore, a secret needs to be created before any pods that depend on it. -Secret API objects reside in a namespace. They can only be referenced by pods -in that same namespace. +Secret API objects reside in a {{< glossary_tooltip text="namespace" term_id="namespace" >}}. +They can only be referenced by pods in that same namespace. -Individual secrets are limited to 1MB in size. This is to discourage creation +Individual secrets are limited to 1MiB in size. This is to discourage creation of very large secrets which would exhaust apiserver and kubelet memory. However, creation of many smaller secrets could also exhaust memory. More comprehensive limits on memory usage due to secrets is a planned feature. @@ -549,8 +555,8 @@ controller. It does not include pods created via the kubelets not common ways to create pods.) Secrets must be created before they are consumed in pods as environment -variables unless they are marked as optional. References to Secrets that do not exist will prevent -the pod from starting. +variables unless they are marked as optional. References to Secrets that do +not exist will prevent the pod from starting. References via `secretKeyRef` to keys that do not exist in a named Secret will prevent the pod from starting. @@ -821,6 +827,7 @@ be available in future releases of Kubernetes. ## Security Properties + ### Protections Because `secret` objects can be created independently of the `pods` that use @@ -829,51 +836,52 @@ creating, viewing, and editing pods. The system can also take additional precautions with `secret` objects, such as avoiding writing them to disk where possible. -A secret is only sent to a node if a pod on that node requires it. It is not -written to disk. It is stored in a tmpfs. It is deleted once the pod that -depends on it is deleted. - -On most Kubernetes-project-maintained distributions, communication between user -to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. -Secrets are protected when transmitted over these channels. 
- -Secret data on nodes is stored in tmpfs volumes and thus does not come to rest -on the node. +A secret is only sent to a node if a pod on that node requires it. +Kubelet stores the secret into a `tmpfs` so that the secret is not written +to disk storage. Once the Pod that depends on the secret is deleted, kubelet +will delete its local copy of the secret data as well. There may be secrets for several pods on the same node. However, only the secrets that a pod requests are potentially visible within its containers. -Therefore, one Pod does not have access to the secrets of another pod. +Therefore, one Pod does not have access to the secrets of another Pod. There may be several containers in a pod. However, each container in a pod has to request the secret volume in its `volumeMounts` for it to be visible within the container. This can be used to construct useful [security partitions at the Pod level](#use-case-secret-visible-to-one-container-in-a-pod). +On most Kubernetes-project-maintained distributions, communication between user +to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS. +Secrets are protected when transmitted over these channels. + +{{< feature-state for_k8s_version="v1.13" state="beta" >}} + +You can enable [encryption at rest](/docs/tasks/administer-cluster/encrypt-data/) +for secret data, so that the secrets are not stored in the clear into {{< glossary_tooltip term_id="etcd" >}}. + ### Risks - - In the API server secret data is stored as plaintext in etcd; therefore: + - In the API server secret data is stored in {{< glossary_tooltip term_id="etcd" >}}; + therefore: + - Administrators should enable encryption at rest for cluster data (requires v1.13 or later) - Administrators should limit access to etcd to admin users - - Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks - used by etcd when no longer in use + - Administrators may want to wipe/shred disks used by etcd when no longer in use + - If running etcd in a cluster, administrators should make sure to use SSL/TLS + for etcd peer-to-peer communication. - If you configure the secret through a manifest (JSON or YAML) file which has the secret data encoded as base64, sharing this file or checking it in to a - source repository means the secret is compromised. Base64 encoding is not an + source repository means the secret is compromised. Base64 encoding is _not_ an encryption method and is considered the same as plain text. - Applications still need to protect the value of secret after reading it from the volume, such as not accidentally logging it or transmitting it to an untrusted party. - A user who can create a pod that uses a secret can also see the value of that secret. Even if apiserver policy does not allow that user to read the secret object, the user could run a pod which exposes the secret. - - If multiple replicas of etcd are run, then the secrets will be shared between them. - By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured. - - Currently, anyone with root on any node can read any secret from the apiserver, + - Currently, anyone with root on any node can read _any_ secret from the apiserver, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node. 
-{{< note >}} -As of 1.7 [encryption of secret data at rest is supported](/docs/tasks/administer-cluster/encrypt-data/). -{{< /note >}} {{% capture whatsnext %}} diff --git a/content/en/docs/concepts/containers/container-lifecycle-hooks.md b/content/en/docs/concepts/containers/container-lifecycle-hooks.md index 3825868850c51..08d855732fabb 100644 --- a/content/en/docs/concepts/containers/container-lifecycle-hooks.md +++ b/content/en/docs/concepts/containers/container-lifecycle-hooks.md @@ -36,7 +36,7 @@ No parameters are passed to the handler. `PreStop` -This hook is called immediately before a container is terminated. +This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler. diff --git a/content/en/docs/concepts/containers/images.md b/content/en/docs/concepts/containers/images.md index 8885784e4c76f..acb3cd43fc892 100644 --- a/content/en/docs/concepts/containers/images.md +++ b/content/en/docs/concepts/containers/images.md @@ -149,9 +149,9 @@ Once you have those variables filled in you can ### Using IBM Cloud Container Registry IBM Cloud Container Registry provides a multi-tenant private image registry that you can use to safely store and share your Docker images. By default, images in your private registry are scanned by the integrated Vulnerability Advisor to detect security issues and potential vulnerabilities. Users in your IBM Cloud account can access your images, or you can create a token to grant access to registry namespaces. -To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://console.bluemix.net/docs/services/Registry/index.html#index). +To install the IBM Cloud Container Registry CLI plug-in and create a namespace for your images, see [Getting started with IBM Cloud Container Registry](https://cloud.ibm.com/docs/services/Registry?topic=registry-index#index). -You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://console.bluemix.net/docs/services/RegistryImages/index.html#ibm_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://console.bluemix.net/docs/containers/cs_images.html#images). +You can use the IBM Cloud Container Registry to deploy containers from [IBM Cloud public images](https://cloud.ibm.com/docs/services/Registry?topic=registry-public_images#public_images) and your private images into the `default` namespace of your IBM Cloud Kubernetes Service cluster. To deploy a container into other namespaces, or to use an image from a different IBM Cloud Container Registry region or IBM Cloud account, create a Kubernetes `imagePullSecret`. For more information, see [Building containers from images](https://cloud.ibm.com/docs/containers?topic=containers-images#images). 
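To make the `imagePullSecret` workflow mentioned above concrete, here is a minimal sketch of a Pod that pulls from a private registry (the secret name `myregistrykey` and the image reference are illustrative placeholders, not values taken from this page):

```yaml
# Sketch: the Pod references an existing kubernetes.io/dockerconfigjson Secret
# so the kubelet can authenticate to the private registry when pulling the image.
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
    - name: app
      image: registry.example.com/my-namespace/my-app:1.0  # placeholder image
  imagePullSecrets:
    - name: myregistrykey  # assumed to exist in the same namespace as the Pod
```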
### Configuring Nodes to Authenticate to a Private Registry @@ -318,7 +318,7 @@ type: kubernetes.io/dockerconfigjson If you get the error message `error: no objects passed to create`, it may mean the base64 encoded string is invalid. If you get an error message like `Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ...`, it means -the data was successfully un-base64 encoded, but could not be parsed as a `.docker/config.json` file. +the base64 encoded string in the data was successfully decoded, but could not be parsed as a `.docker/config.json` file. #### Referring to an imagePullSecrets on a Pod diff --git a/content/en/docs/concepts/overview/components.md b/content/en/docs/concepts/overview/components.md index c473a42df98a2..af38f156e486a 100644 --- a/content/en/docs/concepts/overview/components.md +++ b/content/en/docs/concepts/overview/components.md @@ -4,6 +4,9 @@ reviewers: title: Kubernetes Components content_template: templates/concept weight: 20 +card: + name: concepts + weight: 20 --- {{% capture overview %}} @@ -76,7 +79,8 @@ network rules on the host and performing connection forwarding. ### Container Runtime -The container runtime is the software that is responsible for running containers. Kubernetes supports several runtimes: [Docker](http://www.docker.com), [rkt](https://coreos.com/rkt/), [runc](https://github.com/opencontainers/runc) and any OCI [runtime-spec](https://github.com/opencontainers/runtime-spec) implementation. +The container runtime is the software that is responsible for running containers. +Kubernetes supports several runtimes: [Docker](http://www.docker.com), [containerd](https://containerd.io), [cri-o](https://cri-o.io/), [rktlet](https://github.com/kubernetes-incubator/rktlet) and any implementation of the [Kubernetes CRI (Container Runtime Interface)](https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md). ## Addons diff --git a/content/en/docs/concepts/overview/kubernetes-api.md b/content/en/docs/concepts/overview/kubernetes-api.md index 179b471dcd891..2ef08ade264bb 100644 --- a/content/en/docs/concepts/overview/kubernetes-api.md +++ b/content/en/docs/concepts/overview/kubernetes-api.md @@ -4,6 +4,9 @@ reviewers: title: The Kubernetes API content_template: templates/concept weight: 30 +card: + name: concepts + weight: 30 --- {{% capture overview %}} diff --git a/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md b/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md index 38b194d2a4fbd..bc83bd6b03e95 100644 --- a/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md +++ b/content/en/docs/concepts/overview/object-management-kubectl/imperative-command.md @@ -73,7 +73,7 @@ that must be set: The `kubectl` command also supports update commands driven by an aspect of the object. Setting this aspect may set different fields for different object types: -- `set` : Set an aspect of an object. +- `set` ``: Set an aspect of an object. {{< note >}} In Kubernetes version 1.5, not every verb-driven command has an associated aspect-driven command. 
@@ -160,5 +160,3 @@ kubectl create --edit -f /tmp/srv.yaml - [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/) - [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/) {{% /capture %}} - - diff --git a/content/en/docs/concepts/overview/what-is-kubernetes.md b/content/en/docs/concepts/overview/what-is-kubernetes.md index 014d4945c03f4..6bfd4404f0cae 100644 --- a/content/en/docs/concepts/overview/what-is-kubernetes.md +++ b/content/en/docs/concepts/overview/what-is-kubernetes.md @@ -5,6 +5,9 @@ reviewers: title: What is Kubernetes? content_template: templates/concept weight: 10 +card: + name: concepts + weight: 10 --- {{% capture overview %}} diff --git a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md index ae529e39bcfe7..00d1cc65f8171 100644 --- a/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md +++ b/content/en/docs/concepts/overview/working-with-objects/kubernetes-objects.md @@ -2,6 +2,9 @@ title: Understanding Kubernetes Objects content_template: templates/concept weight: 10 +card: + name: concepts + weight: 40 --- {{% capture overview %}} @@ -28,7 +31,7 @@ Every Kubernetes object includes two nested object fields that govern the object For example, a Kubernetes Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application--updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction--in this case, starting a replacement instance. -For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/api-conventions.md). +For more information on the object spec, status, and metadata, see the [Kubernetes API Conventions](https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md). 
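To see the spec/status split in a concrete object, consider this minimal Deployment sketch (the name, labels, and image are illustrative); only the desired state is written by the user, while `.status` is filled in and continuously updated by the system:

```yaml
# Minimal sketch: .spec declares three replicas of the Pod template below.
# Kubernetes compares the observed .status against this .spec and corrects drift.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
```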
### Describing a Kubernetes Object diff --git a/content/en/docs/concepts/policy/pod-security-policy.md b/content/en/docs/concepts/policy/pod-security-policy.md index 4f1c658aa81d6..607d2c93e6db9 100644 --- a/content/en/docs/concepts/policy/pod-security-policy.md +++ b/content/en/docs/concepts/policy/pod-security-policy.md @@ -41,7 +41,7 @@ administrator to control the following: | Restricting escalation to root privileges | [`allowPrivilegeEscalation`, `defaultAllowPrivilegeEscalation`](#privilege-escalation) | | Linux capabilities | [`defaultAddCapabilities`, `requiredDropCapabilities`, `allowedCapabilities`](#capabilities) | | The SELinux context of the container | [`seLinux`](#selinux) | -| The Allowed Proc Mount types for the container | [`allowedProcMountTypes`](#allowedProcMountTypes) | +| The Allowed Proc Mount types for the container | [`allowedProcMountTypes`](#allowedprocmounttypes) | | The AppArmor profile used by containers | [annotations](#apparmor) | | The seccomp profile used by containers | [annotations](#seccomp) | | The sysctl profile used by containers | [annotations](#sysctl) | @@ -336,7 +336,6 @@ pause-7774d79b5-qrgcb 0/1 Pending 0 1s pause-7774d79b5-qrgcb 0/1 Pending 0 1s pause-7774d79b5-qrgcb 0/1 ContainerCreating 0 1s pause-7774d79b5-qrgcb 1/1 Running 0 2s -^C ``` ### Clean up diff --git a/content/en/docs/concepts/services-networking/connect-applications-service.md b/content/en/docs/concepts/services-networking/connect-applications-service.md index ae0160ad9fb3b..796e66d30612d 100644 --- a/content/en/docs/concepts/services-networking/connect-applications-service.md +++ b/content/en/docs/concepts/services-networking/connect-applications-service.md @@ -17,7 +17,7 @@ Now that you have a continuously running, replicated application you can expose By default, Docker uses host-private networking, so containers can talk to other containers only if they are on the same machine. In order for Docker containers to communicate across nodes, there must be allocated ports on the machine’s own IP address, which are then forwarded or proxied to the containers. This obviously means that containers must either coordinate which ports they use very carefully or ports must be allocated dynamically. -Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or mapping container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model. +Coordinating ports across multiple developers is very difficult to do at scale and exposes users to cluster-level issues outside of their control. Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document will elaborate on how you can run reliable services on such a networking model. 
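As a small, self-contained illustration of that model (a sketch, not one of the examples used later in this guide), two containers in a single Pod share one network namespace and can reach each other on `localhost` without any port mapping:

```yaml
# Both containers share the Pod's IP and network namespace, so the "probe"
# container can reach the nginx container on localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
    - name: probe
      image: busybox
      command: ["sh", "-c", "while true; do wget -q -O - http://localhost:80 > /dev/null; sleep 10; done"]
```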
This guide uses a simple nginx server to demonstrate proof of concept. The same principles are embodied in a more complete [Jenkins CI application](https://kubernetes.io/blog/2015/07/strong-simple-ssl-for-kubernetes). diff --git a/content/en/docs/concepts/services-networking/dns-pod-service.md b/content/en/docs/concepts/services-networking/dns-pod-service.md index da5162073b750..5c519d479a904 100644 --- a/content/en/docs/concepts/services-networking/dns-pod-service.md +++ b/content/en/docs/concepts/services-networking/dns-pod-service.md @@ -84,7 +84,7 @@ the hostname of the pod. For example, given a Pod with `hostname` set to The Pod spec also has an optional `subdomain` field which can be used to specify its subdomain. For example, a Pod with `hostname` set to "`foo`", and `subdomain` set to "`bar`", in namespace "`my-namespace`", will have the fully qualified -domain name (FQDN) "`foo.bar.my-namespace.pod.cluster.local`". +domain name (FQDN) "`foo.bar.my-namespace.svc.cluster.local`". Example: @@ -141,7 +141,7 @@ record for the Pod's fully qualified hostname. For example, given a Pod with the hostname set to "`busybox-1`" and the subdomain set to "`default-subdomain`", and a headless Service named "`default-subdomain`" in the same namespace, the pod will see its own FQDN as -"`busybox-1.default-subdomain.my-namespace.pod.cluster.local`". DNS serves an +"`busybox-1.default-subdomain.my-namespace.svc.cluster.local`". DNS serves an A record at that name, pointing to the Pod's IP. Both pods "`busybox1`" and "`busybox2`" can have their distinct A records. diff --git a/content/en/docs/concepts/services-networking/ingress-controllers.md b/content/en/docs/concepts/services-networking/ingress-controllers.md new file mode 100644 index 0000000000000..57af46a01a0c5 --- /dev/null +++ b/content/en/docs/concepts/services-networking/ingress-controllers.md @@ -0,0 +1,74 @@ +--- +title: Ingress Controllers +reviewers: +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +In order for the Ingress resource to work, the cluster must have an ingress controller running. + +Unlike other types of controllers which run as part of the `kube-controller-manager` binary, Ingress controllers +are not started automatically with a cluster. Use this page to choose the ingress controller implementation +that best fits your cluster. + +Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and + [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. + +{{% /capture %}} + +{{% capture body %}} + +## Additional controllers + +* [Ambassador](https://www.getambassador.io/) API Gateway is an [Envoy](https://www.envoyproxy.io) based ingress + controller with [community](https://www.getambassador.io/docs) or + [commercial](https://www.getambassador.io/pro/) support from [Datawire](https://www.datawire.io/). +* [AppsCode Inc.](https://appscode.com) offers support and maintenance for the most widely used [HAProxy](http://www.haproxy.org/) based ingress controller [Voyager](https://appscode.com/products/voyager). +* [Contour](https://github.com/heptio/contour) is an [Envoy](https://www.envoyproxy.io) based ingress controller + provided and supported by Heptio. 
+* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments. +* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508) + for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest). +* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io). +* [HAProxy](http://www.haproxy.org/) based ingress controller + [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress) which is mentioned on the blog post + [HAProxy Ingress Controller for Kubernetes](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/). + [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for HAProxy Enterprise and + the ingress controller [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress). +* [Istio](https://istio.io/) based ingress controller + [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/). +* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or + [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the + [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller). +* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the + [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). +* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller + ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial + support by [Containous](https://containo.us/services). + +## Using multiple Ingress controllers + +You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) +within a cluster. When you create an ingress, you should annotate each ingress with the appropriate +[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) +to indicate which ingress controller should be used if more than one exists within your cluster. + +If you do not define a class, your cloud provider may use a default ingress provider. + +Ideally, all ingress controllers should fulfill this specification, but the various ingress +controllers operate slightly differently. + +{{< note >}} +Make sure you review your ingress controller's documentation to understand the caveats of choosing it. +{{< /note >}} + +{{% /capture %}} + +{{% capture whatsnext %}} + +* Learn more about [Ingress](/docs/concepts/services-networking/ingress/). +* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube). 
+ +{{% /capture %}} diff --git a/content/en/docs/concepts/services-networking/ingress.md b/content/en/docs/concepts/services-networking/ingress.md index a29ff81515d19..64b094ff21a7a 100644 --- a/content/en/docs/concepts/services-networking/ingress.md +++ b/content/en/docs/concepts/services-networking/ingress.md @@ -25,7 +25,7 @@ For the sake of clarity, this guide defines the following terms: Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to {{< link text="services" url="/docs/concepts/services-networking/service/" >}} within the cluster. -Traffic routing is controlled by rules defined on the ingress resource. +Traffic routing is controlled by rules defined on the Ingress resource. ```none internet @@ -35,9 +35,9 @@ Traffic routing is controlled by rules defined on the ingress resource. [ Services ] ``` -An ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An [ingress controller](#ingress-controllers) is responsible for fulfilling the ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic. +An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers) is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic. -An ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically +An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) or [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer). @@ -45,52 +45,19 @@ uses a service of type [Service.Type=NodePort](/docs/concepts/services-networkin {{< feature-state for_k8s_version="v1.1" state="beta" >}} -Before you start using an ingress, there are a few things you should understand. The ingress is a beta resource. You will need an ingress controller to satisfy an ingress, simply creating the resource will have no effect. +Before you start using an Ingress, there are a few things you should understand. The Ingress is a beta resource. -GCE/Google Kubernetes Engine deploys an [ingress controller](#ingress-controllers) on the master. Review the +{{< note >}} +You must have an [Ingress controller](/docs/concepts/services-networking/ingress-controllers) to satisfy an Ingress. Only creating an Ingress resource has no effect. +{{< /note >}} + +GCE/Google Kubernetes Engine deploys an Ingress controller on the master. Review the [beta limitations](https://github.com/kubernetes/ingress-gce/blob/master/BETA_LIMITATIONS.md#glbc-beta-limitations) of this controller if you are using GCE/GKE. In environments other than GCE/Google Kubernetes Engine, you may need to [deploy an ingress controller](https://kubernetes.github.io/ingress-nginx/deploy/). There are a number of -[ingress controller](#ingress-controllers) you may choose from. - -## Ingress controllers - -In order for the ingress resource to work, the cluster must have an ingress controller running. 
This is unlike other types of controllers, which run as part of the `kube-controller-manager` binary, and are typically started automatically with a cluster. Choose the ingress controller implementation that best fits your cluster. - -* Kubernetes as a project currently supports and maintains [GCE](https://git.k8s.io/ingress-gce/README.md) and - [nginx](https://git.k8s.io/ingress-nginx/README.md) controllers. - -Additional controllers include: - -* [Contour](https://github.com/heptio/contour) is an [Envoy](https://www.envoyproxy.io) based ingress controller - provided and supported by Heptio. -* Citrix provides an [Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) for its hardware (MPX), virtualized (VPX) and [free containerized (CPX) ADC](https://www.citrix.com/products/citrix-adc/cpx-express.html) for [baremetal](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment/baremetal) and [cloud](https://github.com/citrix/citrix-k8s-ingress-controller/tree/master/deployment) deployments. -* F5 Networks provides [support and maintenance](https://support.f5.com/csp/article/K86859508) - for the [F5 BIG-IP Controller for Kubernetes](http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest). -* [Gloo](https://gloo.solo.io) is an open-source ingress controller based on [Envoy](https://www.envoyproxy.io) which offers API Gateway functionality with enterprise support from [solo.io](https://www.solo.io). -* [HAProxy](http://www.haproxy.org/) based ingress controller - [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress) which is mentioned on the blog post - [HAProxy Ingress Controller for Kubernetes](https://www.haproxy.com/blog/haproxy_ingress_controller_for_kubernetes/). - [HAProxy Technologies](https://www.haproxy.com/) offers support and maintenance for HAProxy Enterprise and - the ingress controller [jcmoraisjr/haproxy-ingress](https://github.com/jcmoraisjr/haproxy-ingress). -* [Istio](https://istio.io/) based ingress controller - [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/). -* [Kong](https://konghq.com/) offers [community](https://discuss.konghq.com/c/kubernetes) or - [commercial](https://konghq.com/kong-enterprise/) support and maintenance for the - [Kong Ingress Controller for Kubernetes](https://github.com/Kong/kubernetes-ingress-controller). -* [NGINX, Inc.](https://www.nginx.com/) offers support and maintenance for the - [NGINX Ingress Controller for Kubernetes](https://www.nginx.com/products/nginx/kubernetes-ingress-controller). -* [Traefik](https://github.com/containous/traefik) is a fully featured ingress controller - ([Let's Encrypt](https://letsencrypt.org), secrets, http2, websocket), and it also comes with commercial - support by [Containous](https://containo.us/services). - -You may deploy [any number of ingress controllers](https://git.k8s.io/ingress-nginx/docs/user-guide/multiple-ingress.md#multiple-ingress-controllers) within a cluster. -When you create an ingress, you should annotate each ingress with the appropriate -[`ingress.class`](https://git.k8s.io/ingress-gce/docs/faq/README.md#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) to indicate which ingress -controller should be used if more than one exists within your cluster. -If you do not define a class, your cloud provider may use a default ingress provider. +[ingress controllers](/docs/concepts/services-networking/ingress-controllers) you may choose from. 
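If more than one Ingress controller is installed in a cluster, the `ingress.class` annotation described on the Ingress controllers page indicates which controller should act on a particular Ingress. The following is a sketch only; the class value `nginx` and the backend Service name and port are illustrative assumptions:

```yaml
# Illustrative only: the annotation value must match the class your chosen
# controller watches for; controllers ignore Ingresses carrying another class.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: class-annotated-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
```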
### Before you begin @@ -122,14 +89,14 @@ spec: servicePort: 80 ``` - As with all other Kubernetes resources, an ingress needs `apiVersion`, `kind`, and `metadata` fields. + As with all other Kubernetes resources, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [deploying applications](/docs/tasks/run-application/run-stateless-application-deployment/), [configuring containers](/docs/tasks/configure-pod-container/configure-pod-configmap/), [managing resources](/docs/concepts/cluster-administration/manage-deployment/). - Ingress frequently uses annotations to configure some options depending on the ingress controller, an example of which + Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which is the [rewrite-target annotation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md). - Different [ingress controller](#ingress-controllers) support different annotations. Review the documentation for - your choice of ingress controller to learn which annotations are supported. + Different [Ingress controllers](/docs/concepts/services-networking/ingress-controllers) support different annotations. Review the documentation for + your choice of Ingress controller to learn which annotations are supported. -The ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) +The Ingress [spec](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status) has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Ingress resource only supports rules for directing HTTP traffic. @@ -146,18 +113,17 @@ Each http rule contains the following information: loadbalancer will direct traffic to the referenced service. * A backend is a combination of service and port names as described in the [services doc](/docs/concepts/services-networking/service/). HTTP (and HTTPS) requests to the - ingress matching the host and path of the rule will be sent to the listed backend. + Ingress matching the host and path of the rule will be sent to the listed backend. -A default backend is often configured in an ingress controller that will service any requests that do not +A default backend is often configured in an Ingress controller that will service any requests that do not match a path in the spec. ### Default Backend -An ingress with no rules sends all traffic to a single default backend. The default -backend is typically a configuration option of the [ingress controller](#ingress-controllers) -and is not specified in your ingress resources. +An Ingress with no rules sends all traffic to a single default backend. The default +backend is typically a configuration option of the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) and is not specified in your Ingress resources. -If none of the hosts or paths match the HTTP request in the ingress objects, the traffic is +If none of the hosts or paths match the HTTP request in the Ingress objects, the traffic is routed to your default backend. ## Types of Ingress @@ -165,7 +131,7 @@ routed to your default backend. ### Single Service Ingress There are existing Kubernetes concepts that allow you to expose a single Service -(see [alternatives](#alternatives)).
You can also do this with an ingress by specifying a +(see [alternatives](#alternatives)). You can also do this with an Ingress by specifying a *default backend* with no rules. {{< codenew file="service/networking/ingress.yaml" >}} @@ -181,8 +147,8 @@ NAME HOSTS ADDRESS PORTS AGE test-ingress * 107.178.254.228 80 59s ``` -Where `107.178.254.228` is the IP allocated by the ingress controller to satisfy -this ingress. +Where `107.178.254.228` is the IP allocated by the Ingress controller to satisfy +this Ingress. {{< note >}} Ingress controllers and load balancers may take a minute or two to allocate an IP address. @@ -192,7 +158,7 @@ Until that time you will often see the address listed as ``. ### Simple fanout A fanout configuration routes traffic from a single IP address to more than one service, -based on the HTTP URI being requested. An ingress allows you to keep the number of loadbalancers +based on the HTTP URI being requested. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like: ```shell @@ -200,7 +166,7 @@ foo.bar.com -> 178.91.123.132 -> / foo service1:4200 / bar service2:8080 ``` -would require an ingress such as: +would require an Ingress such as: ```yaml apiVersion: extensions/v1beta1 @@ -224,7 +190,7 @@ spec: servicePort: 8080 ``` -When you create the ingress with `kubectl create -f`: +When you create the Ingress with `kubectl create -f`: ```shell kubectl describe ingress simple-fanout-example @@ -249,13 +215,13 @@ Events: Normal ADD 22s loadbalancer-controller default/test ``` -The ingress controller will provision an implementation specific loadbalancer -that satisfies the ingress, as long as the services (`s1`, `s2`) exist. -When it has done so, you will see the address of the loadbalancer at the +The Ingress controller provisions an implementation specific loadbalancer +that satisfies the Ingress, as long as the services (`s1`, `s2`) exist. +When it has done so, you can see the address of the loadbalancer at the Address field. {{< note >}} -Depending on the [ingress controller](#ingress-controllers) you are using, you may need to +Depending on the [Ingress controller](/docs/concepts/services-networking/ingress-controllers) you are using, you may need to create a default-http-backend [Service](/docs/concepts/services-networking/service/). {{< /note >}} @@ -269,7 +235,7 @@ foo.bar.com --| |-> foo.bar.com s1:80 bar.foo.com --| |-> bar.foo.com s2:80 ``` -The following ingress tells the backing loadbalancer to route requests based on +The following Ingress tells the backing loadbalancer to route requests based on the [Host header](https://tools.ietf.org/html/rfc7230#section-5.4). ```yaml @@ -293,9 +259,9 @@ spec: servicePort: 80 ``` -If you create an ingress resource without any hosts defined in the rules, then any -web traffic to the IP address of your ingress controller can be matched without a name based -virtual host being required. For example, the following ingress resource will route traffic +If you create an Ingress resource without any hosts defined in the rules, then any +web traffic to the IP address of your Ingress controller can be matched without a name based +virtual host being required. For example, the following Ingress resource will route traffic requested for `first.bar.com` to `service1`, `second.foo.com` to `service2`, and any traffic to the IP address without a hostname defined in request (that is, without a request header being presented) to `service3`. 
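The manifest referenced there is not reproduced in this hunk; a sketch of what such an Ingress could look like follows (the Service names come from the surrounding text, while the port and other fields are assumed):

```yaml
# Sketch: first.bar.com and second.foo.com are routed to service1 and service2;
# the last rule has no host, so requests without a matching Host header go to service3.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-based-virtual-hosting
spec:
  rules:
    - host: first.bar.com
      http:
        paths:
          - backend:
              serviceName: service1
              servicePort: 80
    - host: second.foo.com
      http:
        paths:
          - backend:
              serviceName: service2
              servicePort: 80
    - http:
        paths:
          - backend:
              serviceName: service3
              servicePort: 80
```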
@@ -328,12 +294,12 @@ spec: ### TLS -You can secure an ingress by specifying a [secret](/docs/concepts/configuration/secret) -that contains a TLS private key and certificate. Currently the ingress only +You can secure an Ingress by specifying a [secret](/docs/concepts/configuration/secret) +that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS -configuration section in an ingress specifies different hosts, they will be +configuration section in an Ingress specifies different hosts, they will be multiplexed on the same port according to the hostname specified through the -SNI TLS extension (provided the ingress controller supports SNI). The TLS secret +SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.: @@ -346,10 +312,10 @@ kind: Secret metadata: name: testsecret-tls namespace: default -type: Opaque +type: kubernetes.io/tls ``` -Referencing this secret in an ingress will tell the ingress controller to +Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a CN for `sslexample.foo.com`. @@ -375,24 +341,24 @@ spec: ``` {{< note >}} -There is a gap between TLS features supported by various ingress +There is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on [nginx](https://git.k8s.io/ingress-nginx/README.md#https), [GCE](https://git.k8s.io/ingress-gce/README.md#frontend-https), or any other -platform specific ingress controller to understand how TLS works in your environment. +platform specific Ingress controller to understand how TLS works in your environment. {{< /note >}} ### Loadbalancing -An ingress controller is bootstrapped with some load balancing policy settings -that it applies to all ingress, such as the load balancing algorithm, backend +An Ingress controller is bootstrapped with some load balancing policy settings +that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the -ingress. You can still get these features through the +Ingress. You can still get these features through the [service loadbalancer](https://github.com/kubernetes/ingress-nginx). It's also worth noting that even though health checks are not exposed directly -through the ingress, there exist parallel concepts in Kubernetes such as +through the Ingress, there exist parallel concepts in Kubernetes such as [readiness probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ( @@ -401,7 +367,7 @@ specific docs to see how they handle health checks ( ## Updating an Ingress -To update an existing ingress to add a new Host, you can update it by editing the resource: +To update an existing Ingress to add a new Host, you can update it by editing the resource: ```shell kubectl describe ingress test @@ -452,7 +418,7 @@ spec: ``` Saving the yaml will update the resource in the API server, which should tell the -ingress controller to reconfigure the loadbalancer. 
+Ingress controller to reconfigure the loadbalancer. ```shell kubectl describe ingress test @@ -478,25 +444,24 @@ Events: Normal ADD 45s loadbalancer-controller default/test ``` -You can achieve the same by invoking `kubectl replace -f` on a modified ingress yaml file. +You can achieve the same by invoking `kubectl replace -f` on a modified Ingress yaml file. ## Failing across availability zones Techniques for spreading traffic across failure domains differs between cloud providers. -Please check the documentation of the relevant [ingress controller](#ingress-controllers) for -details. You can also refer to the [federation documentation](/docs/concepts/cluster-administration/federation/) -for details on deploying ingress in a federated cluster. +Please check the documentation of the relevant [Ingress controller](/docs/concepts/services-networking/ingress-controllers) for details. You can also refer to the [federation documentation](/docs/concepts/cluster-administration/federation/) +for details on deploying Ingress in a federated cluster. ## Future Work Track [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) for more details on the evolution of the ingress and related resources. You may also track the -[ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the -evolution of various ingress controllers. +[Ingress repository](https://github.com/kubernetes/ingress/tree/master) for more details on the +evolution of various Ingress controllers. ## Alternatives -You can expose a Service in multiple ways that don't directly involve the ingress resource: +You can expose a Service in multiple ways that don't directly involve the Ingress resource: * Use [Service.Type=LoadBalancer](/docs/concepts/services-networking/service/#loadbalancer) * Use [Service.Type=NodePort](/docs/concepts/services-networking/service/#nodeport) @@ -505,6 +470,5 @@ You can expose a Service in multiple ways that don't directly involve the ingres {{% /capture %}} {{% capture whatsnext %}} - +* [Set up Ingress on Minikube with the NGINX Controller](/docs/tasks/access-application-cluster/ingress-minikube) {{% /capture %}} - diff --git a/content/en/docs/concepts/services-networking/network-policies.md b/content/en/docs/concepts/services-networking/network-policies.md index 570ce7f65423c..cd18e6ecaad9f 100644 --- a/content/en/docs/concepts/services-networking/network-policies.md +++ b/content/en/docs/concepts/services-networking/network-policies.md @@ -92,11 +92,12 @@ __egress__: Each `NetworkPolicy` may include a list of whitelist `egress` rules. So, the example NetworkPolicy: 1. isolates "role=db" pods in the "default" namespace for both ingress and egress traffic (if they weren't already isolated) -2. allows connections to TCP port 6379 of "role=db" pods in the "default" namespace from: +2. (Ingress rules) allows connections to all pods in the “default” namespace with the label “role=db” on TCP port 6379 from: + * any pod in the "default" namespace with the label "role=frontend" * any pod in a namespace with the label "project=myproject" * IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (ie, all of 172.17.0.0/16 except 172.17.1.0/24) -3. allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978 +3. 
(Egress rules) allows connections from any pod in the "default" namespace with the label "role=db" to CIDR 10.0.0.0/24 on TCP port 5978 See the [Declare Network Policy](/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. @@ -266,4 +267,3 @@ The CNI plugin has to support SCTP as `protocol` value in `NetworkPolicy`. - See more [Recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. {{% /capture %}} - diff --git a/content/en/docs/concepts/services-networking/service.md b/content/en/docs/concepts/services-networking/service.md index 4b5bbba41f087..b25bb0ec5f506 100644 --- a/content/en/docs/concepts/services-networking/service.md +++ b/content/en/docs/concepts/services-networking/service.md @@ -519,6 +519,16 @@ metadata: [...] ``` {{% /tab %}} +{{% tab name="Baidu Cloud" %}} +```yaml +[...] +metadata: + name: my-service + annotations: + service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true" +[...] +``` +{{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/concepts/storage/volumes.md b/content/en/docs/concepts/storage/volumes.md index 48fb78be5f84d..1638ee981ea7f 100644 --- a/content/en/docs/concepts/storage/volumes.md +++ b/content/en/docs/concepts/storage/volumes.md @@ -1150,12 +1150,12 @@ CSI support was introduced as alpha in Kubernetes v1.9, moved to beta in Kubernetes v1.10, and is GA in Kubernetes v1.13. {{< note >}} -**Note:** Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes +Support for CSI spec versions 0.2 and 0.3 are deprecated in Kubernetes v1.13 and will be removed in a future release. {{< /note >}} {{< note >}} -**Note:** CSI drivers may not be compatible across all Kubernetes releases. +CSI drivers may not be compatible across all Kubernetes releases. Please check the specific CSI driver's documentation for supported deployments steps for each Kubernetes release and a compatibility matrix. {{< /note >}} @@ -1306,8 +1306,8 @@ MountFlags=shared ``` Or, remove `MountFlags=slave` if present. Then restart the Docker daemon: ```shell -$ sudo systemctl daemon-reload -$ sudo systemctl restart docker +sudo systemctl daemon-reload +sudo systemctl restart docker ``` diff --git a/content/en/docs/concepts/workloads/controllers/cron-jobs.md b/content/en/docs/concepts/workloads/controllers/cron-jobs.md index 413547e2cb151..339cb81f78855 100644 --- a/content/en/docs/concepts/workloads/controllers/cron-jobs.md +++ b/content/en/docs/concepts/workloads/controllers/cron-jobs.md @@ -46,11 +46,13 @@ It is important to note that if the `startingDeadlineSeconds` field is set (not A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If `concurrencyPolicy` is set to `Forbid` and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed. -For example, suppose a cron job is set to start at exactly `08:30:00` and its -`startingDeadlineSeconds` is set to 10, if the CronJob controller happens to -be down from `08:29:00` to `08:42:00`, the job will not start. -Set a longer `startingDeadlineSeconds` if starting later is better than not -starting at all. +For example, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its +`startingDeadlineSeconds` field is not set. When this field is not set, the CronJob controller counts how many schedules have been missed since the last scheduled time, and gives up once more than 100 schedules have been missed. 
If the CronJob controller happens to +be down from `08:29:00` to `10:21:00`, the Job will not start, because more than 100 schedules were missed in that window. + +To illustrate this concept further, suppose a CronJob is set to schedule a new Job every one minute beginning at `08:30:00`, and its +`startingDeadlineSeconds` is set to 200 seconds. If the CronJob controller happens to +be down for the same period as the previous example (`08:29:00` to `10:21:00`), the Job will still start at `10:22:00`. This happens because the controller now checks how many missed schedules happened in the last 200 seconds (ie, 3 missed schedules), rather than from the last scheduled time until now. The CronJob is only responsible for creating Jobs that match its schedule, and the Job in turn is responsible for the management of the Pods it represents. diff --git a/content/en/docs/concepts/workloads/controllers/deployment.md b/content/en/docs/concepts/workloads/controllers/deployment.md index e3ed2574ccd93..7ec2c97600a7e 100644 --- a/content/en/docs/concepts/workloads/controllers/deployment.md +++ b/content/en/docs/concepts/workloads/controllers/deployment.md @@ -55,9 +55,9 @@ In this example: In this case, you simply select a label that is defined in the Pod template (`app: nginx`). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule. - + {{< note >}} - `matchLabels` is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map + `matchLabels` is a map of {key,value} pairs. A single {key,value} in the `matchLabels` map is equivalent to an element of `matchExpressions`, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. {{< /note >}} @@ -128,18 +128,19 @@ To see the ReplicaSet (`rs`) created by the deployment, run `kubectl get rs`: ```shell NAME DESIRED CURRENT READY AGE -nginx-deployment-2035384211 3 3 3 18s +nginx-deployment-75675f5897 3 3 3 18s ``` -Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]`. The hash value is automatically generated when the Deployment is created. +Notice that the name of the ReplicaSet is always formatted as `[DEPLOYMENT-NAME]-[RANDOM-STRING]`. The random string is +randomly generated and uses the pod-template-hash as a seed. To see the labels automatically generated for each pod, run `kubectl get pods --show-labels`. The following output is returned: ```shell NAME READY STATUS RESTARTS AGE LABELS -nginx-deployment-2035384211-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211 -nginx-deployment-2035384211-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211 -nginx-deployment-2035384211-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=2035384211 +nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 +nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 +nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453 ``` The created ReplicaSet ensures that there are three `nginx` Pods running at all times. @@ -171,8 +172,8 @@ Suppose that you now want to update the nginx Pods to use the `nginx:1.9.1` imag instead of the `nginx:1.7.9` image. 
```shell -$ kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment -nginx=nginx:1.9.1 image updated +$ kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 +image updated ``` Alternatively, you can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`: @@ -467,7 +468,7 @@ $ kubectl rollout undo deployment.v1.apps/nginx-deployment deployment.apps/nginx-deployment ``` -Alternatively, you can rollback to a specific revision by specify that in `--to-revision`: +Alternatively, you can rollback to a specific revision by specifying it with `--to-revision`: ```shell $ kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2 @@ -988,15 +989,12 @@ Field `.spec.rollbackTo` has been deprecated in API versions `extensions/v1beta1 ### Revision History Limit -A Deployment's revision history is stored in the replica sets it controls. +A Deployment's revision history is stored in the ReplicaSets it controls. `.spec.revisionHistoryLimit` is an optional field that specifies the number of old ReplicaSets to retain -to allow rollback. Its ideal value depends on the frequency and stability of new Deployments. All old -ReplicaSets will be kept by default, consuming resources in `etcd` and crowding the output of `kubectl get rs`, -if this field is not set. The configuration of each Deployment revision is stored in its ReplicaSets; -therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. +to allow rollback. These old ReplicaSets consume resources in `etcd` and crowd the output of `kubectl get rs`. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to rollback to that revision of Deployment. By default, 10 old ReplicaSets will be kept, however its ideal value depends on the frequency and stability of new Deployments. -More specifically, setting this field to zero means that all old ReplicaSets with 0 replica will be cleaned up. +More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up. ### Paused @@ -1015,5 +1013,3 @@ in a similar fashion. But Deployments are recommended, since they are declarativ additional features, such as rolling back to any previous revision even after the rolling update is done. {{% /capture %}} - - diff --git a/content/en/docs/concepts/workloads/controllers/garbage-collection.md b/content/en/docs/concepts/workloads/controllers/garbage-collection.md index a2b8517afaa62..ee3fb1fcd5da8 100644 --- a/content/en/docs/concepts/workloads/controllers/garbage-collection.md +++ b/content/en/docs/concepts/workloads/controllers/garbage-collection.md @@ -60,6 +60,14 @@ metadata: ... ``` +{{< note >}} +Cross-namespace owner references is disallowed by design. This means: +1) Namespace-scoped dependents can only specify owners in the same namespace, +and owners that are cluster-scoped. +2) Cluster-scoped dependents can only specify cluster-scoped owners, but not +namespace-scoped owners. 
+{{< /note >}} + ## Controlling how the garbage collector deletes dependents When you delete an object, you can specify whether the object's dependents are diff --git a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md index 214fe74b41a2b..6a6a2275011b6 100644 --- a/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md +++ b/content/en/docs/concepts/workloads/controllers/jobs-run-to-completion.md @@ -13,16 +13,16 @@ weight: 70 {{% capture overview %}} -A _job_ creates one or more pods and ensures that a specified number of them successfully terminate. -As pods successfully complete, the _job_ tracks the successful completions. When a specified number -of successful completions is reached, the job itself is complete. Deleting a Job will cleanup the -pods it created. +A Job creates one or more Pods and ensures that a specified number of them successfully terminate. +As pods successfully complete, the Job tracks the successful completions. When a specified number +of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up +the Pods it created. A simple case is to create one Job object in order to reliably run one Pod to completion. -The Job object will start a new Pod if the first pod fails or is deleted (for example +The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). -A Job can also be used to run multiple pods in parallel. +You can also use a Job to run multiple Pods in parallel. {{% /capture %}} @@ -36,14 +36,14 @@ It takes around 10s to complete. {{< codenew file="controllers/job.yaml" >}} -Run the example job by downloading the example file and then running this command: +You can run the example with this command: ```shell $ kubectl create -f https://k8s.io/examples/controllers/job.yaml job "pi" created ``` -Check on the status of the job using this command: +Check on the status of the Job with `kubectl`: ```shell $ kubectl describe jobs/pi @@ -78,18 +78,18 @@ Events: 1m 1m 1 {job-controller } Normal SuccessfulCreate Created pod: pi-dtn4q ``` -To view completed pods of a job, use `kubectl get pods`. +To view completed Pods of a Job, use `kubectl get pods`. -To list all the pods that belong to a job in a machine readable form, you can use a command like this: +To list all the Pods that belong to a Job in a machine readable form, you can use a command like this: ```shell -$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath={.items..metadata.name}) +$ pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}') $ echo $pods pi-aiw0a ``` -Here, the selector is the same as the selector for the job. The `--output=jsonpath` option specifies an expression -that just gets the name from each pod in the returned list. +Here, the selector is the same as the selector for the Job. The `--output=jsonpath` option specifies an expression +that just gets the name from each Pod in the returned list. View the standard output of one of the pods: @@ -110,7 +110,7 @@ The `.spec.template` is the only required field of the `.spec`. The `.spec.template` is a [pod template](/docs/concepts/workloads/pods/pod-overview/#pod-templates). It has exactly the same schema as a [pod](/docs/user-guide/pods), except it is nested and does not have an `apiVersion` or `kind`. 
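To make the nesting concrete, here is a hedged sketch of a Job whose `.spec.template` holds an ordinary pod template with only `metadata` and `spec` sections (the image and command are placeholders):

```yaml
# The pod template sits under .spec.template and has no apiVersion or kind of
# its own; for a Job its restartPolicy must be Never or OnFailure.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    metadata:
      labels:
        app: example-job
    spec:
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never
```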
-In addition to required fields for a Pod, a pod template in a job must specify appropriate +In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see [pod selector](#pod-selector)) and an appropriate restart policy. Only a [`RestartPolicy`](/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy) equal to `Never` or `OnFailure` is allowed. @@ -123,31 +123,30 @@ See section [specifying your own pod selector](#specifying-your-own-pod-selector ### Parallel Jobs -There are three main types of jobs: +There are three main types of task suitable to run as a Job: 1. Non-parallel Jobs - - normally only one pod is started, unless the pod fails. - - job is complete as soon as Pod terminates successfully. + - normally, only one Pod is started, unless the Pod fails. + - the Job is complete as soon as its Pod terminates successfully. 1. Parallel Jobs with a *fixed completion count*: - specify a non-zero positive value for `.spec.completions`. - - the job is complete when there is one successful pod for each value in the range 1 to `.spec.completions`. - - **not implemented yet:** Each pod passed a different index in the range 1 to `.spec.completions`. + - the Job represents the overall task, and is complete when there is one successful Pod for each value in the range 1 to `.spec.completions`. + - **not implemented yet:** Each Pod is passed a different index in the range 1 to `.spec.completions`. 1. Parallel Jobs with a *work queue*: - do not specify `.spec.completions`, default to `.spec.parallelism`. - - the pods must coordinate with themselves or an external service to determine what each should work on. - - each pod is independently capable of determining whether or not all its peers are done, thus the entire Job is done. - - when _any_ pod terminates with success, no new pods are created. - - once at least one pod has terminated with success and all pods are terminated, then the job is completed with success. - - once any pod has exited with success, no other pod should still be doing any work or writing any output. They should all be - in the process of exiting. - -For a Non-parallel job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are + - the Pods must coordinate amongst themselves or an external service to determine what each should work on. For example, a Pod might fetch a batch of up to N items from the work queue. + - each Pod is independently capable of determining whether or not all its peers are done, and thus that the entire Job is done. + - when _any_ Pod from the Job terminates with success, no new Pods are created. + - once at least one Pod has terminated with success and all Pods are terminated, then the Job is completed with success. + - once any Pod has exited with success, no other Pod should still be doing any work for this task or writing any output. They should all be in the process of exiting. + +For a _non-parallel_ Job, you can leave both `.spec.completions` and `.spec.parallelism` unset. When both are unset, both are defaulted to 1. -For a Fixed Completion Count job, you should set `.spec.completions` to the number of completions needed. +For a _fixed completion count_ Job, you should set `.spec.completions` to the number of completions needed. You can set `.spec.parallelism`, or leave it unset and it will default to 1. 
-For a Work Queue Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to +For a _work queue_ Job, you must leave `.spec.completions` unset, and set `.spec.parallelism` to a non-negative integer. For more information about how to make use of the different types of job, see the [job patterns](#job-patterns) section. @@ -162,28 +161,28 @@ If it is specified as 0, then the Job is effectively paused until it is increase Actual parallelism (number of pods running at any instant) may be more or less than requested parallelism, for a variety of reasons: -- For Fixed Completion Count jobs, the actual number of pods running in parallel will not exceed the number of +- For _fixed completion count_ Jobs, the actual number of pods running in parallel will not exceed the number of remaining completions. Higher values of `.spec.parallelism` are effectively ignored. -- For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however. +- For _work queue_ Jobs, no new Pods are started after any Pod has succeeded -- remaining Pods are allowed to complete, however. - If the controller has not had time to react. -- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.), +- If the controller failed to create Pods for any reason (lack of `ResourceQuota`, lack of permission, etc.), then there may be fewer pods than requested. -- The controller may throttle new pod creation due to excessive previous pod failures in the same Job. -- When a pod is gracefully shutdown, it takes time to stop. +- The controller may throttle new Pod creation due to excessive previous pod failures in the same Job. +- When a Pod is gracefully shut down, it takes time to stop. ## Handling Pod and Container Failures -A Container in a Pod may fail for a number of reasons, such as because the process in it exited with -a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this +A container in a Pod may fail for a number of reasons, such as because the process in it exited with +a non-zero exit code, or the container was killed for exceeding a memory limit, etc. If this happens, and the `.spec.template.spec.restartPolicy = "OnFailure"`, then the Pod stays -on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is +on the node, but the container is re-run. Therefore, your program needs to handle the case when it is restarted locally, or else specify `.spec.template.spec.restartPolicy = "Never"`. -See [pods-states](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. +See [pod lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/#example-states) for more information on `restartPolicy`. An entire Pod can also fail, for a number of reasons, such as when the pod is kicked off the node (node is upgraded, rebooted, deleted, etc.), or if a container of the Pod fails and the `.spec.template.spec.restartPolicy = "Never"`. When a Pod fails, then the Job controller -starts a new Pod. Therefore, your program needs to handle the case when it is restarted in a new +starts a new Pod. This means that your application needs to handle the case when it is restarted in a new pod. In particular, it needs to handle temporary files, locks, incomplete output and the like caused by previous runs. @@ -194,7 +193,7 @@ sometimes be started twice. 
If you do specify `.spec.parallelism` and `.spec.completions` both greater than 1, then there may be multiple pods running at once. Therefore, your pods must also be tolerant of concurrency. -### Pod Backoff failure policy +### Pod backoff failure policy There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. @@ -244,7 +243,7 @@ spec: restartPolicy: Never ``` -Note that both the Job Spec and the [Pod Template Spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. +Note that both the Job spec and the [Pod template spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level. ## Clean Up Finished Jobs Automatically @@ -316,7 +315,7 @@ The tradeoffs are: - One Job object for each work item, vs. a single Job object for all work items. The latter is better for large numbers of work items. The former creates some overhead for the user and for the system to manage large numbers of Job objects. -- Number of pods created equals number of work items, vs. each pod can process multiple work items. +- Number of pods created equals number of work items, vs. each Pod can process multiple work items. The former typically requires less modification to existing code and containers. The latter is better for large numbers of work items, for similar reasons to the previous bullet. - Several approaches use a work queue. This requires running a queue service, @@ -336,7 +335,7 @@ The pattern names are also links to examples and more detailed description. When you specify completions with `.spec.completions`, each Pod created by the Job controller has an identical [`spec`](https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status). This means that -all pods will have the same command line and the same +all pods for a task will have the same command line and the same image, the same volumes, and (almost) the same environment variables. These patterns are different ways to arrange for pods to work on different things. @@ -355,29 +354,29 @@ Here, `W` is the number of work items. ### Specifying your own pod selector -Normally, when you create a job object, you do not specify `.spec.selector`. -The system defaulting logic adds this field when the job is created. +Normally, when you create a Job object, you do not specify `.spec.selector`. +The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs. However, in some cases, you might need to override this automatically set selector. -To do this, you can specify the `.spec.selector` of the job. +To do this, you can specify the `.spec.selector` of the Job. Be very careful when doing this. If you specify a label selector which is not -unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated -job may be deleted, or this job may count other pods as completing it, or one or both -of the jobs may refuse to create pods or run to completion. If a non-unique selector is -chosen, then other controllers (e.g. 
ReplicationController) and their pods may behave +unique to the pods of that Job, and which matches unrelated Pods, then pods of the unrelated +job may be deleted, or this Job may count other Pods as completing it, or one or both +Jobs may refuse to create Pods or run to completion. If a non-unique selector is +chosen, then other controllers (e.g. ReplicationController) and their Pods may behave in unpredictable ways too. Kubernetes will not stop you from making a mistake when specifying `.spec.selector`. Here is an example of a case when you might want to use this feature. -Say job `old` is already running. You want existing pods -to keep running, but you want the rest of the pods it creates -to use a different pod template and for the job to have a new name. -You cannot update the job because these fields are not updatable. -Therefore, you delete job `old` but leave its pods -running, using `kubectl delete jobs/old --cascade=false`. +Say Job `old` is already running. You want existing Pods +to keep running, but you want the rest of the Pods it creates +to use a different pod template and for the Job to have a new name. +You cannot update the Job because these fields are not updatable. +Therefore, you delete Job `old` but _leave its pods +running_, using `kubectl delete jobs/old --cascade=false`. Before deleting it, you make a note of what selector it uses: ``` @@ -392,11 +391,11 @@ spec: ... ``` -Then you create a new job with name `new` and you explicitly specify the same selector. -Since the existing pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, -they are controlled by job `new` as well. +Then you create a new Job with name `new` and you explicitly specify the same selector. +Since the existing Pods have label `job-uid=a8f3d00d-c6d2-11e5-9f87-42010af00002`, +they are controlled by Job `new` as well. -You need to specify `manualSelector: true` in the new job since you are not using +You need to specify `manualSelector: true` in the new Job since you are not using the selector that the system normally generates for you automatically. ``` @@ -420,25 +419,25 @@ mismatch. ### Bare Pods -When the node that a pod is running on reboots or fails, the pod is terminated -and will not be restarted. However, a Job will create new pods to replace terminated ones. -For this reason, we recommend that you use a job rather than a bare pod, even if your application -requires only a single pod. +When the node that a Pod is running on reboots or fails, the pod is terminated +and will not be restarted. However, a Job will create new Pods to replace terminated ones. +For this reason, we recommend that you use a Job rather than a bare Pod, even if your application +requires only a single Pod. ### Replication Controller Jobs are complementary to [Replication Controllers](/docs/user-guide/replication-controller). -A Replication Controller manages pods which are not expected to terminate (e.g. web servers), and a Job -manages pods that are expected to terminate (e.g. batch jobs). +A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job +manages Pods that are expected to terminate (e.g. batch tasks). -As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate for pods with -`RestartPolicy` equal to `OnFailure` or `Never`. (Note: If `RestartPolicy` is not set, the default -value is `Always`.) 
+As discussed in [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/), `Job` is *only* appropriate +for pods with `RestartPolicy` equal to `OnFailure` or `Never`. +(Note: If `RestartPolicy` is not set, the default value is `Always`.) ### Single Job starts Controller Pod -Another pattern is for a single Job to create a pod which then creates other pods, acting as a sort -of custom controller for those pods. This allows the most flexibility, but may be somewhat +Another pattern is for a single Job to create a Pod which then creates other Pods, acting as a sort +of custom controller for those Pods. This allows the most flexibility, but may be somewhat complicated to get started with and offers less integration with Kubernetes. One example of this pattern would be a Job which starts a Pod which runs a script that in turn @@ -446,10 +445,10 @@ starts a Spark master controller (see [spark example](https://github.com/kuberne driver, and then cleans up. An advantage of this approach is that the overall process gets the completion guarantee of a Job -object, but complete control over what pods are created and how work is assigned to them. +object, but complete control over what Pods are created and how work is assigned to them. -## Cron Jobs +## Cron Jobs {#cron-jobs} -Support for creating Jobs at specified times/dates (i.e. cron) is available in Kubernetes [1.4](https://github.com/kubernetes/kubernetes/pull/11980). More information is available in the [cron job documents](/docs/concepts/workloads/controllers/cron-jobs/) +You can use a [`CronJob`](/docs/concepts/workloads/controllers/cron-jobs/) to create a Job that will run at specified times/dates, similar to the Unix tool `cron`. {{% /capture %}} diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index adeaadc469674..9ca82477fe7fd 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -239,6 +239,10 @@ matchLabels: In the ReplicaSet, `.spec.template.metadata.labels` must match `spec.selector`, or it will be rejected by the API. +{{< note >}} +For 2 ReplicaSets specifying the same `.spec.selector` but different `.spec.template.metadata.labels` and `.spec.template.spec` fields, each ReplicaSet ignores the Pods created by the other ReplicaSet. +{{< /note >}} + ### Replicas You can specify how many Pods should run concurrently by setting `.spec.replicas`. The ReplicaSet will create/delete diff --git a/content/en/docs/concepts/workloads/controllers/statefulset.md b/content/en/docs/concepts/workloads/controllers/statefulset.md index 0e3a4e85698e9..e83a4f3c8bea6 100644 --- a/content/en/docs/concepts/workloads/controllers/statefulset.md +++ b/content/en/docs/concepts/workloads/controllers/statefulset.md @@ -134,6 +134,10 @@ As each Pod is created, it gets a matching DNS subdomain, taking the form: `$(podname).$(governing service domain)`, where the governing service is defined by the `serviceName` field on the StatefulSet. +As mentioned in the [limitations](#limitations) section, you are responsible for +creating the [Headless Service](/docs/concepts/services-networking/service/#headless-services) +responsible for the network identity of the pods. + Here are some examples of choices for Cluster Domain, Service name, StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods. 
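As a sketch of the headless Service you are expected to create yourself (the object names below are placeholders, not taken from this page's examples), note that the Service name must match the StatefulSet's `serviceName` and the selector must match the Pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc          # must match .spec.serviceName of the StatefulSet
  labels:
    app: web
spec:
  clusterIP: None        # headless: no cluster IP; DNS resolves to the Pod addresses
  selector:
    app: web             # must select the StatefulSet's Pods
  ports:
  - name: http
    port: 80
```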
diff --git a/content/en/docs/concepts/workloads/pods/disruptions.md b/content/en/docs/concepts/workloads/pods/disruptions.md index 3a3ecb36cfc0d..6725c887f49cb 100644 --- a/content/en/docs/concepts/workloads/pods/disruptions.md +++ b/content/en/docs/concepts/workloads/pods/disruptions.md @@ -63,6 +63,11 @@ Ask your cluster administrator or consult your cloud provider or distribution do to determine if any sources of voluntary disruptions are enabled for your cluster. If none are enabled, you can skip creating Pod Disruption Budgets. +{{< caution >}} +Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, +deleting deployments or pods bypasses Pod Disruption Budgets. +{{< /caution >}} + ## Dealing with Disruptions Here are some ways to mitigate involuntary disruptions: @@ -102,7 +107,7 @@ percentage of the total. Cluster managers and hosting providers should use tools which respect Pod Disruption Budgets by calling the [Eviction API](/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api) -instead of directly deleting pods. Examples are the `kubectl drain` command +instead of directly deleting pods or deployments. Examples are the `kubectl drain` command and the Kubernetes-on-GCE cluster upgrade script (`cluster/gce/upgrade.sh`). When a cluster administrator wants to drain a node diff --git a/content/en/docs/concepts/workloads/pods/init-containers.md b/content/en/docs/concepts/workloads/pods/init-containers.md index 6ee6dd45ce7aa..a6c3e64b112d1 100644 --- a/content/en/docs/concepts/workloads/pods/init-containers.md +++ b/content/en/docs/concepts/workloads/pods/init-containers.md @@ -83,7 +83,7 @@ Here are some ideas for how to use Init Containers: * Register this Pod with a remote server from the downward API with a command like: - curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()' + `curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'` * Wait for some time before starting the app Container with a command like `sleep 60`. * Clone a git repository into a volume. 
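For instance, the "wait for some time before starting the app Container" idea from the list above could look like the following minimal, hypothetical Pod sketch (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-wait              # illustrative name
spec:
  initContainers:
  - name: wait-a-bit
    image: busybox                 # placeholder image
    command: ["sh", "-c", "sleep 60"]   # must exit successfully before the app container starts
  containers:
  - name: app
    image: busybox                 # placeholder image
    command: ["sh", "-c", "echo app starting; sleep 3600"]
```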
diff --git a/content/en/docs/concepts/workloads/pods/pod-overview.md b/content/en/docs/concepts/workloads/pods/pod-overview.md index 8bb3f68504655..4e5d8109a7ff1 100644 --- a/content/en/docs/concepts/workloads/pods/pod-overview.md +++ b/content/en/docs/concepts/workloads/pods/pod-overview.md @@ -4,6 +4,9 @@ reviewers: title: Pod Overview content_template: templates/concept weight: 10 +card: + name: concepts + weight: 60 --- {{% capture overview %}} @@ -101,5 +104,5 @@ Rather than specifying the current desired state of all replicas, pod templates {{% capture whatsnext %}} * Learn more about Pod behavior: * [Pod Termination](/docs/concepts/workloads/pods/pod/#termination-of-pods) - * Other Pod Topics + * [Pod Lifecycle](../pod-lifecycle) {{% /capture %}} diff --git a/content/en/docs/contribute/intermediate.md b/content/en/docs/contribute/intermediate.md index a786fbb955cfe..2ef7278621798 100644 --- a/content/en/docs/contribute/intermediate.md +++ b/content/en/docs/contribute/intermediate.md @@ -3,6 +3,9 @@ title: Intermediate contributing slug: intermediate content_template: templates/concept weight: 20 +card: + name: contribute + weight: 50 --- {{% capture overview %}} diff --git a/content/en/docs/contribute/localization.md b/content/en/docs/contribute/localization.md index 6e73492a14cc7..db3ef4e9801d9 100644 --- a/content/en/docs/contribute/localization.md +++ b/content/en/docs/contribute/localization.md @@ -5,29 +5,25 @@ approvers: - chenopis - zacharysarah - zparnold +card: + name: contribute + weight: 30 + title: Translating the docs --- {{% capture overview %}} -Documentation for Kubernetes is available in multiple languages: - -- English -- Chinese -- Japanese -- Korean - -We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)! +Documentation for Kubernetes is available in multiple languages. We encourage you to add new [localizations](https://blog.mozilla.org/l10n/2011/12/14/i18n-vs-l10n-whats-the-diff/)! {{% /capture %}} - {{% capture body %}} ## Getting started -Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize). +Localizations must meet some requirements for workflow (*how* to localize) and output (*what* to localize) before publishing. -To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)! +To add a new localization of the Kubernetes documentation, you'll need to update the website by modifying the [site configuration](#modify-the-site-configuration) and [directory structure](#add-a-new-localization-directory). Then you can start [translating documents](#translating-documents)! {{< note >}} For an example localization-related [pull request](../create-pull-request), see [this pull request](https://github.com/kubernetes/website/pull/8636) to the [Kubernetes website repo](https://github.com/kubernetes/website) adding Korean localization to the Kubernetes docs. 
@@ -209,7 +205,7 @@ SIG Docs welcomes [upstream contributions and corrections](/docs/contribute/inte {{% capture whatsnext %}} -Once a l10n meets requirements for workflow and minimum output, SIG docs will: +Once a localization meets requirements for workflow and minimum output, SIG docs will: - Enable language selection on the website - Publicize the localization's availability through [Cloud Native Computing Foundation](https://www.cncf.io/) (CNCF) channels, including the [Kubernetes blog](https://kubernetes.io/blog/). diff --git a/content/en/docs/contribute/participating.md b/content/en/docs/contribute/participating.md index 029f3bf7463ae..ca3be91d7d725 100644 --- a/content/en/docs/contribute/participating.md +++ b/content/en/docs/contribute/participating.md @@ -1,6 +1,9 @@ --- title: Participating in SIG Docs content_template: templates/concept +card: + name: contribute + weight: 40 --- {{% capture overview %}} diff --git a/content/en/docs/contribute/start.md b/content/en/docs/contribute/start.md index f4ce637c65634..75fdfb5cb8904 100644 --- a/content/en/docs/contribute/start.md +++ b/content/en/docs/contribute/start.md @@ -3,6 +3,9 @@ title: Start contributing slug: start content_template: templates/concept weight: 10 +card: + name: contribute + weight: 10 --- {{% capture overview %}} @@ -133,10 +136,9 @@ The SIG Docs team communicates using the following mechanisms: introduce yourself! - [Join the `kubernetes-sig-docs` mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-docs), where broader discussions take place and official decisions are recorded. -- Participate in the weekly SIG Docs video meeting, which is announced on the - Slack channel and the mailing list. Currently, these meetings take place on - Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) - or dial in using a phone. +- Participate in the [weekly SIG Docs](https://github.com/kubernetes/community/tree/master/sig-docs) video meeting, which is announced on the Slack channel and the mailing list. Currently, these meetings take place on Zoom, so you'll need to download the [Zoom client](https://zoom.us/download) or dial in using a phone. 
+ +{{< note >}} You can also check the SIG Docs weekly meeting on the [Kubernetes community meetings calendar](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles). {{< /note >}} ## Improve existing content diff --git a/content/en/docs/contribute/style/page-templates.md b/content/en/docs/contribute/style/page-templates.md index 1684ed3c2699c..033ddf6223eef 100644 --- a/content/en/docs/contribute/style/page-templates.md +++ b/content/en/docs/contribute/style/page-templates.md @@ -2,6 +2,9 @@ title: Using Page Templates content_template: templates/concept weight: 30 +card: + name: contribute + weight: 30 --- {{% capture overview %}} diff --git a/content/en/docs/contribute/style/style-guide.md b/content/en/docs/contribute/style/style-guide.md index 8cd5e1433703e..8bae85f61d66c 100644 --- a/content/en/docs/contribute/style/style-guide.md +++ b/content/en/docs/contribute/style/style-guide.md @@ -3,6 +3,10 @@ title: Documentation Style Guide linktitle: Style guide content_template: templates/concept weight: 10 +card: + name: contribute + weight: 20 + title: Documentation Style Guide --- {{% capture overview %}} diff --git a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson index 5d4573938be10..878ccc4ed73fc 100644 --- a/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson +++ b/content/en/docs/doc-contributor-tools/snippets/atom-snippets.cson @@ -116,7 +116,7 @@ 'body': '{{< toc >}}' 'Insert code from file': 'prefix': 'codefile' - 'body': '{{< code file="$1" >}}' + 'body': '{{< codenew file="$1" >}}' 'Insert feature state': 'prefix': 'fstate' 'body': '{{< feature-state for_k8s_version="$1" state="$2" >}}' @@ -223,4 +223,4 @@ ${7:"next-steps-or-delete"} {{% /capture %}} """ - \ No newline at end of file + diff --git a/content/en/docs/getting-started-guides/ubuntu/_index.md b/content/en/docs/getting-started-guides/ubuntu/_index.md index 1b1eced18964a..ed60790b67388 100644 --- a/content/en/docs/getting-started-guides/ubuntu/_index.md +++ b/content/en/docs/getting-started-guides/ubuntu/_index.md @@ -51,7 +51,7 @@ These are more in-depth guides for users choosing to run Kubernetes in productio - [Decommissioning](/docs/getting-started-guides/ubuntu/decommissioning/) - [Operational Considerations](/docs/getting-started-guides/ubuntu/operational-considerations/) - [Glossary](/docs/getting-started-guides/ubuntu/glossary/) - + - [Authenticating with LDAP](https://www.ubuntu.com/kubernetes/docs/ldap) ## Third-party Product Integrations @@ -73,5 +73,3 @@ We're normally following the following Slack channels: and we monitor the Kubernetes mailing lists. {{% /capture %}} - - diff --git a/content/en/docs/home/_index.md b/content/en/docs/home/_index.md index f6f5f6150a22b..31f37880ff631 100644 --- a/content/en/docs/home/_index.md +++ b/content/en/docs/home/_index.md @@ -16,4 +16,43 @@ menu: weight: 20 post: >

Learn how to use Kubernetes with conceptual, tutorial, and reference documentation. You can even help contribute to the docs!

+overview: > + Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF). +cards: +- name: concepts + title: "Understand the basics" + description: "Learn about Kubernetes and its fundamental concepts." + button: "Learn Concepts" + button_path: "/docs/concepts" +- name: tutorials + title: "Try Kubernetes" + description: "Follow tutorials to learn how to deploy applications in Kubernetes." + button: "View Tutorials" + button_path: "/docs/tutorials" +- name: setup + title: "Set up a cluster" + description: "Get Kubernetes running based on your resources and needs." + button: "Set up Kubernetes" + button_path: "/docs/setup" +- name: tasks + title: "Learn how to use Kubernetes" + description: "Look up common tasks and how to perform them using a short sequence of steps." + button: "View Tasks" + button_path: "/docs/tasks" +- name: reference + title: Look up reference information + description: Browse terminology, command line syntax, API resource types, and setup tool documentation. + button: View Reference + button_path: /docs/reference +- name: contribute + title: Contribute to the docs + description: Anyone can contribute, whether you’re new to the project or you’ve been around a long time. + button: Contribute to the docs + button_path: /docs/contribute +- name: download + title: Download Kubernetes + description: If you are installing Kubernetes or upgrading to the newest version, refer to the current release notes. +- name: about + title: About the documentation + description: This website contains documentation for the current and previous 4 versions of Kubernetes. --- diff --git a/content/en/docs/home/supported-doc-versions.md b/content/en/docs/home/supported-doc-versions.md index 7747ea2b76435..45a6012eaa148 100644 --- a/content/en/docs/home/supported-doc-versions.md +++ b/content/en/docs/home/supported-doc-versions.md @@ -1,6 +1,10 @@ --- title: Supported Versions of the Kubernetes Documentation content_template: templates/concept +card: + name: about + weight: 10 + title: Supported Versions of the Documentation --- {{% capture overview %}} diff --git a/content/en/docs/reference/access-authn-authz/admission-controllers.md b/content/en/docs/reference/access-authn-authz/admission-controllers.md index 440a9a1d86307..17201656276fd 100644 --- a/content/en/docs/reference/access-authn-authz/admission-controllers.md +++ b/content/en/docs/reference/access-authn-authz/admission-controllers.md @@ -142,20 +142,24 @@ if the pods don't already have toleration for taints This admission controller will intercept all requests to exec a command in a pod if that pod has a privileged container. -If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec -commands in those containers, we strongly encourage enabling this admission controller. - This functionality has been merged into [DenyEscalatingExec](#denyescalatingexec). +The DenyExecOnPrivileged admission plugin is deprecated and will be removed in v1.18. + +Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) +which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods +is recommended instead. 
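As a hedged sketch of the recommended policy-based approach (assuming the PodSecurityPolicy admission plugin is enabled in your cluster; the name and rules below are illustrative, not a production-ready policy), a policy that rejects privileged Pods might look like:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: disallow-privileged        # illustrative name
spec:
  privileged: false                # reject Pods that request privileged containers
  allowPrivilegeEscalation: false
  hostIPC: false
  hostPID: false
  # the remaining rules are required fields of the PodSecurityPolicy schema
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```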
-### DenyEscalatingExec {#denyescalatingexec} +### DenyEscalatingExec (deprecated) {#denyescalatingexec} This admission controller will deny exec and attach commands to pods that run with escalated privileges that allow host access. This includes pods that run as privileged, have access to the host IPC namespace, and have access to the host PID namespace. -If your cluster supports containers that run with escalated privileges, and you want to -restrict the ability of end-users to exec commands in those containers, we strongly encourage -enabling this admission controller. +The DenyEscalatingExec admission plugin is deprecated and will be removed in v1.18. + +Use of a policy-based admission plugin (like [PodSecurityPolicy](#podsecuritypolicy) or a custom admission plugin) +which can be targeted at specific users or Namespaces and also protects against creation of overly privileged Pods +is recommended instead. ### EventRateLimit (alpha) {#eventratelimit} diff --git a/content/en/docs/reference/access-authn-authz/authentication.md b/content/en/docs/reference/access-authn-authz/authentication.md index d7a96526929a0..ae42fb5e49ead 100644 --- a/content/en/docs/reference/access-authn-authz/authentication.md +++ b/content/en/docs/reference/access-authn-authz/authentication.md @@ -322,6 +322,7 @@ To enable the plugin, configure the following flags on the API server: | `--oidc-username-prefix` | Prefix prepended to username claims to prevent clashes with existing names (such as `system:` users). For example, the value `oidc:` will create usernames like `oidc:jane.doe`. If this flag isn't provided and `--oidc-user-claim` is a value other than `email` the prefix defaults to `( Issuer URL )#` where `( Issuer URL )` is the value of `--oidc-issuer-url`. The value `-` can be used to disable all prefixing. | `oidc:` | No | | `--oidc-groups-claim` | JWT claim to use as the user's group. If the claim is present it must be an array of strings. | groups | No | | `--oidc-groups-prefix` | Prefix prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` will create group names like `oidc:engineering` and `oidc:infra`. | `oidc:` | No | +| `--oidc-required-claim` | A key=value pair that describes a required claim in the ID Token. If set, the claim is verified to be present in the ID Token with a matching value. Repeat this flag to specify multiple claims. | `claim=value` | No | | `--oidc-ca-file` | The path to the certificate for the CA that signed your identity provider's web certificate. Defaults to the host's root CAs. | `/etc/kubernetes/ssl/kc-ca.pem` | No | Importantly, the API server is not an OAuth2 client, rather it can only be diff --git a/content/en/docs/reference/access-authn-authz/authorization.md b/content/en/docs/reference/access-authn-authz/authorization.md index 366fbefe21ff8..e955053c63374 100644 --- a/content/en/docs/reference/access-authn-authz/authorization.md +++ b/content/en/docs/reference/access-authn-authz/authorization.md @@ -47,7 +47,7 @@ Kubernetes reviews only the following API request attributes: * **extra** - A map of arbitrary string keys to string values, provided by the authentication layer. * **API** - Indicates whether the request is for an API resource. * **Request path** - Path to miscellaneous non-resource endpoints like `/api` or `/healthz`. 
- * **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied) below. + * **API request verb** - API verbs `get`, `list`, `create`, `update`, `patch`, `watch`, `proxy`, `redirect`, `delete`, and `deletecollection` are used for resource requests. To determine the request verb for a resource API endpoint, see [Determine the request verb](/docs/reference/access-authn-authz/authorization/#determine-the-request-verb). * **HTTP request verb** - HTTP verbs `get`, `post`, `put`, and `delete` are used for non-resource requests. * **Resource** - The ID or name of the resource that is being accessed (for resource requests only) -- For resource requests using `get`, `update`, `patch`, and `delete` verbs, you must provide the resource name. * **Subresource** - The subresource that is being accessed (for resource requests only). diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index b90509b924341..ed51dcefb5cf7 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -148,6 +148,24 @@ roleRef: apiGroup: rbac.authorization.k8s.io ``` +You cannot modify which `Role` or `ClusterRole` a binding object refers to. +Attempts to change the `roleRef` field of a binding object will result in a validation error. +To change the `roleRef` field on an existing binding object, the binding object must be deleted and recreated. +There are two primary reasons for this restriction: + +1. A binding to a different role is a fundamentally different binding. +Requiring a binding to be deleted/recreated in order to change the `roleRef` +ensures the full list of subjects in the binding is intended to be granted +the new role (as opposed to enabling accidentally modifying just the roleRef +without verifying all of the existing subjects should be given the new role's permissions). +2. Making `roleRef` immutable allows giving `update` permission on an existing binding object +to a user, which lets them manage the list of subjects, without being able to change the +role that is granted to those subjects. + +The `kubectl auth reconcile` command-line utility creates or updates a manifest file containing RBAC objects, +and handles deleting and recreating binding objects if required to change the role they refer to. +See [command usage and examples](#kubectl-auth-reconcile) for more information. + ### Referring to Resources Most resources are represented by a string representation of their name, such as "pods", just as it @@ -471,14 +489,19 @@ NOTE: editing the role is not recommended as changes will be overwritten on API system:basic-user -system:authenticated and system:unauthenticated groups +system:authenticated group Allows a user read-only access to basic information about themselves. system:discovery -system:authenticated and system:unauthenticated groups +system:authenticated group Allows read-only access to API discovery endpoints needed to discover and negotiate an API level. + +system:public-info-viewer +system:authenticated and system:unauthenticated groups +Allows read-only access to non-sensitive cluster information. 
+ ### User-facing Roles @@ -738,46 +761,156 @@ To bootstrap initial roles and role bindings: ## Command-line Utilities -Two `kubectl` commands exist to grant roles within a namespace or across the entire cluster. +### `kubectl create role` + +Creates a `Role` object defining permissions within a single namespace. Examples: + +* Create a `Role` named "pod-reader" that allows a user to perform "get", "watch" and "list" on pods: + + ``` + kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods + ``` + +* Create a `Role` named "pod-reader" with resourceNames specified: + + ``` + kubectl create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod + ``` + +* Create a `Role` named "foo" with apiGroups specified: + + ``` + kubectl create role foo --verb=get,list,watch --resource=replicasets.apps + ``` + +* Create a `Role` named "foo" with subresource permissions: + + ``` + kubectl create role foo --verb=get,list,watch --resource=pods,pods/status + ``` + +* Create a `Role` named "my-component-lease-holder" with permissions to get/update a resource with a specific name: + + ``` + kubectl create role my-component-lease-holder --verb=get,list,watch,update --resource=lease --resource-name=my-component + ``` + +### `kubectl create clusterrole` + +Creates a `ClusterRole` object. Examples: + +* Create a `ClusterRole` named "pod-reader" that allows a user to perform "get", "watch" and "list" on pods: + + ``` + kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods + ``` + +* Create a `ClusterRole` named "pod-reader" with resourceNames specified: + + ``` + kubectl create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod + ``` + +* Create a `ClusterRole` named "foo" with apiGroups specified: + + ``` + kubectl create clusterrole foo --verb=get,list,watch --resource=replicasets.apps + ``` + +* Create a `ClusterRole` named "foo" with subresource permissions: + + ``` + kubectl create clusterrole foo --verb=get,list,watch --resource=pods,pods/status + ``` + +* Create a `ClusterRole` named "foo" with nonResourceURL specified: + + ``` + kubectl create clusterrole "foo" --verb=get --non-resource-url=/logs/* + ``` + +* Create a `ClusterRole` named "monitoring" with aggregationRule specified: + + ``` + kubectl create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" + ``` ### `kubectl create rolebinding` Grants a `Role` or `ClusterRole` within a specific namespace.
Examples: -* Grant the `admin` `ClusterRole` to a user named "bob" in the namespace "acme": +* Within the namespace "acme", grant the permissions in the `admin` `ClusterRole` to a user named "bob": ``` kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=acme ``` -* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme": +* Within the namespace "acme", grant the permissions in the `view` `ClusterRole` to the service account in the namespace "acme" named "myapp" : ``` kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=acme ``` +* Within the namespace "acme", grant the permissions in the `view` `ClusterRole` to a service account in the namespace "myappnamespace" named "myapp": + + ``` + kubectl create rolebinding myappnamespace-myapp-view-binding --clusterrole=view --serviceaccount=myappnamespace:myapp --namespace=acme + ``` + ### `kubectl create clusterrolebinding` Grants a `ClusterRole` across the entire cluster, including all namespaces. Examples: -* Grant the `cluster-admin` `ClusterRole` to a user named "root" across the entire cluster: +* Across the entire cluster, grant the permissions in the `cluster-admin` `ClusterRole` to a user named "root": ``` kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root ``` -* Grant the `system:node` `ClusterRole` to a user named "kubelet" across the entire cluster: +* Across the entire cluster, grant the permissions in the `system:node-proxier ` `ClusterRole` to a user named "system:kube-proxy": ``` - kubectl create clusterrolebinding kubelet-node-binding --clusterrole=system:node --user=kubelet + kubectl create clusterrolebinding kube-proxy-binding --clusterrole=system:node-proxier --user=system:kube-proxy ``` -* Grant the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme" across the entire cluster: +* Across the entire cluster, grant the permissions in the `view` `ClusterRole` to a service account named "myapp" in the namespace "acme": ``` kubectl create clusterrolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp ``` +### `kubectl auth reconcile` {#kubectl-auth-reconcile} + +Creates or updates `rbac.authorization.k8s.io/v1` API objects from a manifest file. + +Missing objects are created, and the containing namespace is created for namespaced objects, if required. + +Existing roles are updated to include the permissions in the input objects, +and remove extra permissions if `--remove-extra-permissions` is specified. + +Existing bindings are updated to include the subjects in the input objects, +and remove extra subjects if `--remove-extra-subjects` is specified. + +Examples: + +* Test applying a manifest file of RBAC objects, displaying changes that would be made: + + ``` + kubectl auth reconcile -f my-rbac-rules.yaml --dry-run + ``` + +* Apply a manifest file of RBAC objects, preserving any extra permissions (in roles) and any extra subjects (in bindings): + + ``` + kubectl auth reconcile -f my-rbac-rules.yaml + ``` + +* Apply a manifest file of RBAC objects, removing any extra permissions (in roles) and any extra subjects (in bindings): + + ``` + kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions + ``` + See the CLI help for detailed usage. 
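For context, a manifest such as `my-rbac-rules.yaml` can contain any mix of RBAC objects; the following is only a hypothetical sketch of a Role and a matching RoleBinding that `kubectl auth reconcile` could create or update (names, namespace, and subject are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader           # illustrative name
  namespace: default
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods            # illustrative name
  namespace: default
subjects:
- kind: User
  name: jane                 # illustrative subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the `roleRef` of a binding is immutable, changing `roleRef` in such a manifest and reconciling again deletes and recreates the binding rather than patching it in place.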
## Service Account Permissions diff --git a/content/en/docs/reference/glossary/container-lifecycle-hooks.md b/content/en/docs/reference/glossary/container-lifecycle-hooks.md index 527e2f3e6efc8..5f19d0606f6f9 100644 --- a/content/en/docs/reference/glossary/container-lifecycle-hooks.md +++ b/content/en/docs/reference/glossary/container-lifecycle-hooks.md @@ -6,13 +6,12 @@ full_link: /docs/concepts/containers/container-lifecycle-hooks/ short_description: > The lifecycle hooks expose events in the container management lifecycle and let the user run code when the events occur. -aka: +aka: tags: - extension --- - The lifecycle hooks expose events in the {{< glossary_tooltip text="Container" term_id="container" >}}container management lifecycle and let the user run code when the events occur. + The lifecycle hooks expose events in the {{< glossary_tooltip text="Container" term_id="container" >}} management lifecycle and let the user run code when the events occur. - + Two hooks are exposed to Containers: PostStart which executes immediately after a container is created and PreStop which is blocking and is called immediately before a container is terminated. - diff --git a/content/en/docs/reference/glossary/cronjob.md b/content/en/docs/reference/glossary/cronjob.md index 3173740b5b6d8..d09dc8e0d4263 100755 --- a/content/en/docs/reference/glossary/cronjob.md +++ b/content/en/docs/reference/glossary/cronjob.md @@ -13,7 +13,7 @@ tags: --- Manages a [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) that runs on a periodic schedule. - + -Similar to a line in a *crontab* file, a Cronjob object specifies a schedule using the [Cron](https://en.wikipedia.org/wiki/Cron) format. +Similar to a line in a *crontab* file, a CronJob object specifies a schedule using the [cron](https://en.wikipedia.org/wiki/Cron) format. diff --git a/content/en/docs/reference/glossary/device-plugin.md b/content/en/docs/reference/glossary/device-plugin.md new file mode 100644 index 0000000000000..02e5500677d45 --- /dev/null +++ b/content/en/docs/reference/glossary/device-plugin.md @@ -0,0 +1,17 @@ +--- +title: Device Plugin +id: device-plugin +date: 2019-02-02 +full_link: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ +short_description: > + Device Plugins are containers running in Kubernetes that provide access to a vendor-specific resource. +aka: +tags: +- fundamental +- extension +--- + Device Plugins are containers running in Kubernetes that provide access to a vendor-specific resource. + + + +[Device Plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) are containers running in Kubernetes that provide access to a vendor-specific resource. Device Plugins advertise these resources to the kubelet and can be deployed manually or as a DaemonSet, rather than writing custom Kubernetes code.
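To illustrate how a workload consumes a resource advertised by a device plugin (the resource name below is a made-up vendor example, not a real plugin), a Pod requests the extended resource through `resources.limits`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer              # illustrative name
spec:
  containers:
  - name: demo
    image: busybox                   # placeholder image
    command: ["sh", "-c", "sleep 3600"]
    resources:
      limits:
        vendor.example.com/foo: 1    # hypothetical extended resource advertised by a device plugin
```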
diff --git a/content/en/docs/reference/glossary/index.md b/content/en/docs/reference/glossary/index.md index a8c229569a05a..1fb8799a16b51 100755 --- a/content/en/docs/reference/glossary/index.md +++ b/content/en/docs/reference/glossary/index.md @@ -7,5 +7,9 @@ layout: glossary noedit: true default_active_tag: fundamental weight: 5 +card: + name: reference + weight: 10 + title: Glossary --- diff --git a/content/en/docs/reference/glossary/pod-disruption-budget.md b/content/en/docs/reference/glossary/pod-disruption-budget.md new file mode 100644 index 0000000000000..1e3f2f7d6ea28 --- /dev/null +++ b/content/en/docs/reference/glossary/pod-disruption-budget.md @@ -0,0 +1,19 @@ +--- +id: pod-disruption-budget +title: Pod Disruption Budget +full_link: /docs/concepts/workloads/pods/disruptions/ +date: 2019-02-12 +short_description: > + An object that limits the number of {{< glossary_tooltip text="Pods" term_id="pod" >}} of a replicated application that are down simultaneously from voluntary disruptions. + +aka: + - PDB +related: + - pod + - container +tags: + - operation +--- + + A [Pod Disruption Budget](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) allows an application owner to create an object for a replicated application that ensures a certain number or percentage of Pods with an assigned label will not be voluntarily evicted at any point in time. PDBs cannot prevent an involuntary disruption, but such disruptions do count against the budget. + diff --git a/content/en/docs/reference/glossary/pod-lifecycle.md b/content/en/docs/reference/glossary/pod-lifecycle.md new file mode 100644 index 0000000000000..ec496de440ac7 --- /dev/null +++ b/content/en/docs/reference/glossary/pod-lifecycle.md @@ -0,0 +1,16 @@ +--- +title: Pod Lifecycle +id: pod-lifecycle +date: 2019-02-17 +full_link: /docs/concepts/workloads/pods/pod-lifecycle/ +related: + - pod + - container +tags: + - fundamental +short_description: > + A high-level summary of what phase the Pod is in within its lifecycle. + +--- + +The [Pod Lifecycle](/docs/concepts/workloads/pods/pod-lifecycle/) is a high-level summary of where a Pod is in its lifecycle. A Pod’s `status` field is a [PodStatus](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podstatus-v1-core) object, which has a `phase` field that is one of the following: Pending, Running, Succeeded, Failed, or Unknown. diff --git a/content/en/docs/reference/glossary/Preemption.md b/content/en/docs/reference/glossary/preemption.md similarity index 97% rename from content/en/docs/reference/glossary/Preemption.md rename to content/en/docs/reference/glossary/preemption.md index c94d9a3a3e378..ac1334c9793a6 100644 --- a/content/en/docs/reference/glossary/Preemption.md +++ b/content/en/docs/reference/glossary/preemption.md @@ -1,6 +1,6 @@ --- title: Preemption -id: Preemption +id: preemption date: 2019-01-31 full_link: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#preemption short_description: > diff --git a/content/en/docs/reference/glossary/rkt.md b/content/en/docs/reference/glossary/rkt.md new file mode 100644 index 0000000000000..455484c8ca2de --- /dev/null +++ b/content/en/docs/reference/glossary/rkt.md @@ -0,0 +1,18 @@ +--- +title: rkt +id: rkt +date: 2019-01-24 +full_link: https://coreos.com/rkt/ +short_description: > + A security-minded, standards-based container engine. + +aka: +tags: +- security +- tool +--- + A security-minded, standards-based container engine.
+ + + +rkt is an application {{< glossary_tooltip text="container" term_id="container" >}} engine featuring a {{< glossary_tooltip text="pod" term_id="pod" >}}-native approach, a pluggable execution environment, and a well-defined surface area. rkt allows users to apply different configurations at both the pod and application level and each pod executes directly in the classic Unix process model, in a self-contained, isolated environment. diff --git a/content/en/docs/reference/glossary/sysctl.md b/content/en/docs/reference/glossary/sysctl.md new file mode 100755 index 0000000000000..7b73af4c56776 --- /dev/null +++ b/content/en/docs/reference/glossary/sysctl.md @@ -0,0 +1,23 @@ +--- +title: sysctl +id: sysctl +date: 2019-02-12 +full_link: /docs/tasks/administer-cluster/sysctl-cluster/ +short_description: > + An interface for getting and setting Unix kernel parameters + +aka: +tags: +- tool +--- + `sysctl` is a semi-standardized interface for reading or changing the + attributes of the running Unix kernel. + + + +On Unix-like systems, `sysctl` is both the name of the tool that administrators +use to view and modify these settings, and also the system call that the tool +uses. + +{{< glossary_tooltip text="Container" term_id="container" >}} runtimes and +network plugins may rely on `sysctl` values being set a certain way. diff --git a/content/en/docs/reference/glossary/workload.md b/content/en/docs/reference/glossary/workload.md new file mode 100644 index 0000000000000..1730e7b93f3ce --- /dev/null +++ b/content/en/docs/reference/glossary/workload.md @@ -0,0 +1,28 @@ +--- +title: Workload +id: workload +date: 2019-02-12 +full_link: /docs/concepts/workloads/ +short_description: > + A set of applications for processing information to serve a purpose that is valuable to a single user or group of users. + +aka: +tags: +- workload +--- +A workload consists of a system of services or applications that can run to fulfill a +task or carry out a business process. + + + +Alongside the computer code that runs to carry out the task, a workload also entails +the infrastructure resources that actually run that code. + +For example, a workload that has a web element and a database element might run the +database in one {{< glossary_tooltip term_id="StatefulSet" >}} of +{{< glossary_tooltip text="pods" term_id="pod" >}} and the webserver via +a {{< glossary_tooltip term_id="Deployment" >}} that consists of many web app +{{< glossary_tooltip text="pods" term_id="pod" >}}, all alike. + +The organisation running this workload may well have other workloads that together +provide a valuable outcome to its users. diff --git a/content/en/docs/reference/kubectl/cheatsheet.md b/content/en/docs/reference/kubectl/cheatsheet.md index d46e6336bd056..47c9a505e50f9 100644 --- a/content/en/docs/reference/kubectl/cheatsheet.md +++ b/content/en/docs/reference/kubectl/cheatsheet.md @@ -6,6 +6,9 @@ reviewers: - krousey - clove content_template: templates/concept +card: + name: reference + weight: 30 --- {{% capture overview %}} @@ -29,6 +32,13 @@ source <(kubectl completion bash) # setup autocomplete in bash into the current echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
``` +You can also use a shorthand alias for `kubectl` that also works with completion: + +```bash +alias k=kubectl +complete -F __start_kubectl k +``` + ### ZSH ```bash @@ -139,6 +149,10 @@ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount' kubectl get pods --selector=app=cassandra rc -o \ jsonpath='{.items[*].metadata.labels.version}' +# Get all worker nodes (use a selector to exclude results that have a label +# named 'node-role.kubernetes.io/master') +kubectl get node --selector='!node-role.kubernetes.io/master' + # Get all running pods in the namespace kubectl get pods --field-selector=status.phase=Running @@ -150,6 +164,10 @@ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?} echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name}) +# Show labels for all pods (or any other Kubernetes object that supports labelling) +# Also uses "jq" +for item in $( kubectl get pod --output=name); do printf "Labels for %s\n" "$item" | grep --color -E '[^/]+$' && kubectl get "$item" --output=json | jq -r -S '.metadata.labels | to_entries | .[] | " \(.key)=\(.value)"' 2>/dev/null; printf "\n"; done + # Check which nodes are ready JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" diff --git a/content/en/docs/reference/kubectl/overview.md b/content/en/docs/reference/kubectl/overview.md index 3fa3836460726..71befbd9664b8 100644 --- a/content/en/docs/reference/kubectl/overview.md +++ b/content/en/docs/reference/kubectl/overview.md @@ -5,6 +5,9 @@ reviewers: title: Overview of kubectl content_template: templates/concept weight: 20 +card: + name: reference + weight: 20 --- {{% capture overview %}} @@ -258,52 +261,52 @@ Use the following set of examples to help you familiarize yourself with running `kubectl create` - Create a resource from a file or stdin. ```shell -// Create a service using the definition in example-service.yaml. +# Create a service using the definition in example-service.yaml. $ kubectl create -f example-service.yaml -// Create a replication controller using the definition in example-controller.yaml. +# Create a replication controller using the definition in example-controller.yaml. $ kubectl create -f example-controller.yaml -// Create the objects that are defined in any .yaml, .yml, or .json file within the directory. +# Create the objects that are defined in any .yaml, .yml, or .json file within the directory. $ kubectl create -f ``` `kubectl get` - List one or more resources. ```shell -// List all pods in plain-text output format. +# List all pods in plain-text output format. $ kubectl get pods -// List all pods in plain-text output format and include additional information (such as node name). +# List all pods in plain-text output format and include additional information (such as node name). $ kubectl get pods -o wide -// List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'. +# List the replication controller with the specified name in plain-text output format. Tip: You can shorten and replace the 'replicationcontroller' resource type with the alias 'rc'. 
$ kubectl get replicationcontroller -// List all replication controllers and services together in plain-text output format. +# List all replication controllers and services together in plain-text output format. $ kubectl get rc,services -// List all daemon sets, including uninitialized ones, in plain-text output format. +# List all daemon sets, including uninitialized ones, in plain-text output format. $ kubectl get ds --include-uninitialized -// List all pods running on node server01 +# List all pods running on node server01 $ kubectl get pods --field-selector=spec.nodeName=server01 ``` `kubectl describe` - Display detailed state of one or more resources, including the uninitialized ones by default. ```shell -// Display the details of the node with name . +# Display the details of the node with name . $ kubectl describe nodes -// Display the details of the pod with name . +# Display the details of the pod with name . $ kubectl describe pods/ -// Display the details of all the pods that are managed by the replication controller named . -// Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller. +# Display the details of all the pods that are managed by the replication controller named . +# Remember: Any pods that are created by the replication controller get prefixed with the name of the replication controller. $ kubectl describe pods -// Describe all pods, not including uninitialized ones +# Describe all pods, not including uninitialized ones $ kubectl describe pods --include-uninitialized=false ``` @@ -322,39 +325,39 @@ the pods running on it, the events generated for the node etc. `kubectl delete` - Delete resources either from a file, stdin, or specifying label selectors, names, resource selectors, or resources. ```shell -// Delete a pod using the type and name specified in the pod.yaml file. +# Delete a pod using the type and name specified in the pod.yaml file. $ kubectl delete -f pod.yaml -// Delete all the pods and services that have the label name=. +# Delete all the pods and services that have the label name=. $ kubectl delete pods,services -l name= -// Delete all the pods and services that have the label name=, including uninitialized ones. +# Delete all the pods and services that have the label name=, including uninitialized ones. $ kubectl delete pods,services -l name= --include-uninitialized -// Delete all pods, including uninitialized ones. +# Delete all pods, including uninitialized ones. $ kubectl delete pods --all ``` `kubectl exec` - Execute a command against a container in a pod. ```shell -// Get output from running 'date' from pod . By default, output is from the first container. +# Get output from running 'date' from pod . By default, output is from the first container. $ kubectl exec date -// Get output from running 'date' in container of pod . +# Get output from running 'date' in container of pod . $ kubectl exec -c date -// Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container. +# Get an interactive TTY and run /bin/bash from pod . By default, output is from the first container. $ kubectl exec -ti /bin/bash ``` `kubectl logs` - Print the logs for a container in a pod. ```shell -// Return a snapshot of the logs from pod . +# Return a snapshot of the logs from pod . $ kubectl logs -// Start streaming the logs from pod . This is similar to the 'tail -f' Linux command. +# Start streaming the logs from pod . This is similar to the 'tail -f' Linux command. 
$ kubectl logs -f ``` @@ -363,26 +366,26 @@ $ kubectl logs -f Use the following set of examples to help you familiarize yourself with writing and using `kubectl` plugins: ```shell -// create a simple plugin in any language and name the resulting executable file -// so that it begins with the prefix "kubectl-" +# create a simple plugin in any language and name the resulting executable file +# so that it begins with the prefix "kubectl-" $ cat ./kubectl-hello #!/bin/bash # this plugin prints the words "hello world" echo "hello world" -// with our plugin written, let's make it executable +# with our plugin written, let's make it executable $ sudo chmod +x ./kubectl-hello -// and move it to a location in our PATH +# and move it to a location in our PATH $ sudo mv ./kubectl-hello /usr/local/bin -// we have now created and "installed" a kubectl plugin. -// we can begin using our plugin by invoking it from kubectl as if it were a regular command +# we have now created and "installed" a kubectl plugin. +# we can begin using our plugin by invoking it from kubectl as if it were a regular command $ kubectl hello hello world -// we can "uninstall" a plugin, by simply removing it from our PATH +# we can "uninstall" a plugin, by simply removing it from our PATH $ sudo rm /usr/local/bin/kubectl-hello ``` @@ -397,9 +400,9 @@ The following kubectl-compatible plugins are available: /usr/local/bin/kubectl-foo /usr/local/bin/kubectl-bar -// this command can also warn us about plugins that are -// not executable, or that are overshadowed by other -// plugins, for example +# this command can also warn us about plugins that are +# not executable, or that are overshadowed by other +# plugins, for example $ sudo chmod -x /usr/local/bin/kubectl-foo $ kubectl plugin list The following kubectl-compatible plugins are available: @@ -428,10 +431,10 @@ Running the above plugin gives us an output containing the user for the currentl context in our KUBECONFIG file: ```shell -// make the file executable +# make the file executable $ sudo chmod +x ./kubectl-whoami -// and move it into our PATH +# and move it into our PATH $ sudo mv ./kubectl-whoami /usr/local/bin $ kubectl whoami diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md index eccf635588e72..80d4dff5b3e61 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm.md @@ -5,6 +5,9 @@ reviewers: - jbeda title: Overview of kubeadm weight: 10 +card: + name: reference + weight: 40 --- Kubeadm is a tool built to provide `kubeadm init` and `kubeadm join` as best-practice “fast paths” for creating Kubernetes clusters. diff --git a/content/en/docs/reference/using-api/api-overview.md b/content/en/docs/reference/using-api/api-overview.md index 38baa5aa92ac8..f74a204f38116 100644 --- a/content/en/docs/reference/using-api/api-overview.md +++ b/content/en/docs/reference/using-api/api-overview.md @@ -7,6 +7,10 @@ reviewers: - jbeda content_template: templates/concept weight: 10 +card: + name: reference + weight: 50 + title: Overview of API --- {{% capture overview %}} diff --git a/content/en/docs/setup/cri.md b/content/en/docs/setup/cri.md index 6f16f6a8f6644..dfe484aa2b220 100644 --- a/content/en/docs/setup/cri.md +++ b/content/en/docs/setup/cri.md @@ -7,20 +7,56 @@ content_template: templates/concept weight: 100 --- {{% capture overview %}} -Since v1.6.0, Kubernetes has enabled the use of CRI, Container Runtime Interface, by default. 
-This page contains installation instruction for various runtimes.
+{{< feature-state for_k8s_version="v1.6" state="stable" >}}
+To run containers in Pods, Kubernetes uses a container runtime. Here are
+the installation instructions for various runtimes.
 {{% /capture %}} {{% capture body %}}
-Please proceed with executing the following commands based on your OS as root.
-You may become the root user by executing `sudo -i` after SSH-ing to each host.
+
+{{< caution >}}
+A flaw was found in the way runc handled system file descriptors when running containers.
+A malicious container could use this flaw to overwrite contents of the runc binary and
+consequently run arbitrary commands on the container host system.
+
+Please refer to this link for more information about this issue:
+[CVE-2019-5736: runc vulnerability](https://access.redhat.com/security/cve/cve-2019-5736)
+{{< /caution >}}
+
+### Applicability
+
+{{< note >}}
+This document is written for users installing CRI onto Linux. For other operating
+systems, look for documentation specific to your platform.
+{{< /note >}}
+
+You should execute all the commands in this guide as `root`. For example, prefix commands
+with `sudo `, or become `root` and run the commands as that user.
+
+### Cgroup drivers
+
+When systemd is chosen as the init system for a Linux distribution, the init process generates
+and consumes a root control group (`cgroup`) and acts as a cgroup manager. Systemd has a tight
+integration with cgroups and will allocate cgroups per process. It's possible to configure your
+container runtime and the kubelet to use `cgroupfs`. Using `cgroupfs` alongside systemd means
+that there will then be two different cgroup managers.
+
+Control groups are used to constrain resources that are allocated to processes.
+A single cgroup manager will simplify the view of what resources are being allocated
+and will by default have a more consistent view of the available and in-use resources. When we have
+two managers we end up with two views of those resources. We have seen cases in the field
+where nodes that are configured to use `cgroupfs` for the kubelet and Docker, and `systemd`
+for the rest of the processes running on the node, become unstable under resource pressure.
+
+Changing the settings such that your container runtime and kubelet use `systemd` as the cgroup driver
+stabilized the system. Please note the `native.cgroupdriver=systemd` option in the Docker setup below.
 ## Docker
 On each of your machines, install Docker.
-Version 18.06 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well.
+Version 18.06.2 is recommended, but 1.11, 1.12, 1.13, 17.03 and 18.09 are known to work as well.
 Keep track of the latest verified Docker version in the Kubernetes release notes.
 Use the following commands to install Docker on your system: @@ -45,7 +81,7 @@ Use the following commands to install Docker on your system: stable"
 ## Install docker ce.
-apt-get update && apt-get install docker-ce=18.06.0~ce~3-0~ubuntu
+apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu
 # Setup daemon.
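# The command below writes /etc/docker/daemon.json so that Docker uses the
# systemd cgroup driver discussed above (via "exec-opts": ["native.cgroupdriver=systemd"]).
# Any other keys in that file are installation choices rather than requirements.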
cat > /etc/docker/daemon.json < node-role.kubernetes.io/master:NoSchedule- +``` -# Generate and deploy etcd certificates -export CLUSTER_DOMAIN=$(kubectl get ConfigMap --namespace kube-system coredns -o yaml | awk '/kubernetes/ {print $2}') -tls/certs/gen-cert.sh $CLUSTER_DOMAIN -tls/deploy-certs.sh +To deploy Cilium you just need to run: -# Label kube-dns with fixed identity label -kubectl label -n kube-system pod $(kubectl -n kube-system get pods -l k8s-app=kube-dns -o jsonpath='{range .items[]}{.metadata.name}{" "}{end}') io.cilium.fixed-identity=kube-dns +```shell +kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium.yaml +``` -kubectl create -f ./ +Once all Cilium pods are marked as `READY`, you start using your cluster. -# Wait several minutes for Cilium, coredns and etcd pods to converge to a working state +```shell +$ kubectl get pods -n kube-system --selector=k8s-app=cilium +NAME READY STATUS RESTARTS AGE +cilium-drxkl 1/1 Running 0 18m ``` - {{% /tab %}} {{% tab name="Flannel" %}} @@ -337,10 +336,11 @@ Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work, for more information please see [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements). -Note that `flannel` works on `amd64`, `arm`, `arm64` and `ppc64le`. +Note that `flannel` works on `amd64`, `arm`, `arm64`, `ppc64le` and `s390x` under Linux. +Windows (`amd64`) is claimed as supported in v0.11.0 but the usage is undocumented. ```shell -kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml +kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml ``` For more information about `flannel`, see [the CoreOS flannel repository on GitHub @@ -398,6 +398,16 @@ There are multiple, flexible ways to install JuniperContrail/TungstenFabric CNI. Kindly refer to this quickstart: [TungstenFabric](https://tungstenfabric.github.io/website/) {{% /tab %}} + +{{% tab name="Contiv-VPP" %}} +[Contiv-VPP](https://contivpp.io/) employs a programmable CNF vSwitch based on [FD.io VPP](https://fd.io/), +offering feature-rich & high-performance cloud-native networking and services. + +It implements k8s services and network policies in the user space (on VPP). + +Please refer to this installation guide: [Contiv-VPP Manual Installation](https://github.com/contiv/vpp/blob/master/docs/setup/MANUAL_INSTALL.md) +{{% /tab %}} + {{< /tabs >}} diff --git a/content/en/docs/setup/independent/high-availability.md b/content/en/docs/setup/independent/high-availability.md index 10e0e2b32ce37..1ae27f0236e64 100644 --- a/content/en/docs/setup/independent/high-availability.md +++ b/content/en/docs/setup/independent/high-availability.md @@ -69,7 +69,7 @@ networking provider, make sure to replace any default values as needed. ## First steps for both methods {{< note >}} -**Note**: All commands on any control plane or etcd node should be +All commands on any control plane or etcd node should be run as root. 
{{< /note >}} diff --git a/content/en/docs/setup/independent/install-kubeadm.md b/content/en/docs/setup/independent/install-kubeadm.md index afc92e220f0f6..16570b2a36e82 100644 --- a/content/en/docs/setup/independent/install-kubeadm.md +++ b/content/en/docs/setup/independent/install-kubeadm.md @@ -2,6 +2,10 @@ title: Installing kubeadm content_template: templates/task weight: 20 +card: + name: setup + weight: 20 + title: Install the kubeadm setup tool --- {{% capture overview %}} @@ -105,7 +109,7 @@ You will install these packages on all of your machines: * `kubectl`: the command line util to talk to your cluster. kubeadm **will not** install or manage `kubelet` or `kubectl` for you, so you will -need to ensure they match the version of the Kubernetes control panel you want +need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you. If you do not, there is a risk of a version skew occurring that can lead to unexpected, buggy behaviour. However, _one_ minor version skew between the kubelet and the control plane is supported, but the kubelet version may never exceed the API diff --git a/content/en/docs/setup/independent/kubelet-integration.md b/content/en/docs/setup/independent/kubelet-integration.md index 03feb7cf4dcf2..d5cc7d31326a8 100644 --- a/content/en/docs/setup/independent/kubelet-integration.md +++ b/content/en/docs/setup/independent/kubelet-integration.md @@ -193,10 +193,10 @@ The DEB and RPM packages shipped with the Kubernetes releases are: | Package name | Description | |--------------|-------------| -| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and [The kubelet drop-in file(#the-kubelet-drop-in-file-for-systemd) for the kubelet. | +| `kubeadm` | Installs the `/usr/bin/kubeadm` CLI tool and the [kubelet drop-in file](#the-kubelet-drop-in-file-for-systemd) for the kubelet. | | `kubelet` | Installs the `/usr/bin/kubelet` binary. | | `kubectl` | Installs the `/usr/bin/kubectl` binary. | | `kubernetes-cni` | Installs the official CNI binaries into the `/opt/cni/bin` directory. | -| `cri-tools` | Installs the `/usr/bin/crictl` binary from [https://github.com/kubernetes-incubator/cri-tools](https://github.com/kubernetes-incubator/cri-tools). | +| `cri-tools` | Installs the `/usr/bin/crictl` binary from the [cri-tools git repository](https://github.com/kubernetes-incubator/cri-tools). | {{% /capture %}} diff --git a/content/en/docs/setup/independent/troubleshooting-kubeadm.md b/content/en/docs/setup/independent/troubleshooting-kubeadm.md index cf59d2c191480..edc0f2e9ef3c0 100644 --- a/content/en/docs/setup/independent/troubleshooting-kubeadm.md +++ b/content/en/docs/setup/independent/troubleshooting-kubeadm.md @@ -56,7 +56,7 @@ This may be caused by a number of problems. The most common are: ``` There are two common ways to fix the cgroup driver problem: - + 1. Install Docker again following instructions [here](/docs/setup/independent/install-kubeadm/#installing-docker). 1. Change the kubelet config to match the Docker cgroup driver manually, you can refer to @@ -100,9 +100,8 @@ Right after `kubeadm init` there should not be any pods in these states. until you have deployed the network solution. 
- If you see Pods in the `RunContainerError`, `CrashLoopBackOff` or `Error` state after deploying the network solution and nothing happens to `coredns` (or `kube-dns`), - it's very likely that the Pod Network solution and nothing happens to the DNS server, it's very - likely that the Pod Network solution that you installed is somehow broken. You - might have to grant it more RBAC privileges or use a newer version. Please file + it's very likely that the Pod Network solution that you installed is somehow broken. + You might have to grant it more RBAC privileges or use a newer version. Please file an issue in the Pod Network providers' issue tracker and get the issue triaged there. - If you install a version of Docker older than 1.12.1, remove the `MountFlags=slave` option when booting `dockerd` with `systemd` and restart `docker`. You can see the MountFlags in `/usr/lib/systemd/system/docker.service`. @@ -155,6 +154,18 @@ Unable to connect to the server: x509: certificate signed by unknown authority ( regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The `base64 -d` command can be used to decode the certificate and `openssl x509 -text -noout` can be used for viewing the certificate information. +- Unset the `KUBECONFIG` environment variable using: + + ```sh + unset KUBECONFIG + ``` + + Or set it to the default `KUBECONFIG` location: + + ```sh + export KUBECONFIG=/etc/kubernetes/admin.conf + ``` + - Another workaround is to overwrite the existing `kubeconfig` for the "admin" user: ```sh diff --git a/content/en/docs/setup/minikube.md b/content/en/docs/setup/minikube.md index 122ce8b598841..1b148756c430f 100644 --- a/content/en/docs/setup/minikube.md +++ b/content/en/docs/setup/minikube.md @@ -407,8 +407,9 @@ For more information about Minikube, see the [proposal](https://git.k8s.io/commu * **Goals and Non-Goals**: For the goals and non-goals of the Minikube project, please see our [roadmap](https://git.k8s.io/minikube/docs/contributors/roadmap.md). * **Development Guide**: See [CONTRIBUTING.md](https://git.k8s.io/minikube/CONTRIBUTING.md) for an overview of how to send pull requests. * **Building Minikube**: For instructions on how to build/test Minikube from source, see the [build guide](https://git.k8s.io/minikube/docs/contributors/build_guide.md). -* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md). -* **Adding a New Addon**: For instruction on how to add a new addon for Minikube see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md). +* **Adding a New Dependency**: For instructions on how to add a new dependency to Minikube, see the [adding dependencies guide](https://git.k8s.io/minikube/docs/contributors/adding_a_dependency.md). +* **Adding a New Addon**: For instructions on how to add a new addon for Minikube, see the [adding an addon guide](https://git.k8s.io/minikube/docs/contributors/adding_an_addon.md). +* **MicroK8s**: Linux users wishing to avoid running a virtual machine may consider [MicroK8s](https://microk8s.io/) as an alternative. 
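A minimal sketch of the MicroK8s route mentioned above, assuming a Linux host with `snap` available (the exact channel and client invocation can vary between MicroK8s releases):

```shell
# Install MicroK8s and check that the single-node cluster responds
sudo snap install microk8s --classic
microk8s.kubectl get nodes
```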
## Community diff --git a/content/en/docs/setup/on-premises-metal/krib.md b/content/en/docs/setup/on-premises-metal/krib.md index 3762068ccddd1..4ee90777e29b7 100644 --- a/content/en/docs/setup/on-premises-metal/krib.md +++ b/content/en/docs/setup/on-premises-metal/krib.md @@ -8,7 +8,7 @@ author: Rob Hirschfeld (zehicle) This guide helps to install a Kubernetes cluster hosted on bare metal with [Digital Rebar Provision](https://github.com/digitalrebar/provision) using only its Content packages and *kubeadm*. -Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](../kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib). +Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE) and workflow automation platform. While [DRP can be used to invoke](https://provision.readthedocs.io/en/tip/doc/integrations/ansible.html) [kubespray](/docs/setup/custom-cloud/kubespray), it also offers a self-contained Kubernetes installation known as [KRIB (Kubernetes Rebar Integrated Bootstrap)](https://github.com/digitalrebar/provision-content/tree/master/krib). {{< note >}} KRIB is not a _stand-alone_ installer: Digital Rebar templates drive a standard *[kubeadm](/docs/admin/kubeadm/)* configuration that manages the Kubernetes installation with the [Digital Rebar cluster pattern](https://provision.readthedocs.io/en/tip/doc/arch/cluster.html#rs-cluster-pattern) to elect leaders _without external supervision_. diff --git a/content/en/docs/setup/pick-right-solution.md b/content/en/docs/setup/pick-right-solution.md index e32cbc0d4fd91..96514bfca5d40 100644 --- a/content/en/docs/setup/pick-right-solution.md +++ b/content/en/docs/setup/pick-right-solution.md @@ -6,6 +6,20 @@ reviewers: title: Picking the Right Solution weight: 10 content_template: templates/concept +card: + name: setup + weight: 20 + anchors: + - anchor: "#hosted-solutions" + title: Hosted Solutions + - anchor: "#turnkey-cloud-solutions" + title: Turnkey Cloud Solutions + - anchor: "#on-premises-turnkey-cloud-solutions" + title: On-Premises Solutions + - anchor: "#custom-solutions" + title: Custom Solutions + - anchor: "#local-machine-solutions" + title: Local Machine --- {{% capture overview %}} @@ -34,12 +48,12 @@ a Kubernetes cluster from scratch. * [Minikube](/docs/setup/minikube/) is a method for creating a local, single-node Kubernetes cluster for development and testing. Setup is completely automated and doesn't require a cloud provider account. -* [Docker Desktop](https://www.docker.com/products/docker-desktop) is an -easy-to-install application for your Mac or Windows environment that enables you to +* [Docker Desktop](https://www.docker.com/products/docker-desktop) is an +easy-to-install application for your Mac or Windows environment that enables you to start coding and deploying in containers in minutes on a single-node Kubernetes cluster. -* [Minishift](https://docs.okd.io/latest/minishift/) installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. 
It offers an All-In-One VM (`minishift start`) for Windows, macOS and Linux and the containeriz based `oc cluster up` (Linux only) and [comes with some easy to install Add Ons](https://github.com/minishift/minishift-addons/tree/master/add-ons). +* [Minishift](https://docs.okd.io/latest/minishift/) installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an all-in-one VM (`minishift start`) for Windows, macOS, and Linux. The container start is based on `oc cluster up` (Linux only). You can also install [the included add-ons](https://github.com/minishift/minishift-addons/tree/master/add-ons). * [MicroK8s](https://microk8s.io/) provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command. @@ -69,12 +83,14 @@ cluster. * [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) offers managed Kubernetes clusters. -* [IBM Cloud Kubernetes Service](https://console.bluemix.net/docs/containers/container_index.html) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data. +* [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) offers managed Kubernetes clusters with isolation choice, operational tools, integrated security insight into images and containers, and integration with Watson, IoT, and data. * [Kubermatic](https://www.loodse.com) provides managed Kubernetes clusters for various public clouds, including AWS and Digital Ocean, as well as on-premises with OpenStack integration. * [Kublr](https://kublr.com) offers enterprise-grade secure, scalable, highly reliable Kubernetes clusters on AWS, Azure, GCP, and on-premise. It includes out-of-the-box backup and disaster recovery, multi-cluster centralized logging and monitoring, and built-in alerting. +* [KubeSail](https://kubesail.com) is an easy, free way to try Kubernetes. + * [Madcore.Ai](https://madcore.ai) is devops-focused CLI tool for deploying Kubernetes infrastructure in AWS. Master, auto-scaling group nodes with spot-instances, ingress-ssl-lego, Heapster, and Grafana. * [Nutanix Karbon](https://www.nutanix.com/products/karbon/) is a multi-cluster, highly available Kubernetes management and operational platform that simplifies the provisioning, operations, and lifecycle management of Kubernetes. @@ -121,6 +137,7 @@ few commands. These solutions are actively developed and have active community s * [Pivotal Container Service](https://pivotal.io/platform/pivotal-container-service) * [Rancher 2.0](https://rancher.com/docs/rancher/v2.x/en/) * [Stackpoint.io](/docs/setup/turnkey/stackpoint/) +* [Supergiant.io](https://supergiant.io/) * [Tectonic by CoreOS](https://coreos.com/tectonic) * [VMware Cloud PKS](https://cloud.vmware.com/vmware-cloud-pks) @@ -155,13 +172,10 @@ it will be easier than starting from scratch. If you do want to start from scrat have special requirements, or just because you want to understand what is underneath a Kubernetes cluster, try the [Getting Started from Scratch](/docs/setup/scratch/) guide. -If you are interested in supporting Kubernetes on a new platform, see -[Writing a Getting Started Guide](https://git.k8s.io/community/contributors/devel/writing-a-getting-started-guide.md). 
- ### Universal If you already have a way to configure hosting resources, use -[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to easily bring up a cluster +[kubeadm](/docs/setup/independent/create-cluster-kubeadm/) to bring up a cluster with a single command per machine. ### Cloud @@ -216,6 +230,7 @@ IaaS Provider | Config. Mgmt. | OS | Networking | Docs any | any | multi-support | any CNI | [docs](/docs/setup/independent/create-cluster-kubeadm/) | Project ([SIG-cluster-lifecycle](https://git.k8s.io/community/sig-cluster-lifecycle)) Google Kubernetes Engine | | | GCE | [docs](https://cloud.google.com/kubernetes-engine/docs/) | Commercial Docker Enterprise | custom | [multi-support](https://success.docker.com/article/compatibility-matrix) | [multi-support](https://docs.docker.com/ee/ucp/kubernetes/install-cni-plugin/) | [docs](https://docs.docker.com/ee/) | Commercial +IBM Cloud Private | Ansible | multi-support | multi-support | [docs](https://www.ibm.com/support/knowledgecenter/SSBS6K/product_welcome_cloud_private.html) | [Commercial](https://www.ibm.com/mysupport/s/topic/0TO500000001o0fGAA/ibm-cloud-private?language=en_US&productId=01t50000004X1PWAA0) and [Community](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/troubleshoot/support_types.html) | Red Hat OpenShift | Ansible & CoreOS | RHEL & CoreOS | [multi-support](https://docs.openshift.com/container-platform/3.11/architecture/networking/network_plugins.html) | [docs](https://docs.openshift.com/container-platform/3.11/welcome/index.html) | Commercial Stackpoint.io | | multi-support | multi-support | [docs](https://stackpoint.io/) | Commercial AppsCode.com | Saltstack | Debian | multi-support | [docs](https://appscode.com/products/cloud-deployment/) | Commercial @@ -223,7 +238,7 @@ Madcore.Ai | Jenkins DSL | Ubuntu | flannel | [docs](https://madc Platform9 | | multi-support | multi-support | [docs](https://platform9.com/managed-kubernetes/) | Commercial Kublr | custom | multi-support | multi-support | [docs](http://docs.kublr.com/) | Commercial Kubermatic | | multi-support | multi-support | [docs](http://docs.kubermatic.io/) | Commercial -IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://console.bluemix.net/docs/containers/) | Commercial +IBM Cloud Kubernetes Service | | Ubuntu | IBM Cloud Networking + Calico | [docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial Giant Swarm | | CoreOS | flannel and/or Calico | [docs](https://docs.giantswarm.io/) | Commercial GCE | Saltstack | Debian | GCE | [docs](/docs/setup/turnkey/gce/) | Project Azure Kubernetes Service | | Ubuntu | Azure | [docs](https://docs.microsoft.com/en-us/azure/aks/) | Commercial @@ -257,7 +272,7 @@ any | RKE | multi-support | flannel or canal any | [Gardener Cluster-Operator](https://kubernetes.io/blog/2018/05/17/gardener/) | multi-support | multi-support | [docs](https://gardener.cloud) | [Project/Community](https://github.com/gardener) and [Commercial]( https://cloudplatform.sap.com/) Alibaba Cloud Container Service For Kubernetes | ROS | CentOS | flannel/Terway | [docs](https://www.aliyun.com/product/containerservice) | Commercial Agile Stacks | Terraform | CoreOS | multi-support | [docs](https://www.agilestacks.com/products/kubernetes) | Commercial -IBM Cloud Kubernetes Service | | Ubuntu | calico | [docs](https://console.bluemix.net/docs/containers/container_index.html) | Commercial +IBM Cloud Kubernetes Service | | Ubuntu | calico | 
[docs](https://cloud.ibm.com/docs/containers?topic=containers-container_index#container_index) | Commercial Digital Rebar | kubeadm | any | metal | [docs](/docs/setup/on-premises-metal/krib/) | Community ([@digitalrebar](https://github.com/digitalrebar)) VMware Cloud PKS | | Photon OS | Canal | [docs](https://docs.vmware.com/en/VMware-Kubernetes-Engine/index.html) | Commercial Mirantis Cloud Platform | Salt | Ubuntu | multi-support | [docs](https://docs.mirantis.com/mcp/) | Commercial diff --git a/content/en/docs/setup/release/building-from-source.md b/content/en/docs/setup/release/building-from-source.md index 866d3d7b23a90..973ea3b3944d8 100644 --- a/content/en/docs/setup/release/building-from-source.md +++ b/content/en/docs/setup/release/building-from-source.md @@ -2,13 +2,19 @@ reviewers: - david-mcmahon - jbeda -title: Building from Source +title: Building a release +card: + name: download + weight: 20 + title: Building a release --- - +{{% capture overview %}} You can either build a release from source or download a pre-built release. If you do not plan on developing Kubernetes itself, we suggest using a pre-built version of the current release, which can be found in the [Release Notes](/docs/setup/release/notes/). The Kubernetes source code can be downloaded from the [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) repo. +{{% /capture %}} +{{% capture body %}} ## Building from source If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container. @@ -22,3 +28,5 @@ make release ``` For more details on the release process see the kubernetes/kubernetes [`build`](http://releases.k8s.io/{{< param "githubbranch" >}}/build/) directory. + +{{% /capture %}} diff --git a/content/en/docs/setup/release/notes.md b/content/en/docs/setup/release/notes.md index 336579e5a8ca8..c024e441356c2 100644 --- a/content/en/docs/setup/release/notes.md +++ b/content/en/docs/setup/release/notes.md @@ -1,5 +1,13 @@ --- title: v1.13 Release Notes +card: + name: download + weight: 10 + anchors: + - anchor: "#" + title: Current Release Notes + - anchor: "#urgent-upgrade-notes" + title: Urgent Upgrade Notes --- diff --git a/content/en/docs/setup/turnkey/icp.md b/content/en/docs/setup/turnkey/icp.md index 7fdf2e7ecf884..df2c835b2a3a0 100644 --- a/content/en/docs/setup/turnkey/icp.md +++ b/content/en/docs/setup/turnkey/icp.md @@ -6,7 +6,7 @@ title: Running Kubernetes on Multiple Clouds with IBM Cloud Private IBM® Cloud Private is a turnkey cloud solution and an on-premises turnkey cloud solution. IBM Cloud Private delivers pure upstream Kubernetes with the typical management components that are required to run real enterprise workloads. These workloads include health management, log management, audit trails, and metering for tracking usage of workloads on the platform. -IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from Docker Hub. The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. +IBM Cloud Private is available in a community edition and a fully supported enterprise edition. The community edition is available at no charge from [Docker Hub](https://hub.docker.com/r/ibmcom/icp-inception/). 
The enterprise edition supports high availability topologies and includes commercial support from IBM for Kubernetes and the IBM Cloud Private management platform. If you want to try IBM Cloud Private, you can use either the hosted trial, the tutorial, or the self-guided demo. You can also try the free community edition. For details, see [Get started with IBM Cloud Private](https://www.ibm.com/cloud/private/get-started). For more information, explore the following resources: @@ -18,37 +18,26 @@ For more information, explore the following resources: The following modules are available where you can deploy IBM Cloud Private by using Terraform: -* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) -* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud) -* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware) * AWS: [Deploy IBM Cloud Private to AWS](https://github.com/ibm-cloud-architecture/terraform-icp-aws) -* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack) * Azure: [Deploy IBM Cloud Private to Azure](https://github.com/ibm-cloud-architecture/terraform-icp-azure) +* IBM Cloud: [Deploy IBM Cloud Private cluster to IBM Cloud](https://github.com/ibm-cloud-architecture/terraform-icp-ibmcloud) +* OpenStack: [Deploy IBM Cloud Private to OpenStack](https://github.com/ibm-cloud-architecture/terraform-icp-openstack) +* Terraform module: [Deploy IBM Cloud Private on any supported infrastructure vendor](https://github.com/ibm-cloud-architecture/terraform-module-icp-deploy) +* VMware: [Deploy IBM Cloud Private to VMware](https://github.com/ibm-cloud-architecture/terraform-icp-vmware) -## IBM Cloud Private on Azure - -You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [Configuring settings to enable Azure Cloud Provider](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/manage_cluster/azure_conf_settings.html). - -## IBM Cloud Private on VMware - -You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects: - -* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md) -* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel) - -The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud. +## IBM Cloud Private on AWS -For more information, see [IBM Cloud Private Hosted service](https://console.bluemix.net/docs/services/vmwaresolutions/services/icp_overview.html#ibm-cloud-private-hosted-overview). +You can deploy an IBM Cloud Private cluster on Amazon Web Services (AWS) by using either AWS CloudFormation or Terraform. 
-## IBM Cloud Private on AWS +IBM Cloud Private has a Quick Start that automatically deploys IBM Cloud Private into a new virtual private cloud (VPC) on the AWS Cloud. A regular deployment takes about 60 minutes, and a high availability (HA) deployment takes about 75 minutes to complete. The Quick Start includes AWS CloudFormation templates and a deployment guide. -IBM Cloud Private can run on the AWS cloud platform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md). +This Quick Start is for users who want to explore application modernization and want to accelerate meeting their digital transformation goals, by using IBM Cloud Private and IBM tooling. The Quick Start helps users rapidly deploy a high availability (HA), production-grade, IBM Cloud Private reference architecture on AWS. For all of the details and the deployment guide, see the [IBM Cloud Private on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/ibm-cloud-private/). -Stay tuned for the IBM Cloud Private on AWS Quick Start Guide. +IBM Cloud Private can also run on the AWS cloud platform by using Terraform. To deploy IBM Cloud Private in an AWS EC2 environment, see [Installing IBM Cloud Private on AWS](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_aws.md). -## IBM Cloud Private on VirtualBox +## IBM Cloud Private on Azure -To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox). +You can enable Microsoft Azure as a cloud provider for IBM Cloud Private deployment and take advantage of all the IBM Cloud Private features on the Azure public cloud. For more information, see [IBM Cloud Private on Azure](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/supported_environments/azure_overview.html). ## IBM Cloud Private on Red Hat OpenShift @@ -62,4 +51,19 @@ Integration capabilities: * Integrated core platform services, such as monitoring, metering, and logging * IBM Cloud Private uses the OpenShift image registry -For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/supported_environments/openshift/overview.html). +For more information see, [IBM Cloud Private on OpenShift](https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.2/supported_environments/openshift/overview.html). + +## IBM Cloud Private on VirtualBox + +To install IBM Cloud Private to a VirtualBox environment, see [Installing IBM Cloud Private on VirtualBox](https://github.com/ibm-cloud-architecture/refarch-privatecloud-virtualbox). + +## IBM Cloud Private on VMware + +You can install IBM Cloud Private on VMware with either Ubuntu or RHEL images. For details, see the following projects: + +* [Installing IBM Cloud Private with Ubuntu](https://github.com/ibm-cloud-architecture/refarch-privatecloud/blob/master/Installing_ICp_on_prem_ubuntu.md) +* [Installing IBM Cloud Private with Red Hat Enterprise](https://github.com/ibm-cloud-architecture/refarch-privatecloud/tree/master/icp-on-rhel) + +The IBM Cloud Private Hosted service automatically deploys IBM Cloud Private Hosted on your VMware vCenter Server instances. This service brings the power of microservices and containers to your VMware environment on IBM Cloud. 
With this service, you can extend the same familiar VMware and IBM Cloud Private operational model and tools from on-premises into the IBM Cloud. + +For more information, see [IBM Cloud Private Hosted service](https://cloud.ibm.com/docs/services/vmwaresolutions/vmonic?topic=vmware-solutions-prod_overview#ibm-cloud-private-hosted). diff --git a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md index cb8b67e557cd0..1b47eee140269 100644 --- a/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md +++ b/content/en/docs/tasks/access-application-cluster/configure-access-multiple-clusters.md @@ -2,6 +2,9 @@ title: Configure Access to Multiple Clusters content_template: templates/task weight: 30 +card: + name: tasks + weight: 40 --- @@ -251,22 +254,31 @@ The preceding configuration file defines a new context named `dev-ramp-up`. See whether you have an environment variable named `KUBECONFIG`. If so, save the current value of your `KUBECONFIG` environment variable, so you can restore it later. -For example, on Linux: +For example: +### Linux ```shell export KUBECONFIG_SAVED=$KUBECONFIG ``` - +### Windows PowerShell +```shell + $Env:KUBECONFIG_SAVED=$ENV:KUBECONFIG + ``` The `KUBECONFIG` environment variable is a list of paths to configuration files. The list is colon-delimited for Linux and Mac, and semicolon-delimited for Windows. If you have a `KUBECONFIG` environment variable, familiarize yourself with the configuration files in the list. -Temporarily append two paths to your `KUBECONFIG` environment variable. For example, on Linux: +Temporarily append two paths to your `KUBECONFIG` environment variable. For example:
+### Linux
```shell
export KUBECONFIG=$KUBECONFIG:config-demo:config-demo-2
```
+### Windows PowerShell
+```shell
+$Env:KUBECONFIG="$Env:KUBECONFIG;config-demo;config-demo-2"
+```
 In your `config-exercise` directory, enter this command: @@ -320,11 +332,16 @@ familiarize yourself with the contents of these files. If you have a `$HOME/.kube/config` file, and it's not already listed in your `KUBECONFIG` environment variable, append it to your `KUBECONFIG` environment variable now.
-For example, on Linux:
+For example:
+### Linux
```shell
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config
```
+### Windows PowerShell
+```shell
+$Env:KUBECONFIG="$Env:KUBECONFIG;$HOME/.kube/config"
+```
 View configuration information merged from all the files that are now listed in your `KUBECONFIG` environment variable. In your config-exercise directory, enter: @@ -335,11 +352,15 @@ kubectl config view
 ## Clean up
-Return your `KUBECONFIG` environment variable to its original value. For example, on Linux:
-
+Return your `KUBECONFIG` environment variable to its original value. For example:
+Linux: ```shell export KUBECONFIG=$KUBECONFIG_SAVED ``` +Windows PowerShell +```shell + $Env:KUBECONFIG=$ENV:KUBECONFIG_SAVED +``` {{% /capture %}} diff --git a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md index a447e85d16d10..0033f51c9b28c 100644 --- a/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md +++ b/content/en/docs/tasks/access-application-cluster/connecting-frontend-backend.md @@ -169,16 +169,16 @@ This displays the configuration for the `frontend` Service and watches for changes. Initially, the external IP is listed as ``: ``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -frontend ClusterIP 10.51.252.116 80/TCP 10s +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +frontend LoadBalancer 10.51.252.116 80/TCP 10s ``` As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the `EXTERNAL-IP` heading: ``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -frontend ClusterIP 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +frontend LoadBalancer 10.51.252.116 XXX.XXX.XXX.XXX 80/TCP 1m ``` That IP can now be used to interact with the `frontend` service from outside the diff --git a/content/en/docs/tasks/access-application-cluster/ingress-minikube.md b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md new file mode 100644 index 0000000000000..a810603f19aa2 --- /dev/null +++ b/content/en/docs/tasks/access-application-cluster/ingress-minikube.md @@ -0,0 +1,292 @@ +--- +title: Set up Ingress on Minikube with the NGINX Ingress Controller +content_template: templates/task +weight: 100 +--- + +{{% capture overview %}} + +An [Ingress](/docs/concepts/services-networking/ingress/) is an API object that defines rules which allow external access +to services in a cluster. An [Ingress controller](/docs/concepts/services-networking/ingress-controllers/) fulfills the rules set in the Ingress. + +{{< caution >}} +For the Ingress resource to work, the cluster **must** also have an Ingress controller running. +{{< /caution >}} + +This page shows you how to set up a simple Ingress which routes requests to Service web or web2 depending on the HTTP URI. + +{{% /capture %}} + +{{% capture prerequisites %}} + +{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} + +{{% /capture %}} + +{{% capture steps %}} + +## Create a Minikube cluster + +1. Click **Launch Terminal** + + {{< kat-button >}} + +1. (Optional) If you installed Minikube locally, run the following command: + + ```shell + minikube start + ``` + +## Enable the Ingress controller + +1. To enable the NGINX Ingress controller, run the following command: + + ```shell + minikube addons enable ingress + ``` + +1. Verify that the NGINX Ingress controller is running + + ```shell + kubectl get pods -n kube-system + ``` + + {{< note >}}This can take up to a minute.{{< /note >}} + + Output: + + ```shell + NAME READY STATUS RESTARTS AGE + default-http-backend-59868b7dd6-xb8tq 1/1 Running 0 1m + kube-addon-manager-minikube 1/1 Running 0 3m + kube-dns-6dcb57bcc8-n4xd4 3/3 Running 0 2m + kubernetes-dashboard-5498ccf677-b8p5h 1/1 Running 0 2m + nginx-ingress-controller-5984b97644-rnkrg 1/1 Running 0 1m + storage-provisioner 1/1 Running 0 2m + ``` + +## Deploy a hello, world app + +1. 
Create a Deployment using the following command: + + ```shell + kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080 + ``` + + Output: + + ```shell + deployment.apps/web created + ``` + +1. Expose the Deployment: + + ```shell + kubectl expose deployment web --target-port=8080 --type=NodePort + ``` + + Output: + + ```shell + service/web exposed + ``` + +1. Verify the Service is created and is available on a node port: + + ```shell + kubectl get service web + ``` + + Output: + + ```shell + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + web NodePort 10.104.133.249 8080:31637/TCP 12m + ``` + +1. Visit the service via NodePort: + + ```shell + minikube service web --url + ``` + + Output: + + ```shell + http://172.17.0.15:31637 + ``` + + {{< note >}}Katacoda environment only: at the top of the terminal panel, click the plus sign, and then click **Select port to view on Host 1**. Enter the NodePort, in this case `31637`, and then click **Display Port**.{{< /note >}} + + Output: + + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + + You can now access the sample app via the Minikube IP address and NodePort. The next step lets you access + the app using the Ingress resource. + +## Create an Ingress resource + +The following file is an Ingress resource that sends traffic to your Service via hello-world.info. + +1. Create `example-ingress.yaml` from the following file: + + ```yaml + --- + apiVersion: extensions/v1beta1 + kind: Ingress + metadata: + name: example-ingress + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / + spec: + rules: + - host: hello-world.info + http: + paths: + - path: /* + backend: + serviceName: web + servicePort: 8080 + ``` + +1. Create the Ingress resource by running the following command: + + ```shell + kubectl apply -f example-ingress.yaml + ``` + + Output: + + ```shell + ingress.extensions/example-ingress created + ``` + +1. Verify the IP address is set: + + ```shell + kubectl get ingress + ``` + + {{< note >}}This can take a couple of minutes.{{< /note >}} + + ```shell + NAME HOSTS ADDRESS PORTS AGE + example-ingress hello-world.info 172.17.0.15 80 38s + ``` + +1. Add the following line to the bottom of the `/etc/hosts` file. + + ``` + 172.17.0.15 hello-world.info + ``` + + This sends requests from hello-world.info to Minikube. + +1. Verify that the Ingress controller is directing traffic: + + ```shell + curl hello-world.info + ``` + + Output: + + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + + {{< note >}}If you are running Minikube locally, you can visit hello-world.info from your browser.{{< /note >}} + +## Create Second Deployment + +1. Create a v2 Deployment using the following command: + + ```shell + kubectl run web2 --image=gcr.io/google-samples/hello-app:2.0 --port=8080 + ``` + Output: + + ```shell + deployment.apps/web2 created + ``` + +1. Expose the Deployment: + + ```shell + kubectl expose deployment web2 --target-port=8080 --type=NodePort + ``` + + Output: + + ```shell + service/web2 exposed + ``` + +## Edit Ingress + +1. Edit the existing `example-ingress.yaml` and add the following lines: + + ```yaml + - path: /v2/* + backend: + serviceName: web2 + servicePort: 8080 + ``` + +1. Apply the changes: + + ```shell + kubectl apply -f example-ingress.yaml + ``` + + Output: + ```shell + ingress.extensions/example-ingress configured + ``` + +## Test Your Ingress + +1. Access the 1st version of the Hello World app. 
+ + ```shell + curl hello-world.info + ``` + + Output: + ```shell + Hello, world! + Version: 1.0.0 + Hostname: web-55b8c6998d-8k564 + ``` + +1. Access the 2nd version of the Hello World app. + + ```shell + curl hello-world.info/v2 + ``` + + Output: + ```shell + Hello, world! + Version: 2.0.0 + Hostname: web2-75cd47646f-t8cjk + ``` + + {{< note >}}If you are running Minikube locally, you can visit hello-world.info and hello-world.info/v2 from your browser.{{< /note >}} + +{{% /capture %}} + + +{{% capture whatsnext %}} +* Read more about [Ingress](/docs/concepts/services-networking/ingress/) +* Read more about [Ingress Controllers](/docs/concepts/services-networking/ingress-controllers/) +* Read more about [Services](/docs/concepts/services-networking/service/) + +{{% /capture %}} + diff --git a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md index ceea70165fec8..e62aeca24e7c2 100644 --- a/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md +++ b/content/en/docs/tasks/access-application-cluster/web-ui-dashboard.md @@ -6,6 +6,10 @@ reviewers: title: Web UI (Dashboard) content_template: templates/concept weight: 10 +card: + name: tasks + weight: 30 + title: Use the Web UI Dashboard --- {{% capture overview %}} diff --git a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md index 225951dc64145..a66be5a1add79 100644 --- a/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md +++ b/content/en/docs/tasks/access-kubernetes-api/configure-aggregation-layer.md @@ -22,13 +22,173 @@ Configuring the [aggregation layer](/docs/concepts/extend-kubernetes/api-extensi There are a few setup requirements for getting the aggregation layer working in your environment to support mutual TLS auth between the proxy and extension apiservers. Kubernetes and the kube-apiserver have multiple CAs, so make sure that the proxy is signed by the aggregation layer CA and not by something else, like the master CA. {{< /note >}} +{{% /capture %}} + +{{% capture authflow %}} + +## Authentication Flow + +Unlike Custom Resource Definitions (CRDs), the Aggregation API involves another server - your Extension apiserver - in addition to the standard Kubernetes apiserver. The Kubernetes apiserver will need to communicate with your extension apiserver, and your extension apiserver will need to communicate with the Kubernetes apiserver. In order for this communication to be secured, the Kubernetes apiserver uses x509 certificates to authenticate itself to the extension apiserver. + +This section describes how the authentication and authorization flows work, and how to configure them. + +The high-level flow is as follows: + +1. Kubenetes apiserver: authenticate the requesting user and authorize their rights to the requested API path. +2. Kubenetes apiserver: proxy the request to the extension apiserver +3. Extension apiserver: authenticate the request from the Kubernetes apiserver +4. Extension apiserver: authorize the request from the original user +5. Extension apiserver: execute + +The rest of this section describes these steps in detail. + +The flow can be seen in the following diagram. + +![aggregation auth flows](/images/docs/aggregation-api-auth-flow.png). + +The source for the above swimlanes can be found in the source of this document. 
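Before the detailed walk-through below, it may help to see roughly what registering an extension apiserver with the aggregation layer looks like. This is only an illustrative sketch; the API group, version, and Service names are invented for the example:

```shell
# Hypothetical APIService registration for an extension apiserver.
# All names below are placeholders chosen for illustration.
cat <<EOF | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.example.example.com   # must be <version>.<group>
spec:
  group: example.example.com           # API group served by the extension apiserver
  version: v1alpha1
  service:
    name: example-apiserver            # Service in front of the extension apiserver
    namespace: default
  insecureSkipTLSVerify: true          # a real deployment would set caBundle instead
  groupPriorityMinimum: 1000
  versionPriority: 15
EOF
```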
+ + + +### Kubernetes Apiserver Authentication and Authorization + +A request to an API path that is served by an extension apiserver begins the same way as all API requests: communication to the Kubernetes apiserver. This path already has been registered with the Kubernetes apiserver by the extension apiserver. + +The user communicates with the Kubernetes apiserver, requesting access to the path. The Kubernetes apiserver uses standard authentication and authorization configured with the Kubernetes apiserver to authenticate the user and authorize access to the specific path. + +For an overview of authenticating to a Kubernetes cluster, see ["Authenticating to a Cluster"](/docs/reference/access-authn-authz/authentication/). For an overview of authorization of access to Kubernetes cluster resources, see ["Authorization Overview"](/docs/reference/access-authn-authz/authorization/). + +Everything to this point has been standard Kubernetes API requests, authentication and authorization. + +The Kubernetes apiserver now is prepared to send the request to the extension apiserver. + +### Kubernetes Apiserver Proxies the Request + +The Kubernetes apiserver now will send, or proxy, the request to the extension apiserver that registered to handle the request. In order to do so, it needs to know several things: + +1. How should the Kubernetes apiserver authenticate to the extension apiserver, informing the extension apiserver that the request, which comes over the network, is coming from a valid Kubernetes apiserver? +2. How should the Kubernetes apiserver inform the extension apiserver of the username and group for which the original request was authenticated? + +In order to provide for these two, you must configure the Kubernetes apiserver using several flags. + +#### Kubernetes Apiserver Client Authentication + +The Kubernetes apiserver connects to the extension apiserver over TLS, authenticating itself using a client certificate. You must provide the following to the Kubernetes apiserver upon startup, using the provided flags: + +* private key file via `--proxy-client-key-file` +* signed client certificate file via `--proxy-client-cert-file` +* certificate of the CA that signed the client certificate file via `--requestheader-client-ca-file` +* valid Common Names (CN) in the signed client certificate via `--requestheader-allowed-names` + +The Kubernetes apiserver will use the files indicated by `--proxy-client-*-file` to authenticate to the extension apiserver. In order for the request to be considered valid by a compliant extension apiserver, the following conditions must be met: + +1. The connection must be made using a client certificate that is signed by the CA whose certificate is in `--requestheader-client-ca-file`. +2. The connection must be made using a client certificate whose CN is one of those listed in `--requestheader-allowed-names`. **Note:** You can set this option to blank as `--requestheader-allowed-names=""`. This will indicate to an extension apiserver that _any_ CN is acceptable. + +When started with these options, the Kubernetes apiserver will: + +1. Use them to authenticate to the extension apiserver. +2. Create a configmap in the `kube-system` namespace called `extension-apiserver-authentication`, in which it will place the CA certificate and the allowed CNs. These in turn can be retrieved by extension apiservers to validate requests. + +Note that the same client certificate is used by the Kubernetes apiserver to authenticate against _all_ extension apiservers. 
It does not create a client certificate per extension apiserver, but rather a single one to authenticate as the Kubernetes apiserver. This same one is reused for all extension apiserver requests. + +#### Original Request Username and Group + +When the Kubernetes apiserver proxies the request to the extension apiserver, it informs the extension apiserver of the username and group with which the original request successfully authenticated. It provides these in http headers of its proxied request. You must inform the Kubernetes apiserver of the names of the headers to be used. + +* the header in which to store the username via `--requestheader-username-headers` +* the header in which to store the group via `--requestheader-group-headers` +* the prefix to append to all extra headers via `--requestheader-extra-headers-prefix` + +These header names are also placed in the `extension-apiserver-authentication` configmap, so they can be retrieved and used by extension apiservers. + +### Extension Apiserver Authenticates the Request + +The extension apiserver, upon receiving a proxied request from the Kubernetes apiserver, must validate that the request actually did come from a valid authenticating proxy, which role the Kubernetes apiserver is fulfilling. The extension apiserver validates it via: + +1. Retrieve the following from the configmap in `kube-system`, as described above: + * Client CA certificate + * List of allowed names (CNs) + * Header names for username, group and extra info +2. Check that the TLS connection was authenticated using a client certificate which: + * Was signed by the CA whose certificate matches the retrieved CA certificate. + * Has a CN in the list of allowed CNs, unless the list is blank, in which case all CNs are allowed. + * Extract the username and group from the appropriate headers + +If the above passes, then the request is a valid proxied request from a legitimate authenticating proxy, in this case the Kubernetes apiserver. + +Note that it is the responsibility of the extension apiserver implementation to provide the above. Many do it by default, leveraging the `k8s.io/apiserver/` package. Others may provide options to override it using command-line options. + +In order to have permission to retrieve the configmap, an extension apiserver requires the appropriate role. There is a default role named `extension-apiserver-authentication-reader` in the `kube-system` namespace which can be assigned. + +### Extension Apiserver Authorizes the Request + +The extension apiserver now can validate that the user/group retrieved from the headers are authorized to execute the given request. It does so by sending a standard [SubjectAccessReview](/docs/reference/access-authn-authz/authorization/) request to the Kubernetes apiserver. + +In order for the extension apiserver to be authorized itself to submit the `SubjectAccessReview` request to the Kubernetes apiserver, it needs the correct permissions. Kubernetes includes a default `ClusterRole` named `system:auth-delegator` that has the appropriate permissions. It can be granted to the extension apiserver's service account. + +### Extension Apiserver Executes + +If the `SubjectAccessReview` passes, the extension apiserver executes the request. + + {{% /capture %}} {{% capture steps %}} -## Enable apiserver flags +## Enable Kubernetes Apiserver flags -Enable the aggregation layer via the following kube-apiserver flags. They may have already been taken care of by your provider. 
+Enable the aggregation layer via the following `kube-apiserver` flags. They may have already been taken care of by your provider. --requestheader-client-ca-file= --requestheader-allowed-names=front-proxy-client @@ -42,7 +202,7 @@ Enable the aggregation layer via the following kube-apiserver flags. They may ha Do **not** reuse a CA that is used in a different context unless you understand the risks and the mechanisms to protect the CA's usage. {{< /warning >}} -If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following apiserver flag: +If you are not running kube-proxy on a host running the API server, then you must make sure that the system is enabled with the following `kube-apiserver` flag: --enable-aggregator-routing=true @@ -56,5 +216,3 @@ If you are not running kube-proxy on a host running the API server, then you mus {{% /capture %}} - - diff --git a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md index 034200a1e46e9..275955c8f5238 100644 --- a/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md +++ b/content/en/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning.md @@ -110,7 +110,7 @@ same way that the Kubernetes project sorts Kubernetes versions. Versions start w `v` followed by a number, an optional `beta` or `alpha` designation, and optional additional numeric versioning information. Broadly, a version string might look like `v2` or `v2beta1`. Versions are sorted using the following algorithm: - + - Entries that follow Kubernetes version patterns are sorted before those that do not. - For entries that follow Kubernetes version patterns, the numeric portions of @@ -185,7 +185,7 @@ how to [authenticate API servers](/docs/reference/access-authn-authz/extensible- ### Deploy the conversion webhook service Documentation for deploying the conversion webhook is the same as for the [admission webhook example service](/docs/reference/access-authn-authz/extensible-admission-controllers/#deploy_the_admission_webhook_service). -The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace. +The assumption for next sections is that the conversion webhook server is deployed to a service named `example-conversion-webhook-server` in `default` namespace and serving traffic on path `/crdconvert`. {{< note >}} When the webhook server is deployed into the Kubernetes cluster as a @@ -242,6 +242,8 @@ spec: service: namespace: default name: example-conversion-webhook-server + # path is the url the API server will call. It should match what the webhook is serving at. The default is '/'. + path: /crdconvert caBundle: # either Namespaced or Cluster scope: Namespaced diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index eaded418130ea..362a455ab1c10 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -73,14 +73,14 @@ If you have a DNS Deployment, your scale target is: Deployment/ -where is the name of your DNS Deployment. 
For example, if +where `` is the name of your DNS Deployment. For example, if your DNS Deployment name is coredns, your scale target is Deployment/coredns. If you have a DNS ReplicationController, your scale target is: ReplicationController/ -where is the name of your DNS ReplicationController. For example, +where `` is the name of your DNS ReplicationController. For example, if your DNS ReplicationController name is kube-dns-v20, your scale target is ReplicationController/kube-dns-v20. @@ -238,6 +238,3 @@ is under consideration as a future development. Learn more about the [implementation of cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler). {{% /capture %}} - - - diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 436b6ccfbd302..c3a4444f35da5 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -23,7 +23,9 @@ You should be familiar with [PKI certificates and requirements in Kubernetes](/d ## Renew certificates with the certificates API -Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands. +The Kubernetes certificates normally reach their expiration date after one year. + +Kubeadm can renew certificates with the `kubeadm alpha certs renew` commands; you should run these commands on control-plane nodes only. Typically this is done by loading on-disk CA certificates and keys and using them to issue new certificates. This approach works well if your certificate tree is self-contained. However, if your certificates are externally @@ -89,16 +91,16 @@ To better integrate with external CAs, kubeadm can also produce certificate sign A CSR represents a request to a CA for a signed certificate for a client. In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR. -You can create an individual CSR with `kubeadm init phase certs apiserver --use-csr`. -The `--use-csr` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`. +You can create an individual CSR with `kubeadm init phase certs apiserver --csr-only`. +The `--csr-only` flag can be applied only to individual phases. After [all certificates are in place][certs], you can run `kubeadm init --external-ca`. You can pass in a directory with `--csr-dir` to output the CSRs to the specified location. -If `--csr-dire` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used. +If `--csr-dir` is not specified, the default certificate directory (`/etc/kubernetes/pki`) is used. Both the CSR and the accompanying private key are given in the output. After a certificate is signed, the certificate and the private key must be copied to the PKI directory (by default `/etc/kubernetes/pki`). ### Renew certificates -Certificates can be renewed with `kubeadm alpha certs renew --use-csr`. +Certificates can be renewed with `kubeadm alpha certs renew --csr-only`. As with `kubeadm init`, an output directory can be specified with the `--csr-dir` flag. 
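For example, a sketch of the CSR workflow described above (the output directory is illustrative, and depending on your kubeadm version the renew command may also expect the name of a specific certificate):

```shell
# Generate a CSR and private key for the API server certificate instead of
# signing it with an on-disk CA.
kubeadm init phase certs apiserver --csr-only --csr-dir=/custom/csr/dir

# The same pattern applies when renewing certificates.
kubeadm alpha certs renew --csr-only --csr-dir=/custom/csr/dir
```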
To use the new certificates, copy the signed certificate and private key into the PKI directory (by default `/etc/kubernetes/pki`) diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md index 314b5cde7065b..fa8831a92ca05 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-12.md @@ -39,12 +39,14 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr {{< tabs name="k8s_install" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} + # replace "x" with the latest patch version apt-mark unhold kubeadm && \ - apt-get update && apt-get upgrade -y kubeadm && \ + apt-get update && apt-get upgrade -y kubeadm=1.12.x-00 && \ apt-mark hold kubeadm {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - yum upgrade -y kubeadm --disableexcludes=kubernetes + # replace "x" with the latest patch version + yum upgrade -y kubeadm-1.12.x --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} @@ -230,11 +232,13 @@ This page explains how to upgrade a Kubernetes cluster created with `kubeadm` fr {{< tabs name="k8s_upgrade" >}} {{% tab name="Ubuntu, Debian or HypriotOS" %}} + # replace "x" with the latest patch version apt-get update - apt-get upgrade -y kubelet kubeadm + apt-get upgrade -y kubelet=1.12.x-00 kubeadm=1.12.x-00 {{% /tab %}} {{% tab name="CentOS, RHEL or Fedora" %}} - yum upgrade -y kubelet kubeadm --disableexcludes=kubernetes + # replace "x" with the latest patch version + yum upgrade -y kubelet-1.12.x kubeadm-1.12.x --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}} diff --git a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md index 4cd6eba7461c2..5ae36d2a6d88c 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md +++ b/content/en/docs/tasks/administer-cluster/namespaces-walkthrough.md @@ -7,7 +7,8 @@ content_template: templates/task --- {{% capture overview %}} -Kubernetes _namespaces_ help different projects, teams, or customers to share a Kubernetes cluster. +Kubernetes {{< glossary_tooltip text="namespaces" term_id="namespace" >}} +help different projects, teams, or customers to share a Kubernetes cluster. It does this by providing the following: @@ -62,25 +63,25 @@ are relaxed to enable agile development. The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site. -One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. +One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`. Let's create two new namespaces to hold our work. -Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace: +Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace: {{< codenew language="json" file="admin/namespace-dev.json" >}} -Create the development namespace using kubectl. +Create the `development` namespace using kubectl. 
```shell $ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json ``` -Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a production namespace: +Save the following contents into file [`namespace-prod.json`](/examples/admin/namespace-prod.json) which describes a `production` namespace: {{< codenew language="json" file="admin/namespace-prod.json" >}} -And then let's create the production namespace using kubectl. +And then let's create the `production` namespace using kubectl. ```shell $ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json @@ -102,7 +103,7 @@ A Kubernetes namespace provides the scope for Pods, Services, and Deployments in Users interacting with one namespace do not see the content in another namespace. -To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace. +To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace. We first check what is the current context: @@ -192,7 +193,7 @@ users: username: admin ``` -Let's switch to operate in the development namespace. +Let's switch to operate in the `development` namespace. ```shell $ kubectl config use-context dev @@ -205,14 +206,14 @@ $ kubectl config current-context dev ``` -At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. +At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace. Let's create some contents. ```shell $ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` -We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. +We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. @@ -227,15 +228,15 @@ snowflake-3968820950-9dgr8 1/1 Running 0 2m snowflake-3968820950-vgc4n 1/1 Running 0 2m ``` -And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. +And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace. -Let's switch to the production namespace and show how resources in one namespace are hidden from the other. +Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. ```shell $ kubectl config use-context prod ``` -The production namespace should be empty, and the following commands should return nothing. +The `production` namespace should be empty, and the following commands should return nothing. 
```shell $ kubectl get deployment diff --git a/content/en/docs/tasks/administer-cluster/namespaces.md b/content/en/docs/tasks/administer-cluster/namespaces.md index ce1ebbce3df85..6311ba8b64194 100644 --- a/content/en/docs/tasks/administer-cluster/namespaces.md +++ b/content/en/docs/tasks/administer-cluster/namespaces.md @@ -7,7 +7,7 @@ content_template: templates/task --- {{% capture overview %}} -This page shows how to view, work in, and delete namespaces. The page also shows how to use Kubernetes namespaces to subdivide your cluster. +This page shows how to view, work in, and delete {{< glossary_tooltip text="namespaces" term_id="namespace" >}}. The page also shows how to use Kubernetes namespaces to subdivide your cluster. {{% /capture %}} {{% capture prerequisites %}} @@ -140,21 +140,21 @@ are relaxed to enable agile development. The operations team would like to maintain a space in the cluster where they can enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site. -One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: development and production. +One pattern this organization could follow is to partition the Kubernetes cluster into two namespaces: `development` and `production`. Let's create two new namespaces to hold our work. -Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a development namespace: +Use the file [`namespace-dev.json`](/examples/admin/namespace-dev.json) which describes a `development` namespace: {{< codenew language="json" file="admin/namespace-dev.json" >}} -Create the development namespace using kubectl. +Create the `development` namespace using kubectl. ```shell $ kubectl create -f https://k8s.io/examples/admin/namespace-dev.json ``` -And then let's create the production namespace using kubectl. +And then let's create the `production` namespace using kubectl. ```shell $ kubectl create -f https://k8s.io/examples/admin/namespace-prod.json @@ -176,7 +176,7 @@ A Kubernetes namespace provides the scope for Pods, Services, and Deployments in Users interacting with one namespace do not see the content in another namespace. -To demonstrate this, let's spin up a simple Deployment and Pods in the development namespace. +To demonstrate this, let's spin up a simple Deployment and Pods in the `development` namespace. We first check what is the current context: @@ -221,7 +221,7 @@ $ kubectl config set-context prod --namespace=production --cluster=lithe-cocoa-9 The above commands provided two request contexts you can alternate against depending on what namespace you wish to work against. -Let's switch to operate in the development namespace. +Let's switch to operate in the `development` namespace. ```shell $ kubectl config use-context dev @@ -234,14 +234,14 @@ $ kubectl config current-context dev ``` -At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the development namespace. +At this point, all requests we make to the Kubernetes cluster from the command line are scoped to the `development` namespace. Let's create some contents. ```shell $ kubectl run snowflake --image=kubernetes/serve_hostname --replicas=2 ``` -We have just created a deployment whose replica size is 2 that is running the pod called snowflake with a basic container that just serves the hostname. 
+We have just created a deployment whose replica size is 2 that is running the pod called `snowflake` with a basic container that just serves the hostname. Note that `kubectl run` creates deployments only on Kubernetes cluster >= v1.2. If you are running older versions, it creates replication controllers instead. If you want to obtain the old behavior, use `--generator=run/v1` to create replication controllers. See [`kubectl run`](/docs/reference/generated/kubectl/kubectl-commands/#run) for more details. @@ -256,15 +256,15 @@ snowflake-3968820950-9dgr8 1/1 Running 0 2m snowflake-3968820950-vgc4n 1/1 Running 0 2m ``` -And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the production namespace. +And this is great, developers are able to do what they want, and they do not have to worry about affecting content in the `production` namespace. -Let's switch to the production namespace and show how resources in one namespace are hidden from the other. +Let's switch to the `production` namespace and show how resources in one namespace are hidden from the other. ```shell $ kubectl config use-context prod ``` -The production namespace should be empty, and the following commands should return nothing. +The `production` namespace should be empty, and the following commands should return nothing. ```shell $ kubectl get deployment diff --git a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md index ed3fd0f7157c8..4c9f1b07df801 100644 --- a/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md +++ b/content/en/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy.md @@ -26,20 +26,22 @@ To get familiar with Cilium easily you can follow the [Cilium Kubernetes Getting Started Guide](https://cilium.readthedocs.io/en/stable/gettingstarted/minikube/) to perform a basic DaemonSet installation of Cilium in minikube. -As Cilium requires a standalone etcd instance, for minikube you can deploy it -by running: +To start minikube, minimal version required is >= v0.33.1, run the with the +following arguments: ```shell -kubectl create -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/v1.3/examples/kubernetes/addons/etcd/standalone-etcd.yaml +$ minikube version +minikube version: v0.33.1 +$ +$ minikube start --network-plugin=cni --memory=4096 ``` -After etcd is up and running you can deploy Cilium Kubernetes descriptor which -is a simple ''all-in-one'' YAML file that includes DaemonSet configurations for -Cilium, to connect to the etcd instance previously deployed as well as -appropriate RBAC settings: +For minikube you can deploy this simple ''all-in-one'' YAML file that includes +DaemonSet configurations for Cilium, and the necessary configurations to connect +to the etcd instance deployed in minikube as well as appropriate RBAC settings: ```shell -$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.3/examples/kubernetes/1.12/cilium.yaml +$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.13/cilium-minikube.yaml configmap/cilium-config created daemonset.apps/cilium created clusterrolebinding.rbac.authorization.k8s.io/cilium created @@ -54,7 +56,7 @@ policies using an example application. 
## Deploying Cilium for Production Use For detailed instructions around deploying Cilium for production, see: -[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/kubernetes/install/) +[Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/stable/kubernetes/intro/) This documentation includes detailed requirements, instructions and example production DaemonSet files. @@ -83,7 +85,7 @@ There are two main components to be aware of: - One `cilium` Pod runs on each node in your cluster and enforces network policy on the traffic to/from Pods on that node using Linux BPF. - For production deployments, Cilium should leverage a key-value store -(e.g., etcd). The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/latest/kubernetes/install/) +(e.g., etcd). The [Cilium Kubernetes Installation Guide](https://cilium.readthedocs.io/en/stable/kubernetes/intro/) will provide the necessary steps on how to install this required key-value store as well how to configure it in Cilium. diff --git a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md index 2110c98385ede..83c24f639efee 100644 --- a/content/en/docs/tasks/administer-cluster/running-cloud-controller.md +++ b/content/en/docs/tasks/administer-cluster/running-cloud-controller.md @@ -36,11 +36,6 @@ Successfully running cloud-controller-manager requires some changes to your clus * `kube-apiserver` and `kube-controller-manager` MUST NOT specify the `--cloud-provider` flag. This ensures that it does not run any cloud specific loops that would be run by cloud controller manager. In the future, this flag will be deprecated and removed. * `kubelet` must run with `--cloud-provider=external`. This is to ensure that the kubelet is aware that it must be initialized by the cloud controller manager before it is scheduled any work. -* `kube-apiserver` SHOULD NOT run the `PersistentVolumeLabel` admission controller - since the cloud controller manager takes over labeling persistent volumes. -* For the `cloud-controller-manager` to label persistent volumes, initializers will need to be enabled and an InitializerConfiguration needs to be added to the system. Follow [these instructions](/docs/reference/access-authn-authz/extensible-admission-controllers/#enable-initializers-alpha-feature) to enable initializers. Use the following YAML to create the InitializerConfiguration: - -{{< codenew file="admin/cloud/pvl-initializer-config.yaml" >}} Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways: @@ -53,7 +48,6 @@ As of v1.8, cloud controller manager can implement: * node controller - responsible for updating kubernetes nodes using cloud APIs and deleting kubernetes nodes that were deleted on your cloud. * service controller - responsible for loadbalancers on your cloud against services of type LoadBalancer. * route controller - responsible for setting up network routes on your cloud -* persistent volume labels controller - responsible for setting the zone and region labels on PersistentVolumes created in GCP and AWS clouds. * any other features you would like to implement if you are running an out-of-tree provider. 
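The flag requirements listed above can be spot-checked on a running cluster. This is only a sketch; the static pod manifest paths assume a kubeadm-style control plane and may differ in your environment:

```shell
# The kubelet on each node should carry the external cloud-provider flag.
ps -ef | grep kubelet | grep -- '--cloud-provider=external'

# kube-apiserver and kube-controller-manager should not set --cloud-provider at all.
grep -- '--cloud-provider' \
  /etc/kubernetes/manifests/kube-apiserver.yaml \
  /etc/kubernetes/manifests/kube-controller-manager.yaml \
  || echo "no --cloud-provider flag set on the control plane (expected)"
```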
diff --git a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md index fd8e610a212f9..c7cc8c46bb162 100644 --- a/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md +++ b/content/en/docs/tasks/configure-pod-container/assign-memory-resource.md @@ -223,7 +223,7 @@ kubectl describe nodes The output includes a record of the Container being killed because of an out-of-memory condition: ``` -Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child +Warning OOMKilling Memory cgroup out of memory: Kill process 4481 (stress) score 1994 or sacrifice child ``` Delete your Pod: diff --git a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md index a34de6eb45bcb..9203476fe40d7 100644 --- a/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md +++ b/content/en/docs/tasks/configure-pod-container/configure-pod-configmap.md @@ -2,6 +2,9 @@ title: Configure a Pod to Use a ConfigMap content_template: templates/task weight: 150 +card: + name: tasks + weight: 50 --- {{% capture overview %}} diff --git a/content/en/docs/tasks/debug-application-cluster/audit.md b/content/en/docs/tasks/debug-application-cluster/audit.md index ec099f595f415..6d7735cb499ee 100644 --- a/content/en/docs/tasks/debug-application-cluster/audit.md +++ b/content/en/docs/tasks/debug-application-cluster/audit.md @@ -207,13 +207,13 @@ By default truncate is disabled in both `webhook` and `log`, a cluster administr {{< feature-state for_k8s_version="v1.13" state="alpha" >}} -In Kubernetes version 1.13, you can configure dynamic audit webhook backends AuditSink API objects. +In Kubernetes version 1.13, you can configure dynamic audit webhook backends AuditSink API objects. To enable dynamic auditing you must set the following apiserver flags: -- `--audit-dynamic-configuration`: the primary switch. When the feature is at GA, the only required flag. -- `--feature-gates=DynamicAuditing=true`: feature gate at alpha and beta. -- `--runtime-config=auditregistration.k8s.io/v1alpha1=true`: enable API. +- `--audit-dynamic-configuration`: the primary switch. When the feature is at GA, the only required flag. +- `--feature-gates=DynamicAuditing=true`: feature gate at alpha and beta. +- `--runtime-config=auditregistration.k8s.io/v1alpha1=true`: enable API. When enabled, an AuditSink object can be provisioned: @@ -301,7 +301,11 @@ Fluent-plugin-forest and fluent-plugin-rewrite-tag-filter are plugins for fluent # route audit according to namespace element in context @type rewrite_tag_filter - rewriterule1 namespace ^(.+) ${tag}.$1 + + key namespace + pattern /^(.+)/ + tag ${tag}.$1 + @@ -420,8 +424,8 @@ plugin which supports full-text search and analytics. 
[gce-audit-profile]: https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh#L735 [kubeconfig]: /docs/tasks/access-application-cluster/configure-access-multiple-clusters/ [fluentd]: http://www.fluentd.org/ -[fluentd_install_doc]: http://docs.fluentd.org/v0.12/articles/quickstart#step1-installing-fluentd -[fluentd_plugin_management_doc]: https://docs.fluentd.org/v0.12/articles/plugin-management +[fluentd_install_doc]: https://docs.fluentd.org/v1.0/articles/quickstart#step-1:-installing-fluentd +[fluentd_plugin_management_doc]: https://docs.fluentd.org/v1.0/articles/plugin-management [logstash]: https://www.elastic.co/products/logstash [logstash_install_doc]: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html [kube-aggregator]: /docs/concepts/api-extension/apiserver-aggregation diff --git a/content/en/docs/tasks/debug-application-cluster/debug-service.md b/content/en/docs/tasks/debug-application-cluster/debug-service.md index 29a3cb047bb05..872d956e4e5c6 100644 --- a/content/en/docs/tasks/debug-application-cluster/debug-service.md +++ b/content/en/docs/tasks/debug-application-cluster/debug-service.md @@ -489,7 +489,7 @@ u@node$ iptables-save | grep hostnames There should be 2 rules for each port on your `Service` (just one in this example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do -not see these, try restarting `kube-proxy` with the `-V` flag set to 4, and +not see these, try restarting `kube-proxy` with the `-v` flag set to 4, and then look at the logs again. Almost nobody should be using the "userspace" mode any more, so we won't spend @@ -559,7 +559,7 @@ If this still fails, look at the `kube-proxy` logs for specific lines like: Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376] ``` -If you don't see those, try restarting `kube-proxy` with the `-V` flag set to 4, and +If you don't see those, try restarting `kube-proxy` with the `-v` flag set to 4, and then look at the logs again. ### A Pod cannot reach itself via Service IP diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md index 91ae88bb8e17e..c751d00261cae 100644 --- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md +++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md @@ -142,9 +142,9 @@ Report issues with this device plugin and installation method to [GoogleCloudPla Instructions for using NVIDIA GPUs on GKE are [here](https://cloud.google.com/kubernetes-engine/docs/how-to/gpus) -## Clusters containing different types of NVIDIA GPUs +## Clusters containing different types of GPUs -If different nodes in your cluster have different types of NVIDIA GPUs, then you +If different nodes in your cluster have different types of GPUs, then you can use [Node Labels and Node Selectors](/docs/tasks/configure-pod-container/assign-pods-nodes/) to schedule pods to appropriate nodes. @@ -156,6 +156,39 @@ kubectl label nodes accelerator=nvidia-tesla-k80 kubectl label nodes accelerator=nvidia-tesla-p100 ``` +For AMD GPUs, you can deploy [Node Labeller](https://github.com/RadeonOpenCompute/k8s-device-plugin/tree/master/cmd/k8s-node-labeller), which automatically labels your nodes with GPU properties. 
Currently supported properties: + +* Device ID (-device-id) +* VRAM Size (-vram) +* Number of SIMD (-simd-count) +* Number of Compute Unit (-cu-count) +* Firmware and Feature Versions (-firmware) +* GPU Family, in two letters acronym (-family) + * SI - Southern Islands + * CI - Sea Islands + * KV - Kaveri + * VI - Volcanic Islands + * CZ - Carrizo + * AI - Arctic Islands + * RV - Raven + +Example result: + + $ kubectl describe node cluster-node-23 + Name: cluster-node-23 + Roles: + Labels: beta.amd.com/gpu.cu-count.64=1 + beta.amd.com/gpu.device-id.6860=1 + beta.amd.com/gpu.family.AI=1 + beta.amd.com/gpu.simd-count.256=1 + beta.amd.com/gpu.vram.16G=1 + beta.kubernetes.io/arch=amd64 + beta.kubernetes.io/os=linux + kubernetes.io/hostname=cluster-node-23 + Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock + node.alpha.kubernetes.io/ttl: 0 + ...... + Specify the GPU type in the pod spec: ```yaml diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md index 8cdf150782167..010511f8069f4 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md @@ -288,7 +288,7 @@ spec: resource: name: cpu target: - kind: AverageUtilization + type: AverageUtilization averageUtilization: 50 - type: Pods pods: diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md index 2e3df51fd9eab..38d615293d7a9 100644 --- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md +++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md @@ -249,7 +249,7 @@ Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal P You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API. Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics. -See [Support for metrics APIs](#support-for-metrics-APIs) for the requirements. +See [Support for metrics APIs](#support-for-metrics-apis) for the requirements. ## Support for metrics APIs diff --git a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md index 728d7a8950854..a265c91974538 100644 --- a/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md +++ b/content/en/docs/tasks/service-catalog/install-service-catalog-using-helm.md @@ -53,12 +53,20 @@ svc-cat/catalog 0.0.1 service-catalog API server and controller-manag... Your Kubernetes cluster must have RBAC enabled, which requires your Tiller Pod(s) to have `cluster-admin` access. -If you are using Minikube, run the `minikube start` command with the following flag: +When using Minikube v0.25 or older, you must run Minikube with RBAC explicitly enabled: ```shell minikube start --extra-config=apiserver.Authorization.Mode=RBAC ``` +When using Minikube v0.26+, run: + +```shell +minikube start +``` + +With Minikube v0.26+, do not specify `--extra-config`. The flag has since been changed to --extra-config=apiserver.authorization-mode and Minikube now uses RBAC by default. Specifying the older flag may cause the start command to hang. 
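One way to satisfy the `cluster-admin` requirement for Tiller mentioned above is to bind that role to the service account Tiller runs under. The `tiller` service account name below is a common Helm v2 convention, not something this page prescribes:

```shell
# Create a service account for Tiller, grant it cluster-admin,
# and (re)initialize Helm to use it.
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
```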
+ If you are using `hack/local-up-cluster.sh`, set the `AUTHORIZATION_MODE` environment variable with the following values: ``` diff --git a/content/en/docs/tasks/tools/install-kubectl.md b/content/en/docs/tasks/tools/install-kubectl.md index 7a0af0190c101..28b46d9178013 100644 --- a/content/en/docs/tasks/tools/install-kubectl.md +++ b/content/en/docs/tasks/tools/install-kubectl.md @@ -5,6 +5,10 @@ reviewers: title: Install and Set Up kubectl content_template: templates/task weight: 10 +card: + name: tasks + weight: 20 + title: Install kubectl --- {{% capture overview %}} @@ -122,32 +126,39 @@ If you are on Windows and using [Powershell Gallery](https://www.powershellgalle Updating the installation is performed by rerunning the two commands listed in step 1. {{< /note >}} -## Install with Chocolatey on Windows +## Install on Windows using Chocolatey or scoop -If you are on Windows and using [Chocolatey](https://chocolatey.org) package manager, you can install kubectl with Chocolatey. +To install kubectl on Windows you can use either [Chocolatey](https://chocolatey.org) package manager or [scoop](https://scoop.sh) command-line installer. +{{< tabs name="kubectl_win_install" >}} +{{% tab name="choco" %}} -1. Run the installation command: - - ``` choco install kubernetes-cli - ``` - + +{{% /tab %}} +{{% tab name="scoop" %}} + + scoop install kubectl + +{{% /tab %}} +{{< /tabs >}} 2. Test to ensure the version you installed is sufficiently up-to-date: ``` kubectl version ``` -3. Change to your %HOME% directory: - For example: `cd C:\users\yourusername` +3. Navigate to your home directory: -4. Create the .kube directory: + ``` + cd %USERPROFILE% + ``` +4. Create the `.kube` directory: ``` mkdir .kube ``` -5. Change to the .kube directory you just created: +5. Change to the `.kube` directory you just created: ``` cd .kube diff --git a/content/en/docs/tasks/tools/install-minikube.md b/content/en/docs/tasks/tools/install-minikube.md index 32edccfd2c00a..3bb8609e838a0 100644 --- a/content/en/docs/tasks/tools/install-minikube.md +++ b/content/en/docs/tasks/tools/install-minikube.md @@ -2,6 +2,9 @@ title: Install Minikube content_template: templates/task weight: 20 +card: + name: tasks + weight: 10 --- {{% capture overview %}} @@ -59,7 +62,7 @@ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/miniku Here's an easy way to add the Minikube executable to your path: ```shell -sudo cp minikube /usr/local/bin && rm minikube +sudo mv minikube /usr/local/bin ``` ### Linux diff --git a/content/en/docs/tutorials/clusters/apparmor.md b/content/en/docs/tutorials/clusters/apparmor.md index 34d85179fbbf1..bfb453b226d99 100644 --- a/content/en/docs/tutorials/clusters/apparmor.md +++ b/content/en/docs/tutorials/clusters/apparmor.md @@ -390,7 +390,7 @@ Pod is running. To debug problems with AppArmor, you can check the system logs to see what, specifically, was denied. AppArmor logs verbose messages to `dmesg`, and errors can usually be found in the system logs or through `journalctl`. More information is provided in -[AppArmor failures](http://wiki.apparmor.net/index.php/AppArmor_Failures). +[AppArmor failures](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Failures). ## API Reference @@ -414,7 +414,7 @@ Specifying the profile a container will run with: containers, and unconfined (no profile) for privileged containers. - `localhost/`: Refers to a profile loaded on the node (localhost) by name. 
- The possible profile names are detailed in the - [core policy reference](http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference#Profile_names_and_attachment_specifications). + [core policy reference](https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Core_Policy_Reference#profile-names-and-attachment-specifications). - `unconfined`: This effectively disables AppArmor on the container. Any other profile reference format is invalid. diff --git a/content/en/docs/tutorials/hello-minikube.md b/content/en/docs/tutorials/hello-minikube.md index 8a5de37cbba8e..5099beadf1725 100644 --- a/content/en/docs/tutorials/hello-minikube.md +++ b/content/en/docs/tutorials/hello-minikube.md @@ -8,6 +8,9 @@ menu: weight: 10 post: >

Ready to get your hands dirty? Build a simple Kubernetes cluster that runs "Hello World" for Node.js.

+card: + name: tutorials + weight: 10 --- {{% capture overview %}} @@ -161,7 +164,6 @@ Kubernetes [*Service*](/docs/concepts/services-networking/service/). 4. Katacoda environment only: Click the plus sign, and then click **Select port to view on Host 1**. 5. Katacoda environment only: Type `30369` (see port opposite to `8080` in services output), and then click -**Display Port**. This opens up a browser window that serves your app and shows the "Hello World" message. diff --git a/content/en/docs/tutorials/kubernetes-basics/_index.html b/content/en/docs/tutorials/kubernetes-basics/_index.html index 6830dca167a58..342cf2cdd7c16 100644 --- a/content/en/docs/tutorials/kubernetes-basics/_index.html +++ b/content/en/docs/tutorials/kubernetes-basics/_index.html @@ -2,6 +2,10 @@ title: Learn Kubernetes Basics linkTitle: Learn Kubernetes Basics weight: 10 +card: + name: tutorials + weight: 20 + title: Walkthrough the basics --- diff --git a/content/en/docs/tutorials/online-training/overview.md b/content/en/docs/tutorials/online-training/overview.md index e52c100556774..7521e2a28a252 100644 --- a/content/en/docs/tutorials/online-training/overview.md +++ b/content/en/docs/tutorials/online-training/overview.md @@ -11,26 +11,36 @@ Here are some of the sites that offer online training for Kubernetes: {{% capture body %}} -* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) +* [Certified Kubernetes Administrator Preparation Course (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/certified-kubernetes-administrator-preparation-course) -* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x) +* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud.com)](https://kodekloud.com/p/kubernetes-certification-course) + +* [Getting Started with Google Kubernetes Engine (Coursera)](https://www.coursera.org/learn/google-kubernetes-engine) * [Getting Started with Kubernetes (Pluralsight)](https://www.pluralsight.com/courses/getting-started-kubernetes) +* [Google Kubernetes Engine Deep Dive (Linux Academy)] (https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive) + * [Hands-on Introduction to Kubernetes (Instruqt)](https://play.instruqt.com/public/topics/getting-started-with-kubernetes) -* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/) +* [IBM Cloud: Deploying Microservices with Kubernetes (Coursera)](https://www.coursera.org/learn/deploy-micro-kube-ibm-cloud) -* [Certified Kubernetes Administrator Preparation Course (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/certified-kubernetes-administrator-preparation-course) +* [Introduction to Kubernetes (edX)](https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x) -* [Kubernetes the Hard Way (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way) +* [Kubernetes Essentials (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-essentials) * [Kubernetes for the Absolute Beginners with Hands-on Labs (KodeKloud.com)](https://kodekloud.com/p/kubernetes-for-the-absolute-beginners-hands-on) -* [Certified Kubernetes Application Developer Preparation Course with Practice Tests (KodeKloud.com)](https://kodekloud.com/p/kubernetes-certification-course) +* [Kubernetes 
Quick Start (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-quick-start) -{{% /capture %}} +* [Kubernetes the Hard Way (LinuxAcademy.com)](https://linuxacademy.com/linux/training/course/name/kubernetes-the-hard-way) +* [Learn Kubernetes using Interactive Hands-on Scenarios (Katacoda)](https://www.katacoda.com/courses/kubernetes/) + +* [Monitoring Kubernetes With Prometheus (Linux Academy)] (https://linuxacademy.com/linux/training/course/name/kubernetes-and-prometheus) +* [Scalable Microservices with Kubernetes (Udacity)](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) +* [Self-paced Kubernetes online course (Learnk8s Academy)](https://learnk8s.io/academy) +{{% /capture %}} diff --git a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md index 63b6b032db012..f518c4bb0ddc6 100644 --- a/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md +++ b/content/en/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume.md @@ -4,6 +4,10 @@ reviewers: - ahmetb content_template: templates/tutorial weight: 20 +card: + name: tutorials + weight: 40 + title: "Stateful Example: Wordpress with Persistent Volumes" --- {{% capture overview %}} @@ -104,12 +108,15 @@ The following manifest describes a single-instance MySQL Deployment. The MySQL c kubectl create -f https://k8s.io/examples/application/wordpress/mysql-deployment.yaml ``` -2. Verify that a PersistentVolume got dynamically provisioned. Note that it can - It can take up to a few minutes for the PVs to be provisioned and bound. - +2. Verify that a PersistentVolume got dynamically provisioned. + ```shell kubectl get pvc ``` + + {{< note >}} + It can take up to a few minutes for the PVs to be provisioned and bound. 
+ {{< /note >}} The response should be like this: diff --git a/content/en/docs/tutorials/stateless-application/guestbook.md b/content/en/docs/tutorials/stateless-application/guestbook.md index 2d82a7a045d1c..b8d7045e325ef 100644 --- a/content/en/docs/tutorials/stateless-application/guestbook.md +++ b/content/en/docs/tutorials/stateless-application/guestbook.md @@ -4,6 +4,10 @@ reviewers: - ahmetb content_template: templates/tutorial weight: 20 +card: + name: tutorials + weight: 30 + title: "Stateless Example: PHP Guestbook with Redis" --- {{% capture overview %}} diff --git a/content/en/examples/admin/cloud/pvl-initializer-config.yaml b/content/en/examples/admin/cloud/pvl-initializer-config.yaml deleted file mode 100644 index 4a2576cc2a55e..0000000000000 --- a/content/en/examples/admin/cloud/pvl-initializer-config.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: InitializerConfiguration -apiVersion: admissionregistration.k8s.io/v1alpha1 -metadata: - name: pvlabel.kubernetes.io -initializers: - - name: pvlabel.kubernetes.io - rules: - - apiGroups: - - "" - apiVersions: - - "*" - resources: - - persistentvolumes diff --git a/content/en/examples/examples_test.go b/content/en/examples/examples_test.go index 3d0fefdc25580..08cceb5fd174a 100644 --- a/content/en/examples/examples_test.go +++ b/content/en/examples/examples_test.go @@ -298,8 +298,7 @@ func TestExampleObjectSchemas(t *testing.T) { "namespace-prod": {&api.Namespace{}}, }, "admin/cloud": { - "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}}, - "pvl-initializer-config": {&admissionregistration.InitializerConfiguration{}}, + "ccm-example": {&api.ServiceAccount{}, &rbac.ClusterRoleBinding{}, &extensions.DaemonSet{}}, }, "admin/dns": { "busybox": {&api.Pod{}}, diff --git a/content/en/examples/pods/lifecycle-events.yaml b/content/en/examples/pods/lifecycle-events.yaml index e5fcffcc9e755..4b79d7289c568 100644 --- a/content/en/examples/pods/lifecycle-events.yaml +++ b/content/en/examples/pods/lifecycle-events.yaml @@ -12,5 +12,5 @@ spec: command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"] preStop: exec: - command: ["/usr/sbin/nginx","-s","quit"] + command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] diff --git a/content/en/examples/pods/probe/http-liveness.yaml b/content/en/examples/pods/probe/http-liveness.yaml index 23d37b480a06e..670af18399e20 100644 --- a/content/en/examples/pods/probe/http-liveness.yaml +++ b/content/en/examples/pods/probe/http-liveness.yaml @@ -15,7 +15,7 @@ spec: path: /healthz port: 8080 httpHeaders: - - name: X-Custom-Header + - name: Custom-Header value: Awesome initialDelaySeconds: 3 periodSeconds: 3 diff --git a/content/fr/OWNERS b/content/fr/OWNERS new file mode 100644 index 0000000000000..c91ec02821e6f --- /dev/null +++ b/content/fr/OWNERS @@ -0,0 +1,13 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +# This is the localization project for French. +# Teams and members are visible at https://github.com/orgs/kubernetes/teams. 
+ +reviewers: +- sig-docs-fr-reviews + +approvers: +- sig-docs-fr-owners + +labels: +- language/fr diff --git a/content/fr/_common-resources/index.md b/content/fr/_common-resources/index.md new file mode 100644 index 0000000000000..3d65eaa0ff97e --- /dev/null +++ b/content/fr/_common-resources/index.md @@ -0,0 +1,3 @@ +--- +headless: true +--- \ No newline at end of file diff --git a/content/fr/docs/_index.md b/content/fr/docs/_index.md new file mode 100644 index 0000000000000..05e96e2901631 --- /dev/null +++ b/content/fr/docs/_index.md @@ -0,0 +1,3 @@ +--- +title: Documentation +--- diff --git a/content/fr/docs/concepts/_index.md b/content/fr/docs/concepts/_index.md new file mode 100644 index 0000000000000..cd1ea84780bee --- /dev/null +++ b/content/fr/docs/concepts/_index.md @@ -0,0 +1,91 @@ +--- +title: Concepts +main_menu: true +content_template: templates/concept +weight: 40 +--- + +{{% capture overview %}} + +La section Concepts vous aide à mieux comprendre les composants du système Kubernetes et les abstractions que Kubernetes utilise pour représenter votre cluster. +Elle vous aide également à mieux comprendre le fonctionnement de Kubernetes en général. + +{{% /capture %}} + +{{% capture body %}} + +## Vue d'ensemble + +Pour utiliser Kubernetes, vous utilisez *les objets de l'API Kubernetes* pour décrire *l'état souhaité* de votre cluster: quelles applications ou autres processus que vous souhaitez exécuter, quelles images de conteneur elles utilisent, le nombre de réplicas, les ressources réseau et disque que vous mettez à disposition, et plus encore. +Vous définissez l'état souhaité en créant des objets à l'aide de l'API Kubernetes, généralement via l'interface en ligne de commande, `kubectl`. +Vous pouvez également utiliser l'API Kubernetes directement pour interagir avec le cluster et définir ou modifier l'état souhaité. + +Une fois que vous avez défini l'état souhaité, le *plan de contrôle Kubernetes* (control plane en anglais) permet de faire en sorte que l'état actuel du cluster corresponde à l'état souhaité. +Pour ce faire, Kubernetes effectue automatiquement diverses tâches, telles que le démarrage ou le redémarrage de conteneurs, la mise à jour du nombre de réplicas d'une application donnée, etc. +Le control plane Kubernetes comprend un ensemble de processus en cours d'exécution sur votre cluster: + +* Le **maître Kubernetes** (Kubernetes master en anglais) qui est un ensemble de trois processus qui s'exécutent sur un seul nœud de votre cluster, désigné comme nœud maître (master node en anglais). Ces processus sont: [kube-apiserver](/docs/admin/kube-apiserver/), [kube-controller-manager](/docs/admin/kube-controller-manager/) et [kube-scheduler](/docs/admin/kube-scheduler/). +* Chaque nœud non maître de votre cluster exécute deux processus: + * **[kubelet](/docs/admin/kubelet/)**, qui communique avec le Kubernetes master. + * **[kube-proxy](/docs/admin/kube-proxy/)**, un proxy réseau reflétant les services réseau Kubernetes sur chaque nœud. + +## Objets Kubernetes + +Kubernetes contient un certain nombre d'abstractions représentant l'état de votre système: applications et processus conteneurisés déployés, leurs ressources réseau et disque associées, ainsi que d'autres informations sur les activités de votre cluster. +Ces abstractions sont représentées par des objets de l'API Kubernetes; consultez [Vue d'ensemble des objets Kubernetes](/docs/concepts/abstractions/overview/) pour plus d'informations. 
+ +Les objets de base de Kubernetes incluent: + +* [Pod](/docs/concepts/workloads/pods/pod-overview/) +* [Service](/docs/concepts/services-networking/service/) +* [Volume](/docs/concepts/storage/volumes/) +* [Namespace](/docs/concepts/overview/working-with-objects/namespaces/) + +En outre, Kubernetes contient un certain nombre d'abstractions de niveau supérieur appelées Contrôleurs. +Les contrôleurs s'appuient sur les objets de base et fournissent des fonctionnalités supplémentaires. + +Voici quelques exemples: + +* [ReplicaSet](/docs/concepts/workloads/controllers/replicaset/) +* [Deployment](/docs/concepts/workloads/controllers/deployment/) +* [StatefulSet](/docs/concepts/workloads/controllers/statefulset/) +* [DaemonSet](/docs/concepts/workloads/controllers/daemonset/) +* [Job](/docs/concepts/workloads/controllers/jobs-run-to-completion/) + +## Kubernetes control plane + +Les différentes parties du control plane Kubernetes, telles que les processus Kubernetes master et kubelet, déterminent la manière dont Kubernetes communique avec votre cluster. +Le control plane conserve un enregistrement de tous les objets Kubernetes du système et exécute des boucles de contrôle continues pour gérer l'état de ces objets. +À tout moment, les boucles de contrôle du control plane répondent aux modifications du cluster et permettent de faire en sorte que l'état réel de tous les objets du système corresponde à l'état souhaité que vous avez fourni. + +Par exemple, lorsque vous utilisez l'API Kubernetes pour créer un objet Deployment, vous fournissez un nouvel état souhaité pour le système. +Le control plane Kubernetes enregistre la création de cet objet et exécute vos instructions en lançant les applications requises et en les planifiant vers des nœuds de cluster, afin que l'état actuel du cluster corresponde à l'état souhaité. + +### Kubernetes master + +Le Kubernetes master est responsable du maintien de l'état souhaité pour votre cluster. +Lorsque vous interagissez avec Kubernetes, par exemple en utilisant l'interface en ligne de commande `kubectl`, vous communiquez avec le master Kubernetes de votre cluster. + +> Le "master" fait référence à un ensemble de processus gérant l'état du cluster. +En règle générale, tous les processus sont exécutés sur un seul nœud du cluster. +Ce nœud est également appelé master. +Le master peut également être répliqué pour la disponibilité et la redondance. + +### Noeuds Kubernetes + +Les nœuds d’un cluster sont les machines (serveurs physiques, machines virtuelles, etc.) qui exécutent vos applications et vos workflows. +Le master node Kubernetes contrôle chaque noeud; vous interagirez rarement directement avec les nœuds. + +#### Metadonnées des objets Kubernetes + +* [Annotations](/docs/concepts/overview/working-with-objects/annotations/) + +{{% /capture %}} + +{{% capture whatsnext %}} + +Si vous souhaitez écrire une page de concept, consultez +[Utilisation de modèles de page](/docs/home/contribute/page-templates/) +pour plus d'informations sur le type de page pour la documentation d'un concept. 
+ +{{% /capture %}} diff --git a/content/fr/docs/concepts/containers/_index.md b/content/fr/docs/concepts/containers/_index.md new file mode 100644 index 0000000000000..9a86e2af74afe --- /dev/null +++ b/content/fr/docs/concepts/containers/_index.md @@ -0,0 +1,4 @@ +--- +title: "Les conteneurs" +weight: 40 +--- \ No newline at end of file diff --git a/content/fr/docs/concepts/containers/container-environment-variables.md b/content/fr/docs/concepts/containers/container-environment-variables.md new file mode 100644 index 0000000000000..efe686422bf0e --- /dev/null +++ b/content/fr/docs/concepts/containers/container-environment-variables.md @@ -0,0 +1,69 @@ +--- +reviewers: +- sieben +- perriea +- lledru +- awkif +- yastij +- rbenzair +- oussemos +title: Les variables d’environnement du conteneur +content_template: templates/concept +weight: 20 +--- + +{{% capture overview %}} + +Cette page décrit les ressources disponibles pour les conteneurs dans l'environnement de conteneur. + +{{% /capture %}} + + +{{% capture body %}} + +## L'environnement du conteneur + +L’environnement Kubernetes conteneur fournit plusieurs ressources importantes aux conteneurs: + +* Un système de fichier, qui est une combinaison d'une [image](/docs/concepts/containers/images/) et un ou plusieurs [volumes](/docs/concepts/storage/volumes/). +* Informations sur le conteneur lui-même. +* Informations sur les autres objets du cluster. + +### Informations sur le conteneur + +Le nom d'*hôte* d'un conteneur est le nom du pod dans lequel le conteneur est en cours d'exécution. +Il est disponible via la commande `hostname` ou +[`gethostname`](http://man7.org/linux/man-pages/man2/gethostname.2.html) +dans libc. + +Le nom du pod et le namespace sont disponibles en tant que variables d'environnement via +[l'API downward](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). + +Les variables d'environnement définies par l'utilisateur à partir de la définition de pod sont également disponibles pour le conteneur, +de même que toutes les variables d'environnement spécifiées de manière statique dans l'image Docker. + +### Informations sur le cluster + +Une liste de tous les services en cours d'exécution lors de la création d'un conteneur est disponible pour ce conteneur en tant que variables d'environnement. +Ces variables d'environnement correspondent à la syntaxe des liens Docker. + +Pour un service nommé *foo* qui correspond à un conteneur *bar*, +les variables suivantes sont définies: + +```shell +FOO_SERVICE_HOST= +FOO_SERVICE_PORT= +``` + +Les services ont des adresses IP dédiées et sont disponibles pour le conteneur avec le DNS, +si le [module DNS](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/dns/) est activé.  + +{{% /capture %}} + +{{% capture whatsnext %}} + +* En savoir plus sur [les hooks du cycle de vie d'un conteneur](/docs/concepts/containers/container-lifecycle-hooks/). +* Acquérir une expérience pratique + [en attachant les handlers aux événements du cycle de vie du conteneur](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/). 
+ +{{% /capture %}} diff --git a/content/fr/docs/concepts/overview/_index.md b/content/fr/docs/concepts/overview/_index.md new file mode 100644 index 0000000000000..df9dc83e3d831 --- /dev/null +++ b/content/fr/docs/concepts/overview/_index.md @@ -0,0 +1,4 @@ +--- +title: "Vue d'ensemble" +weight: 20 +--- diff --git a/content/fr/docs/concepts/overview/what-is-kubernetes.md b/content/fr/docs/concepts/overview/what-is-kubernetes.md new file mode 100644 index 0000000000000..a6166f73a9e7e --- /dev/null +++ b/content/fr/docs/concepts/overview/what-is-kubernetes.md @@ -0,0 +1,136 @@ +--- +reviewers: + - jygastaud + - lledru + - sieben +title: Qu'est-ce-que Kubernetes ? +content_template: templates/concept +weight: 10 +card: + name: concepts + weight: 10 +--- + +{{% capture overview %}} +Cette page est une vue d'ensemble de Kubernetes. +{{% /capture %}} + +{{% capture body %}} +Kubernetes est une plate-forme open-source extensible et portable pour la gestion de charges de travail (workloads) et des services conteneurisés. +Elle favorise à la fois l'écriture de configuration déclarative (declarative configuration) et l'automatisation. +C'est un large écosystème en rapide expansion. +Les services, le support et les outils Kubernetes sont largement disponibles. + +Google a rendu open-source le projet Kubernetes en 2014. +Le développement de Kubernetes est basé sur une [décennie et demie d’expérience de Google avec la gestion de la charge et de la mise à l'échelle (scale) en production](https://research.google.com/pubs/pub43438.html), associé aux meilleures idées et pratiques de la communauté. + +## Pourquoi ai-je besoin de Kubernetes et que peut-il faire ? + +Kubernetes a un certain nombre de fonctionnalités. Il peut être considéré comme: + +- une plate-forme de conteneur +- une plate-forme de microservices +- une plate-forme cloud portable +et beaucoup plus. + +Kubernetes fournit un environnement de gestion **focalisé sur le conteneur** (container-centric). +Il orchestre les ressources machines (computing), la mise en réseau et l’infrastructure de stockage sur les workloads des utilisateurs. +Cela permet de se rapprocher de la simplicité des Platform as a Service (PaaS) avec la flexibilité des solutions d'Infrastructure as a Service (IaaS), tout en gardant de la portabilité entre les différents fournisseurs d'infrastructures (providers). + +## Comment Kubernetes est-il une plate-forme ? + +Même si Kubernetes fournit de nombreuses fonctionnalités, il existe toujours de nouveaux scénarios qui bénéficieraient de fonctionnalités complémentaires. +Ces workflows spécifiques à une application permettent d'accélérer la vitesse de développement. +Si l'orchestration fournie de base est acceptable pour commencer, il est souvent nécessaire d'avoir une automatisation robuste lorsque l'on doit la faire évoluer. +C'est pourquoi Kubernetes a également été conçu pour servir de plate-forme et favoriser la construction d’un écosystème de composants et d’outils facilitant le déploiement, la mise à l’échelle et la gestion des applications. + +[Les Labels](/docs/concepts/overview/working-with-objects/labels/) permettent aux utilisateurs d'organiser leurs ressources comme ils/elles le souhaitent. +[Les Annotations](/docs/concepts/overview/working-with-objects/annotations/) autorisent les utilisateurs à définir des informations personnalisées sur les ressources pour faciliter leurs workflows et fournissent un moyen simple aux outils de gérer la vérification d'un état (checkpoint state). 
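À titre d'illustration, les Labels et les Annotations se déclarent dans la section `metadata` d'un objet. L'exemple ci-dessous est une esquisse minimale : les noms, les valeurs et l'image sont arbitraires.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: exemple-pod              # nom arbitraire, pour illustration
  labels:
    app: boutique                # labels : organiser et sélectionner les ressources
    environnement: production
  annotations:
    equipe.example.com/contact: "paiements@example.com"   # annotations : informations libres pour les outils
spec:
  containers:
  - name: web
    image: nginx                 # image choisie uniquement pour l'exemple
```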
+ +De plus, le [plan de contrôle Kubernetes (control plane)](/docs/concepts/overview/components/) est construit sur les mêmes [APIs](/docs/reference/using-api/api-overview/) que celles accessibles aux développeurs et aux utilisateurs. +Les utilisateurs peuvent écrire leurs propres contrôleurs (controllers), tels que les [ordonnanceurs (schedulers)](https://github.com/kubernetes/community/blob/{{< param "githubbranch" >}}/contributors/devel/scheduler.md), +avec [leurs propres APIs](/docs/concepts/api-extension/custom-resources/) qui peuvent être utilisées par un [outil en ligne de commande](/docs/user-guide/kubectl-overview/). + +Ce choix de [conception](https://git.k8s.io/community/contributors/design-proposals/architecture/architecture.md) a permis de construire un ensemble d'autres systèmes par-dessus Kubernetes. + +## Ce que Kubernetes n'est pas + +Kubernetes n’est pas une solution PaaS (Platform as a Service) traditionnelle. +Kubernetes opérant au niveau des conteneurs plutôt qu'au niveau du matériel, il fournit une partie des fonctionnalités des offres PaaS, telles que le déploiement, la mise à l'échelle, l'équilibrage de charge (load balancing), la journalisation (logging) et la surveillance (monitoring). +Cependant, Kubernetes n'est pas monolithique. +Ces implémentations par défaut sont optionnelles et interchangeables. Kubernetes fournit les bases permettant de construire des plates-formes orientées développeurs, en laissant à l'utilisateur la possibilité de faire ses propres choix. + +Kubernetes : + +- Ne limite pas les types d'applications supportés. Kubernetes prend en charge des workloads extrêmement divers, dont des applications stateless, stateful ou orientées traitement de données (data-processing). +Si l'application peut fonctionner dans un conteneur, elle devrait bien fonctionner sur Kubernetes. +- Ne déploie pas de code source et ne build pas d'application non plus. Les workflows d'intégration continue, de livraison continue et de déploiement continu (CI/CD) sont définis par la culture d'entreprise, les préférences ou les prérequis techniques. +- Ne fournit pas nativement de services au niveau applicatif tels que des middlewares (e.g., bus de messages), des frameworks de traitement de données (par exemple, Spark), des bases de données (e.g., MySQL), des caches ou des systèmes de stockage clusterisés (e.g., Ceph). +Ces composants peuvent être lancés dans Kubernetes et/ou être accessibles à des applications tournant dans Kubernetes via des mécanismes d'intermédiation tels que l'Open Service Broker. +- N'impose pas de solutions de logging, de monitoring ou d'alerting. +Kubernetes fournit quelques intégrations de base ainsi que des mécanismes de collecte et d'export de métriques. +- Ne fournit ni n'impose de langage ou de système de configuration (e.g., [jsonnet](https://github.com/google/jsonnet)). +Il fournit une API déclarative qui peut être ciblée par n'importe quelle forme de spécifications déclaratives. +- Ne fournit ni n'adopte de mécanisme de configuration des machines, de maintenance, de gestion ou de contrôle de la santé des systèmes. + +De plus, Kubernetes n'est pas vraiment un _système d'orchestration_. En réalité, il élimine le besoin d'orchestration. +Techniquement, l'_orchestration_ se définit par l'exécution d'un workflow défini : premièrement faire A, puis B, puis C. +Kubernetes est au contraire composé d'un ensemble de processus de contrôle indépendants et composables qui pilotent en continu l'état courant vers l'état désiré. +Peu importe comment on arrive du point A au point C, comme l'illustre l'esquisse ci-dessous. 
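
À titre d'esquisse (le déploiement `mon-app` et l'image `nginx` ci-dessous ne sont que des exemples hypothétiques), on se contente de déclarer un état désiré ; les contrôleurs de Kubernetes font ensuite converger l'état courant vers cet état :

```shell
# Déclare un état désiré : un déploiement exécutant l'image nginx
kubectl create deployment mon-app --image=nginx

# Change l'état désiré : 3 réplicas ; aucun workflow n'est décrit, les contrôleurs se chargent de la convergence
kubectl scale deployment mon-app --replicas=3

# Observe la convergence de l'état courant vers l'état désiré
kubectl get deployment mon-app --watch
```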
+Un contrôle centralisé n'est pas non plus requis. +Cela aboutit à un système plus simple à utiliser, plus puissant, robuste, résilient et extensible. + +## Pourquoi les conteneurs ? + +Vous cherchez des raisons d'utiliser des conteneurs ? + +![Pourquoi les conteneurs ?](/images/docs/why_containers.svg) + +L'_ancienne façon (old way)_ de déployer des applications consistait à installer les applications sur un hôte en utilisant les systèmes de gestion de paquets natifs. +Cela avait pour principal inconvénient de lier fortement les exécutables, la configuration, les bibliothèques et le cycle de vie de chacun à l'OS. +Il est bien entendu possible de construire une image de machine virtuelle (VM) immuable pour produire des publications (rollouts) ou des retours arrière (rollbacks), mais les VMs sont lourdes et non portables. + +La _nouvelle façon (new way)_ consiste à déployer des conteneurs basés sur une virtualisation au niveau du système d'exploitation (operating-system-level) plutôt que sur une virtualisation du matériel (hardware). +Ces conteneurs sont isolés les uns des autres et de l'hôte : +ils ont leurs propres systèmes de fichiers, ne peuvent voir que leurs propres processus et leur usage des ressources peut être contraint. +Ils sont aussi plus faciles à construire que des VMs, et comme ils sont décorrélés de l'infrastructure sous-jacente et du système de fichiers de l'hôte, ils sont portables entre les différents fournisseurs de Cloud et les OS. + +Étant donné que les conteneurs sont petits et rapides, une application peut être packagée dans chaque image de conteneur. +Cette relation application-image tout-en-un permet de profiter pleinement des avantages des conteneurs. Avec les conteneurs, des images immuables de conteneur peuvent être créées au moment du build/release plutôt qu'au déploiement, puisque chaque application ne dépend pas du reste de la stack applicative et n'est pas liée à l'environnement de production. +La génération d'images de conteneurs au moment du build permet d'obtenir un environnement constant, déployé à l'identique du développement à la production. De la même manière, les conteneurs sont bien plus transparents que les VMs, ce qui facilite le monitoring et le management. +Cela est particulièrement vrai lorsque le cycle de vie des conteneurs est géré par l'infrastructure plutôt que caché par un gestionnaire de processus à l'intérieur du conteneur. Avec une application par conteneur, gérer ces conteneurs équivaut à gérer le déploiement de son application. + +Résumé des bénéfices des conteneurs : + +- **Création et déploiement agiles d'applications** : + Augmente la simplicité et l'efficacité de la création d'images par rapport à l'utilisation d'images de VM. +- **Développement, intégration et déploiement continus** : + Fournit un processus pour construire et déployer fréquemment et de façon fiable, avec la capacité de faire des rollbacks rapides et simples (grâce à l'immuabilité de l'image). +- **Séparation des besoins entre Dev et Ops** : + Création d'images applicatives au moment du build plutôt qu'au déploiement, ce qui découple l'application de l'infrastructure. +- **Observabilité** : + Expose non seulement des informations et des métriques au niveau de l'OS, mais aussi l'état de santé de l'application et d'autres signaux. +- **Cohérence entre les environnements de développement, de test et de production** : + Fonctionne de la même manière aussi bien sur un poste local que chez un fournisseur d'hébergement / dans le Cloud. 
+- **Portabilité entre Clouds et distributions d'OS** : + Fonctionne sur Ubuntu, RHEL, CoreOS, on-prem, Google Kubernetes Engine et n'importe où ailleurs. +- **Gestion centrée application** : + Élève le niveau d'abstraction : on ne gère plus un OS sur du matériel virtuel, mais une application s'exécutant sur un OS à l'aide de ressources logiques. +- **[Micro-services](https://martinfowler.com/articles/microservices.html) faiblement couplés, distribués, élastiques** : + Les applications sont découpées en petits morceaux indépendants qui peuvent être déployés et gérés dynamiquement -- et non une stack monolithique tournant sur une seule grosse machine à tout faire. +- **Isolation des ressources** : + Performances de l'application prédictibles. +- **Utilisation des ressources** : + Haute efficacité et densité. + +## Que signifie Kubernetes ? Et K8s ? + +Le nom **Kubernetes** tire son origine du grec ancien, signifiant _capitaine_ ou _pilote_, et est à la racine de _gouverneur_ et de [_cybernétique_](http://www.etymonline.com/index.php?term=cybernetics). _K8s_ est une abréviation obtenue en remplaçant les 8 lettres "ubernete" par "8". + +{{% /capture %}} + +{{% capture whatsnext %}} +* Prêt à [commencer](/docs/setup/) ? +* Pour plus de détails, voir la [documentation Kubernetes](/docs/home/). +{{% /capture %}} diff --git a/content/fr/docs/home/_index.md b/content/fr/docs/home/_index.md new file mode 100644 index 0000000000000..46b0678291a4b --- /dev/null +++ b/content/fr/docs/home/_index.md @@ -0,0 +1,19 @@ +--- +approvers: +- chenopis +title: Documentation de Kubernetes +noedit: true +cid: docsHome +layout: docsportal_home +class: gridPage +linkTitle: "Home" +main_menu: true +weight: 10 +hide_feedback: true +menu: + main: + title: "Documentation" + weight: 20 + post: >

Apprenez à utiliser Kubernetes à l'aide d'une documentation conceptuelle, didactique et de référence. Vous pouvez même aider en contribuant à la documentation!

+--- diff --git a/content/fr/docs/home/supported-doc-versions.md b/content/fr/docs/home/supported-doc-versions.md new file mode 100644 index 0000000000000..3be5b0d2d83be --- /dev/null +++ b/content/fr/docs/home/supported-doc-versions.md @@ -0,0 +1,22 @@ +--- +title: Versions supportées de la documentation Kubernetes +content_template: templates/concept +--- + +{{% capture overview %}} + +Ce site contient la documentation de la version actuelle de Kubernetes et les quatre versions précédentes de Kubernetes. + +{{% /capture %}} + +{{% capture body %}} + +## Version courante + +La version actuelle est [{{< param "version" >}}](/). + +## Versions précédentes + +{{< versions-other >}} + +{{% /capture %}} diff --git a/content/fr/docs/reference/kubectl/cheatsheet.md b/content/fr/docs/reference/kubectl/cheatsheet.md new file mode 100644 index 0000000000000..d8252970e7e05 --- /dev/null +++ b/content/fr/docs/reference/kubectl/cheatsheet.md @@ -0,0 +1,342 @@ +--- +title: Aide-mémoire kubectl +content_template: templates/concept +card: + name: reference + weight: 30 +--- + +{{% capture overview %}} + +Voir aussi : [Aperçu Kubectl](/docs/reference/kubectl/overview/) et [Guide JsonPath](/docs/reference/kubectl/jsonpath). + +Cette page donne un aperçu de la commande `kubectl`. + +{{% /capture %}} + +{{% capture body %}} + +# Aide-mémoire kubectl + +## Auto-complétion avec Kubectl + +### BASH + +```bash +source <(kubectl completion bash) # active l'auto-complétion pour bash dans le shell courant, le paquet bash-completion devant être installé au préalable +echo "source <(kubectl completion bash)" >> ~/.bashrc # ajoute l'auto-complétion de manière permanente à votre shell bash +``` + +Vous pouvez de plus déclarer un alias pour `kubectl` qui fonctionne aussi avec l'auto-complétion : + +```bash +alias k=kubectl +complete -F __start_kubectl k +``` + +### ZSH + +```bash +source <(kubectl completion zsh) # active l'auto-complétion pour zsh dans le shell courant +echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc # ajoute l'auto-complétion de manière permanente à votre shell zsh +``` + +## Contexte et configuration de Kubectl + +Indique avec quel cluster Kubernetes `kubectl` communique et modifie les informations de configuration. Voir la documentation [Authentification multi-clusters avec kubeconfig](/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) pour des informations détaillées sur le fichier de configuration. 
+ +```bash +kubectl config view # Affiche les paramètres fusionnés de kubeconfig + +# Utilise plusieurs fichiers kubeconfig en même temps et affiche la configuration fusionnée +KUBECONFIG=~/.kube/config:~/.kube/kubconfig2 kubectl config view + +# Affiche le mot de passe pour l'utilisateur e2e +kubectl config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' + +kubectl config current-context # Affiche le contexte courant (current-context) +kubectl config use-context my-cluster-name # Définit my-cluster-name comme contexte courant + +# Ajoute un nouveau cluster à votre kubeconfig, prenant en charge l'authentification de base (basic auth) +kubectl config set-credentials kubeuser/foo.kubernetes.com --username=kubeuser --password=kubepassword + +# Définit et utilise un contexte qui utilise un nom d'utilisateur et un namespace spécifiques +kubectl config set-context gce --user=cluster-admin --namespace=foo \ + && kubectl config use-context gce +``` + +## Création d'objets + +Les manifests Kubernetes peuvent être définis en JSON ou en YAML. Les extensions de fichier `.yaml`, +`.yml` et `.json` peuvent être utilisées. + +```bash +kubectl create -f ./my-manifest.yaml # crée une ou plusieurs ressources +kubectl create -f ./my1.yaml -f ./my2.yaml # crée depuis plusieurs fichiers +kubectl create -f ./dir # crée une ou plusieurs ressources depuis tous les manifests dans dir +kubectl create -f https://git.io/vPieo # crée une ou plusieurs ressources depuis une url +kubectl create deployment nginx --image=nginx # démarre une instance unique de nginx +kubectl explain pods,svc # affiche la documentation pour les manifests pod et svc + +# Crée plusieurs objets YAML depuis l'entrée standard (stdin) +cat </dev/null; printf "\n"; done + +# Vérifie quels noeuds sont prêts +JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ + && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" + +# Liste tous les Secrets actuellement utilisés par un pod +kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq + +# Liste les événements (Events) classés par timestamp +kubectl get events --sort-by=.metadata.creationTimestamp +``` + +## Mise à jour de ressources + +Depuis la version 1.11, `rolling-update` a été déprécié (voir [CHANGELOG-1.11.md](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md)), utilisez plutôt `rollout`. + +```bash +kubectl set image deployment/frontend www=image:v2 # Rolling update du conteneur "www" du déploiement "frontend", par mise à jour de son image +kubectl rollout undo deployment/frontend # Rollback du déploiement précédent +kubectl rollout status -w deployment/frontend # Suit (watch) le statut du rolling update du déploiement "frontend" jusqu'à ce qu'il se termine + +# déprécié depuis la version 1.11 +kubectl rolling-update frontend-v1 -f frontend-v2.json # (déprécié) Rolling update des pods de frontend-v1 +kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2 # (déprécié) Modifie le nom de la ressource et met à jour l'image +kubectl rolling-update frontend --image=image:v2 # (déprécié) Met à jour l'image du pod du déploiement frontend +kubectl rolling-update frontend-v1 frontend-v2 --rollback # (déprécié) Annule (rollback) le rollout en cours + +cat pod.json | kubectl replace -f - # Remplace un pod, en utilisant un JSON passé en entrée standard + +# Remplace de manière forcée (force replace), supprime puis recrée la ressource. 
Provoque une interruption de service. +kubectl replace --force -f ./pod.json + +# Crée un service pour un nginx répliqué, qui écoute sur le port 80 et se connecte aux conteneurs sur le port 8000 +kubectl expose rc nginx --port=80 --target-port=8000 + +# Modifie la version (tag) de l'image du conteneur unique du pod en v4 +kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f - + +kubectl label pods my-pod new-label=awesome # Ajoute un label +kubectl annotate pods my-pod icon-url=http://goo.gl/XXBTWq # Ajoute une annotation +kubectl autoscale deployment foo --min=2 --max=10 # Mise à l'échelle automatique (auto scale) d'un déploiement "foo" +``` + +## Mise à jour partielle de ressources + +```bash +kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Met à jour partiellement un noeud + +# Met à jour l'image d'un conteneur ; spec.containers[*].name est requis car c'est une clé du merge +kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' + +# Met à jour l'image d'un conteneur en utilisant un patch json avec tableaux indexés +kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' + +# Désactive la livenessProbe d'un déploiement en utilisant un patch json avec tableaux indexés +kubectl patch deployment valid-deployment --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]' + +# Ajoute un nouvel élément à un tableau indexé +kubectl patch sa default --type='json' -p='[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]' +``` + +## Édition de ressources +Édite n'importe quelle ressource de l'API dans votre éditeur préféré. 
+ +```bash +kubectl edit svc/docker-registry # Édite le service nommé docker-registry +KUBE_EDITOR="nano" kubectl edit svc/docker-registry # Utilise un autre éditeur +``` + +## Mise à l'échelle de ressources + +```bash +kubectl scale --replicas=3 rs/foo # Scale un replicaset nommé 'foo' à 3 +kubectl scale --replicas=3 -f foo.yaml # Scale la ressource spécifiée dans "foo.yaml" à 3 +kubectl scale --current-replicas=2 --replicas=3 deployment/mysql # Si la taille du déploiement nommé mysql est actuellement 2, scale mysql à 3 +kubectl scale --replicas=5 rc/foo rc/bar rc/baz # Scale plusieurs contrôleurs de réplication +``` + +## Suppression de ressources + +```bash +kubectl delete -f ./pod.json # Supprime un pod en utilisant le type et le nom spécifiés dans pod.json +kubectl delete pod,service baz foo # Supprime les pods et services ayant les mêmes noms "baz" et "foo" +kubectl delete pods,services -l name=myLabel # Supprime les pods et services ayant le label name=myLabel +kubectl delete pods,services -l name=myLabel --include-uninitialized # Supprime les pods et services, dont ceux non initialisés, ayant le label name=myLabel +kubectl -n my-ns delete po,svc --all # Supprime tous les pods et services, dont ceux non initialisés, dans le namespace my-ns +``` + +## Interaction avec des Pods en cours d'exécution + +```bash +kubectl logs my-pod # Affiche les logs du pod (stdout) +kubectl logs my-pod --previous # Affiche les logs du pod (stdout) pour une instance précédente du conteneur +kubectl logs my-pod -c my-container # Affiche les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs) +kubectl logs my-pod -c my-container --previous # Affiche les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs) pour une instance précédente du conteneur +kubectl logs -f my-pod # Fait défiler (stream) les logs du pod (stdout) +kubectl logs -f my-pod -c my-container # Fait défiler (stream) les logs d'un conteneur particulier du pod (stdout, cas d'un pod multi-conteneurs) +kubectl run -i --tty busybox --image=busybox -- sh # Exécute un pod comme un shell interactif +kubectl attach my-pod -i # S'attache à un conteneur en cours d'exécution +kubectl port-forward my-pod 5000:6000 # Écoute le port 5000 de la machine locale et forwarde vers le port 6000 de my-pod +kubectl exec my-pod -- ls / # Exécute une commande dans un pod existant (cas d'un seul conteneur) +kubectl exec my-pod -c my-container -- ls / # Exécute une commande dans un pod existant (cas multi-conteneurs) +kubectl top pod POD_NAME --containers # Affiche les métriques pour un pod donné et ses conteneurs +``` + +## Interaction avec des Noeuds et Clusters + +```bash +kubectl cordon mon-noeud # Marque mon-noeud comme non assignable (unschedulable) +kubectl drain mon-noeud # Draine mon-noeud en préparation d'une mise en maintenance +kubectl uncordon mon-noeud # Marque mon-noeud comme assignable +kubectl top node mon-noeud # Affiche les métriques pour un noeud donné +kubectl cluster-info # Affiche les adresses du master et des services +kubectl cluster-info dump # Affiche l'état courant du cluster sur stdout +kubectl cluster-info dump --output-directory=/path/to/cluster-state # Affiche l'état courant du cluster dans /path/to/cluster-state + +# Si une teinte (taint) avec cette clé et cet effet existe déjà, sa valeur est remplacée comme spécifié. 
+kubectl taint nodes foo dedicated=special-user:NoSchedule +``` + +### Types de ressources + +Liste tous les types de ressources pris en charge, avec leurs noms courts (shortnames), leur [groupe d'API (API group)](/docs/concepts/overview/kubernetes-api/#api-groups), s'ils sont [cantonnés à un namespace (namespaced)](/docs/concepts/overview/working-with-objects/namespaces) et leur [Genre (Kind)](/docs/concepts/overview/working-with-objects/kubernetes-objects) : + +```bash +kubectl api-resources +``` + +Autres opérations pour explorer les ressources de l'API : + +```bash +kubectl api-resources --namespaced=true # Toutes les ressources cantonnées à un namespace +kubectl api-resources --namespaced=false # Toutes les ressources non cantonnées à un namespace +kubectl api-resources -o name # Toutes les ressources avec un affichage simple (uniquement le nom de la ressource) +kubectl api-resources -o wide # Toutes les ressources avec un affichage étendu (alias "wide") +kubectl api-resources --verbs=list,get # Toutes les ressources prenant en charge les verbes de requête "list" et "get" +kubectl api-resources --api-group=extensions # Toutes les ressources dans le groupe d'API "extensions" +``` + +### Formatage de l'affichage + +Pour afficher les détails sur votre terminal dans un format spécifique, vous pouvez utiliser l'une des options `-o` ou `--output` avec les commandes `kubectl` qui les prennent en charge. + +| Format d'affichage | Description | +|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------| +| `-o=custom-columns=<spec>` | Affiche un tableau en spécifiant une liste de colonnes séparées par des virgules | +| `-o=custom-columns-file=<filename>` | Affiche un tableau en utilisant les colonnes spécifiées dans le fichier `<filename>` | +| `-o=json` | Affiche un objet de l'API formaté en JSON | | `-o=jsonpath=